Dataset columns (type, observed range):
Available Count: int64, 1 to 31 | AnswerCount: int64, 1 to 35 | GUI and Desktop Applications: int64, 0 to 1 | Users Score: int64, -17 to 588 | Q_Score: int64, 0 to 6.79k | Python Basics and Environment: int64, 0 to 1 | Score: float64, -1 to 1.2 | Networking and APIs: int64, 0 to 1 | Question: string, length 15 to 7.24k | Database and SQL: int64, 0 to 1 | Tags: string, length 6 to 76 | CreationDate: string, length 23 | System Administration and DevOps: int64, 0 to 1 | Q_Id: int64, 469 to 38.2M | Answer: string, length 15 to 7k | Data Science and Machine Learning: int64, 0 to 1 | ViewCount: int64, 13 to 1.88M | is_accepted: bool, 2 classes | Web Development: int64, 0 to 1 | Other: int64, 1 | Title: string, length 15 to 142 | A_Id: int64, 518 to 72.2M
2 | 4 | 0 | 0 | 8 | 1 | 0 | 0 | I have 5 test suites which are independent of each other.
I have to run them against the same environment. Most of my test suites consist of API calls.
The test cases inside the suites should run in sequence, as they are dependent on each other.
Is there any way we can run all the test suites in parallel via the pybot command? | 0 | python,testing,robotframework | 2014-04-04T06:03:00.000 | 0 | 22,854,756 | The simple solution is using Jenkins:
You could install Jenkins with the Robot Framework plugin. You can have
two jobs running in parallel by default, without any slave nodes.
Or you can have multiple slave nodes, then use tags in Robot and node labels to distribute the jobs.
Just set the parameters in the Jenkins job build section, such as:
pybot --include tag1 test.robot for job1,
then set pybot --include tag2 test.robot for job2.
Then trigger the upstream job, and you will get them running in parallel.
But you still need to make sure that any shared file is locked by only one of the test jobs at a time. | 0 | 8,580 | false | 0 | 1 | Is there any way to run robot framework test suites in parallel? | 41,011,635
2 | 4 | 0 | 0 | 8 | 1 | 0 | 0 | I have 5 test suites which are independent of each other.
I have to run them against the same environment. Most of my test suites consist of API calls.
The test cases inside the suites should run in sequence, as they are dependent on each other.
Is there any way we can run all the test suites in parallel via the pybot command? | 0 | python,testing,robotframework | 2014-04-04T06:03:00.000 | 0 | 22,854,756 | When the tests are completely stand-alone and can run completely in parallel, I have had some success with just writing an execution script that iterates through all the IP addresses of the units on which I want to run a test in parallel, and then calls the test with that IP address as an argument. I also tell it to only create the output.xml files, naming them based on the hostname or IP address, and then the script does post-processing with rebot, which creates an aggregated report with all the units. | 0 | 8,580 | false | 0 | 1 | Is there any way to run robot framework test suites in parallel? | 28,816,002
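A minimal sketch of such an execution script (the host list, paths, and variable name are hypothetical; assumes pybot and rebot are on the PATH):

```python
import subprocess

hosts = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]

# Launch one pybot run per unit in parallel; each run writes only its output.xml.
procs = [
    subprocess.Popen(
        ["pybot", "--variable", "HOST:%s" % host,
         "--output", "output_%s.xml" % host,
         "--log", "NONE", "--report", "NONE", "tests/"])
    for host in hosts
]
for p in procs:
    p.wait()

# Post-process with rebot: merge the per-unit output files into one report.
subprocess.call(["rebot", "--name", "Aggregated"] +
                ["output_%s.xml" % h for h in hosts])
```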
1 | 1 | 0 | 89 | 87 | 0 | 1.2 | 0 | I've started working on a rather big (multithreaded) Python project, with loads of (unit)tests. The most important problem there is that running the application requires a preset environment, which is implemented by a context manager. So far we made use of a patched version of the unit test runner that would run the tests inside this manager, but that doesn't allow switching context between different test modules.
Both nose and pytest do support such a thing because they support fixtures at many granularities, so we're looking into switching to nose or pytest. Both these libraries would also support 'tagging' tests and run only these tagged subsets, which is something we also would like to do.
I have been looking through the documentation of both nose and pytest a bit, and as far as I can see the bigger part of those libraries essentially support the same functionality, except that it may be named differently, or require slightly different syntax. Also, I noted some small differences in the available plugins (nose has multiprocess-support, pytest doesn't seem to for instance)
So it seems the devil is in the details, which means (often at least) personal taste, and we had better go with the library that fits our personal taste best.
So I'd like to ask for subjective arguments on why I should go with nose or pytest, in order to choose the library/community combo that best fits our needs. | 0 | python,pytest,nose | 2014-04-04T07:45:00.000 | 0 | 22,856,638 | I used to use Nose because it was the default with Pylons. I didn't like it at all. It had configuration tendrils in multiple places, virtually everything seemed to be done with an underdocumented plugin which made it all even more indirect and confusing, and because it ran unittest tests by default, it regularly broke with Unicode tracebacks, hiding the sources of errors.
I've been pretty happy with py.test the last couple years. Being able to just write a test with assert out of the box makes me hate writing tests way less, and hacking whatever I need atop the core has been pretty easy. Rather than a fixed plugin interface it just has piles of hooks, and pretty understandable source code should you need to dig further. I even wrote an adapter for running Testify tests under py.test, and had more trouble with Testify than with py.test.
That said, I hear nose has plugins for classless tests and assert introspection nowadays, so you'll probably do fine with either. I still feel like I can hit the ground running with py.test, though, and I can understand what's going on when it breaks. | 0 | 36,739 | true | 0 | 1 | nose vs pytest - what are the (subjective) differences that should make me pick either? | 22,856,817 |
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I am running my scripts on python 2.6. The requirement is as mentioned below.
There are some 100 test scripts (all Python scripts) in one directory. I have to create one master Python script which will run all 100 test scripts one by one, and then I have to display whether each test case failed or not. Every script will call sys.exit() to finish its execution. Currently I am reading the sys.exit() value from the master script and determining from it whether the particular test case failed or not.
But now there is a requirement change: I have to display the log file name as well (log files will be created when I run the scripts). So can I send a tuple as the argument (containing the status as well as the log file name) to sys.exit() instead of an integer value?
I have read on the net that if we pass an argument other than an integer, None is equivalent to passing zero, and any other object is printed to stderr and results in an exit code of 1. So if I pass a tuple as the argument, will the OS consider it a failure even in the success case, since I am not passing None?
I am using subprocess.popen() in my master script to run the scripts, and I am using format() to read the sys.exit() value. | 0 | python | 2014-04-04T19:03:00.000 | 1 | 22,871,051 | Yes, you are correct. Passing a tuple will print the tuple to stderr and return with an exit code of 1. You must pass None (or zero) to denote success.
Note that this is a convention of shells and the like and is not strictly required. That being said, the conventions are in place for a very, very good reason; see the sketch below for a common workaround. | 0 | 991 | false | 0 | 1 | Can we send a tuple as an argument to sys.exit() in python | 22,871,157
1 | 2 | 0 | 1 | 6 | 0 | 0.099668 | 0 | In C, C++, and Java, an integer has a certain range. One thing I realized in Python is that I can calculate really large integers, such as pow(2, 100). The equivalent code in C, pow(2, 100), would clearly cause an overflow, since on a 32-bit architecture the unsigned integer type ranges from 0 to 2^32-1. How is it possible for Python to calculate these large numbers? | 0 | python,architecture,integer | 2014-04-05T00:21:00.000 | 0 | 22,875,067 | How is it possible for Python to calculate these large numbers?
How is it possible for you to calculate these large numbers if you only have the 10 digits 0-9? Well, you use more than one digit!
Bignum arithmetic works the same way, except the individual "digits" are not 0-9 but range over 0 to 2^32-1 or 0 to 2^64-1. | 0 | 1,781 | false | 0 | 1 | How does python represent such large integers? | 22,875,190
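To illustrate, here is pow(2, 100) working out of the box, plus a toy decomposition of the result into base-2**32 "digits" in the spirit of the analogy above:

```python
print(pow(2, 100))             # 1267650600228229401496703205376 -- no overflow

n = pow(2, 100)
digits = []
while n:
    digits.append(n % 2**32)   # least-significant "digit" first
    n //= 2**32
print(digits)                  # [0, 0, 0, 16], i.e. 16 * (2**32)**3 == 2**100
```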
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | Ok, so I just installed mechanize with easy_install from the command prompt, but now when I try to write a little snippet of code to test it, Python is telling me it can't import mechanize, any idea what might be going wrong? I'm at a loss and unfamiliar with mechanize. | 0 | python,windows,mechanize-python | 2014-04-05T17:50:00.000 | 0 | 22,884,502 | Bah, I had placed my .py file in a new folder within the Python27 folder and apparently that was the issue. I moved it to Python27 and it's correctly importing. | 0 | 42 | true | 0 | 1 | Python and mechanize issues (Windows) | 22,884,534 |
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | Is there any inbuilt way/ or a hack by which I can know which key is being evicted from memcache ?
There is one solution of polling for all possible keys inserted into memcache (e.g. get_multi), but that is inefficient and certainly not implementable for a large number of keys.
The functionality does not need to run in production, only during some benchmarking and optimization runs. | 0 | memcached,python-memcached | 2014-04-07T11:21:00.000 | 1 | 22,910,946 | Not possible AFAIK, but a really good (and simple) solution is to modify your memcached library and do a print (or whatever you want) in the delete and multi-delete methods. You can then get the keys that are being deleted (both by your app and by the library itself). I hope that helps | 0 | 220 | true | 0 | 1 | How to find the keys being evicted from memcache? | 23,104,270
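A hedged sketch of that idea against python-memcached (assuming its Client.delete / delete_multi methods; note this logs client-initiated deletes, not server-side LRU evictions):

```python
import memcache

_orig_delete = memcache.Client.delete
_orig_delete_multi = memcache.Client.delete_multi

def logging_delete(self, key, *args, **kwargs):
    print("deleting key: %r" % key)          # or append to a log file
    return _orig_delete(self, key, *args, **kwargs)

def logging_delete_multi(self, keys, *args, **kwargs):
    print("deleting keys: %r" % list(keys))
    return _orig_delete_multi(self, keys, *args, **kwargs)

memcache.Client.delete = logging_delete
memcache.Client.delete_multi = logging_delete_multi
```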
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a large set of tests that have not been maintained for a while. Some percentage of the tests pass, and some fail. I'd like to ask nose to show me only the succeeding tests. Is there a way to do this? | 0 | python,testing,nose | 2014-04-07T19:00:00.000 | 0 | 22,920,782 | I think this feature goes against very strong fundamentals of testing, but you can always output test results into a file by using --with-xunit and --xunit-file=my_test_results.xml and write a short script that does what you want. | 0 | 22 | false | 0 | 1 | Given a set of python nose tests, is there a way to run or display only the succeeding tests with nose? | 22,944,568 |
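For the short-script part, a sketch that parses the xunit file and prints only the passing tests (element names follow the common xunit schema):

```python
import xml.etree.ElementTree as ET

root = ET.parse("my_test_results.xml").getroot()
for case in root.iter("testcase"):
    # A <testcase> with no <failure>/<error>/<skipped> child passed.
    if not list(case):
        print("%s.%s" % (case.get("classname"), case.get("name")))
```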
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I have both Python 2.7 and 3.3 installed on a Mac. I'm trying to get Pytest to run my tests under Python 3.3.
When I run python3 -m py.test it halts, looking for the libs under the 3.3 path.
When I run pip install -U pytest it installs to the 2.7 path.
I've seen the write-ups for Virtualenv, but I'm not ready to go there yet.
Is there another way? | 0 | python,pytest | 2014-04-07T21:18:00.000 | 0 | 22,923,303 | Apart from the genscript option the normal way to go about this is to intall py.test into the python3.3 interpreter's environment. The problem you have is that the pip you invoke is also a py27 version, so it will install into py27.
So you can start with installing pip into py33 (usually under the alias pip3) and then invoking that pip or you can simply install py and pytest in the py33 environment the old fashioned way: download the packages and run python3.3 setup.py install --user.
You will then still want to make sure you can invoke the correct version of py.test however, either making sure you can call py.test and py.test3 using aliases or so. Or simply by using pythonX.Y -m pytest. | 0 | 3,058 | false | 0 | 1 | running pytest with python 3.3 | 22,936,892 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using IDEA CE 13.1.1 and tried to install the Python plugin version 3.4.Beta.135.1 from file, because my development PC has no internet access for security reasons. But I get the following warning and the plugin does not get activated:
Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage
I searched for these plugins in the repository but did not find them, only references in other plugin details that depend on them.
How are they really called? How can I find them?
Thanks | 0 | python,plugins,intellij-idea,jetbrains-ide | 2014-04-08T12:08:00.000 | 1 | 22,936,567 | You can't use Python plugin with Idea Community edition, sorry. It requires IntelliJ IDEA Ultimate. | 0 | 2,069 | false | 0 | 1 | Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage | 22,990,376 |
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm trying to test out some periodic tasks I'm running in Celery, which are supposed to run at midnight of the first day of each month. To test these, I have a cron job running every few minutes which bumps the system time up to a few minutes before midnight on the last day of the month. When the clock strikes midnight (every few minutes), the tasks are not run.
All the times are UTC, and celery is set to UTC mode.
Celery itself is working fine, I can run the tasks manually. What might be going on here? Also, how does celery keep track of the system time for its scheduling, how does it handle a system time update? Could it be that celery's time and the system time get out of sync somehow?
This is Celery 3.1.0 with redis as broker/backend | 0 | python,celery | 2014-04-09T17:08:00.000 | 1 | 22,969,365 | The solution for me was to restart redis after the time update, and also restart celerybeat. That combination seems to work. | 0 | 283 | true | 0 | 1 | Celery periodic tasks: testing by modifying system time | 23,090,632 |
1 | 11 | 0 | 3 | 46 | 1 | 0.054491 | 0 | I really enjoy using the Option and Either monads in Scala. Are there any equivalent for these things in Python? If there aren't, then what is the pythonic way of handling errors or "absence of value" without throwing exceptions? | 0 | python,scala,functional-programming | 2014-04-10T15:30:00.000 | 0 | 22,992,433 | A list that happens to always be of length zero or one fulfills some of the same goals as optional/maybe types. You won't get the benefits of static typing in Python, but you'll probably get a run-time error even on the happy path if you write code that tries to use the "maybe" without explicitly "unwrapping" it. | 0 | 21,367 | false | 0 | 1 | Is there a Python equivalent for Scala's Option or Either? | 52,065,113 |
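A toy sketch of that "list as Maybe" idea:

```python
def find_user(users, name):
    """Return [name] if present, [] if absent -- never None."""
    return [u for u in users if u == name][:1]

result = find_user(["alice", "bob"], "alice")
for user in result:        # the body runs zero or one times; no None checks
    print("found:", user)
if not result:
    print("absent")
```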
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I'm trying to install pycrypto for python 3.x.x on raspberry pi
but when i run python setup.py install
from the command line, it is by default installed to python 2.7.x
i have installed python-dev and still with no luck, i have read that using a PIP might help, but unfortunately i don't know how to use it. all my codes are written for python 3.3.x and it would take me a very long time to re-write them all for 2.7.
so how can i fix it without re-writing my codes | 0 | python,python-3.x,raspbian,pycrypto | 2014-04-12T13:42:00.000 | 0 | 23,031,149 | Having looked into it there does not seem to be a pycrypto version for python3 at the moment. I think you're options are to look for an alternative package or to convert your code to python 2. There are tools available which can do this automatically, for example 3to2 is available in pip. | 0 | 419 | false | 0 | 1 | how to install python package in Raspbian? | 23,031,224 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a Python script on my Dreamhost shared server. When I access my script via SSH (using the UNIX Shell) my script executes fine and is able to import the Pycrypto module Crypto.Cipher.
But if I access my script via HTTP using my websites url. The script fails when it goes to import the Pycrypto module Crypto.Cipher. It gives the error ImportError: No module named Crypto.Cipher.
Do you know what might be causing this weird error? And how I can fix it.
Some important information:
- I have installed a custom version of Python on my shared server. It's just Python 2.7 with Pycrypto and easy_install installed.
- I am certain that the script is running under Python 2.7 and not Dreamhosts default 2.6 version. I know this because the script prints sys.version_info(major=2, minor=7, micro=0, releaselevel='final', serial=0) both in the UNIX shell and HTTP.
- I installed Pycrypto manually (using tar, and running setup.py) as opposed to using easy_install or pip.
- I have edited my .bash_profile's PATH variable correctly (well, I believe I have done it correctly, because the script is run under Python 2.7, not 2.6).
Any advice would be extremely helpful. | 0 | python,shell,dreamhost,pycrypto | 2014-04-13T09:31:00.000 | 1 | 23,041,079 | Your web server does not read your .bash_profile. | 0 | 132 | false | 0 | 1 | Shared Server: Python Script run under UNIX Shell vs HTTP | 23,041,133 |
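A hedged sketch of the usual workaround: since the web server does not source .bash_profile, environment variables like PYTHONPATH set there are absent under HTTP, so extend sys.path inside the script itself (the site-packages path below is hypothetical):

```python
import sys
sys.path.insert(0, "/home/username/opt/python-2.7/lib/python2.7/site-packages")

from Crypto.Cipher import AES  # should now resolve under HTTP as well
```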
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I have a script that utilizes OpenCV to track an object and communicate the location to an arduino. Essentially all it's doing is passing an integer to the arduino and the arduino interprets the integer as left/middle/right and turns on the appropriate LED. It works fine for ~30 seconds after which CPU usage jumps to 95%+ and the process begins to lag like crazy. If I remove the ser.write command and print left/middle/right to terminal then it runs fine. What might be getting backed up causing the high CPU usage? I've tried different baud rates and there is a 0.01 second delay after each ser.write command. | 0 | python,serial-port,arduino | 2014-04-13T17:19:00.000 | 0 | 23,045,812 | It was a buffer issue on the Arduino side. There was a line that kept printing a blank character out for every character it read in, causing the buffer to overflow. I removed that line and it's working fine now. | 0 | 285 | false | 0 | 1 | Serial communication in python causing higher CPU usage over time | 23,051,290 |
1 | 3 | 0 | -3 | 0 | 1 | -0.197375 | 0 | I'm considering learning Python with the idea of letting go of MatLab, although I really like MatLab. However, I'm concerned that getting all of the moving and independent pieces to fit together may be a challenge, and one that may not be worth it in the end. I've also thought about getting into Visual Basic or Visual C++. In the end, I keep coming back to the ease of MatLab. Any thoughts or comments regarding the difficulty of getting going in Python? Is it worth it? | 0 | python,c++,matlab | 2014-05-04T00:23:00.000 | 0 | 23,050,106 | Python is great for firmware programming, like with Arduinos. C++ is great and very powerful for programming software and applications. If you want to program hardware, go with Python. If you want to program software, go with C++. I'm learning C++ and it's great. | 0 | 320 | false | 0 | 1 | MatLab user thinking of learning Python | 23,050,146
1 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | We have a large project that is entirely coded in ASCII. Is it worth putting coding statements at the beginning of each source file (e.g. #coding=utf-8) for some reason if the source doesn't have any unicode in it?
Thanks,
--Peter | 0 | python,unicode | 2014-04-14T17:23:00.000 | 0 | 23,066,353 | You should do one of two things (at least):
Add a hook to your repository making it verify on check-in that all Python files are still pure ASCII (a sketch of such a check follows below).
Put the explicit ASCII-encoding tag in the files.
You might want to check if you get significantly better startup when the explicit tag is UTF-8 though. Anyway, I would consider that a bug of the interpreter.
This way, if anyone slips and mistakenly adds some non-ASCII characters, you won't have to chase that (potential) bug. Explicitly restricting to ASCII has one advantage: You actually can reliably see what each string contains and there are no equal-seeming distinct names. | 0 | 90 | false | 0 | 1 | If your source is ASCII, should you specify coding? | 23,066,522 |
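A minimal sketch of the check-in hook idea: scan the .py files and fail if any byte falls outside the ASCII range (the source directory name is hypothetical; written for Python 2, where reading in binary mode yields a str of bytes):

```python
import os
import sys

bad = []
for root, dirs, files in os.walk("src"):
    for name in files:
        if name.endswith(".py"):
            path = os.path.join(root, name)
            data = open(path, "rb").read()
            if any(ord(ch) > 127 for ch in data):
                bad.append(path)

if bad:
    sys.exit("Non-ASCII bytes found in: %s" % ", ".join(bad))
```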
1 | 1 | 0 | 0 | 1 | 1 | 1.2 | 0 | I have downloaded pymunk module on my computer. When I typed in "python setup.py install" in terminal, it says "no such file or directory", then I typed in the complete path of setup.py instead of setup.py, and it still could not run since the links to other files in the code of setup.py are not complete paths. (Like README.txt, terminal said "no such file or directory". Sorry I'm a python newbie. Someone tell me how can I fix it?
Thanks!!!! | 0 | python,chipmunk,pymunk | 2014-04-14T21:37:00.000 | 1 | 23,070,922 | Try and go to the folder where setup.py is first and then do python setup.py install. As you have noticed, it assumes that you run it from the same folder as where its located. | 0 | 118 | true | 0 | 1 | Compile pymunk on mac OS X | 23,200,199 |
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I apologize in advance for this being a bit vague, but I'm trying to figure out what the best way is to write my program from a high-level perspective. Here's an overview of what I'm trying to accomplish:
RasPi takes input from altitude sensor on serial port at 115000 baud.
Does some hex -> dec math and updates state variables (pitch, roll, heading, etc)
Uses pygame library to do some image manipulation based on the state variables on a simulated heads up display
Outputs the image to a projector at 30 fps.
Note that there's no user input (for now).
The issue I'm running into is the framerate. The framerate MUST be constant. I'd rather skip a data packet than drop a frame.
There's two ways I could see structuring this:
Write one function that, when called, grabs data from the serial bus and spits out the state variables as the output. Then write a pygame loop that calls this function from inside it. My concern with this is that if the serial port starts being read at the end of an attitude message, it'll have to pause and wait for the message to start again (fractions of a second, but could result in a dropped frame)
Write two separate modules, both to be running simultaneously. One continuously reads data from the serial port and updates the state variables as fast as possible. The other just does the image manipulation, and grabs the latest state variables when it needs them. However, I'm not actually sure how to write a multithreaded program like this, and I don't know how well the RasPi will handle such a program. | 0 | python,multithreading,buffer,raspberry-pi | 2014-04-16T01:51:00.000 | 0 | 23,097,604 | I don't think that RasPi would work that well running multithreaded programs. Try the first method, though it would be interesting to see the results of a multithreaded program. | 0 | 73 | false | 0 | 1 | How to structure my Python code? | 23,097,644 |
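For reference, a minimal sketch of the second (multithreaded) structure: a reader thread keeps the shared state fresh, and the render loop samples it once per frame without ever blocking on I/O. The port name is hypothetical, parse() is a stub standing in for the hex-to-decimal math, and the actual rendering call is elided:

```python
import threading
import time

state = {"pitch": 0.0, "roll": 0.0, "heading": 0.0}
lock = threading.Lock()

def parse(frame):
    return 0.0, 0.0, 0.0            # stub for the real frame decoding

def reader():
    import serial                   # pyserial's Serial(port, baudrate)
    port = serial.Serial("/dev/ttyAMA0", 115200)
    while True:
        frame = port.readline()     # blocks until a full message arrives
        pitch, roll, heading = parse(frame)
        with lock:                  # update the shared state atomically
            state.update(pitch=pitch, roll=roll, heading=heading)

t = threading.Thread(target=reader)
t.daemon = True
t.start()

while True:                         # stand-in for the pygame loop
    with lock:
        snapshot = dict(state)      # grab the latest values, never block on I/O
    # ... render the HUD from `snapshot` here ...
    time.sleep(1.0 / 30)            # keep the constant 30 fps cadence
```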
1 | 1 | 0 | 2 | 2 | 1 | 1.2 | 0 | Working on Gentoo (on the robot Nao) that has no make and no gcc on it, it is really hard for me to install portaudio. I managed to put pyaudio in the right location so that python can detect it but whenever I try "import pyaudio" it asks me to install portaudio first.
I have a virtual machine running Gentoo, emulating the robot, where gcc and make are available. I could compile portaudio on that machine, but then, after copying its content to the robot, I cannot run make install. Where should I put each library file exactly so that PyAudio can find it?
Thanks | 0 | python,makefile,gentoo,portaudio,pyaudio | 2014-04-18T01:45:00.000 | 1 | 23,146,168 | Finally I found the source of the problem. Somehow portaudio installs itself to /usr/local/, but the robot I'm working on uses the folders in /usr, i.e. /usr/lib and /usr/include, not /usr/local/lib etc.
Putting the libraries in /usr/lib, and also manually transferring some portaudio libs you can find in Python's site-packages folder, solved the problem. | 0 | 976 | true | 0 | 1 | Where should I put portaudio so that Pyaudio can find it | 23,233,317
1 | 2 | 0 | 17 | 13 | 0 | 1 | 0 | I have a python script that basically runs forever and checks a webpage every second and notifies me if any value changes. I placed it on an AWS EC2 instance and ran it through ssh. The script was running fine when I checked after half an hour or so after I started it.
The problem is that after a few hours when I checked again, the ssh had closed. When I logged back in, there was no program running. I checked all running processes and nothing was running.
Can anyone teach me how to make it run forever (or until I stop it) on AWS EC2 instances? Thanks a lot.
Edit: I used the Java SSH Client provided by AWS to run the script | 0 | python,amazon-web-services,ssh,amazon-ec2 | 2014-04-19T05:16:00.000 | 1 | 23,166,158 | You can run the program using the nohup command, so that even when the SSH session closes your program continues running.
E.g.: nohup python yourscriptname.py &
For more info you can check the man page for it using
man nohup. | 0 | 8,060 | false | 1 | 1 | Make python script to run forever on Amazon EC2 | 23,166,196
1 | 1 | 0 | 9 | 9 | 1 | 1.2 | 0 | So, I want the long_description of my setup script to be the contents from my README.md file. But when I do this, the installation of the source distribution will fail since python setup.py sdist does not copy the readme file.
Is there a way to let distutils.core.setup() include the README.md file with the sdist command so that the installation will not fail?
I have tried a little workaround where I default to some shorter text when the README.md file is not available, but I actually want not only PyPI to get the contents of the readme file, but also the user who installs the package. | 0 | python,setuptools,distutils,setup.py | 2014-04-19T19:30:00.000 | 1 | 23,174,516 | To manually include files in a distribution do the following:
set include_package_data = True
Create a MANIFEST.in file that has a list of include <glob> lines for each file you want to include from the project root. You can use recursive-include <dirname> <glob> to include from sub-directories of the project root.
Unfortunately the documentation for this stuff is really fragmented and split across the Python distutils, setuptools, and old distribute docs so it can be hard to figure out what you need to do. | 0 | 3,112 | true | 0 | 1 | read README in setup.py | 23,174,731 |
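Putting that together with the fallback mentioned in the question: a MANIFEST.in containing the single line include README.md, plus a setup.py along these lines (the project metadata is hypothetical):

```python
import os
from distutils.core import setup

here = os.path.abspath(os.path.dirname(__file__))
readme = os.path.join(here, "README.md")
if os.path.exists(readme):
    long_description = open(readme).read()
else:
    long_description = "Short fallback description."  # file absent

setup(
    name="mypackage",
    version="0.1",
    long_description=long_description,
)
```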
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | On my Raspberry Pi I have installed Paramiko. When I installed it, it came up with an error, something like "pycrypto didn't install". I then used pip and easy_install to try and install pycrypto, but an error comes up with that, something like failed with error code 1 in /root/build/crypto
How can I install pycrypto?
I am using a Raspberry Pi with Raspbian Wheezy. | 0 | python,raspberry-pi,pycrypto | 2014-04-19T20:48:00.000 | 0 | 23,175,354 | Fixed it!
I did: sudo apt-get install python-dev and then installed pycrypto again with pip. That worked! | 0 | 1,338 | false | 0 | 1 | No module named pycrypto with Paramiko | 23,175,972 |
I want to pass a variable from one Python CGI script to another CGI script. How can I do this, as in PHP, using the URL or something?
I saved the variable in a text file, then read the saved variable back when the other page loads.
Is this method good? | 0 | python,variables,cgi,text-files | 2014-04-21T04:42:00.000 | 0 | 23,190,913 | Traditionally this is done using cookies or hidden form fields. | 0 | 241 | true | 0 | 1 | How to pass variable in one .py cgi to other python cgi script | 23,190,978
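A minimal sketch of the hidden-form-field approach, split across two hypothetical scripts:

```python
#!/usr/bin/env python
# page1.py -- hand `value` over to page2.py via a hidden form field.
import cgi

value = "hello"                    # the variable to pass along
print("Content-Type: text/html\n")
print('<form action="page2.py" method="post">')
print('<input type="hidden" name="value" value="%s">' % cgi.escape(value, True))
print('<input type="submit"></form>')

# page2.py would read it back like this:
#   import cgi
#   form = cgi.FieldStorage()
#   value = form.getfirst("value", "")
```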
1 | 1 | 0 | 3 | 2 | 0 | 0.53705 | 0 | I do have the following problem. I'm writing a script which searches a folder for repositories, looks up the remotes on the net and pulls all new data into the repository, notifying me about new changes. The main idea is clear. I'm using python 2.7 on Windows 7 x64, using pygit2 to access the git features. The command-line supports the simple command "git pull 'origin'", but the git api is more complicated and I don't see the way. Okay, I came that far:
import pygit2
orepository=pygit2.Repository("path/to/repository/.git")
oremote=repo.remotes[0]
result=oremote.fetch()
This code retrieves the new objects and downloads it into the repository, but doesn't update the master branch or check the new data out. By inspecting the repository with TortoiseGit I see that nothing way checked out , even the new log messages don't appear when showing the log. I need to use the git pull command to refresh the repository and working copy at all. Now my question: What do I need to do to do all that by using pygit2? I mean, I download the changes by fetching them, but what do I need to do then? I want to update the master branch and working copy too...
Thank you in advance for helping me with my problem.
Best Regards. | 0 | python,git,fetch,pull,pygit2 | 2014-04-21T15:51:00.000 | 0 | 23,200,789 | Remote.fetch() does not update the files in the workdir because that's very far from its job. If you want to update the current branch and checkout those files, you need to also perform those steps, via Repository.create_reference() or Reference.target= depending on what data you have at the time, and then e.g. Repository.checkout_head() if you did decide to update.
git-pull is a script that performs very many different steps depending on the configuration and flags passed. When you're writing a tool to simulate it over multiple repositories, you need to figure out what it is that you want to do, rather than hoping everything is set up just so that git-pull won't surprise you. | 0 | 2,018 | false | 0 | 1 | pulling and integrating remote changes with pygit2 | 23,750,194 |
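Following those steps, a hedged fast-forward sketch (the branch names 'master'/'origin/master' are assumed, non-fast-forward cases such as merges are not handled, and exact pygit2 signatures have varied between versions):

```python
import pygit2

repo = pygit2.Repository("path/to/repository/.git")
repo.remotes[0].fetch()                      # download the new objects

remote_ref = repo.lookup_reference("refs/remotes/origin/master")
local_ref = repo.lookup_reference("refs/heads/master")

local_ref.target = remote_ref.target         # move master to the fetched commit
repo.checkout_head(strategy=pygit2.GIT_CHECKOUT_FORCE)  # refresh the workdir
```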
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I need to validate phone numbers and there is a very good python library that will do this. My stack however is Go and I'm really not looking forward to porting a very large library. Do you think it would be better to use the python library by running a shell command from within the Go codebase or by running a daemon that I then have to communicate with somehow? | 0 | python,linux,go | 2014-04-22T18:04:00.000 | 1 | 23,227,044 | Python, being an interpreted language, requires the system to load the interpreter each time a script is run from the command line. Also
On my particular system, after disk caching, it takes the system 20ms to execute a script with import string (which is plausible for your use case). If you're processing a lot information, and can't submit it all at once, you should consider setting up a daemon to avoid this kind of overhead.
On the other hand, a daemon is more complex to write and test, so you should probably see if a script suits your needs before optimizing prematurely.
There's no answer to your question that fits every possible case. Ultimately, you always have to try the performance with your data and in your system, | 0 | 206 | true | 0 | 1 | Run daemon server or shell command? | 23,227,250 |
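A minimal sketch of the daemon variant: a long-running Python co-process that reads one phone number per line on stdin and writes one float per line on stdout; the Go side spawns it once and talks over the pipes. score() is a placeholder standing in for the real validation-library call:

```python
import sys

def score(number):
    return float(len(number))     # placeholder computation

for line in sys.stdin:
    sys.stdout.write("%f\n" % score(line.strip()))
    sys.stdout.flush()            # flush so the caller sees each reply immediately
```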
2 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | This is a dumb question but please help me.
Q. How do I run a Python script that is saved on my local machine?
After vagrant up and vagrant ssh, I do not see any Python files in the VM. So what if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim.
How would you run a Python script in Vagrant ssh? | 0 | python,ssh,virtual-machine,vagrant,vagrantfile | 2014-04-22T23:35:00.000 | 1 | 23,232,172 | You have two options:
You can go the classic route of using the shell provisioner in your Vagrantfile:
config.vm.provision "shell", inline: $script
and in that script run the Python script.
All files are pushed to /tmp; you can possibly use this to run your Python script. | 0 | 1,918 | false | 0 | 1 | Run Python script in Vagrant | 29,586,100
2 | 2 | 0 | 2 | 1 | 0 | 1.2 | 0 | This is a dumb question but please help me.
Q. How do I run a Python script that is saved on my local machine?
After vagrant up and vagrant ssh, I do not see any Python files in the VM. So what if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim.
How would you run a Python script in Vagrant ssh? | 0 | python,ssh,virtual-machine,vagrant,vagrantfile | 2014-04-22T23:35:00.000 | 1 | 23,232,172 | On your guest OS there will be a folder under / called /vagrant/; it contains all the files and directories under the directory on your host machine that holds the Vagrantfile.
If you put your script in that folder, it will be shared with the VM (you can then run it inside the VM with, e.g., python /vagrant/myscript.py).
Additionally, if you are using Chef as your provisioner, you can use a script resource to run external scripts during the provisioning step. | 0 | 1,918 | true | 0 | 1 | Run Python script in Vagrant | 23,232,231
1 | 2 | 0 | -1 | 1 | 0 | -0.099668 | 0 | I'm learning to program in Python now in a course via Coursera website. We are using an environment called "CodeSkulptor" and mainly using a module called "SimpleGUI".
I was wondering if there's any way to get the module sources and attach them to Eclipse, so I can write Python code using this module in Eclipse instead of using CodeSkulptor all the time...
Thanks in advance | 0 | python,eclipse,codeskulptor | 2014-04-23T04:49:00.000 | 0 | 23,234,969 | It is not possible without getting the source of the library.
First of all, you should contact the developers and ask them to provide you a copy of the library "simplegui".
Furthermore, "Codeskulptor" is a tool which compile python and run it in the browser which make me think that simplegui is based on javascript. | 0 | 837 | false | 0 | 1 | How to use simplegui module when programming in python in eclipse? | 23,235,258 |
I wrote an application in VS2012 in Python, and I want to see the messages that are being sent to and received by the application.
When I open Wireshark I see a lot of messages go through.
Is there a way to focus Wireshark on only my application?
Thank you! | 0 | python,visual-studio-2012,wireshark | 2014-04-23T17:15:00.000 | 0 | 23,251,063 | If you know the port number used by the application you can filter by that port by putting tcp.port == 1234 in the filter toolbar. | 0 | 128 | false | 0 | 1 | WireShark messages | 23,254,680 |
1 | 1 | 0 | 0 | 4 | 1 | 0 | 0 | So my question today is about the translation process of Java. I understand the general translation process itself but I am not too sure how it applies to Java.
Where does the lexical analysis take place? When is the symbol table created? When does the syntax analysis happen, and how is the syntax tree created?
From what I have already researched and been able to understand, the Java source code is translated into platform-independent byte-code, which is then run by a JVM, or Java Virtual Machine. Is this when it undergoes lexical analysis?
I also know that after it is translated into byte-code it is translated into machine code, but I don't know how it progresses after that.
Last but not least, is the translation process of Java any different from that of C++ or Python? | 0 | java,python,c++,compilation,translation | 2014-04-24T01:28:00.000 | 0 | 23,258,176 | All of the translation process is done when you compile a Java program. This is no different than compiling a C++ program or any other compiled language. The biggest difference is that this translation is targeted at the Java byte-code language rather than assembly or machine language. The byte-code undergoes its own translation process (including many of the same stages) when the program is run. | 0 | 954 | false | 0 | 1 | What is the Translation Process of Java? | 23,258,361
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | When I logon to my company's computer with the AD username/password, I find that my Outlook will launch successfully. That means the AD authentication has passed.
In my opinion, outlook retrieves the AD user information, then sends it to an LDAP server to verify.
But I don't know how it retrieves the information, or whether it uses some other method. | 0 | python,ldap | 2014-04-24T07:41:00.000 | 0 | 23,262,767 | You are right, there is ongoing communication between your workstation and the Active Directory server, which can use the LDAP protocol.
Since I don't know what you have tried so far, I suggest that you look into the Python module python-ldap. I have used it in the past to connect to, query, and modify information on Active Directory servers. | 0 | 104 | false | 0 | 1 | How does auto-login Outlook successfully when in AD environment? | 23,263,099
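For the verification part, a short sketch with python-ldap (hypothetical server, account, and base DN): binding as the user checks the credentials, and a search then fetches the user's entry:

```python
import ldap

conn = ldap.initialize("ldap://ad.example.com")
conn.simple_bind_s("jdoe@example.com", "secret")   # raises on bad credentials

results = conn.search_s(
    "dc=example,dc=com",
    ldap.SCOPE_SUBTREE,
    "(sAMAccountName=jdoe)",
    ["displayName", "mail"],
)
print(results)
conn.unbind_s()
```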
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I have a list of 1,000,000 ints stored as a binary file. How do I load this quickly into a Python list? In C I would just read the file into a char array and cast that array as an int array. Is there a way to do something equivalent to this in Python? I know about Python's struct module, but as far as I can tell, that would require an extremely long format string to convert all the ints at once. | 0 | python,c,arrays,python-2.7,io | 2014-04-24T19:03:00.000 | 0 | 23,277,623 | struct.unpack('1000000I',f.read()) doesn't seem too long to me. – roippi | 0 | 152 | false | 0 | 1 | Binary file to python integer list | 28,697,765 |
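Spelled out, with the array module as the closer analogue of the C "cast a char buffer" idiom (assumes a file of 1,000,000 native-endian 4-byte ints named data.bin; adjust the format code if not):

```python
import struct
with open("data.bin", "rb") as f:
    values = list(struct.unpack("1000000i", f.read()))

# Equivalent with array, no long format string needed:
import array
a = array.array("i")               # signed 4-byte ints on most platforms
with open("data.bin", "rb") as f:
    a.fromfile(f, 1000000)
values = a.tolist()
```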
2 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 | I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering am I totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help. | 0 | python,amazon-web-services,amazon-ec2,pyramid | 2014-04-25T16:35:00.000 | 0 | 23,298,546 | I would suggest to run two instances and use Elastic Load Balancer.
Never run anything important on a single EC2 instance. EC2 instances are not durable; they can suddenly vanish, taking whatever data you had stored on them.
Everything else should work as in the Pyramid Cookbook description. | 0 | 1,592 | false | 1 | 1 | Deploying Pyramid application on AWS EC2 | 24,533,996
2 | 3 | 0 | 2 | 2 | 0 | 1.2 | 0 | I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering am I totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help. | 0 | python,amazon-web-services,amazon-ec2,pyramid | 2014-04-25T16:35:00.000 | 0 | 23,298,546 | Deploying to an EC2 server is just like deploying to any other Linux server.
If you want to put it behind a load balancer, you can do so; that is fully documented.
You can also deploy to Elastic Beanstalk. Whereas EC2 is a normal Linux server, Beanstalk is more like deploying to an environment: you just push all your git changes into an S3 repo, and your app then gets built and deployed onto Beanstalk.
That means no server setup and no configuration (other than the very basics), and all new changes you push to S3 get built and update each version of your app that may have been launched on Beanstalk.
You don't want to host your database server on EC2; use Amazon's RDS database service, which is dead simple and takes about two minutes to set up and configure.
As far as file storage goes, move everything to S3.
EC2 and Beanstalk should not be used for any form of storage. | 0 | 1,592 | true | 1 | 1 | Deploying Pyramid application on AWS EC2 | 23,324,088
1 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | How do I determine which type of account a person has when using the permissions api? I need to make a different decision if they have a pro account versus a standard business account. Thanks! | 0 | python,paypal | 2014-04-26T03:29:00.000 | 0 | 23,306,296 | I'm not aware of any way to see that via the API. That's typically something you'd leave up to the end-user to know when they're signing up. Ask them if they have Pro or not, and based on that, setup your permissions request accordingly. | 0 | 124 | false | 0 | 1 | PayPal Classic APIs determine account type | 23,308,018 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | I am working on a small project which involves displaying and recording (for later processing) data received through a serial port connection from some sort of measurement device. I am using a Raspberry Pi to read and store the received information: this is done with a small program written in Python which opens the serial device, reads a frame and stores the data in a MySQL database (there is no need to poll or interact with the device, data is sent automatically).
The serial data is formatted into frames about 2.5kbits long, which are sent repeatedly at 1200baud, which means that a new frame is received about every 2 seconds.
Now, even though the useful data is just a portion of the frame, that is way too much information to store for what I need, so what I'm currently doing is "downsampling" the data by reading a frame only once per minute. Currently this is done via a cron task which calls my logging script every minute.
The problem with my setup is that the PHP webpage used to display (and process) the received data (pulled from the MySQL database) cannot show new data more than once per minute.
Thus here come my question:
How would you do to make the webpage show the live data (which doesn't need to be saved), while keeping the logging to the MySQL database @ once per minute?
I guess the solution would involve some sort of daemon, which stores the data at the specified frequency (once per minute), while keeping the latest received data available for the php webpage (how?). What do you think? Do you have any examples of similar code/applications which I could use as a starting point?
Thanks! | 0 | php,python,mysql,logging,serial-port | 2014-04-26T14:20:00.000 | 0 | 23,312,182 | I don't know if I understand your problem correctly, but it appears you want to show a non-stop “stream” of data with your PHP script. If that's the case, I'm afraid this won't be so easy.
The basic idea of the HTTP protocol is request/response based. Your browser sends a request and receives a (static) response.
You could build some sort of “streaming” server, but streaming (such as done by youtube.com) is also not much more than periodically sending static chunks of a video file, and the player re-assembles them into a video or audio “stream”.
You could, however, look into concepts like "web sockets" and "long polling". For example, you could create a long-running PHP script that reads a certain file once every two seconds and outputs the value. (Remember to use flush(), or output will be buffered.)
A smart solution could even output a JavaScript snippet every two seconds, which again would update some sort of <div> container displaying charts and what not.
There are for example implementations of progress meters implemented with this type of approach. | 0 | 1,126 | false | 1 | 1 | Receiving serial port data: real-time web display + logging (with downsampling) | 25,128,746 |
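On the Python side, the decoupling asked about can be a single reader loop that always refreshes a "latest value" file (which the web page polls) and only logs to MySQL once per minute. A sketch, with read_frame(), parse(), and store_in_mysql() as stubs standing in for the pieces described in the question:

```python
import json
import time

def read_frame():                 # stub: the blocking serial read (~every 2 s)
    return "raw-frame"

def parse(frame):                 # stub: the hex -> dec decoding
    return {"value": 0}

def store_in_mysql(data):         # stub: the once-per-minute INSERT
    pass

latest_path = "/tmp/latest_frame.json"   # hypothetical file the web page polls
last_logged = 0

while True:
    data = parse(read_frame())

    with open(latest_path, "w") as f:    # always keep the live value fresh
        json.dump(data, f)

    if time.time() - last_logged >= 60:  # downsample: persist once per minute
        store_in_mysql(data)
        last_logged = time.time()
```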
1 | 3 | 0 | 0 | 3 | 0 | 0 | 0 | I'm working on converting an existing Drupal site to Pyramid. The Drupal site has urls that are SEO friendly example: "testsite.com/this-is-a-page-about-programming". In Drupal they have a system which maps that alias to a path like "testsite.com/node/33" without redirecting the user to that path. So the user sees "testsite.com/this-is-a-page-about-programming" but Drupal loads node/33 internally. Also if the user lands on "testsite.com/node/33" they would be redirected to "testsite.com/this-is-a-page-about-programming".
How can this be achieved in Pyramid without a major performance hit? | 0 | python,url-rewriting,pyramid | 2014-04-26T18:10:00.000 | 0 | 23,314,745 | mod_rewrite is a webserver module that is independent of the framework your application uses. If it is configured on the server, it should operate the same regardless of whether you are using Drupal or Pyramid. Since the module is the same for each framework, the overhead is precisely the same in both cases. | 0 | 801 | false | 1 | 1 | How to mimic the url aliasing functionality of mod_rewrite with Pyramid (Python Framework)? | 23,315,196 |
I would like to know how ECMP and hash mapping are used in load balancing or routing of a TCP packet. Any help with links, examples, or papers would be really useful. Sorry for the inconvenience, as I am completely new to this type of scenario.
Thanks for your time and consideration. | 0 | python,hash,routing | 2014-04-27T03:43:00.000 | 1 | 23,319,138 | Typical algorithms split the traffic into semi-even groups across N links, where N is the number of ECMP links. So if packet sizes differ, or if some "streams" have more packets than others, the overall traffic rates will not be even; some algorithms compensate for this. Breaking up or moving a stream is bad (for many reasons). ECMP can be tiered -- at layers 1, 2, 3, and above, or at different physical points. Typically, the source and destination IP addresses plus protocol/port are used to define each stream; sometimes this is configurable. Publishing the details can create DoS or IP (Intellectual Property) vulnerabilities. Using the same algorithm at different "tiers" with certain numbers of links at each tier can lead to "polarization" (some links getting no traffic); to address this, a configurable or random input can be added to the algorithm. BGP ECMP requires the IGP cost to be the same, else routing loops can happen (see Cisco's documentation). Multicast adds more issues. There are three basic types. This is a deep subject. | 0 | 460 | false | 0 | 1 | Hash Mapping and ECMP | 69,093,482
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am getting an error
unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1
ordinal not in range(128)
while performing the below mentioned operation.
I have a program that reads files from a remote machine (Ubuntu) using the grep and cat commands to fetch values, and stores each value in a variable via a Robot Framework built-in keyword (export command) from the client.
Following are the versions I am using:
Robot Framework: 2.8.11
Ride: 0.55
Putty: 0.63
Pyhton: 2.7.3
I am doing an SSH session on a Linux machine, and on that machine there is a file whose data contains accented characters, e.g. Õ Ü Ô Ý.
While reading the text from the file containing accented characters using the 'grep' and 'cat' commands, I am facing this issue:
unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1
ordinal not in range(128)
Thank you. | 0 | python,unix,wxpython,robotframework | 2014-04-28T06:07:00.000 | 1 | 23,333,669 | I think the problem is that the file contains UTF-8, not ASCII. Robot Framework appears to be expecting ASCII text. ASCII text only contains values in the range 0-127; when the ascii codec sees a byte 0xC3, it throws an error. (If the text were using the Western European Windows 8-bit encoding, 0xC3 would be Ã. If it were using the MacOS encoding, 0xC3 would be ∑. In fact, it is the first of two bytes which define a single character in the range of most of the interesting accented characters.)
Somehow, you need to teach Robot Framework to use the correct encoding. | 0 | 992 | false | 0 | 1 | unicodedecodeerror 'ascii' codec error in wxPython | 25,013,664 |
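In Python 2 the usual fix is to decode the raw bytes explicitly instead of letting the implicit ascii codec run. For example, 0xC3 0x95 is the UTF-8 encoding of Õ:

```python
raw = "\xc3\x95 \xc3\x9c"        # bytes as read from the remote file
text = raw.decode("utf-8")       # -> u'\xd5 \xdc', i.e. u'Õ Ü'
print(text.encode("utf-8"))      # safe to print on a UTF-8 terminal
```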
I have some C# code that needs to call a Python script several thousand times, each time passing a string, and then expecting a float back. The python script can be run using ANY version of Python, so I cannot use Iron python. It's been recommended that I use IPC named pipes. I have no experience with this, and am having trouble figuring out how to do this between C# and Python. Is this a simple process, or am I looking at a decent amount of work? Is this the best way to solve my problem? | 0 | c#,python,ipc | 2014-04-29T20:42:00.000 | 0 | 23,374,854 | Based on what you have said, you can connect to the Python process and capture its standard output text. Easy, fast, and reliable! | 0 | 9,440 | false | 0 | 1 | Simplest way to communicate between Python and C# using IPC? | 62,980,335
I am developing a TCP/IP server whose purpose is to receive packets from clients, parse them, do some computation (on the data arriving in each packet), and store the results in a database. Till now, everything was being done by a single server application written using Twisted Python. Now I have come across RabbitMQ, so my question is whether it is possible, and whether it will lead to better performance, if my Twisted server application just receives the packets from clients and passes them to another C++ application using RabbitMQ. The C++ application will in turn parse packets, do the computation on them, etc. Everything will be done on a single server. | 0 | python,rabbitmq,twisted | 2014-04-30T07:20:00.000 | 1 | 23,381,990 | If your server does not receive packets often, it will not improve much - you only gain some tiny overhead on inter-server communication. Still, it is a very good design idea, because it scales well, and once you finally get many packets you can just add an instance of the data-processing server. | 0 | 545 | true | 0 | 1 | Using rabbitmq with twisted | 23,382,374
I'd like to output a series of dircmp report_full_disclosure()'s to a text file. However, the report_full_disclosure() format is one blob of text; it doesn't play nicely with file.write(comparison_object.report_full_disclosure()), because file.write() expects a single line to write to the file.
I've tried iterating through the report_full_disclosure() report, but it doesn't work either. Has anyone else had this particular problem before? Is there a different method to write out to files? | 0 | python,file-comparison | 2014-04-30T18:40:00.000 | 0 | 23,395,679 | The "report generating" methods of filecmp.dircmp don't accept a file object; they just use the print statement (or, in the Python 3 version, the print() function).
You could create a subclass of filecmp.dircmp which accepts a file argument to the methods report, report_full_closure and report_partial_closure (if needed), at each print ... site writing print >>dest, .... Where report_*_closure calls itself recursively, pass the dest argument down to the recursive call.
The lack of ability to print output to a specific file seems to me to be an oversight, so having added an optional file argument to these methods and tested it thoroughly you may wish to offer your contribution to the Python project.
If your program is single threaded, you could temporarily replace sys.stdout with your destination file before calling the report methods. But this is a dirty and fragile method, and it is probably foolish to imagine your program will be forever single threaded. | 0 | 1,224 | false | 0 | 1 | How do write filecmp's report_full_disclosure() to a text file? | 23,396,192 |
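The stdout-swap version of that, sketched for Python 2.7 (directory names are hypothetical; filecmp.dircmp's report methods print to sys.stdout):

```python
import filecmp
import sys

comparison = filecmp.dircmp("dir_a", "dir_b")

out = open("report.txt", "w")
old_stdout = sys.stdout
sys.stdout = out                  # capture everything the report prints
try:
    comparison.report_full_closure()
finally:
    sys.stdout = old_stdout       # always restore, even on error
    out.close()
```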
1 | 2 | 0 | 7 | 8 | 1 | 1 | 0 | This may be really obvious but just wanted to make sure I understand what the columns are in runsnakerun.
Name, Calls, RCalls, Local, /Call, Cum, /Call, File, Line, Directory
Here are some that I think I understand
Name - name of function being called
Calls - number of calls?
File - file where the function is stored
Line - Line in File where the function is defined
Directory - directory of file with function definition
The ones I don't feel comfortable venturing a guess on are the rest:
RCalls
Local
/Call
Cum
/Call
Thanks | 0 | python,profiling | 2014-05-02T02:46:00.000 | 0 | 23,419,967 | Here is my understanding:
RCalls: number of recursive calls
Local: total time spent on local execution (without calling another method)
/Call: local time per call
Cum: total cumulative time
/Call: cumulative time per call | 0 | 1,257 | false | 0 | 1 | Python profiling - What are the columns in runsnakerun output? | 24,132,022
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have a ZMQ server listening on port 12345 TCP. When another server connects on that port locally or via VM it works fine, but if I try from a remote server that has to go through port forwarding on my Fios firewall it just bombs. The packets are showing up in Wireshark but ZMQ just ignores them. Is there anyway to get past this? | 0 | python,zeromq | 2014-05-04T07:16:00.000 | 0 | 23,453,650 | You shouldn't be able to bind more than once to the same port number, either from the same process or another.
ZMQ should give a failure when you issue bind with a port number already in use. Are you checking return codes? | 0 | 144 | false | 0 | 1 | Incoming ZeroMQ traffic dropped by server due to NAT? | 23,468,773 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I am using Python-LDAP module to interact with my LDAP server. How can I remove an objectClass from an entry using python-ldap? When I generated a modlist with modlist.modifyModlist({'objectClass':'inetLocalMailRecipient},{'objectClass' : ''}) , it just generates (1, 'objectClass', None) which obviously doesn't seem correct. What am I doing wrong here? I want to remove one objectClass from a given entry using python ldap. | 0 | python,ldap,python-ldap | 2014-05-04T07:27:00.000 | 0 | 23,453,735 | As stated by Guntram Blohm, it is not possible to delete object classes on existing objects because doing that would invalidate the schema checks that the server did when creating the object. So the way to do it will be to delete the object and create a new one. This is a property of the server and the client libraries cannot do anything about it. | 0 | 615 | true | 0 | 1 | How to remove an objectClass from an entry using python-ldap? | 23,841,869 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have created a web-app using Python Flask framework on Raspberry Pi running Raspbian. I want to control the hardware and trigger some sudo tasks on the Pi through web.
The Flask based server runs in non-sudo mode listening to port 8080. When a web client sends request through HTTP, I want to start a subprocess with sudo privileges. (for ex. trigger changes on gpio pins, turn on camera etc.). What is the best practice for implementing this kind of behavior?
The webserver can ask for sudo password to the client, which can be used to raise the privileges. I want some pointers on how to achieve this. | 0 | python,linux,web,flask,raspberry-pi | 2014-05-04T09:16:00.000 | 0 | 23,454,521 | Best practice is to never do this kind of thing. If you are giving sudo access to your pi from internet and then executing user input you are giving everyone in the internet the possibility of executing arbitrary commands in your system. I understand that this is probably your pet project, but still imagine someone getting access to your computer and turning camera when you don't really expect it. | 0 | 166 | false | 1 | 1 | How to start a privileged process through web on the server? | 23,454,864 |
1 | 3 | 0 | 2 | 5 | 1 | 0.132549 | 0 | I am making a programming framework (based on Django) that is intended for students with limited programming experience. Students are supposed to inherit from my base classes (which themselves are inherited from Django models, forms, and views).
I am testing this out now with some students, and the problem is that when they write code in their IDE (most of them are using PyCharm), autocomplete gives them a ton of suggestions, since there are so many inherited methods and attributes, 90% of which are not relevant to them.
Is there some way to hide these inherited members? At the moment I am primarily thinking of how to hide them in auto-complete (in PyCharm and other IDEs). They can (and probably should) still work if called, but just not show up in places like auto-complete.
I tried setting __dict__, but that did not affect what showed up in autocomplete. Another idea I have is to use composition instead of inheritance, though I would have to think this through in more detail.
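For illustration, here is a rough sketch of what I mean by composition (all names here are made up):

    class SimpleModel(object):
        # Hold the full-featured object as an attribute instead of
        # inheriting from it, so only these curated methods are visible
        # to students in autocomplete.
        def __init__(self, wrapped):
            self._wrapped = wrapped  # e.g. a Django model instance

        def save(self):
            return self._wrapped.save()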
Edit: This framework is not being used in CS classes; rather, students will be using it to build apps for a non-CS domain. So my priority is to keep it as simple as possible, perhaps even if it's not a "pure" approach. (Nevertheless, I am considering those arguments as they do have merit.) | 0 | python,pycharm | 2014-05-04T14:43:00.000 | 0 | 23,457,532 | I'd suggest using composition instead of inheritance. Then you design the class's interface and decide which methods are available. | 0 | 1,650 | false | 1 | 1 | In Python, can I hide a base class's members? | 23,457,554
1 | 1 | 0 | 2 | 0 | 1 | 0.379949 | 0 | I am learning how to run Python through C++ and I'm having a hard time getting a handle on things. Is there a way to output the code that is generated by the various PyObjects? I'm not very experienced with embedding, so the documentation went a bit over my head. | 0 | python,c++ | 2014-05-04T23:39:00.000 | 0 | 23,462,833 | No, the PyObjects don't generate code -- each one is just a C struct that holds the information of that particular Python object. You can inspect them in your C++ debugger.
I'm experienced with Python embedding and the docs seem clear; I don't really see what your problem is. | 0 | 35 | false | 0 | 1 | Debugging Python extended on C++ | 23,462,868
1 | 1 | 0 | 4 | 3 | 0 | 1.2 | 0 | I have a long-running daily cron job on OpenShift. It takes a couple of hours to run. I've added nohup and I'm running it in the background. It still seems to time out at the default 5 minutes (it works appropriately up to that point). I'm receiving no errors and it works perfectly fine locally.
nohup python ${OPENSHIFT_REPO_DIR}wsgi/manage.py do_something >> \
${OPENSHIFT_DATA_DIR}do_something_data.log 2> \
${OPENSHIFT_DATA_DIR}do_something_error.log &
Any suggestions are appreciated. | 0 | python,cron,flask,openshift,nohup | 2014-05-15T12:30:00.000 | 0 | 23,678,292 | I'm lazy. Cut and paste :)
I have been told 5 minutes is the limit for the free accounts. That includes all background processes. I asked a similar question here on SO. | 0 | 629 | true | 1 | 1 | Long-running Openshift Cron | 23,485,693 |
3 | 3 | 1 | 1 | 4 | 0 | 0.066568 | 0 | I'm implementing an algorithm into my Python web application, and it includes doing some (possibly) large clustering and matrix calculations. I've seen that Python can use C/C++ libraries, and thought that it might be a good idea to utilize this to speed things up.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance against connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | 0 | python,c++,mysql,c | 2014-05-06T10:04:00.000 | 0 | 23,491,608 | Not the answer you expected, but I have been down that road and advise KISS:
First make it work in the most simple way possible.
Only then look into speeding things up / complicating the design later.
There are lots of other ways to phrase this, such as "do not fix hypothetical problems unless resources are unlimited".
3 | 3 | 1 | 1 | 4 | 0 | 1.2 | 0 | I'm implementing an algorithm into my Python web application, and it includes doing some (possibly) large clustering and matrix calculations. I've seen that Python can use C/C++ libraries, and thought that it might be a good idea to utilize this to speed things up.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance against connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | 0 | python,c++,mysql,c | 2014-05-06T10:04:00.000 | 0 | 23,491,608 | Use the ecosystem.
For matrices, using numpy and scipy can provide approximately the same range of functionality as tools like Matlab. If you learn to write idiomatic code with these modules, the inner loops can take place in the C or FORTRAN implementations of the modules, resulting in C-like overall performance with Python expressiveness for most tasks. You may also be interested in numexpr, which can further accelerate and in some cases parallelize numpy/scipy expressions.
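As a minimal illustration of this idiom (my own sketch; the arrays and formula are arbitrary):

    import numpy as np

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)
    # Elementwise arithmetic runs inside numpy's compiled loops,
    # with no Python-level loop:
    c = np.sqrt(a * a + b * b)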
If you must write compute-intensive inner loops in Python, think hard about it first. Maybe you can reformulate the problem in a way more suited to numpy/scipy. Or maybe you can use data structures available in Python to come up with a better algorithm rather than a faster implementation of the same algorithm. If not, there's Cython, which compiles a Python-like language (optionally annotated with static types) to machine code.
Only as a last resort, and after profiling to identify the absolute worst bottlenecks, should you consider writing an extension module in C/C++. There are just so many easier ways to meet the vast majority of performance requirements, and numeric/mathematical code is an area with very good existing library support. | 0 | 1,084 | true | 0 | 1 | Using C/C++ for heavy calculations in Python (Also MySQL) | 23,497,013 |
3 | 3 | 1 | 1 | 4 | 0 | 0.066568 | 0 | I'm implementing an algorithm into my Python web application, and it includes doing some (possibly) large clustering and matrix calculations. I've seen that Python can use C/C++ libraries, and thought that it might be a good idea to utilize this to speed things up.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance against connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | 0 | python,c++,mysql,c | 2014-05-06T10:04:00.000 | 0 | 23,491,608 | Cython's support for C++ is much better than it used to be. You can use most of the standard library in Cython seamlessly. There are up to 500x speedups in the extreme best case.
My experience is that it is best to keep the Cython code extremely thin and forward all arguments to C++. It is much easier to debug C++ directly, and the syntax is better understood. Having to maintain a code base unnecessarily in three different languages is a pain.
Using C++/Cython means that you have to spend a little time thinking about ownership issues, i.e. it is often safest not to allocate anything in C++ but to prepare the memory in Python/Cython (use array.array or numpy.array). Alternatively, make a C++ object wrapped in Cython which has a deallocation function. All this means that your application will be more fragile than if it were written only in Python or C++: you are abandoning both RAII and GC.
On the other hand, your Python code should translate line for line into modern C++. So this reminds you not to use old-fashioned new or delete etc. in your new C++ code, but to make things fast and clean by keeping the abstractions at a high level.
Remember too to re-examine the assumptions behind your original algorithmic choices. What is sensible for Python might be foolish for C++.
Finally, Python makes everything significantly simpler, cleaner, and faster to debug than C++. But in many ways, C++ encourages more powerful abstractions and better separation of concerns.
When you program with Python and Cython and C++, it slowly comes to feel like taking the worst bits of both approaches. It might be worth biting the bullet and rewriting completely in C++. You can keep the Python test harness and use the original design as a prototype / testbed. | 0 | 1,084 | false | 0 | 1 | Using C/C++ for heavy calculations in Python (Also MySQL) | 23,497,054
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I realise this question may already exist, but the answers I've found haven't worked and I have a slightly different setup.
I have a python file /home/pi/python_games/frontend.py that I am trying to start when lxde loads by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart.
It doesn't run and there are no error messages.
When trying to run python /home/pi/python_games/frontend.py, Python complains about not being able to find the files that are loaded using relative paths, e.g. /home/pi/python_games/image.png is referenced as image.png. Obviously one solution would be to give these resources absolute paths, but the Python program also calls other Python programs in its directory that also use relative paths, and I don't want to go changing all of them.
Anyone got any ideas?
Thanks
Tom | 0 | python,path,absolute-path | 2014-05-06T10:48:00.000 | 1 | 23,492,589 | You could change your current working directory inside the script, before any relative paths are used, with os.chdir("/absolute/path/to/where/your/script/lives"). | 0 | 751 | true | 0 | 1 | Starting a python script on boot (startx) with an absolute path, in which there are relative paths | 23,492,856
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I realise this question may already exist, but the answers I've found haven't worked and I have a slightly different setup.
I have a python file /home/pi/python_games/frontend.py that I am trying to start when lxde loads by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart.
It doesn't run and there are no error messages.
When trying to run python /home/pi/python_games/frontend.py, Python complains about not being able to find the files that are loaded using relative paths, e.g. /home/pi/python_games/image.png is referenced as image.png. Obviously one solution would be to give these resources absolute paths, but the Python program also calls other Python programs in its directory that also use relative paths, and I don't want to go changing all of them.
Anyone got any ideas?
Thanks
Tom | 0 | python,path,absolute-path | 2014-05-06T10:48:00.000 | 1 | 23,492,589 | Rather than change your current working directory, in your frontend.py script you could use the value of the predefined __file__ module attribute, which will be the absolute pathname of the script file, to determine absolute paths to the other files in the same directory.
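A minimal sketch of that idea (reusing image.png from the question):

    import os

    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    image_path = os.path.join(BASE_DIR, 'image.png')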
Functions in the os.path module, such as split() and join(), will make doing this fairly easy. | 0 | 751 | false | 0 | 1 | Starting a python script on boot (startx) with an absolute path, in which there are relative paths | 23,496,772
1 | 4 | 0 | 3 | 7 | 0 | 0.148885 | 0 | I can test the rank of a matrix using np.linalg.matrix_rank(A). But how can I efficiently test whether all the rows of A are orthogonal?
I could take all pairs of rows and compute the inner product between them but is there a better way?
My matrix has fewer rows than columns and the rows are not unit vectors. | 0 | python,math,numpy,scipy | 2014-05-06T19:49:00.000 | 0 | 23,503,667 | Approach #3: Compute the QR decomposition of A^T
In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R upper triangular. The columns of Q corresponding to non-zero diagonal entries of R form an orthonormal basis of the range space.
If the columns of X = A^T, i.e., the rows of A, already are orthogonal, then the QR decomposition will necessarily have the R factor diagonal, where the diagonal entries are plus or minus the lengths of the columns of X, i.e., of the rows of A.
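A rough numpy sketch of this check (my addition; the tolerance is an arbitrary choice):

    import numpy as np

    def rows_orthogonal(A, tol=1e-12):
        Q, R = np.linalg.qr(A.T)
        # The rows of A are orthogonal iff R has no off-diagonal entries.
        off_diag = R - np.diag(np.diag(R))
        return np.allclose(off_diag, 0, atol=tol)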
Common folklore has it that this approach is numerically better behaved than the computation of the product A*A^T = R^T*R. This may only matter for larger matrices. The computation is not as straightforward as the matrix product; however, the number of operations is of the same order. | 1 | 7,018 | false | 0 | 1 | How to detect if all the rows of a non-square matrix are orthogonal in python | 23,552,362
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have two questions about using a crontab file:
1. I am using a service. When it runs, a new log file is created every day in a log directory. I want to delete all files in that log directory that are older than 5 days.
2. I want to delete all the information older than 5 days in a log file (/var/log/syslog).
I don't know how to do that with crontab in Linux. Please help me! Thanks in advance! | 0 | python,linux | 2014-05-07T10:26:00.000 | 1 | 23,515,224 | If you are using logrotate for log rotation, then it has options to remove old files; if not, you could run something as simple as this once a day in your cron:
find /path/to/log/folder -mtime +5 -type f -exec rm {} \;
Or, to be more specific, match a pattern in the filename (note the quoting, so the shell does not expand the glob; ls -l here just lists the matches as a dry run, swap in rm to delete):
find . -mtime +5 -type f -name '*.log' -exec ls -l {} \;
Why not set up logrotate for syslog to rotate daily, then use its options to remove anything older than 5 days?
Other options involve parsing the log file, keeping certain parts and removing others, which involves writing to another file and back; with live log files this can cause other issues, such as needing to restart the service so it logs back into the files. So the best option would be logrotate for the syslog. | 0 | 374 | true | 0 | 1 | How to delete some file with crontab in linux | 23,515,529
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm running into a few issues on my Emacs + Org mode + Python setup. I thought I'd put this out there to see if the community had any suggestions.
Virtualenv:
I'm trying to execute a python script within a SRC block using a virtual environment instead of my system's python implementation. I have a number of libraries in this virtual environment that I don't have on my system's python (e.g. Matplotlib). Now, I set python-shell-virtualenv-path to my virtualenv's root directory. When I run M-x run-python the shell runs from my virtual environment. That is, I can import Matplotlib with no problems. But when I import Matplotlib within a SRC block I get an import error.
How can I have it so the SRC block uses the python in my virtual
environment and not my system's python?
Is there any way I can set
the path to a given virtual environment automatically when I load an
org file?
HTML5 Export:
I'm trying to export my org-files in 'html5', as opposed to the default 'xhtml-strict'. The manual says to set org-html-html5-fancy to t. I tried searching for org-html-html5-fancy in M-x org-customize but I couldn't find it. I tried adding (setq org-html-html5-fancy t) to my init.el, but nothing happened. I'm not at all proficient in emacs-lisp so my syntax may be wrong. The manual also says I can set html5-fancy in an options line. I'm not really sure how to do this. I tried #+OPTIONS html5-fancy: t but it didn't do anything.
How can I export to 'html5' instead of 'xhtml-strict' in org version
7.9.3f and Emacs version 24.3.1?
Is there any way I can view and customize the back-end that parses
the org file to produce the html?
I appreciate any help you can offer. | 0 | python,emacs,virtualenv,org-mode | 2014-05-08T02:09:00.000 | 0 | 23,531,555 | Reads like a bug, please consider reporting it at [email protected]
As a workaround try setting the virtualenv at the Python-side, i.e. give PYTHONPATH as argument.
Alternatively, mark the source-block as region and execute it the common way, surpassing org | 0 | 510 | false | 1 | 1 | Run python from virtualenv in org file & HTML5 export in org v.7.9.3 | 23,557,258 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | How can I see if a file-like object is in universal newline mode or not (or any details related to that)?
Both Python 2 and/or 3 answers are okay.
Hint: No, the newlines attribute does not reflect this. It is always there when the Python interpreter has universal newlines support. | 0 | python,file-io,newline | 2014-05-08T06:42:00.000 | 0 | 23,534,594 | The mode attribute is left intact when opening files. I expect you could check if newlines exists and mode contains 'U'. Other translations are indicated by encoding. | 0 | 54 | false | 0 | 1 | How to see whether a file was opened in universal newlines mode? (Or whatever newline translations it is doing?) | 23,535,129 |
1 | 1 | 0 | 3 | 0 | 1 | 1.2 | 0 | I use Ruby on a daily basis and know it is a purely object oriented language. As far as I know, pure object oriented languages' distinguishable characteristic is that all variables are objects, even ints, floats, chars, etc that would be found as primitive types in other languages like Java.
Is Python the same way? I always knew Python as a general purpose object oriented/functional/procedural language that is also good for scripting, but I never thought that it could be purely OO.
Anyone have any explanations? | 0 | python | 2014-05-08T21:11:00.000 | 0 | 23,552,613 | Yes, all values in Python are objects, including integers, floats, functions, classes, and None. I've never heard it described as a "Pure" Object-oriented language, but it seems to meet your description of one. | 0 | 5,949 | true | 0 | 1 | Is Python a pure object-oriented language | 23,552,656 |
2 | 5 | 0 | 0 | 7 | 1 | 0 | 0 | I've noticed a few Python packages that use config files written in Python. Apart from the obvious privilege escalation, what are the pros and cons of this approach?
Is there much of a precedence for this? Are there any guides as to the best way to implement this?
Just to clarify: In my particular use case, this will only be used by programmers or people who know what they're doing. It's not a config file in a piece of software that will be distributed to end users. | 0 | python,configuration-files | 2014-05-10T23:54:00.000 | 0 | 23,587,542 | I've done this frequently in company-internal tools and games. The primary reason is simplicity: you just import the file and don't need to care about formats or parsers. Usually it has been exactly what @zmo said: constants meant for non-programmers on the team to modify (say, the size of the grid of the game level, or the display resolution).
Sometimes it has been useful to be able to have logic in the configuration, for example alternative functions that populate the initial configuration of the board in the game. I've found this a great advantage, actually.
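A tiny made-up example of the pattern:

    # config.py -- edited directly by the team
    GRID_SIZE = (40, 30)
    RESOLUTION = (1920, 1080)

    def initial_board(width, height):
        # Logic is allowed too, e.g. an alternative board populator.
        return [[0] * width for _ in range(height)]

Elsewhere you simply import config and read config.GRID_SIZE or call config.initial_board(...).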
I acknowledge that this could lead to hard-to-debug problems. Perhaps in these cases those modules have been more like game-level init modules than typical config files. Anyhow, I've been really happy with this straightforward way to make clear textual config files with the ability to have logic there too, and I haven't gotten bitten by it. | 0 | 1,283 | false | 0 | 1 | Using config files written in Python | 23,587,747
2 | 5 | 0 | 0 | 7 | 1 | 0 | 0 | I've noticed a few Python packages that use config files written in Python. Apart from the obvious privilege escalation, what are the pros and cons of this approach?
Is there much of a precedence for this? Are there any guides as to the best way to implement this?
Just to clarify: In my particular use case, this will only be used by programmers or people who know what they're doing. It's not a config file in a piece of software that will be distributed to end users. | 0 | python,configuration-files | 2014-05-10T23:54:00.000 | 0 | 23,587,542 | This is yet another config file option. There are several quite adequate config file formats available.
Please take a moment to understand the viewpoint of the system administrator, or of a 3rd-party vendor supporting your product. Faced with yet another config file format, they might drop your product. If you have a product that is of monumental importance, then people will go through the hassle of learning the syntax just to read your config file (like X.org or Apache).
If you plan on another programming language accessing or writing the config file info, then a Python-based config file would be a bad idea. | 0 | 1,283 | false | 0 | 1 | Using config files written in Python | 23,587,749
3 | 3 | 0 | 4 | 0 | 0 | 1.2 | 0 | I was wondering why scripting languages (Python, PHP, Ruby, Perl) don't have pointers like C/C++, Objective-C, and so on? | 0 | php,python,ruby,perl,pointers | 2014-05-13T00:16:00.000 | 0 | 23,620,930 | Because pointers, while very versatile, are a pain-in-the-ass source of bugs. The whole point of higher-level languages is abstraction of dangerous or verbose constructs into safer and shorter ones: you trade power for ease of development. Thus, for example, arrays in dynamic languages all know how to allocate themselves, free themselves, and even resize themselves, so the programmer does not need to worry about it (and can't mess it up). It is the same reason why we don't normally program in assembly unless we really really want to control every cycle of the processor: too verbose, too easy to make a mistake (which is why C/C++, Objective-C and so on exist in the first place). Dynamic languages are a step further in the same direction. | 0 | 370 | true | 0 | 1 | Why Scripting (Dynamic) Languages Don't Have Pointers? | 23,620,955
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I was wondering why scripting languages (Python, PHP, Ruby, Perl) don't have pointers like C/C++, Objective-C, and so on? | 0 | php,python,ruby,perl,pointers | 2014-05-13T00:16:00.000 | 0 | 23,620,930 | A major difference between the two categories is that scripting languages don't necessarily deal with things like, say, memory in a direct manner. In your C languages you have memory and have to allocate and manage it. In PHP, however, you don't generally manage your memory directly (and in most cases, the memory usage is transparent to the programmer). The underlying software does this for you. So it's entirely possible to write software without knowing a thing about machine-level code, malloc, etc. | 0 | 370 | false | 0 | 1 | Why Scripting (Dynamic) Languages Don't Have Pointers? | 23,620,990
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I was wondering why scripting languages (Python, PHP, Ruby, Perl) don't have pointers like C/C++, Objective-C, and so on? | 0 | php,python,ruby,perl,pointers | 2014-05-13T00:16:00.000 | 0 | 23,620,930 | The question is too general to be answered on Stack Overflow.
But the short answer is: scripting languages try to be as hardware-independent as they can, and that includes having no knowledge of RAM structure and content. | 0 | 370 | false | 0 | 1 | Why Scripting (Dynamic) Languages Don't Have Pointers? | 23,620,951
5 | 5 | 1 | 5 | 28 | 0 | 0.197375 | 0 | I have a static library I created from C++, and would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library in Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | 0 | python,c++,unit-testing | 2014-05-13T04:40:00.000 | 0 | 23,622,923 | It really depends on what it is you are trying to test. It almost always makes sense to write unit tests in the same language as the code you are testing so that you can construct the objects under test or invoke the functions under test, both of which can be most easily done in the same language, and verify that they work correctly. There are, however, cases in which it makes sense to use a different language, namely:
Integration tests that run a number of different components or applications together.
Tests that verify compilation or interpretation failures which could not be tested in the language itself, since you are validating that an error occurs at the language level.
An example of #1 might be a program that starts up multiple different servers connected to each other, issues requests to the server, and verifies those responses. Or, as a simpler example, a program that simply forks an application under test as a subprocess and verifies that it produces the expected outputs for a given input.
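A minimal Python sketch of that simpler subprocess style (the program name, argument, and expected output are placeholders; subprocess.run with capture_output needs Python 3.7+):

    import subprocess

    def test_roundtrip():
        result = subprocess.run(
            ['./app_under_test', '--input', '42'],
            capture_output=True, text=True, timeout=30,
        )
        assert result.returncode == 0
        assert result.stdout.strip() == 'expected output'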
An example of #2 might be a program that verifies that a certain piece of C++ code will produce a static assertion failure or that a particular template instantiation which is intentionally disallowed will result in a compilation failure if someone attempts to use it.
To answer your larger question, it is not bad practice per se to write tests in a different language. Whatever makes the tests more convenient to write, easier to understand, more robust to changes in implementation, more sensitive to regressions, and better on any one of the properties that define good testing would be a good justification to write the tests one way vs another. If that means writing the tests in another language, then go for it. That being said, small unit tests typically need to be able to invoke the item under test directly which, in most cases, means writing the unit tests in the same language as the component under test. | 0 | 4,198 | false | 0 | 1 | Is it acceptable practice to unit-test a program in a different language? | 23,623,089
5 | 5 | 1 | 4 | 28 | 0 | 0.158649 | 0 | I have a static library I created from C++, and would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library in Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | 0 | python,c++,unit-testing | 2014-05-13T04:40:00.000 | 0 | 23,622,923 | Why not? It's an awesome idea, because you really understand that you are testing the unit like a black box.
Of course there may be technical issues involved: what if you need to mock some parts of the unit under test? That may be difficult in a different language.
This is a common practice for integration tests though. I've seen lots of programs driven from external tools, such as a website driven by Selenium, or an application driven by Cucumber. Both of those can be considered the same as a custom Python script.
If you consider that the difference between integration testing and unit testing is the number of things under test at any given time, the only reason you shouldn't do this is tool support. | 0 | 4,198 | false | 0 | 1 | Is it acceptable practice to unit-test a program in a different language? | 23,623,087
5 | 5 | 1 | 4 | 28 | 0 | 0.158649 | 0 | I have a static library I created from C++, and would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library in Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | 0 | python,c++,unit-testing | 2014-05-13T04:40:00.000 | 0 | 23,622,923 | I would say it depends on what you're actually trying to test. For true unit testing, it is, I think, best to test in the same language, or at least a binary-compatible language (i.e. testing Java with Groovy -- I use Spock in this case, which is Groovy based, to unit-test my Java code, since I can intermingle the Java with the Groovy), but if you are testing results, then I think it's fair to switch languages.
For example, I have tested the expected results when given a specific set of data when running a Perl application, via nose in Python.
In that case, to unit test actual Perl functions that are part of the application, I would use a Perl-based test framework such as Test::More. | 0 | 4,198 | false | 0 | 1 | Is it acceptable practice to unit-test a program in a different language? | 23,623,031 |
5 | 5 | 1 | 9 | 28 | 0 | 1 | 0 | I have a static library I created from C++, and would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library in Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | 0 | python,c++,unit-testing | 2014-05-13T04:40:00.000 | 0 | 23,622,923 | A few things to keep in mind:
If you are writing tests as you code, then, by all means, use whatever language works best to give you rapid feedback. This enables fast test-code cycles (and is fun as well). BUT.
Always have well-written tests in the language of the consumer. How is your client/consumer going to call your functions? What language will they be using? Using the same language minimizes integration issues later on in the life-cycle. | 0 | 4,198 | false | 0 | 1 | Is it acceptable practice to unit-test a program in a different language? | 23,623,093 |
5 | 5 | 1 | 28 | 28 | 0 | 1.2 | 0 | I have a static library I created from C++, and would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library in Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice? | 0 | python,c++,unit-testing | 2014-05-13T04:40:00.000 | 0 | 23,622,923 | I'd say that it's best to test the API that your users will be exposed to. Other tests are good to have as well, but that's the most important aspect.
If your users are going to write C/C++ code linking to your library, then it would be good to have tests making use of your library the same way.
If you are going to ship a Python wrapper (why not?) then you should have Python tests.
Of course, there is a convenience aspect to this, as well. It may be easier to write tests in Python, and you might have time constraints that make it more appealing, etc.
I guess what I'm saying is: There's nothing inherently wrong with tests being in a different language from the code under test (that's totally normal for testing a REST API, for instance), but make sure you have tests for the public-facing API at a minimum.
Aside, on terminology:
I don't think the types of tests you are describing are "unit tests" in the usual sense of the term. Probably "functional test" would be more accurate.
A unit test typically tests a very small component - such as a function call - that might be one piece of larger functionality. Unit tests like these are often "white box" tests, so you can see the inner workings of your code.
Testing something from a user's point of view (such as your professor's command-line tests) is "black box" testing, and these examples are at a more functional level rather than the "unit" level.
I'm sure plenty of people may disagree with that, though - it's not a rigidly-defined set of terms. | 0 | 4,198 | true | 0 | 1 | Is it acceptable practice to unit-test a program in a different language? | 23,623,088 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to find a way to interrupt a save and return an HttpResponse from it. So far I've only managed to do this by raising an Exception, but I want to return a proper HttpResponse.
Any ideas? Also looking for any other ways of stopping a save. | 0 | python,django | 2014-05-13T10:38:00.000 | 0 | 23,628,936 | If you catch that raised exception at the place in your view where you triggered the signal, you can then respond with whatever you want from the view. | 0 | 78 | false | 0 | 1 | Return HttpResponse from pre_save() | 23,638,267
1 | 2 | 1 | 1 | 0 | 0 | 0.099668 | 0 | I have a strange issue. pserve --reload has stopped reloading the templates. It reloads if a .py file changes, but no longer notices .mak file changes.
I tried to fix it by:
Checking the file permissions
Creating a new virtualenv, which didn't help.
Installing a different version of Mako, without any effect.
Checking that Python is being used from the virtualenv
Playing with development.ini. It has the flag: pyramid.reload_templates = true
Any idea how to start debugging the system?
Versions:
Python 2.7
pyramid 1.5
pyramid_mako 1.02
mako 0.9.1
Yours
Heikki | 0 | python,pyramid,mako,waitress | 2014-05-14T05:42:00.000 | 0 | 23,646,485 | Oh my,
I found the thing... I had <%block cached="True" cache_key="${self.filename}+body"> and the file inclusion was inside of that block.
Cheers :) | 0 | 216 | false | 0 | 1 | Pyramid Mako pserver --reload not reloading in Mac | 23,654,584
1 | 3 | 0 | 0 | 139 | 1 | 0 | 0 | What is the difference between setUp() and setUpClass() in the Python unittest framework? Why would setup be handled in one method over the other?
I want to understand what part of setup is done in the setUp() and setUpClass() functions, as well as with tearDown() and tearDownClass(). | 0 | python,unit-testing,python-unittest | 2014-05-15T01:04:00.000 | 0 | 23,667,610 | As mentioned above, setUp() and tearDown() run before and after each test; however, setUpClass() and tearDownClass() run only once for the whole class. | 0 | 61,754 | false | 0 | 1 | What is the difference between setUp() and setUpClass() in Python unittest? | 72,185,121
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have written a Python tkinter program which runs on my Raspberry Pi and does a number of things, including interfacing with my Google calendar (read-only access).
I would like the program to start at boot-up, so I added it to the autostart file in /etc/xdg/lxsession/LXDE, as per advice from the web. However it does not start at boot. So I try running the line of code I put in that file manually, and I get this.
(code I run) python /home/blahblah/MyScript.py
WARNING: Please configure OAuth 2.0
To make this sample run you will need to download the client_secrets.json file and save it at:
/home/blahblah/client_secrets.json
The thing is, that file DOES exist. But for some reason the Google code doesn't realise this when I run the script from elsewhere.
How then can I get my script to run at bootup? | 0 | python,raspberry-pi,google-calendar-api,autostart | 2014-05-15T12:30:00.000 | 0 | 23,678,292 | Figured this out now. It's tough, not knowing whether it's a Python, Linux or Google issue, but it was a Google one. I found that other people across the web have had issues with client_secrets.json as well, and the solution is to find where its location is stored in the Python code, and instead of just having the name of the file, include the path as well, like this.
CLIENT_SECRETS = '/home/blahblahblah/client_secrets.json'
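A more portable variant (my addition, not part of the original fix) derives the path from the script's own location instead of hard-coding it:

    import os
    CLIENT_SECRETS = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                  'client_secrets.json')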
Then it all works fine - calling it from another folder and starting it on boot. :) | 0 | 381 | true | 0 | 1 | Can't auto-start Python program on Raspberry Pi due to Google Calendar | 23,686,241
1 | 1 | 0 | 4 | 3 | 1 | 0.664037 | 0 | I'm about to write a program for a racecar that creates a .txt file and continuously adds new lines to it. Unfortunately I can't close the file, because when the car shuts off, the Raspberry Pi (which the program is running on) also gets shut down. So I have no chance of closing the file.
Is this a problem? | 0 | python,file-io,raspberry-pi | 2014-05-17T16:14:00.000 | 0 | 23,713,527 | Yes and no. Data is buffered at different places in the process of writing: Python's file object, the underlying C functions, the operating system, and the disk controller. Even closing the file does not guarantee that all these buffers are written physically. Only the first two levels are forced to write their buffers to the next level. The same can be done by flushing the file handle without closing it.
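A minimal sketch of that pattern, assuming the file stays open for the life of the program (the path and message are just examples):

    import os

    f = open('/tmp/race.log', 'a')
    f.write('lap 1: 92.3s\n')
    f.flush()             # push Python's buffer down to the OS
    os.fsync(f.fileno())  # ask the OS to commit its buffers to the disk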
As long as the power-off can occur at any time, you have to deal with the fact that some data may be lost or partially written.
Closing a file is important for freeing limited operating-system resources, but that is not a concern in your setup. | 0 | 479 | true | 0 | 1 | What happens if I don't close a txt file | 23,713,635
1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | So I'm trying to build a project for cocos2d-x. I'm currently at the cmd prompt, and when I type python android-build.py -p 19 cpp-tests it starts making the project, but then I get an error that the build failed. The problem is that it can't find the javac compiler.
"Perhaps JAVA_HOME does not point to the JDk. It is currently set to
"c:/Program Files/Java/jre7"
The problem is that in the system variables I made a new variable called JAVA_HOME and pointed it to C:\android\Java\jdk1.8.0_05\bin, but I'm still getting that error. What should I do? | 0 | java,android,python,c++ | 2014-05-17T21:04:00.000 | 0 | 23,716,064 | You have to point JAVA_HOME to this path:
C:\android\Java\jdk1.8.0_05 | 0 | 90 | false | 1 | 1 | JAVA_HOME "bug" | 31,828,819 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I have access to a set of cloud machines. Each of these machines is responsible for specific tasks and has a set of tools for those tasks.
Now these tools are updated weekly, adding new functions. All the tools are implemented in Python.
The problem is that every time I have to upload my code to all of these machines. I want to have a common place for the tools for all the VMs. How can I do that?
My initial idea is to just mount a service like Dropbox on every VM. However, I don't know if this is the correct approach to the problem.
Could you please give some suggestions? | 0 | python,cloud,virtual-machine,cloud-storage | 2014-05-18T18:52:00.000 | 0 | 23,725,667 | Assuming you want to maintain performance, you probably still want to keep the tools on the machines that actually have to use them. In other words, whatever you are doing will probably run slower if it has to access some 'off-machine' location to get a required tool.
If what you are looking for is a way to more easily manage and distribute your tool updates to multiple machines, you could store all your tools in a repository (like SVN or Git, or even a home-made one) and have a script on your machines which runs every day (or hour, or whatever you require) to update the tools to the latest release.
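For example, a hypothetical cron-driven updater in Python (the repository path and the use of Git are assumptions):

    import subprocess

    # '-C' makes git operate inside the given directory.
    subprocess.check_call(['git', '-C', '/opt/tools', 'pull', '--ff-only'])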
Ideally you want your update to only include changes since the last update, but most distributed repositories will support this automatically. | 0 | 18 | false | 0 | 1 | One Location for all tools in Cloud Vms | 23,761,086 |
1 | 2 | 0 | 1 | 1 | 0 | 1.2 | 0 | I have used Pika to integrate WebSockets in Tornado with RabbitMQ. It successfully runs on various queues for some time, then raises the following error:
CRITICAL:pika.connection:Attempted to send frame when closed
I have taken the code reference from https://github.com/haridas/RabbitChat/blob/master/tornado_webapp/rabbit_chat.py
I have gone through my code thoroughly; however, I fail to understand why it raises such an error.
Can someone help troubleshoot?
Thanks!
Also note that changing the backpressure multiplier does not solve the problem, so I'm looking for a real solution to this one. | 0 | python,websocket,rabbitmq,tornado,pika | 2014-05-19T05:32:00.000 | 0 | 23,730,297 | Since the consumers and producers were enqueuing and dequeuing from a particular queue, at some point the Pika client simply choked due to the multiple asynchronous threads operating over the shared queue.
Thus, in case anybody else faces the same issue, run through the following checks on your code:
How many connections do you have? How many channels? How many queues? How many producers and consumers? (These can be determined with sudo rabbitmqctl list_queues, etc.)
Once you understand the structure you are using, track the running transactions for several requests by several users.
Then, on each transaction, print the thread's action so that you understand Pika's activity. Since these threads run asynchronously, overwhelming them in the wrong way causes the Pika client to crash. So create a thread manager to control the threads.
The solution was advised by Gavin Roy & Michael Klishin, from Pika & RabbitMQ respectively. | 0 | 1,276 | true | 0 | 1 | CRITICAL:pika.connection:Attempted to send frame when closed | 24,450,966
1 | 2 | 0 | 3 | 2 | 1 | 1.2 | 0 | The Python unittest library provides the functions setUp and tearDown for setting up variables and other things before and after tests.
How can I run or skip a test based on a condition in setUp? | 0 | python,fixtures,python-unittest | 2014-05-19T15:03:00.000 | 0 | 23,741,133 | You can call if cond: self.skipTest('reason') in setUp(). | 0 | 1,507 | true | 0 | 1 | if condition in setUp() ignore test | 23,741,307
I have a project in Django 1.4 and I need to run the Django tests in a continuous integration system (GitLab 6.8.1 with GitLab CI 4.3).
The GitLab Runner is installed on the server with the project.
When I run:
cd project/app/ && ./runtest.sh test some_app
I get:
Traceback (most recent call last):
File "manage.py", line 2, in <module>
from django.core.management import execute_manager
ImportError: No module named django.core.management
How can I run the tests? | 0 | python,django-testing,django-1.4,gitlab-ci | 2014-05-19T15:21:00.000 | 0 | 23,741,509 | Do you have Django installed on the test runner?
If not, try configuring a virtualenv for your test suite. It might be best (if you have changing requirements) to make the setup and installation of this virtualenv part of your test suite. | 0 | 3,987 | false | 1 | 1 | Running django test on the gitlab ci | 25,917,374
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I've developed a little Python Dropbox app but I have no idea how to hide the app key and app secret. Until I solve this problem I'm not sure how I can ship my app, as this seems to be a significant security threat.
I know it is hard to obfuscate code, especially Python, so I'm not really sure that is an option... but what else could I do? I thought about using some form of encryption and/or storing them on a server to be retrieved when the app starts. Is it possible to write the part that deals with the keys in another language that's easier to obfuscate, like C? As I don't know much about encryption, I'm not sure if any of these options are feasible. | 0 | python,encryption,obfuscation,dropbox-api | 2014-05-20T04:07:00.000 | 0 | 23,750,850 | To prevent casual misuse of your app secret (like someone who copy/pastes code not realizing they're supposed to create their own app key/secret pair), it's probably worth doing a little obfuscation, but as you point out, that won't prevent a determined individual from obtaining the app secret.
In a client-side app (like a mobile or desktop app), there's really nothing you can do to keep your OAuth app secret truly secret. That said, the consensus seems to be that this doesn't really matter. In fact, in OAuth 2, the recommended flow for client-side apps is the "token" or "implicit" flow, which doesn't use the app secret at all. | 0 | 341 | true | 0 | 1 | Python Dropbox app, what should I do about app key and app secret? | 23,751,446 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have developed a HIDServer (Bluetooth keyboard) with Python on my computer. There are 2 server sockets (PSM 0x11 and 0x13) listening for connections.
When I try to connect my iPhone to my computer, I receive an incoming connection (as can be seen in hcidump), but somehow the connection is terminated by the remote host.
My sockets never get to accept a client connection. Can you help me please?
After starting my programm:
HCI Event: Command Complete (0x0e) plen 4
Write Extended Inquiry Response (0x03|0x0052) ncmd 1
status 0x00
When trying to connect IPhone:
HCI Event: Connect Request (0x04) plen 10
bdaddr 60:D9:C7:23:96:FF class 0x7a020c type ACL
HCI Event: Command Status (0x0f) plen 4
Accept Connection Request (0x01|0x0009) status 0x00 ncmd 1
HCI Event: Connect Complete (0x03) plen 11
status 0x00 handle 11 bdaddr 60:D9:C7:23:96:FF type ACL encrypt 0x00
HCI Event: Command Status (0x0f) plen 4
Read Remote Supported Features (0x01|0x001b) status 0x00 ncmd 1
HCI Event: Read Remote Supported Features (0x0b) plen 11
status 0x00 handle 11
Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
HCI Event: Command Status (0x0f) plen 4
Read Remote Extended Features (0x01|0x001c) status 0x00 ncmd 1
HCI Event: Read Remote Extended Features (0x23) plen 13
status 0x00 handle 11 page 1 max 2
Features: 0x07 0x00 0x00 0x00 0x00 0x00 0x00 0x00
HCI Event: Command Status (0x0f) plen 4
Remote Name Request (0x01|0x0019) status 0x00 ncmd 1
HCI Event: Remote Name Req Complete (0x07) plen 255
status 0x00 bdaddr 60:D9:C7:23:96:FF name 'iPhone'
HCI Event: Command Complete (0x0e) plen 10
Link Key Request Reply (0x01|0x000b) ncmd 1
status 0x00 bdaddr 60:D9:C7:23:96:FF
HCI Event: Encrypt Change (0x08) plen 4
status 0x00 handle 11 encrypt 0x01
HCI Event: Disconn Complete (0x05) plen 4
status 0x00 handle 11 reason 0x13
Reason: Remote User Terminated Connection | 0 | python,sockets,bluetooth,connection,bluez | 2014-05-20T09:51:00.000 | 1 | 23,756,453 | Setting the class of device in my program at startup did not work, as it got reset. To make the HIDServer work on BlueZ I had to set the class of device right before waiting for connections. I cannot say why it gets reset, but I know it does. Maybe somebody else can tell why. | 0 | 689 | false | 0 | 1 | Bluetooth Socket no incoming connection | 24,003,966
1 | 2 | 0 | 2 | 0 | 1 | 1.2 | 0 | I know Python and want to contribute to open-source projects that feature Python. Can anyone help me with where to contribute and how?
I have already googled it and found GitHub and code.google as good places to contribute, but I don't know how to start.
Please suggest how to get started. | 0 | python,open-source | 2014-05-20T15:56:00.000 | 0 | 23,764,710 | Not sure if this is an appropriate question for SO - you might get voted down. But ...
Whenever I have seen this question, the answer is almost always:
find a project you like / you're interested in
find something in that project that you feel you can fix / enhance (have a look through their bug tracker)
fork the project (github makes this easy)
make the change, find out what is appropriate for that project (documentation, unit tests, ...)
submit the change back to the project (github has "request pull")
Good luck! | 0 | 435 | true | 0 | 1 | how to contribute on open source project featuring python | 23,764,809 |
1 | 2 | 0 | 21 | 23 | 0 | 1 | 0 | I've been working on getting some distributed tasks working via RabbitMQ.
I spent some time trying to get Celery to do what I wanted and couldn't make it work.
Then I tried using Pika and things just worked, flawlessly, and within minutes.
Is there anything I'm missing out on by using Pika instead of Celery? | 0 | python,rabbitmq,celery,task-queue,pika | 2014-05-20T17:50:00.000 | 1 | 23,766,658 | I'm going to add an answer here because this is the second time today someone has recommended Celery when it isn't needed, based on this answer I suspect. The difference between a distributed task queue and a broker is that a broker just passes messages. Nothing more, nothing less. Celery recommends using RabbitMQ as the default broker for IPC, and places on top of it adapters to manage tasks/queues with daemon processes.
While this is useful, especially for distributed tasks where you need something generic very quickly, it's just a construct around the publisher/consumer process. For actual tasks, where you have a defined workflow that you need to step through and need to ensure message durability based on your specific needs, you'd be better off writing your own publisher/consumer than relying on Celery. Obviously you still have to do all of the durability checking etc. With most web-related services one doesn't control the actual "work" units but rather passes them off to a service. Thus a distributed task queue makes little sense unless you're hitting some arbitrary API call limit based on IP/geographical region or account number... or something along those lines.
So using Celery doesn't stop you from having to write or deal with state code or workflow management etc., and it exposes AMQP in a way that makes it easy for you to avoid writing the publisher/consumer constructs yourself.
So, in short: if you need a simple task queue to chew through work and you aren't really concerned about the nuances of performance, the intricacies of durability through your workflow, or the actual publish/consume processes, Celery works. If you are just passing messages to an API or service you don't actually control, sure, you could use Celery, but you could just as easily whip up your own publisher/consumer with Pika in a couple of minutes. If you need something robust that adheres to your own durability scenarios, write your own publisher/consumer code like everyone else. | 0 | 12,748 | false | 0 | 1 | RabbitMQ: What Does Celery Offer That Pika Doesn't? | 27,367,747
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a file abc.py under the workspace dir.
I am using os.listdir('/home/workspace/tests') in abc.py to list all the files (test1.py, test2.py...)
I want to generate the path '/home/workspace/tests' or even '/home/workspace' instead of hardcoding it.
I tried os.getcwd() and os.path.dirname(os.path.abspath(__file__)) but this instead generates the path where the test script is being run.
How to go about it? | 0 | python,path,operating-system,listdir | 2014-05-21T12:28:00.000 | 1 | 23,783,185 | I think you are asking how to get the relative path instead of the absolute one.
An absolute path looks like "/home/workspace".
A relative path looks like "./../workspace".
You should construct the relative path from the dir where your script is (/home/workspace/tests) to the dir that you want to access (/home/workspace); that means, in this case, going one step up in the directory tree.
You can get this by executing:
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
The same result may be achieved by going two steps up and one step down into the workspace dir:
os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "workspace"))
In this manner you can actually access any directory without knowing its absolute path, only knowing where it resides relative to your executed file. | 0 | 1,068 | false | 0 | 1 | How to generate path of a directory in python | 23,785,492
3 | 4 | 0 | 0 | 1 | 0 | 0 | 0 | I have been given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are around 30-40 transactions that need to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP, or Java.
How should I start automating SAP transactions using Python, PHP, or Java? I am not even sure what I need from my IT department to get started. | 0 | java,php,python,automation,sap | 2014-05-24T14:14:00.000 | 0 | 23,846,012 | You can implement scheduled jobs using Java, if I am understanding you correctly. | 0 | 4,691 | false | 1 | 1 | How to Automate repeated tasks in SAP Logon | 24,307,979
3 | 4 | 0 | 0 | 1 | 0 | 0 | 0 | I have been given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are around 30-40 transactions that need to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP, or Java.
How should I start automating SAP transactions using Python, PHP, or Java? I am not even sure what I need from my IT department to get started. | 0 | java,php,python,automation,sap | 2014-05-24T14:14:00.000 | 0 | 23,846,012 | The SAP GUI has a built-in record-and-playback tool which gives you out-of-the-box VBS files that you can use for automation; if the values do not change, you can use the same scripts every time.
You can find it in the main menu of the SAP GUI window: Customise Local Layout (Alt+F12) -> Script Recording and Playback. | 0 | 4,691 | false | 1 | 1 | How to Automate repeated tasks in SAP Logon | 42,849,870
3 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 0 | I have been given a task to automate a few boring tasks that people in the office do every day using SAP Logon 640.
There are around 30-40 transactions that need to be automated.
I searched a lot on SAP automation and found SAP GUI Scripting, but failed to find any starting point for Python, PHP, or Java.
How should I start automating SAP transactions using Python, PHP, or Java? I am not even sure what I need from my IT department to get started. | 0 | java,php,python,automation,sap | 2014-05-24T14:14:00.000 | 0 | 23,846,012 | We use either VBScript or C# to automate tasks. Using VBScript is the easiest. Have the SAP GUI record a task; it will produce a VBScript that can serve as a starting point for your coding. Once you have this VBScript file, you can translate it into other languages. | 0 | 4,691 | false | 1 | 1 | How to Automate repeated tasks in SAP Logon | 23,878,952
1 | 1 | 0 | 5 | 3 | 0 | 1.2 | 0 | I'm teaching myself backend and frontend web development (I'm using Flask, if it matters) and I need a few pointers when it comes to unit testing my app.
I am mostly concerned with these different cases:
The internal consistency of the data: that's the easy one - I'm aiming for 100% coverage when it comes to issues like the login procedure and, most generally, checking that everything that happens between the python code and the database after every request remain consistent.
The JSON responses: What I'm doing atm is performing a test-request for every get/post call on my app and then asserting that the json response must be this-and-that, but honestly I don't quite appreciate the value in doing this - maybe because my app is still at an early stage?
Should I keep testing every json response for every request?
If yes, what are the long-term benefits?
External APIs: I read conflicting opinions here. Say I'm using an external API to translate some text:
Should I test only the very high level API, i.e. see if I get the access token and that's it?
Should I test that the returned json is what I expect?
Should I test nothing to speed up my test suite and don't make it dependent from a third-party API?
The outputted HTML: I'm lost on this one as well. Say I'm testing the function add_post():
Should I test that the desired post is actually present on the page that follows the request?
I started checking for the presence of strings/HTML tags in the raw response.data, but then I kind of gave up because 1) it takes a lot of time and 2) I would have to constantly rewrite the tests since I'm changing the app so often.
What is the recommended approach in this case?
Thank you and sorry for the verbosity. I hope I made myself clear! | 0 | python,unit-testing,flask,integration-testing | 2014-05-24T20:07:00.000 | 0 | 23,849,163 | Most of this is personal opinion and will vary from developer to developer.
There are a ton of Python libraries for unit testing - finding one that fits best with your tool set and build process is a decision best left to you as the developer of the project.
This isn't exactly 'unit testing' per se; I'd consider it more like integration testing. That's not to say this isn't valuable - it's just a different task and will often use different tools. For something like this, testing will pay off in the long run because you'll have peace of mind that your bug fixes and feature additions aren't impacting your end-to-end code. If you're already doing it, I would continue. These sorts of tests are highly valuable when refactoring down the road to ensure consistent functionality.
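A typical shape for such a test, using Flask's built-in test client (the /api/posts endpoint, the myapp module, and the field names are placeholders, not something your app necessarily exposes):

    import json
    import unittest

    from myapp import app  # hypothetical: wherever your Flask app lives

    class JsonApiTest(unittest.TestCase):
        def setUp(self):
            app.config["TESTING"] = True
            self.client = app.test_client()

        def test_list_posts_returns_expected_json(self):
            response = self.client.get("/api/posts")
            self.assertEqual(response.status_code, 200)
            payload = json.loads(response.data.decode("utf-8"))
            # Assert on structure rather than exact values where possible,
            # so minor content changes don't break the suite.
            self.assertIn("posts", payload)
            self.assertIsInstance(payload["posts"], list)

    if __name__ == "__main__":
        unittest.main()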
I would not waste time testing 3rd party APIs. It's their job to make sure their product behaves reliably. You'll be there all day if you start testing 3rd party features. A big reason to use 3rd party APIs is so you don't have to test them. If you ever discover that your app is breaking because of a 3rd party API it's probably time to pick a different API. If your project scales to a size where you're losing thousands of dollars every time that API fails you have a whole new ball of issues to deal with (and hopefully the resources to address them) at that time.
In general, I don't test static content or HTML. There are tools out there (web scraping tools) that will let you troll your own website for consistent functionality. I would personally leave this as a last priority for the final stages of refinement, if you have time. The look and feel of most websites changes so often that writing tests isn't worth it. Look and feel is also really easy to test manually because it's so visual. | 0 | 1,256 | true | 1 | 1 | How to properly unit test a web app? | 23,849,290 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I am really new to Python and I want to use the Twitter API in PyCharm, but it keeps telling me that it isn't recognized.
I ran the Twitter API from the terminal and it works. But the terminal has limited functionality, hence I want to use the IDE instead.
So:
A) What is the difference between Python on the terminal and in the IDE?
B) How would I install and run the Twitter API in the IDE? | 0 | python,twitter | 2014-05-25T02:06:00.000 | 0 | 23,851,357 | First, please specify which "Twitter API" you are using with your terminal.
There is the Twitter API provided by Twitter itself, and there are various wrapper libraries for Python.
For your question:
A) Technically no. But if you can tell me your OS and your terminal's Python version, maybe I can help you more :)
B) It depends on the "which Twitter API" question above. | 0 | 58 | false | 0 | 1 | Using Twitter in Python IDE | 24,349,793 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a library written in Python, complete with unit tests. I'm about to start porting the functionality to Scala to run on Spark and as part of an Android application. But I'm loath to reproduce the unit tests in Scala.
Is there a method for exposing the to-be-written Scala library to external interrogation from Python? I can rewrite the tests to use a command line interface, but I wondered if there were other ways.
I have ruled out Jython because it is not compatible with my existing Python 3 library and unit tests. | 0 | python,scala,unit-testing | 2014-05-28T06:15:00.000 | 0 | 23,904,066 | Short answer: Maybe, but don't.
Eventually common sense prevailed. I now aim to either use pyspark or port the unit tests across to Scala along with the rest of the library.
Exposing Scala to Python was achievable by generating Scala code that prints exactly what I wanted, then calling SBT from Python to compile and run that code and capture its stdout.
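The plumbing on the Python side was nothing more exotic than subprocess - roughly this, where the project path and the expected marker string are placeholders:

    import subprocess

    def run_generated_scala(project_dir):
        # Compile and run the generated Scala code via sbt, capturing
        # whatever it prints so a Python test can assert on it.
        output = subprocess.check_output(["sbt", "run"], cwd=project_dir)
        return output.decode("utf-8")

    # In a unittest.TestCase, something like:
    # self.assertIn("expected-value", run_generated_scala("/path/to/generated"))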
I started mocking the Scala version of the same API in Python to generate these custom Scala scripts, but with a compilation step added for each query it was getting very slow. As I started considering a command-line interface or a socket-based API for my mocked Scala classes, the reality sunk in.
To answer the actual question of running my existing Python unit tests: while I think it could still be possible, it is not a very good idea. | 0 | 520 | true | 0 | 1 | Scala Testing with Python unittests | 24,052,936 |
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I have a complex function that performs math operations that cannot be vectorized. I have found that using NUMBA jit compiler actually slows performance. It it probably because I use within this function calls to python math.sqrt.
How can I force Numba to replace calls to Python's math.sqrt with faster C calls to sqrt?
--
regards
Kes | 0 | python,performance,jit,numba | 2014-05-28T10:02:00.000 | 0 | 23,908,547 | Numba maps math.sqrt calls to sqrt/sqrtf in libc already. The slowdown probably comes from the overhead of Numba itself, which comes from (un)boxing PyObjects and detecting whether errors occurred in the compiled code. It affects calling small functions from Python, but less so when calling from another Numba-compiled function, because there is no (un)boxing.
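For reference, a minimal nopython-mode function looks like this; inside it, math.sqrt lowers straight to the libc routine (a sketch - the first call pays the compilation cost, so benchmark only subsequent calls):

    import math
    from numba import jit

    @jit(nopython=True)  # nopython mode: no PyObject (un)boxing inside
    def hypot(x, y):
        # math.sqrt compiles down to the libc sqrt, not a Python-level call.
        return math.sqrt(x * x + y * y)

    hypot(3.0, 4.0)  # first call triggers compilation; time later calls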
If you set the environment variable NUMBA_OPT=3, aggressive optimization is turned on, eliminating some of the overhead but increasing code generation time. | 1 | 1,612 | false | 0 | 1 | how improve speed of math.sqrt() with numba jit compiler in python 2.7 | 23,944,220 |
2 | 2 | 0 | 4 | 1 | 1 | 0.379949 | 0 | I have a complex function that performs math operations that cannot be vectorized. I have found that using the Numba JIT compiler actually slows performance. It is probably because I call Python's math.sqrt within this function.
How can I force Numba to replace calls to Python's math.sqrt with faster C calls to sqrt?
--
regards
Kes | 0 | python,performance,jit,numba | 2014-05-28T10:02:00.000 | 0 | 23,908,547 | Numba already replaces calls to math.sqrt with calls to a machine-code sqrt library. So if you are getting slower performance, it might be something else.
Can you post the code you are trying to speed up? Also, which version of Numba are you using? In the latest version of Numba, you can call the inspect_types method of the decorated function to print a listing of what is being interpreted as Python objects (and therefore still being slow). | 1 | 1,612 | false | 0 | 1 | how improve speed of math.sqrt() with numba jit compiler in python 2.7 | 23,943,709 |
1 | 1 | 0 | 25 | 15 | 1 | 1.2 | 0 | I need to write a test that will check a local variable value inside a static function. I went through all the unittest docs but found nothing yet. | 0 | python,unit-testing | 2014-05-28T10:54:00.000 | 0 | 23,909,692 | You can't. Local variables are local to the function and they cannot be accessed from the outside.
But the bigger point is that you shouldn't actually be trying to test the value of a local variable. Functions should be treated as black boxes from the outside. They are given some parameters and they return some values and/or change external state. Those are the only things you should check for. | 0 | 9,780 | true | 0 | 1 | How to unittest local variable in Python | 23,909,751 |
1 | 1 | 0 | 7 | 2 | 1 | 1.2 | 0 | I understand that GMPY2 supports the GMP library and numpy has fast numerical libraries. I want to know how the speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language, however I have been wrong about these generalizations before.
I can't get GMP to work on my computer, so I can't run any tests. If I could, I would test just general math like addition and maybe some trig functions. I'll figure out GMP later. | 0 | python,c,numpy,gmp,gmpy | 2014-05-29T22:46:00.000 | 0 | 23,944,242 | numpy and GMPY2 have different purposes.
numpy has fast numerical libraries, but to achieve high performance, numpy is effectively restricted to working with vectors or arrays of low-level types - 16-, 32-, or 64-bit integers, or 32- or 64-bit floating point values. For example, numpy accesses highly optimized routines written in C (or Fortran) for performing matrix multiplication.
GMPY2 uses the GMP, MPFR, and MPC libraries for multiple-precision calculations. It isn't targeted towards vector or matrix operations.
The Python interpreter adds overhead to each call into an external library. Whether or not the slowdown is significant depends on how much time is spent in the external library. If the running time of the external library call is very short, say 10e-8 seconds, then Python's overhead is significant. If the running time of the external library call is relatively long, several seconds or longer, then Python's overhead is probably insignificant.
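You can measure that crossover yourself with timeit - a rough sketch (absolute numbers are machine-dependent):

    import timeit

    # Tiny operands: per-call overhead dominates, so plain floats win easily.
    print(timeit.timeit("sqrt(2.0)", setup="from math import sqrt"))
    print(timeit.timeit("sqrt(mpfr(2))", setup="from gmpy2 import sqrt, mpfr"))

    # Huge operands: the time inside GMP dominates and the overhead vanishes.
    print(timeit.timeit("isqrt(n)",
                        setup="from gmpy2 import isqrt, mpz; n = mpz(10) ** 4000"))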
Since you haven't said what you are trying to accomplish, I can't give a better answer.
Disclaimer: I maintain GMPY2. | 1 | 2,014 | true | 0 | 1 | How do numpy and GMPY2 compare with GMP in terms of speed? | 23,946,348 |
1 | 1 | 0 | 1 | 2 | 0 | 1.2 | 0 | I wrote a program in C, and designed its GUI using Python. Now I want to convert it to a web app.
I have a GUI.py and an abc.exe file.
Can I directly execute the Python GUI script (GUI.py) on a local Apache2 server? If yes, then how? | 0 | python,apache,web-deployment | 2014-05-31T06:44:00.000 | 1 | 23,967,242 | It depends on how the GUI is written, what abc.exe does, and how you want to use the web interface. In general, what you want is not possible. For local applications there is only one user, and it is clear when that user terminates the program; for web applications there can be millions of users at the same time, and when the application doesn't hear anything from a user, it is not clear whether the user closed the window, the network connection broke, or something else happened. That's why web applications are as stateless as possible, or session information is written to databases. This is not the case for local applications, so you will probably have to rewrite large parts of the C code.
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | Hi!
I use google-mail-oauth2-tools but I have a problem:
When I enter the verification code, the program dies:
Traceback (most recent call last):
  File "oauth2.py", line 346, in <module>
    main(sys.argv)
  File "oauth2.py", line 315, in main
    print 'Refresh Token: %s' % response['refresh_token']
KeyError: 'refresh_token'
Why?
Thank you! | 0 | python,oauth-2.0,keyerror | 2014-06-05T07:38:00.000 | 0 | 24,054,363 | You get this KeyError because there is no refresh_token in the response. If you didn't ask for offline access in your request, there will be no refresh token in the response - only an access token, bearer, and token expiry. | 0 | 467 | true | 0 | 1 | google-mail-oauth2-tools KeyError :-/ | 24,058,208 |
1 | 1 | 0 | 1 | 3 | 1 | 1.2 | 0 | I'm working on a fairly large Python project with a large number of modules in the main repo. The problem is that over time we have stopped using many of these modules without deleting or moving the .py file. So the result is we have far more files in the directory than we are actually using, and I would like to remove them before porting the project to a new platform to avoid porting code we don't need.
Is there a tool that lets me supply the initial module (that starts the application) and a directory, and tells me which files are imported and which aren't? I've seen many similar tools that look at imports and tell me if they are used or not, but none that check files to see if they were imported to begin with. | 0 | python | 2014-06-06T16:12:00.000 | 0 | 24,086,346 | You can retrieve loaded modules at runtime by reading sys.modules. This is a dictionary containing module names as keys and module objects as values. From module object you can get the path to the loaded module.
But be aware: somewhere in your project, modules may be loaded at runtime. In that case you can try to find usages of the __import__(...) function and importlib.import_module(...). | 0 | 277 | true | 0 | 1 | Finding unused python modules in a directory | 24,086,656 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | There is a long-running task (20m to 50m) which is invoked by an HTTP call to a webserver. Since this task is compute-intensive, the webserver cannot take on more than 4-5 tasks in parallel (on an m3.medium).
How can this be scaled?
Can the auto-scaling feature of EC2 be used in this scenario?
Are there any other frameworks available which can help with scaling up and down, preferably on AWS EC2? | 0 | python,amazon-web-services,amazon-ec2,flask,scalability | 2014-06-09T04:15:00.000 | 1 | 24,113,602 | Autoscaling is tailor-made for situations like these. You could run an initial diagnostic to see what the CPU usage usually is when a single server is running its maximum allowable tasks (let's say it's above X%).
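That diagnostic can itself be scripted - a rough sketch with boto3, where the instance id is a placeholder:

    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Average CPU over the last hour for one instance, in 5-minute buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
        EndTime=datetime.datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])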
You can then set up an autoscaling rule to spin up more instances once this threshold is crossed. Your rule could ensure a new instance is created every time one instance crosses X%. Further, you can also add a rule to scale down (setting the minimum instances to 1) based on a similar usage threshold. | 0 | 63 | false | 1 | 1 | Long running task scalablity EC2 | 24,115,986 |
1 | 1 | 0 | 4 | 3 | 1 | 1.2 | 0 | I recently purchased a Raspberry Pi.
I wish to start writing code for it using either C or Python.
I know the differences between the ARM and x86 architectures, viz. RISC vs CISC, but what I don't know is whether there are any special considerations for the actual code I would need to write.
If I write my code on my desktop and compile it there, and then take the same code and compile it on my Raspberry Pi, will it compile the same or will it break? | 0 | python,x86,arm,raspberry-pi,raspbian | 2014-06-09T08:47:00.000 | 0 | 24,116,662 | If you write your code in Python, it will work perfectly fine directly on both your desktop and the Raspberry Pi.
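If your Python code ever does need to branch on the platform, it can check at runtime - a tiny sketch:

    import platform
    import struct

    print(platform.machine())        # e.g. 'armv6l' on the Pi, 'x86_64' on a desktop
    print(struct.calcsize("P") * 8)  # pointer width in bits: 32 on the Pi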
For C, you'll have to recompile, but that's about it. There might also be some issues if you start writing data structures to files directly and then using the same files across the different platforms -- you'll typically want to use a portable data format where the data is stored as strings (JSON, XML, or similar). | 0 | 3,679 | true | 0 | 1 | Differences when writing code for ARM vs x86? | 24,117,029 |