Available Count (int64) | AnswerCount (int64) | GUI and Desktop Applications (int64) | Users Score (int64) | Q_Score (int64) | Python Basics and Environment (int64) | Score (float64) | Networking and APIs (int64) | Question (string) | Database and SQL (int64) | Tags (string) | CreationDate (string) | System Administration and DevOps (int64) | Q_Id (int64) | Answer (string) | Data Science and Machine Learning (int64) | ViewCount (int64) | is_accepted (bool) | Web Development (int64) | Other (int64) | Title (string) | A_Id (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 3 | 0 | 0 | 6 | 0 | 0 | 0 | I am working on a project where I need to extract the Mel-Frequency Cepstral Coefficients (MFCC) from audio signals. The first step for this process is to read the audio file into Python.
The audio files I have are stored in a .sph format. I am unable to find a method to read these files directly into Python. I would like to have the sampling rate, and a NumPy array with the data, similar to how wav read works.
Since the audio files I will be dealing with are large in size, I would prefer not to convert to .wav format for reading. Could you please suggest a possible method to do so? | 0 | python,audio | 2015-05-26T05:05:00.000 | 0 | 30,449,860 | You can read .sph files via audioread with ffmpeg codecs. | 1 | 3,923 | false | 0 | 1 | Read .sph files in Python | 50,763,111
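To make the terse answer above concrete, here is a minimal sketch of the audioread route; the file name and the 16-bit PCM assumption are illustrative, not part of the original answer:

```python
# Hedged sketch: read an .sph file through audioread (which can fall back to
# ffmpeg/gstreamer backends). Assumes 16-bit little-endian PCM buffers.
import audioread
import numpy as np

def read_sph(path):
    with audioread.audio_open(path) as f:
        rate = f.samplerate
        data = np.concatenate(
            [np.frombuffer(buf, dtype=np.int16) for buf in f]
        )
    return rate, data

rate, samples = read_sph("audio.sph")  # like scipy's wav read: (rate, array)
```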
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am using pexpect with CGI to SSH into a remote machine.
Whenever I run a script which does SSH using pexpect from the Linux terminal it gives me the correct result, but when I call the script using the commands module from the CGI program I get the following error.
(0, '(256, \'Traceback (most recent call last):\n File "/usr/local/https/pro.py", line 85, in <module>\n main()\n File "/usr/local/https/pro.py", line 66, in main\n mac = RemoteCommand(command,passwd)\n File "/usr/local/https/pro.py", line 42, in RemoteCommand\n child = pexpect.spawn(command)\n File "/usr/lib/python2.5/site-packages/pexpect.py", line 375, in __init__\n self.__spawn()\n File "/usr/lib/python2.5/site-packages/pexpect.py", line 446, in __spawn\n raise ExceptionPexpect(\\'Error! pty.fork() failed: \\' + str(e))\npexpect.ExceptionPexpect: Error! pty.fork() failed: out of pty devices\')') | 0 | python,cgi,pexpect | 2015-05-26T12:19:00.000 | 1 | 30,458,394 | ExceptionPexpect(\'Error! pty.fork() failed: \' +
str(e))\npexpect.ExceptionPexpect: Error! pty.fork() failed: out of
pty devices\')')
Your system has reached the maximum number of pty devices. You should increase that limit according to your needs. Keep in mind that, for resource and security reasons, it can be useful to limit pty access to the specific script user. | 0 | 731 | false | 0 | 1 | pty.fork() error python pexpect | 32,283,501
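The answer's diagnosis can be verified from Python through Linux's proc interface; a small sketch (the /proc paths are Linux-specific):

```python
# Read the kernel's pty limit and current usage (Linux only).
def read_int(path):
    with open(path) as f:
        return int(f.read())

pty_max = read_int("/proc/sys/kernel/pty/max")
pty_now = read_int("/proc/sys/kernel/pty/nr")
print("ptys in use: %d of %d" % (pty_now, pty_max))
```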
4 | 12 | 0 | 4 | 75 | 0 | 0.066568 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | 0 | python,linux,macos,centos,version | 2015-05-26T17:18:00.000 | 1 | 30,464,980 | As someone mentioned in a comment, you can use which python if it is supported by CentOS. Another command that could work is whereis python. In the event neither of these work, you can start the Python interpreter, and it will show you the version, or you could look in /usr/bin for the Python files (python, python3 etc). | 0 | 184,308 | false | 0 | 1 | How to check all versions of python installed on osx and centos | 30,465,953 |
4 | 12 | 0 | 4 | 75 | 0 | 0.066568 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | 0 | python,linux,macos,centos,version | 2015-05-26T17:18:00.000 | 1 | 30,464,980 | COMMAND: python --version && python3 --version
OUTPUT:
Python 2.7.10
Python 3.7.1
ALIAS COMMAND: pyver
OUTPUT:
Python 2.7.10
Python 3.7.1
You can make an alias like "pyver" in your .bashrc file, or use a text accelerator like AText. | 0 | 184,308 | false | 0 | 1 | How to check all versions of python installed on osx and centos | 58,172,493
4 | 12 | 0 | 20 | 75 | 0 | 1 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | 0 | python,linux,macos,centos,version | 2015-05-26T17:18:00.000 | 1 | 30,464,980 | We can directly see all the Pythons installed, both by the current user and by root, with the following:
whereis python | 0 | 184,308 | false | 0 | 1 | How to check all versions of python installed on osx and centos | 56,606,519 |
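A pure-Python alternative sketch (requires Python 3.3+ for shutil.which) that locates interpreters on PATH without shelling out:

```python
# Print the first match on PATH for each common interpreter name.
import shutil

for name in ("python", "python2", "python2.7", "python3"):
    print(name, "->", shutil.which(name))
```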
4 | 12 | 0 | 7 | 75 | 0 | 1.2 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | 0 | python,linux,macos,centos,version | 2015-05-26T17:18:00.000 | 1 | 30,464,980 | Use the yum list installed command to find the packages you installed. | 0 | 184,308 | true | 0 | 1 | How to check all versions of python installed on osx and centos | 30,466,232
1 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I've seen other similar questions but none of the solutions are working for me. I am trying to get Twilio working with Google App Engine. I am using the python API and can't seem to get it to work. I tried a few tactics:
used pip install twilio
downloaded the twilio file directly into my root directory
sym linked the required files according to a few tutorials
nothing seems to work. When I write the line "import twilio.twiml" it makes the google app engine crash and say "error: server error".
What is the best way to import Twilio and load it onto the google app engine server? | 0 | python,google-app-engine,twilio | 2015-05-27T09:39:00.000 | 1 | 30,478,647 | Thanks for the input. I had already tried all of these things. When I ran the app on localhost I saw in the console that the error I was facing was with 'pytz'.
It turns out that Twilio requires the pytz dependency to be in the root directory of the Google App Engine project. They have not updated the documentation yet.
Hope that helps anyone in the future. | 0 | 275 | true | 1 | 1 | importing Twilio on google app engine | 30,514,752 |
1 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | I am using Zed Shaw's "Learn Python the Hard Way" and have gotten Python to work. I made a file and saved it as ex1.py. I type the command
python ex1.py
This does not recognize the file like it should and instead gives me this
can't open file 'ex1.py': [Errno 2] No such file or directory
I have checked and double-checked it. There is definitely a ex1.py file saved in my Local Disk (C:). None of the common errors on his tutorial include mine. Any tips? | 0 | python,powershell | 2015-05-27T23:56:00.000 | 1 | 30,495,055 | I got it to work finally, I had it in a folder under the path
C:\Users\MyUser\PythonTest\ex1.py
This was wrong, however when I made it
C:\Users\MyUser\ex1.py
It worked, thanks for all the help! | 0 | 173 | false | 0 | 1 | Powershell not finding a file with a .py extension | 30,516,327 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am trying to convert a python script to an executable file.
I have tried cxFreeze and py2exe, but both told me that Python27 is not in the registry. I found several other questions that tell me to go to regedit and find the python folder, but it is not there. I tried going to HKEY_CURRENT_USER/Software and Python27 was not there.
Do I need to add it there to run the installer for cxFreeze or py2exe or is there another way? | 0 | python,windows,exe | 2015-05-28T05:44:00.000 | 1 | 30,498,106 | I fixed the issue. Apparently I accidentally installed 32-bit Python on a 64-bit machine. So I have to use the 32-bit installer because it installs the registry key in a different place. Thanks for the help anyways. | 0 | 297 | false | 0 | 1 | Convert script to executable, python not in registry | 30,521,807 |
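If reinstalling is not an option, a commonly cited workaround is to add the missing registry entry yourself. A hedged sketch -- the install path and version below are assumptions, so adjust them to your machine:

```python
# Register an existing Python 2.7 install under HKEY_CURRENT_USER so that
# installers like py2exe/cx_Freeze can find it.
import _winreg as winreg  # on Python 3 this module is just `winreg`

install_path = r"C:\Python27"  # assumption: adjust to your install
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                       r"Software\Python\PythonCore\2.7\InstallPath")
winreg.SetValueEx(key, "", 0, winreg.REG_SZ, install_path)
winreg.CloseKey(key)
```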
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I have a python program which posts to my local web server. The script runs on a raspberry pi running the latest version of raspbian 3.18. How can I make the Python script run at startup? Raspbian has a login password which is the first thing I have to remove. If the power ever goes out I want the pi to reboot and start running my script again. Should I be using Raspbian for this? The script is the only thing the pi is used for. I tried adding the script to /etc/init.d but I do not think it will run either way if the pi requires login info upon booting. | 0 | python,shell,raspberry-pi,raspbian | 2015-05-28T12:56:00.000 | 0 | 30,507,243 | You can have Linux run the Python script at boot via rc.local. Let's go to root and create a shell script.
sudo -i
nano startup.sh
Then put the commands into this bash script: change to the directory containing your program, execute the Python script, then return to the root directory.
cd /
cd home/pi/your_directory
sudo python yourpythonscript.py
cd /
Save the script and then exit. Make this sh script executable by giving it permission.
chmod 755 startup.sh
Now open and edit rc.local file.
nano /etc/rc.local
Add /root/startup.sh & before exit 0
now save and exit from the file and reboot your pi.
sudo reboot | 0 | 4,531 | false | 0 | 1 | Start shell script on Raspberry Pi startup | 39,902,403 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | unittest.shortDescription() returns only the first line of the test method's docstring.
Is there a way to change this behavior, e.g. to display the entirety of the docstring, or to display another message ?
Would I need to override shortDescription() ?
EDIT: I did know that shortDescription() takes no arguments (besides the implicit object reference), but I was very unclear in the wording of my question. What I'm really looking for is pointers to how to override shortDescription() and get at, say, the entire contents of the docstring. Thanks ! | 0 | python,python-2.7,python-unittest | 2015-05-28T20:28:00.000 | 0 | 30,516,510 | unittest.shortDescription() takes no arguments. You would have to override it to get the entire docstring. | 0 | 291 | false | 0 | 1 | How to extend/customize unittest's shortDescription()? | 30,516,764 |
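A minimal sketch of that override; note that _testMethodDoc is a private unittest attribute, so this relies on implementation details:

```python
import unittest

class FullDocTestCase(unittest.TestCase):
    def shortDescription(self):
        # Return the whole docstring instead of just its first line.
        doc = self._testMethodDoc
        return doc.strip() if doc else None

class MyTests(FullDocTestCase):
    def test_example(self):
        """First line.

        This second line is now shown in verbose output too.
        """
        self.assertTrue(True)
```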
1 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | The landscape of Python tools that seem to accomplish the task of propagating Earth satellites/celestial bodies is confusing. Depending on what you're trying to do, PyEphem or Python-SGP4 may be more suitable. Which of these should I use if:
I want ECEF/ECI coordinates of an Earth satellite
I want general sky coordinates of a celestial object
Near Earth vs. far away objects
Want to use two-line element sets
Do any of these accomplish precise orbit determination? If not, where do I go/what resources are there out there for precise orbit determination?
I kind of know the answers here. For instance, POD is not part of any of these libraries. These computations seem to be very involved. POD for many objects are available from IGS. The main reason I ask is for documentation purposes. I'm not familiar with python-skyfield, but I have a hunch it accomplishes what these other two do. --Brandon Rhodes, I await your expertise :) | 0 | python,libraries,coordinate-systems,satellite,sgp4 | 2015-05-29T17:39:00.000 | 0 | 30,535,831 | Michael mentioned it in his comment, but PyEphem I believe is deprecated as of the current Python 3 version. That being said, if you are to use TLEs, SGP4 was made to handle TLEs in particular. The non-keplerian and non-newtonian terms you see in TLEs are specifically passed into the SGP4 propagator (B* drag, second derivative of mean motion, etc.). Once you get outside of Earth neighborhood (beyond GEO), SGP4 is not meant to handle these cases. SGP4 in of itself is inherently a near-earth propagator that does not scale well on an inter-planetary or even cis-lunar regime. In fact, if you are to have both apogee and perigee extend beyond GEO, I would tend to avoid SGP4.
It is important to note that SGP4 outputs things in a TEME frame (true equator mean equinox). This is an inertial frame. If you want ECEF coordinates, you will need to find a package that converts you from inertial to fixed frames. Regardless of whether or not you desired earth-fixed coordinates, I highly recommend making this conversion so you can then convert to your inertial frame of choice. | 0 | 2,484 | false | 0 | 1 | Which should I use: Python-sgp4, PyEphem, python-skyfield | 57,221,666 |
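For reference, a minimal sketch of python-sgp4's classic TLE interface, whose output is exactly the TEME position/velocity discussed above (you must supply real TLE lines):

```python
from sgp4.earth_gravity import wgs72
from sgp4.io import twoline2rv

def propagate_tle(line1, line2, year, month, day, hour, minute, second):
    # Parse the two-line element set and propagate to the given UTC time.
    satellite = twoline2rv(line1, line2, wgs72)
    position, velocity = satellite.propagate(year, month, day,
                                             hour, minute, second)
    return position, velocity  # TEME frame, km and km/s
```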
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I want to create a python library with a 0 argument function that my custom Robot Framework keywords can call. It needs to know the absolute path of the file where the keyword is defined, and the name of the keyword. I know how to do something similar with test cases using the robot.libraries.BuiltIn library and the ${SUITE_SOURCE} and ${TEST NAME} variables, but I can't find anything similar for custom keywords. I don't care how complicated the answer is, maybe I have to dig into the guts of Robot Framework's internal classes and access that data somehow. Is there any way to do this? | 0 | python,robotframework | 2015-05-29T20:17:00.000 | 0 | 30,538,356 | I took a relatively quick look through the sources, and it seems that the execution context does have any reference to currently executing keyword. So, the only way I can think of resolving this is:
Your library needs also to be a listener, since listeners get events when a keyword is started
You need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing.
I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently. | 0 | 2,627 | false | 0 | 1 | Robot Framework location and name of keyword | 30,659,802
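A sketch of the library-as-listener idea from the answer; the listener registration attributes are standard Robot Framework API, while the stack bookkeeping is an illustrative assumption:

```python
class KeywordTracker(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # Register the library instance as its own listener.
        self.ROBOT_LIBRARY_LISTENER = self
        self._stack = []

    def _start_keyword(self, name, attrs):
        self._stack.append(name)

    def _end_keyword(self, name, attrs):
        self._stack.pop()

    def enclosing_keyword(self):
        """0-argument keyword returning the name of its calling keyword."""
        # The top of the stack is this keyword itself; below it, the caller.
        return self._stack[-2] if len(self._stack) > 1 else None
```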
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Sorry if the title is misleading.
I am trying to write a program that calculates frequency of emails being sent out of different email ids. We need to trigger alerts based on number and frequency of mails sent. For example for a particular email if in past 60 minutes more than 25 mails were sent, a trigger needs to be sent.
A different trigger for another directory based on another rule. Fundamental rules are about how many mails sent over past 60 minutes, 180 minutes, 12 hours and 24 hours. How do we come up with a strategy to calculate frequency and store it without too much of system/cpu/database overheads.
The actual application is a legacy CRM system. We have no access to the Mail Server to hack something inside the Postfix or MTA. Moreover there are multiple domains involved, so any suggestion to do something on the mail server may not help.
We of course have access to every attempt to send a mail, and can look at recording them. My challenge is that on a large campaign database writes would be frequent, and doing real-time number crunching on them is resource intensive. I would like to avoid that and come up with an optimal solution.
Language would be Python, because the CRM is also written in it. | 0 | python,database,algorithm | 2015-05-31T07:59:00.000 | 0 | 30,555,149 | Try a client-side hack: record each email attempt to a log file. Then you can read that file to count the frequency of emails sent.
I think you can keep the data in memory in a dict for some time, say for example 5 or 10 minutes, and then send it to the DB, thus not putting the load of frequent writes on the DB. If you put a check in your code for a sudden surge in email from a particular domain, that might provide a solution for your problem. | 0 | 50 | false | 0 | 1 | Strategies for storing frequency of dynamic data | 30,555,758
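A sketch of an in-memory sliding-window counter for the 60-minute rule; the threshold, window and alert hook are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

WINDOW = 60 * 60   # seconds in the 60-minute window
THRESHOLD = 25     # alert when more than 25 mails fall inside it

sent_times = defaultdict(deque)

def record_send(sender):
    now = time.time()
    times = sent_times[sender]
    times.append(now)
    # Drop timestamps that have fallen out of the window.
    while times and now - times[0] > WINDOW:
        times.popleft()
    if len(times) > THRESHOLD:
        trigger_alert(sender, len(times))

def trigger_alert(sender, count):
    print("ALERT: %s sent %d mails in the last hour" % (sender, count))
```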
1 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | I was trying to build an encryption program in python 2.7. It would read the binary from a file and then use a key to encrypt it. However, I quickly ran into a problem. Files like image files and executables read as hex values. However, text files do not when using open(). Even if I run
file=open("myfile.txt", "rb")
out=file.read()
it still comes out as just text. I'm on windows 7, not linux which i think may make a difference. Is there any way i could read the binary from ANY file (including text files), not just image and executable files? | 0 | file,python-2.7,encryption,binary | 2015-05-31T22:01:00.000 | 0 | 30,563,177 | Your binary file is coming out looking like text because the file is being treated like it is encoded in an 8 bit encoding (ASCII or Latin-1, etc). Also, in Python 2, bytes and (text) characters are used interchangeably... i.e. a string is just an array of ASCII bytes.
You should search for the differences between Python 2 and 3 text encoding and you will quickly see why anomalies such as you are encountering can develop. Most of the Python 2 encryption modules use Python byte strings.
Your "binary" non-text files are not really being treated any differently from the text ones; they just don't map to an intelligible coding that you recognize, whereas the text ones do. | 0 | 8,188 | false | 0 | 1 | Python read text file as binary? | 42,172,135 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I'm looking for the best way to invoke a python script on a remote server and poll the status and fetch the output. It is preferable for me to have this implementation also in python. The python script takes a very long time to execute and I want to poll the status intermittently to see what actions are being performed. Any suggestions? | 0 | python,polling,remote-server | 2015-06-01T04:53:00.000 | 0 | 30,565,824 | There are so many options. 'Polling' is generally a bad idea, as it keeps the CPU busy.
You could have your script send you status changes.
You could have your script write its actual status into a (remote) file (either overwriting or appending to a log file) and you can look into that file. This is probably the easiest way. You can monitor the file with tail -f file over the link
And many more - and more complicated - other options. | 0 | 235 | true | 0 | 1 | Invoke a python script on a remote server and poll the status | 30,566,324 |
1 | 1 | 0 | -1 | 0 | 0 | -0.197375 | 0 | this is a conceptual question. As part hobby, part art project I'm looking to build a Python script that allows two people to play battleships between their computers (across the net, without being on the same network).
The idea would be you could run the program something like:
python battleships.py 192.168.1.1
Where the IP address would be the computer you wanted to do battle with.
I have some modest Python coding abilities but I'm curious how hard it would be to build this and how one might go about it?
One key goal is that it must require almost zero set-up: I'm hoping anyone can download the python script, open the terminal and play battleships with someone else.
Thanks! | 0 | python,p2p | 2015-06-01T17:23:00.000 | 0 | 30,579,494 | I think the simpliest way to do this is reading socket server in this battleship game. But here is a problem, in this case you will have a problem with connecting, in case when your ip is invisible from the internet. | 0 | 282 | false | 0 | 1 | Conceptual: how to code battleships between two computers in Python? | 30,579,803 |
1 | 1 | 0 | 5 | 7 | 1 | 1.2 | 0 | I'm writing a Python application that utilizes the Tumblr API and was wondering how I would go about hiding, or encrypting, the API key.
Github warns against pushing this information to a repo, so how would I make the application available to the public and still follow that policy? | 0 | python,api,github | 2015-06-03T04:31:00.000 | 0 | 30,610,892 | Why do you need to post your API key? Why not post your app code to Github without your API key and have a configuration parameter for your users to add their own API key? | 0 | 248 | true | 0 | 1 | API key encryption for Github? | 30,612,182 |
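A sketch of the configuration-parameter approach (the environment variable name is an assumption):

```python
# Keep the key out of the repository: read it from the environment, or from
# a config file that is listed in .gitignore.
import os

API_KEY = os.environ.get("TUMBLR_API_KEY")
if API_KEY is None:
    raise SystemExit("Set TUMBLR_API_KEY in your environment first")
```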
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I want to know the names of the session identifiers for Python and Ruby; for example, the name of the session identifier for J2EE is JSESSIONID, and for PHP it is PHPSESSID.
Can you help me please? | 0 | python,ruby-on-rails,session | 2015-06-03T16:29:00.000 | 0 | 30,625,748 | Sorry for the delay, I have the answer to my question:
I captured the HTTP traffic of some Python and Ruby on Rails applications; the most common session IDs for each language are the following:
-Python:
sessionid
-Ruby on Rails: The format of the session id is:
_{Name of application}_session
For example, for my "ExampleRoRApp" application, the session id is:
_ExampleRoRApp_session
Thanks for your comments. | 0 | 129 | true | 1 | 1 | What are the names of session ids for Python and Ruby? | 31,658,762 |
1 | 2 | 0 | 2 | 0 | 0 | 1.2 | 0 | I'd like to be able to pay the users of my site using PayPal Mass Payment. I think this is pretty straightforward if they have a PayPal account.
However, if they do not have a PayPal account, is there any way to have them sign up through my site, without leaving? Or just with a nice redirect? Whatever is least friction. I just don't want to lose users in the onboarding experience.
This is analogous to Stripe managed accounts, but I'm not sure if PayPal has such an analogue. | 0 | python,paypal,oauth,payment | 2015-06-04T00:07:00.000 | 0 | 30,632,887 | PayPal doesn't have a way for people to sign up entirely through your site, although there are some ways to facilitate the process. You'd probably have to call PayPal to get access to some of those as they are aimed primarily at larger businesses.
However, don't neglect the easy/automatic assistance that PayPal gives you: you can pay to any email account, and if that email is not already active on a PayPal account then the payment will be waiting for them when they activate that email into an existing or new PayPal account. So you can onboard merchants to your site and leave the PayPal signup for later, when money will be waiting for them to claim. Psychologically, walking away from money is harder than deciding not to start processing :). | 0 | 47 | true | 1 | 1 | managed paypal accounts with least friction? | 30,634,280
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | My project has existing (relatively low-coverage; maybe 50%, and a couple of them can't actually test the result, only that the process completes) tests using Python's built-in unittest suite. I've worked with hypothesis before and I'd like to use that as well - but I'm not sure I want to throw out the existing tests.
Has anyone tried having two completely separate testing frameworks and test sets on a project? Is this a good idea, or is it going to cause unexpected problems down the line? | 0 | python,unit-testing,code-coverage | 2015-06-04T00:19:00.000 | 0 | 30,632,967 | IMO, if your current framework supports attribute-based categorization, then you can separate them by adding separate categories, so you get separate results from old and new tests.
On the other hand, you can also go for multiple frameworks if they're supported by the test runner in your project and have no conflicts (e.g. asserts, test reports). But in this case you'll end up having two separate reports from your test executions. | 0 | 25 | true | 0 | 1 | Is it wise to use two completely separate unit testing suites? | 30,633,030
2 | 5 | 0 | 0 | 6 | 1 | 0 | 0 | I am pretty new in programming, just learning python.
I'm using Komodo Edit 9.0 to write codes. So, when I write "from math import sqrt", I can use the "sqrt" function without any problem. But if I only write "import math", then "sqrt" function of that module doesn't work. What is the reason behind this? Can I fix it somehow? | 0 | python,module,komodo | 2015-06-04T14:29:00.000 | 0 | 30,646,650 | If the command Import math is present more than once you will get the error: UnboundLocalError: local variable 'math' referenced before assignment | 0 | 66,714 | false | 0 | 1 | "from math import sqrt" works but "import math" does not work. What is the reason? | 70,898,035 |
2 | 5 | 0 | 3 | 6 | 1 | 0.119427 | 0 | I am pretty new in programming, just learning python.
I'm using Komodo Edit 9.0 to write codes. So, when I write "from math import sqrt", I can use the "sqrt" function without any problem. But if I only write "import math", then "sqrt" function of that module doesn't work. What is the reason behind this? Can I fix it somehow? | 0 | python,module,komodo | 2015-06-04T14:29:00.000 | 0 | 30,646,650 | When you only use import math the sqrt function comes in under a different name: math.sqrt. | 0 | 66,714 | false | 0 | 1 | "from math import sqrt" works but "import math" does not work. What is the reason? | 30,646,701 |
1 | 1 | 0 | -1 | 1 | 0 | -0.197375 | 0 | I have a set of unit tests files created in python with unittest as the import.
Running nosetests on both the terminal of MacOSX and on the cmd.exe of Windows 7, it finds the tests and runs them.
Trying to execute nosetests under Cygwin does not find any tests to run.
All three cases use the same version of Python (3.4) and the same version of nose(1.3.6). Also, none of the files are marked as executable
I suspect that is something environmental on cygwin. Does anyone know that do I need to do? | 0 | python,cygwin,nose,nosetests | 2015-06-06T09:54:00.000 | 1 | 30,681,408 | If you only have a single file with tests, you can launch it like this: nosetests tests.py | 0 | 218 | false | 0 | 1 | nosetest not finding tests on cygwin | 34,595,846 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I have a very basic CGI based front end hosted on an IIS server.
I'm trying to find the users within my shop that have accessed this site.
All users on the network sign on with their LAN (Windows) credentials and the same session would be used to access the site.
The python getpass module (obviously) returns only the server name so is there a way to find the user names of the visitors to the site?
The stack is Python 2.7 on IIS 8.0, Windows Server 2012 | 0 | python,iis,cgi | 2015-06-06T13:53:00.000 | 0 | 30,683,672 | When using Windows authentication on IIS, the server variables should contain the username in two variables: AUTH_USER and REMOTE_USER
CGI offers access to all server variables; check your Python docs on how to access them. | 0 | 591 | false | 1 | 1 | Python/CGI on IIS - find user ID | 30,683,908
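In Python CGI those server variables arrive as environment variables; a minimal sketch:

```python
import os

# IIS exposes server variables through the CGI process environment.
user = os.environ.get("AUTH_USER") or os.environ.get("REMOTE_USER")
print("Content-Type: text/plain")
print("")
print("Authenticated as: %s" % user)
```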
1 | 7 | 0 | 8 | 5 | 1 | 1 | 0 | I am looking for an efficient algorithm to find the longest run of zeros in a binary string. My implementation is in Python 2.7, but all I require is the idea of the algorithm.
For example, given '0010011010000', the function should return 4. | 0 | python,algorithm | 2015-06-09T03:57:00.000 | 0 | 30,722,732 | I don't think there is anything better than a single pass over the string, counting the current sequence length (and updating the maximum) as you go along.
If by "binary string" you mean raw bits, you can read them one byte at a time and extract the eight bits in there (using bit shifting or masking). That does not change the overall algorithm or its complexity. | 0 | 3,277 | false | 0 | 1 | Efficient algorithm to find the largest run of zeros in a binary string? | 30,722,751 |
1 | 4 | 0 | 1 | 19 | 1 | 0.049958 | 0 | I am just starting to explore Python. I am trying to run some AES algorithm code and I am facing the error:
ImportError: No module named Crypto.
How do you solve this? | 0 | python,aes,pycrypto | 2015-06-09T16:43:00.000 | 0 | 30,738,083 | Solved when I installed pycrypto rather than crypto:
pip2 install pycrypto | 0 | 53,824 | false | 0 | 1 | ImportError: No module named Crypto | 42,974,085 |
2 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | The getting started documentation I can find is helpful in loading up a dronekit script in the simulator, but I can't figure out how to then translate the process to transfer scripts onto my Solo for real world execution. Perhaps I'm misunderstanding somewhere. Please help, and thanks a bunch in advance! | 0 | dronekit-python | 2015-06-09T17:46:00.000 | 0 | 30,739,230 | Root password for both 3drobotics Solo and Artoo is "TjSDBkAu" aka Tj (Tijuana) SD (San Diego) Bk (Back) A (At) u (You).
./src/com/o3dr/solo/android/service/artoo/ArtooLinkManager.java: private static final SshConnection sshLink = new SshConnection("10.1.1.1", "root", "TjSDBkAu");
./src/com/o3dr/solo/android/service/sololink/SoloLinkManager.java: private static final SshConnection sshLink = new SshConnection(getSoloLinkIp(), "root", "TjSDBkAu");
./src/com/o3dr/solo/android/service/update/UpdateService.java: public static final String SSH_PASSWORD = "TjSDBkAu"; | 0 | 403 | false | 0 | 1 | Running DroneKit Air on my 3DR Solo | 30,833,309 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | The getting started documentation I can find is helpful in loading up a dronekit script in the simulator, but I can't figure out how to then translate the process to transfer scripts onto my Solo for real world execution. Perhaps I'm misunderstanding somewhere. Please help, and thanks a bunch in advance! | 0 | dronekit-python | 2015-06-09T17:46:00.000 | 0 | 30,739,230 | We are working on a new firmware release together with some guides/docs that will allow for a better development experience, right now all you can do is scp your files, add them to .mavinit.scr on home directory and kill the mavproxy process to make sure it reloads this configuration.
That said, I would be very careful in flying with untested code as an exception might cause other parts of Solo to fail and cause you to crash, please test thoroughly before flying. | 0 | 403 | false | 0 | 1 | Running DroneKit Air on my 3DR Solo | 30,855,173 |
1 | 4 | 0 | 2 | 15 | 0 | 0.099668 | 0 | I have a Python script which process data off of a HTTP data stream and I need this script to in theory be running at all times and forever unless I kill it off manually to update it and run it again.
I was wondering what the best practice to do this was on a Unix (Ubuntu in particular) so that even if Terminal is closed etc the script continues to run in the background unless the process or server are shut down?
Thanks | 0 | python,unix,ubuntu | 2015-06-09T20:47:00.000 | 1 | 30,742,533 | I am not sure how "best practice" this is but you could do:
Add the program to:
/etc/rc.d/rc.local
This will have the program run at startup.
If you add an '&' to the end of the line it will be run in the background.
If you instead want to run the program manually (not at startup), you could switch to another tty by pressing ctrl + alt + f1 (this opens tty1 and will work with f1 - f6), then run the command. This terminal does not have to be open in a window, so you don't have to worry about it getting closed. To return to the desktop use ctrl + alt + f7. | 0 | 40,621 | false | 0 | 1 | Unix: Have Python script constantly running best practice? | 30,742,647
3 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | I have a shell script that I want to run automatically every day at 08 AM, and I am not authorised to use the crontab because I don't have root permission
My home directory is /home/user1/.
Any suggestions? | 0 | python,linux,shell,unix,linux-kernel | 2015-06-11T09:24:00.000 | 1 | 30,776,719 | In a pinch, you can use at(1). Make sure the program you run reschedules the at job. Warning: this goes to heck if the machine is down for any length of time. | 0 | 164 | false | 0 | 1 | schedule automate shell script running not as ROOT | 30,956,053 |
3 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | I have a shell script that I want to run automatically every day at 08 AM, and I am not authorised to use the crontab because I don't have root permission
My home directory is /home/user1/.
Any suggestions? | 0 | python,linux,shell,unix,linux-kernel | 2015-06-11T09:24:00.000 | 1 | 30,776,719 | I don't think root permission is required to create a cron job. Editing a cronjob that's not owned by you - that's where you'd need root. | 0 | 164 | false | 0 | 1 | schedule automate shell script running not as ROOT | 30,777,116
3 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | I have a shell script that I want to run automatically every day at 08 AM, and I am not authorised to use the crontab because I don't have root permission
My home directory is /home/user1/.
Any suggestions? | 0 | python,linux,shell,unix,linux-kernel | 2015-06-11T09:24:00.000 | 1 | 30,776,719 | Even if you don't have root permission you can set up a cron job. Check these 2 commands as user1 to see whether you can modify the crontab or whether they throw an error.
crontab -l
If you can see then try this as well:
crontab -e
If you can open and edit, then you can run that script with cron.
by adding this line (minute 0, hour 8, every day):
0 8 * * * /path/to/your/script | 0 | 164 | false | 0 | 1 | schedule automate shell script running not as ROOT | 30,776,928
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | When I run my tests I get a syntax error: SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xa7 in position 0: invalid start byte
The cause of this seems to be that I use a § in a string on line 62. I'm using python 3.4.2 for the project and have used § elsewhere without getting an error. I got a friend to open the project as well; on his screen the § in tests.py showed up as question marks, but this was only in the test files; in the other places it had been used it showed up as normal. I got him to change the § that were showing up as question marks to § on his pc and it worked, which is really weird. How would I go about fixing something like this on my computer though? I can't really get him to load up the file and insert special characters every time I want to use them in tests.
edit: So I found out pycharm for some reason had set only tests.py to a encoding other than utf-8. I changed this to utf-8 and it then showed the § I had written as question marks. However swapping them out for § did not work for me. The reason is that for some reason even though the encoding is set to utf-8, pycharm still displays latin1 for me and type latin1 characters instead of utf-8. I've tested on 2 other computers (1 mac, and 1 windows 8.1 same as the one I have problems with) where it correctly displays utf-8. On those computers my § still appear as question marks, but if i change it on the other computer it now appears as § on the computer with the problem. So my problem now is to get pycharm to properly use UTF-8 instead of latin 1. | 0 | django,python-3.x,unicode,pycharm | 2015-06-11T10:35:00.000 | 0 | 30,778,437 | Ok so I found the problem. as patrys sugested in a comment the file didn't use UTF-8 as encoding. To change that in pycharm I had to go to file->settings->editor->file encodings and change the file encoding for tests to utf-8. After I did that I had to go into the file and re eddit the § as they have now turned into question marks. However it still didn't work. I found out that I also have to change it to UTF-8 down in the right corner of pycharm. For some reason tests is the only .py file that was affected by this (even though I deleted the original tests.py file and remade it). | 0 | 110 | false | 1 | 1 | Django testing won't run because of syntax error | 30,778,955 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | The main advantage of Nginx is cited as it not needing to spawn separate threads for each request it receives.
Now, if we were to run a python based web application using FastCGI, and this web application has blocking calls, would this create a bottleneck?
Since there is only a limited number of workers running(1 per processor?), wont a blocking call by a python script make it cooperative multiprocessing? | 0 | python,nginx,fastcgi | 2015-06-11T20:42:00.000 | 1 | 30,791,066 | Nginx talks to a fastcgi process over a socket connection.
If the fastcgi process blocks, that means that it won't be sending data over the socket.
This won't block nginx as such, because it keeps processing events (data from other connections). It uses non-blocking techniques like select, poll or equivalent OS-dependent functions (with a timeout) to query sockets without blocking.
But it will stall whatever client is waiting for the fastcgi output. | 0 | 345 | false | 0 | 1 | Nginx + FastCGI with blocking calls | 30,792,360 |
1 | 4 | 0 | 1 | 0 | 1 | 0.049958 | 1 | I'm creating a code to demonstrate how to consume a REST service in Python, but I don't want my API keys to be visible to people when I push my changes to GitHub. How can I hide such information? | 0 | python,git,github | 2015-06-13T16:55:00.000 | 0 | 30,821,218 | Considering storing this kind of data in a config file that isn't tracked by git. | 0 | 2,480 | false | 0 | 1 | How can I hide sensitive data before commiting to GitHub (or any other Git repo)? | 30,821,244 |
I am using 2 Python programs with rc.local on my Raspberry Pi: the first program is my main program, and the other is the second program. The second program shuts down the Raspberry Pi, but when I run the second program my first program is still running and only stops once the Pi has truly shut down.
I want the second program to kill the first program before the Pi truly shuts down. How can I do it? | 0 | python,python-2.7,raspberry-pi | 2015-06-16T14:21:00.000 | 0 | 30,870,391 | Maybe an easier way would be to use the shell to kill the process in question? Each process in Linux has a number assigned to it, which you can see by typing
pstree -p
In your terminal. You can then kill the process by typing in
sudo kill <process number>
Does that help, or were you thinking of something a bit more complicated? | 0 | 526 | false | 0 | 1 | how to kill a python programs using another python program on raspberry? | 30,873,178 |
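A hedged alternative done in Python itself, using a pidfile so the shutdown program can signal the main one (the pidfile path is an assumption):

```python
import os
import signal

PIDFILE = "/tmp/mainprogram.pid"

# --- in the main program, once at startup ---
with open(PIDFILE, "w") as f:
    f.write(str(os.getpid()))

# --- in the shutdown program, before powering off ---
with open(PIDFILE) as f:
    pid = int(f.read())
os.kill(pid, signal.SIGTERM)  # ask the main program to exit cleanly
```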
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I installed the python module unidecode: downloaded it, ran setup.py. Exactly as I did for any other python module.
Then tried import unidecode: it works only in the downloaded directory. What's wrong? | 0 | python,python-2.7,installation | 2015-06-19T11:07:00.000 | 0 | 30,936,625 | I had just run setup.py with the wrong Python version. Solved. | 0 | 3,372 | false | 0 | 1 | Installing unidecode python module | 30,939,196
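A quick check that the install really landed in site-packages (run it from any directory):

```python
from unidecode import unidecode
print(unidecode(u"déjà vu"))  # -> 'deja vu'
```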
1 | 1 | 0 | 6 | 0 | 0 | 1.2 | 0 | I need to change text documents. The way I've been doing it is making a new file, copying everything line by line from the old file and making changes on the way, then saving the new file as the old file's name. This becomes a problem when I only have read permission on the file. First I get OSErrno 30, not letting me delete the old file at the end of the writing. If I change my open command to 'r+', it simply says the file is not found. I don't have root access. Does anyone know of a workaround to this problem?
EDIT: Thanks for the responses. I guess that IS the intended behavior of a read-only file... | 0 | python,text | 2015-06-19T13:16:00.000 | 1 | 30,939,165 | Yes, you are right: you can only read where you cannot write. | 0 | 253 | true | 0 | 1 | Changing a read-only file | 30,939,390
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | I am interested to implement LZ algorithms for the compression of ECG signal and want to optimized the code with relevant to Micro controller.
So that it would Entropy efficient and take less time to compress and decompress the ECG signal. I am totally stuck how I go through to achieve this. I am open to any programming language.
I have searched the internet for the source codes and I found a very long code which is difficult to understand in a short period of time.
Any suggestion...? | 0 | python,c,matlab,lzw,lz77 | 2015-06-21T02:07:00.000 | 0 | 30,960,617 | Reconsider your choice of the LZ 77/78 algorithms. ECG waves look similar but they are not binary identical so the dictionary-based compression algorithms don't provide ideal results.
Complicated algorithms can hardly be expressed in few lines of code. | 1 | 339 | false | 0 | 1 | LZ 77, 78 algorithm for ECG Compression | 30,961,062 |
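To make the dictionary-growth point concrete, a toy LZ78 encoder sketch (not microcontroller-optimized; on ECG data you would normally put a predictor/delta stage in front of it):

```python
def lz78_encode(s):
    dictionary = {"": 0}
    phrase = ""
    out = []
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            out.append((dictionary[phrase], ch))  # (dict index, new symbol)
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        out.append((dictionary[phrase], ""))
    return out

print(lz78_encode("abababab"))  # similar patterns -> growing dictionary hits
```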
3 | 4 | 1 | 1 | 6 | 1 | 0.049958 | 0 | I am interested into programming with different languages besides Ti-Basic (like Java, C, and Python) on my Ti-84 plus calculator. Does my calculator support this, and if not, are there any calculators on the market that would be able to do this? Thanks in advance!
(The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out some algorithms on the go that come in mind.)
It doesn't have to be a calculator, just something cheap and programmable and something I can carry around in my hand. | 0 | java,python,c,calculator,ti-basic | 2015-06-23T03:13:00.000 | 0 | 30,993,221 | You would need a compiler that will translate whatever language you're writing in (in the case of Java, an implementation of the JVM as well) to the assembly used by the calculator's CPU, it's probably not likely you will find an easy to use solution as calculators like the TI-84 are pretty archaic. | 0 | 3,809 | false | 0 | 1 | Multiple Language Programming on Ti-Calculator | 30,993,515 |
3 | 4 | 1 | 1 | 6 | 1 | 0.049958 | 0 | I am interested into programming with different languages besides Ti-Basic (like Java, C, and Python) on my Ti-84 plus calculator. Does my calculator support this, and if not, are there any calculators on the market that would be able to do this? Thanks in advance!
(The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out some algorithms on the go that come in mind.)
It doesn't have to be a calculator, just something cheap and programmable and something I can carry around in my hand. | 0 | java,python,c,calculator,ti-basic | 2015-06-23T03:13:00.000 | 0 | 30,993,221 | The TI-84 Plus CE Python allows you to code in Python but it is a barebones implementation. But it has been pretty useful for me. | 0 | 3,809 | false | 0 | 1 | Multiple Language Programming on Ti-Calculator | 69,366,466 |
3 | 4 | 1 | 3 | 6 | 1 | 1.2 | 0 | I am interested into programming with different languages besides Ti-Basic (like Java, C, and Python) on my Ti-84 plus calculator. Does my calculator support this, and if not, are there any calculators on the market that would be able to do this? Thanks in advance!
(The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out some algorithms on the go that come in mind.)
It doesn't have to be a calculator, just something cheap and programmable and something I can carry around in my hand. | 0 | java,python,c,calculator,ti-basic | 2015-06-23T03:13:00.000 | 0 | 30,993,221 | After a little research, I found some hand-held "pocket" devices.
The Palm m500 has a JVM to program java on. There apparently was a website that had an SDK for C, but the website was removed.
In regards to calculators:
TI-82, 83, 84, 85, 86, and related models all support TI-BASIC and z80 ASM.
TI-92, Voyage 200, TI-89, and related models all support TI-BASIC, C, and 68000 ASM.
TI-nspire supports TI-BASIC and Lua.
HP 50g supports RPL (User and System), ARM ASM, Saturn ASM, and C.
HP 49, 48G, or 48S, which support Saturn ASM and RPL. | 0 | 3,809 | true | 0 | 1 | Multiple Language Programming on Ti-Calculator | 31,085,450 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am doing some research of spam detection on Twitter where my program is dynamic enough which can built a tree of datastructure of metadata of user and his tweets just taking a parameter as Screen_name or Twitter id but collecting legitimate user name and spammer name is a manual task. (If there is any other way please suggest me.) | 0 | python,twitter | 2015-06-23T14:13:00.000 | 0 | 31,005,233 | Use of Stream API could help you. You can collect the real time information there and using some clustering algorithms and data mining techniques can solve it. | 0 | 73 | false | 0 | 1 | Is there any way to get current legitimate user list from Twitter in 5000 to 10000 | 31,005,344 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | I'm taking a python course, I save exercise scripts in /this/is/where/I save/exercises/exercise.py.
Now whenever I type python in terminal it immediately gives me this:
IOError: [Errno 2] No such file or directory: '/this/is/where/I save/exercises/exercise.py'
I know it's not found since I deleted the file. But why is python running that script whenever it runs? It's annoying.
INFO
Linux OS
Python2 and Python3 both show same error | 0 | python,python-2.7 | 2015-06-23T22:57:00.000 | 1 | 31,014,858 | Thanks, @JoranBeasley, @barunsthakur, @PadraicCunningham, and all.
I had PYTHONSTARTUP set in .bashrc.
May help forgetful people in the future. | 0 | 64 | false | 0 | 1 | Typing 'python' in terminal runs a python script and spits error | 31,015,074 |
2 | 3 | 0 | -1 | 0 | 1 | -0.066568 | 0 | I'm taking a python course, I save exercise scripts in /this/is/where/I save/exercises/exercise.py.
Now whenever I type python in terminal it immediately gives me this:
IOError: [Errno 2] No such file or directory: '/this/is/where/I save/exercises/exercise.py'
I know it's not found since I deleted the file. But why is python running that script whenever it runs? It's annoying.
INFO
Linux OS
Python2 and Python3 both show same error | 0 | python,python-2.7 | 2015-06-23T22:57:00.000 | 1 | 31,014,858 | Python has a special script that is run on startup. On my platform it is located at /usr/lib/python2.5/site-packages/sitecustomize.py IIRC. You may want to check that file for any script calls to that directory. Also, if you are on a linux machine you could check /etc/bashrc or /etc/profile.d. If that doesn't help, try to update your question with more specific information. | 0 | 64 | false | 0 | 1 | Typing 'python' in terminal runs a python script and spits error | 31,014,929 |
1 | 3 | 0 | 7 | 5 | 0 | 1.2 | 0 | Help, I screwed up.
I have a somewhat complex python script that is currently running in a putty window to a Ubuntu server.
I accidentally overwrote the script using another putty window, so the copy on the hard drive is now gone, but the script is still running from memory in the first window.
This happened before I had a chance to run my backups for this folder.
Is there a way I can get the script, (that is currently running) from the memory in the first putty window?
I haven't stopped the script, my guess is once I stop it it will be gone forever.
Can I send it to a background process somehow (some hot key,) then glean the script from a memory dump or something. I assume something like this would have to happen from the actual window from which the script is running.
It would be nice if I could get the .py back, I heard Python compiles the script before running, if that is the case, the human readable part may be gone.
Sigh, it has been a stressful day.
Thanks for any help, Mark. | 0 | python,recovery | 2015-06-24T00:13:00.000 | 1 | 31,015,556 | As far as I know, Python doesn't keep the source in memory, and the method presented in the comment only keeps executables alive, not scripts. Dumping the memory of the program probably allows you to get to the bytecode, but I have no idea how much effort that might require.
Instead, I'd first try a non-Python-specific approach that I've successfully used to recover Python sources that I've accidentally deleted. This assumes the filesystem is ext2/3/4 and that you have root access.
The first step (in any recovery) is obviously to try to avoid writing any files on the system so as to not overwrite the data that you are looking for. On a home system I'd probably personally remount the partition as read-only if possible to avoid any more writes going through. I've heard other people recommend just quickly pulling the plug, which might prevent the OS/disk cache from being written to the disk and perhaps save you some additional data (or even prevent the deletion) if you're really quick about it. On a remote system, neither of these is probably a good idea (unless the data is really important and you can get the disk shipped to you or something) since the system might not like it if something goes read-only all of a sudden.
The second step is to execute debugfs /dev/sdXY, where /dev/sdXY is the partition that the deleted file was on. In the prompt, say blocks /path/to/the/directory/the/removed/file/was/in. Then, give the blocks command paths to other existing files in the directory. Now, exit the program by saying quit, and hope that the block numbers you saw were close to each other. If the directory is old and the block numbers scattered, start with a recent file's block numbers. (That is, one that has been last modified as close in time to the removed file's last modification time as possible) We will try to scan the contents of the partition near the other files in the directory under the assumption that the files was stored close to them. Execute dd if=/dev/sdXY bs=4096 skip=BLOCKNUMBER count=COUNT |grep -C APPROXIMATE_LINE_COUNT_OF_THE_REMOVED_FILE WORD, where BLOCKNUMBER is a number somewhere before the first block number, COUNT is some suitable number of blocks to search and WORD is a word that was contained in the source file. If you don't get anything, try fishing near another file's blocks. If time is not an issue, and you can think of a string that only occurred in the removed file (so you don't get too many false positives), you might just skip all that and scan the whole disk with grep -a WORD -C LINECOUNT /dev/sdXY.
Another method (that you should try before the other one) that probably won't work for you since IIRC recent versions of Ubuntu (and probably other systems too) by default configure the kernel to block access to /dev/mem, is trying to scan the memory for the file. Just do grep -a WORD -C LINECOUNT /dev/mem to scan memory instead of the partition. (This can also save your day if you've written a long text in a form field and you misclick and your browser empties the field) | 0 | 17,971 | true | 0 | 1 | Recover Python script from memory, I screwed up | 31,015,993 |
1 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | The texts in the pdf files are text formats, not scanned. PDFMiner does not support python3, is there any other solutions? | 0 | pdf,python-3.x,pdf-parsing,pdfminer | 2015-06-24T10:11:00.000 | 0 | 31,023,793 | For python3, you can download pdfminer as:
python -m pip install pdfminer.six | 0 | 3,008 | false | 0 | 1 | PDF text extract with Python3.4 | 52,716,999 |
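Once pdfminer.six is installed, recent versions ship a high-level helper; a minimal sketch:

```python
from pdfminer.high_level import extract_text

text = extract_text("document.pdf")
print(text[:500])
```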
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have used python to write script for Photoshop using versions from CS6 to CC 2014 and my scripts have worked flawlessly. Last week I updated my Photoshop to CC 2015 and my scripts stopped working. I got error messages about missing attributes. I thought this was due to CC 2015 having changed something about the COM interface. When I checked the available attributes, I noticed that only a small subset of all the attributes was avilabe.
I then reinstalled the CC 2014 but the problem did not disappear, I still see just a small subset of the attributes, all the rest gave the same error message. I noticed the splash screen had changed from my previous version of CC 2014 which may mean that some changes has been made in my newer CC 2014 as well.
I can still run the script on my old machine with CS6.
I have of course run makepy prior to using the new Ps version.
I have tried two approaches: win32com and comtypes. Neither works. I get different sets of exported methods for the win32com and comtypes approaches which is surprising.
I am very dependent on my scripts and would like to find a solution.
If you are using Photoshop CC 2014 with python, I recommend you don't upgrade until this problem is solved. | 0 | python,photoshop | 2015-06-24T11:20:00.000 | 0 | 31,025,169 | It seems this is due to a change in the Photoshop API.
You may want to report it to them and get the right feedback.
Since most Stack Overflow users might be more "code-oriented", it is likely that you won't get an effective answer. | 0 | 2,211 | false | 0 | 1 | Python and Photoshop | 31,025,935
1 | 2 | 0 | 4 | 1 | 0 | 1.2 | 0 | I have a bash script to automate few things I do. The bash calls 2 python scripts, If I run the bash script normally, everything runs, no errors what so ever. I set up a cron job to Automate this and when I checked the logs I noticed the python scripts don't run at all. It gives me the following error.
python: can't open file 'movefiles.py': [Errno 2] No such file or directory
python: can't open file 'create_log_shimmer.py': [Errno 2] No such file or directory
Both these files exist and run when calling the bash script directly. | 0 | python,bash,unix,crontab | 2015-06-24T11:31:00.000 | 1 | 31,025,392 | The working directory under cron is different from the directory where you run the script directly.
Make your bash script use absolute paths for the Python script files.
Or make the bash script change directory to where the Python scripts live before invoking them. | 0 | 1,804 | true | 0 | 1 | Crontab, python script fails to run | 31,025,417
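A hedged alternative inside the Python scripts themselves, so relative paths keep working no matter where cron starts them:

```python
import os

# Switch to this script's own directory before touching relative paths.
os.chdir(os.path.dirname(os.path.abspath(__file__)))
```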
1 | 1 | 0 | 0 | 2 | 0 | 0 | 1 | Im using tweepy to search for a keyword in all tweets in the last 5 minutes. But I couldn't find a way to do this. I saw the since_id and max_id arguments, but they only work if I know the tweet_id before 5 minutes. | 0 | python,json,tweepy | 2015-06-24T13:40:00.000 | 0 | 31,028,239 | There is no specific GETfeature that would allow that.
What you'll need to do is create a search for that keyword with GET search/tweets, use the since_id and max_id like you have, and look at created_at in the JSON to filter again by time.
The problem with the previous step is that you're limited to 180 tweets /15 minutes. Another solution is to use the streaming API since it'll give you recent information most are within 5 minutes and you'll be able to filter by keywords as well. | 0 | 3,078 | false | 0 | 1 | Using tweepy: how to retrieve all tweets within past 5 minutes | 31,028,615 |
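A sketch of that search-then-filter approach with an older tweepy API (api.search was renamed search_tweets in tweepy 4.x); credentials are placeholders, and timestamps from the API are UTC:

```python
import datetime
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=5)
recent = [t for t in api.search(q="keyword", count=100)
          if t.created_at >= cutoff]
print(len(recent), "tweets in the last 5 minutes")
```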
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 1 | I prepare some test suites for an e-commerce web site, so i use Selenium2Library which requires a running browser on a display. I am able to run these test on my local machine but i had to run them on remote server which does not have an actual display. I tried to use xvfb to create a virtual display but it did not worked, tried all solutions on some answers here but nothing changed.
So I saw the pyvirtualdisplay library for Python, but it seems helpful only for tests written in Python. I'd like to know if I am able to run test suites that I wrote in Robot Framework (which are .txt formatted and can be run via pybot) via Python, so I can use pyvirtualdisplay?
Sorry about my English, thanks for your answers... | 0 | python,selenium,robotframework,xvfb | 2015-06-25T08:59:00.000 | 0 | 31,045,708 | Yes there is with Xvfb installed.
In short:
/usr/bin/Xvfb :0 -screen 0 1024x768x24&
export DISPLAY=:0
robot your_selenium_test | 0 | 2,600 | false | 1 | 1 | Is there anyway to run robot framework tests without a display? | 36,692,169 |
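And the pyvirtualdisplay route the question asks about, as a hedged sketch wrapping the same Xvfb idea from Python:

```python
import subprocess
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 768))
display.start()
try:
    subprocess.call(["pybot", "your_selenium_test.txt"])
finally:
    display.stop()
```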
1 | 1 | 0 | 5 | 5 | 1 | 1.2 | 0 | i have a scenario where some unknown exceptions may be raised during program execution and i can't except most of them and i want every time any exception will raise an email should send to me as exceptions cause program to terminate if not properly catch!
I have read that Python provides the atexit module, but it does not work with exceptions. So my question is: is there any way to make atexit work with exceptions, so that whenever an exception is raised and the program terminates, it sends me a mail?
thanks | 0 | python | 2015-06-25T20:48:00.000 | 0 | 31,060,517 | Look at sys.excepthook. As its name suggests, it's a hook into the exception. You can use it to send you an email when exceptions are raised. | 0 | 2,151 | true | 0 | 1 | how to use atexit when exception is raised | 31,061,329 |
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I allow myself to write to you because I am stuck on something in Salt.
I made a bash script that adds a host to my Zabbix monitoring server. It works perfectly when I run the .sh directly.
The idea is that I want to automate this configuration through Salt. When I do a highstate, the state that contains the script has to run on the master before the minion, because my login authentication is in the bash script.
Is there a special configuration for this? Do you have any ideas on how to do this kind of setup? According to my research, a salt-runner could be used for this, but I do not know if that is a good approach or not.
In anticipation of your return, I wish you a good weekend. | 0 | python,git,salt-stack | 2015-06-26T13:07:00.000 | 1 | 31,073,993 | If you need the highstate on the minion to cause something to occur on the master, then you are going to want to look into using Salt's Reactor (which is designed to do exactly this kind of multi-machine stuff). | 0 | 905 | true | 0 | 1 | Run script bash on saltstack master before minion | 31,254,893
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I allow myself to write to you because I am stuck on something in Salt.
I made a bash script that adds a host to my Zabbix monitoring server. It works perfectly when I run the .sh directly.
The idea is that I want to automate this configuration through Salt. When I do a highstate, the state that contains the script has to run on the master before the minion, because my login authentication is in the bash script.
Is there a special configuration for this? Do you have any ideas on how to do this kind of setup? According to my research, a salt-runner could be used for this, but I do not know if that is a good approach or not.
In anticipation of your return, I wish you a good weekend. | 0 | python,git,salt-stack | 2015-06-26T13:07:00.000 | 1 | 31,073,993 | Run a minion on the same box as your master; then you can run the script on your master's minion and then on the other server. | 0 | 905 | false | 0 | 1 | Run script bash on saltstack master before minion | 31,083,687
1 | 1 | 0 | 4 | 2 | 0 | 1.2 | 0 | In Eclipse, I have the following Console output
Logfile: File "C:\temp2\file1.log", line 1
Testimplementierung: File "A:\TestSafety\file2.py", line 222
Both paths are shown as hyperlinks.
When I click these hyperlinks in Eclipse Kepler, the files are openend in the Python Text Editor (this is what I want).
When I click these hyperlinks in Eclipse Luna or Mars, it works for the second file. For the first file, I get a dialog "The definition was found at: C:\temp2\file1.log (which cannot be opened because it is a compiled extension)"
What's going wrong here?
Note: A:\TestSafety is my Eclipse Project. C:\temp2 is outside of my Eclipse workspace. Somebody told me I should add a "Link to existing source" into my Eclipse project, which seemed to work for a file, now it is broken again and I don't know why. | 0 | python,eclipse,pydev | 2015-06-29T08:53:00.000 | 0 | 31,111,488 | It seems I've found the solution:
Open Window -> Preferences, go to PyDev -> Editor -> Code Style -> File Types, look for "Valid source files (comma-separated)" and append ", log".
The file extensions listed in this filed are evaluated by FileTypesPreferencesPage.java:getDottedValidSourceFiles(), which is called by PythonPathHelper.java:isValidSourceFile(), which is called by PyGoToDefinition.java:doOpen(), which is the method raising the error message.
Now my *.log file opens when I click on a hyperlink to it in the Text-Editor. | 0 | 914 | true | 0 | 1 | Eclipse PyDev error message "compiled extension" | 34,672,189 |
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I have a few tests written in Python with the unittest module. The tests work properly, but in Jenkins, even if a test fails, the build containing this test is still marked as successful. Is there a way to check the output of a Python test and return the needed result? | python,jenkins,python-unittest | 2015-07-02T11:40:00.000 | 0 | 31,183,654 | When you publish the unit test results in the post-build section (if you aren't already, you should), you set the thresholds for failure.
If you don't set thresholds, the build will only fail if running the tests returns a non-zero exit code.
To always fail the build on any unit test failure, set all failure thresholds to zero.
Note that you can also set thresholds for skipped tests as well. | 0 | 1,706 | true | 0 | 1 | How to make jenkins trigger build failure if tests were failed | 31,184,389 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I just wrote a small python script that uses BeautifulSoup in order to extract some information from a website.
Everything runs fine whenever the script is run from the command line. However, when run from a crontab, the server returns this error:
Traceback (most recent call last):
File "/home/ws/undwv/mindfactory.py", line 7, in
from bs4 import BeautifulSoup
ImportError: No module named bs4
Since I do not have any root access to the server, BeautifulSoup was installed at the user directory: $HOME/local/lib/python2.7/site-packages
I suppose the crontab does not look for modules in the user directory. Any ideas on how to solve that? | 0 | python,cron,beautifulsoup,crontab | 2015-07-02T12:52:00.000 | 1 | 31,185,207 | ~/.local paths (populated by pip install --user) are available automatically, i.e., it is enough if the cron job belongs to the corresponding user.
To configure an arbitrary path, you could use the PYTHONPATH environment variable in the crontab, as sketched below. Avoid modifying sys.path inside your script. | 0 | 1,207 | false | 0 | 1 | How do I enable local modules when running a python script as a cron tab? | 31,189,359
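For illustration, a crontab with the environment variable set could look like this (the username and schedule are hypothetical; note that cron does not expand variables such as $HOME in these assignments, so spell the path out):
PYTHONPATH=/home/youruser/local/lib/python2.7/site-packages
0 * * * * /usr/bin/python /home/youruser/myscript.py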
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I recently installed HiveMQ on an Ubuntu machine and everything works fine. Being new to Linux (I am more of a Windows guy), I am stuck on the following questions.
I started HiveMQ with the command ./bin/run.sh. A window opens and confirms that HiveMQ is running. Great! I started this through PuTTY, and when I close PuTTY, HiveMQ also stops. How do I make HiveMQ run all the time?
I am using HiveMQ for my IoT projects (Raspberry Pi). I know how to subscribe and publish to the HiveMQ broker from Python, but what confuses me is: should I be running the Python program continuously to make this work? Assuming I need to trigger 2+ GPIO pins on the Pi, can I write one program and keep it running by making it subscribe to 2+ topics for trigger events?
Any help is greatly appreciated.
Thanks | 0 | python,gpio,messagebroker,iot,hivemq | 2015-07-03T13:32:00.000 | 1 | 31,208,102 | Start HiveMQ with the following: ./bin/run.sh &
Yes, it is possible to subscribe to two topics from the same application, but you need to register separate subscriptions (or message handlers) within your Python application. | 0 | 257 | false | 0 | 1 | HiveMQ and IoT control | 31,220,724
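On the Pi side, one long-running script can indeed watch several topics. Here is a rough sketch using the paho-mqtt client (a library choice not specified in the answer; the broker address, topics, and GPIO helpers are placeholders), and note that a single client with multiple subscriptions also works:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    if msg.topic == "home/led1":
        toggle_pin_1()   # hypothetical GPIO helper
    elif msg.topic == "home/led2":
        toggle_pin_2()   # hypothetical GPIO helper

client = mqtt.Client()
client.on_message = on_message
client.connect("broker-host", 1883)                   # hypothetical broker address
client.subscribe([("home/led1", 0), ("home/led2", 0)])
client.loop_forever()   # blocks; the script keeps running and reacts to messages

Also note that ./bin/run.sh & alone may still stop when the SSH session closes; prefixing it with nohup is a common safeguard.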
1 | 2 | 1 | 1 | 2 | 1 | 0.099668 | 0 | I have a C# application and I want to run a Python script, passing some arguments, and I also want to return some values from Python. Should I make a .dll file of my Python script, or is there another way?
I can't use IronPython because I am unable to import my Python project's libraries in IronPython.
thanks | c#,python | 2015-07-03T17:37:00.000 | 0 | 31,211,941 | Convert your Python script to an executable using py2exe, and call it from C# using Process.
Another way is to execute the command "python program.py" from your C# program. But you have to make sure that the environment variable for Python is set. | 0 | 339 | false | 0 | 1 | Executing Python Script from C# (ironPython is not an option) | 31,212,458
Instead of having to run a Python script, get an error saying "ImportError: No module named aaa" because that module isn't installed on my system, install it, run the script again, and then maybe get the same kind of error for another module, is there any way to discover which modules aren't installed on my system and install all the missing ones required by the script at once? | python | 2015-07-04T05:45:00.000 | 0 | 31,217,356 | Add all your dependencies to a single text file, say requirements.txt, and every time you run your program on a new system, just do:
pip install -r requirements.txt
Most code editors, like PyCharm, do this for you on the first run.
You can do a pip freeze > requirements.txt to get all the installed/required packages. | 0 | 5,506 | false | 0 | 1 | How to install all imports at once? | 31,217,400 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I'm running a series of test cases in multiple files, but I want to run the prerequisite setup and cleanup only once throughout the run. Please let me know if there is a way to do it. | python,pytest,nosetests,python-unittest,unittest2 | 2015-07-06T17:15:00.000 | 0 | 31,251,808 | py.test's session-scoped fixtures and their finalization should help you.
You can use conftest.py to hold your fixture, as in the sketch below. | 0 | 472 | false | 0 | 1 | Is there a way to run tests prerequisite once and clean up in the end in whole unit test run | 31,251,843
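A sketch of what that conftest.py could contain, assuming a reasonably recent pytest (the setup/cleanup helpers are hypothetical):

# conftest.py
import pytest

@pytest.fixture(scope="session", autouse=True)
def one_time_environment():
    do_prereq_setup()      # hypothetical: runs once before the first test
    yield
    do_cleanup()           # hypothetical: runs once after the last test

autouse=True applies the fixture to every test without having to request it explicitly.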
1 | 2 | 0 | 8 | 6 | 0 | 1 | 0 | I want to create a .rar file by passing file paths of the files to be archived.
I have tried the rarfile package. But it doesn't have a 'w' option to write to the rarfile handler.
Is there any other way? | 0 | python,rar | 2015-07-07T07:03:00.000 | 0 | 31,261,879 | os.system('rar a <archive_file_path> <file_path_to_be_added_to_archive>')
This can also be used to achieve it. | 0 | 9,232 | false | 0 | 1 | How to create a .rar file using python? | 31,288,178
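A slightly safer variant than os.system is subprocess, which avoids shell quoting issues and raises on failure; this still assumes the rar binary is installed and on PATH, and the file names are examples:

import subprocess
subprocess.check_call(['rar', 'a', 'archive.rar', 'file1.txt', 'file2.txt'])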
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am using the Red Hat Linux platform.
I was wondering why, when I use a Python script inside crontab set to run every 2 minutes, it won't work, even though when I monitor the crond logs using
tail /etc/sys/cron it shows that it called the script. I tried to add the path of Python [I am using Python 2.6, so the path would be /usr/bin/python2.6].
The crontab -e entry [tried both user and root, same problem]:
*/2 * * * * /usr/bin/python2.6 FULLPATH/myscript.py | 0 | python,linux,crontab,redhat | 2015-07-07T16:48:00.000 | 1 | 31,274,717 | Thank you all, guys, but I did a little research and found a solution. First you have to test sudo python to see if it works with the module. If not, you have to make an alias for sudo and put it inside /etc/bashrc [to make it a system-wide alias]: alias sudo='sudo env PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN'
Then you have to change the crontab to call a script that assigns these values to the variables, using source /the script && /usr/bin/python script.py | 0 | 902 | true | 0 | 1 | How to modify crontab to run python script? | 31,286,520
1 | 1 | 0 | 2 | 1 | 1 | 0.379949 | 0 | We use Python Nose for unit testing our GUI app components and application logic. Nose runs all tests in one process, not a big problem for the application logic but for a complex C++/Python lib like PyQt, this is a problem since there is "application wide" state that Qt creates, and it is hard to ensure that cleanup occurs at the right time such that every test method has a "clean Qt slate".
So I would prefer to have Nose start a separate Python process for each test method/function (or at least those that are flagged as needing this). I realize this would slow down the test suite, but the benefit should outweigh the cost. I have seen the Insulate and the Multiprocess plugins but neither does this (Insulate only starts a separate process if a crash occurs -- Multiprocess just tries to use N processes for N cores).
Any suggestions? | 0 | python,python-3.x,pyqt,nose,pyqt5 | 2015-07-07T16:59:00.000 | 0 | 31,274,926 | You can try nosetests --processes=1 --process-restartworker | 0 | 848 | false | 0 | 1 | Python Nose unit testing using separate (sequential) Python processes | 31,299,481 |
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I have code in Python that integrates C++ code using SWIG.
The C++ class has a field with the long value 1393685280, which is exposed to Python.
The problem is that when calling the getter of this field from Python, I get the int value -2901282016 instead.
How can I fix it?
Thanks | 0 | python,type-conversion,swig | 2015-07-09T10:15:00.000 | 0 | 31,314,526 | Well, sorry to give everyone the run-around on this one.
We found the solution and it's completely unrelated to SWIG.
The problem emanated from the ODBC driver for the Vertica DB we were using, which behaved unpredictably when binding the SQL result to a long variable.
Tomer and Inbal | 0 | 103 | false | 0 | 1 | swig converts a positive long to negative int | 31,378,499 |
1 | 4 | 0 | 2 | 0 | 0 | 0.099668 | 0 | We are using Robot Framework to execute automated test cases.
Could anyone please guide me on writing a script for e-mail notification of test results?
Note:
I have e-mail server details.
Regards,
-kranti | 0 | python-2.7,robotframework | 2015-07-09T14:19:00.000 | 0 | 31,320,273 | you can use jenkins to run your Robot Framework Testcases. There is a auto-generated mail option in jenkins to send mail with Test results. | 0 | 9,967 | false | 1 | 1 | Email Notification for test results by using Python in Robot Framework | 33,709,506 |
1 | 1 | 0 | 0 | 3 | 0 | 1.2 | 1 | I'm using jira-python to automate a bunch of tasks in Jira. One thing that I find weird is that jira-python takes a long time to run. It seems like it's loading or something before sending the requests. I'm new to python, so I'm a little confused as to what's actually going on. Before finding jira-python, I was sending requests to the Jira REST API using the requests library, and it was blazing fast (and still is, if I compare the two). Whenever I run the scripts that use jira-python, there's a good 15 second delay while 'loading' the library, and sometimes also a good 10-15 second delay sending each request.
Is there something I'm missing with Python that could be causing this issue? Is there any way to keep a Python script running as a service so it doesn't need to 'load' the library each time it's run? | 0 | python,jira-rest-api | 2015-07-09T15:26:00.000 | 0 | 31,321,872 | @ThePavoIC, you seem to be correct. I notice MASSIVE changes in speed if Jira has been restarted and re-indexed recently. Scripts that would take a couple minutes to run would complete in seconds. Basically, you need to make sure Jira is tuned for performance and keep your indexes up to date. | 0 | 923 | true | 0 | 1 | Jira python runs very slowly, any ideas on why? | 31,656,989
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 0 | I'm writing some threaded python code in vim. When I run my tests, with
:! py.test test_me.py
Sometimes they hang and cannot be killed with ctrl-C. So I have to background vim (actually the shell the tests are running in) and pkill py.test. Is there a better way to kill the hanging test suite?
I tried mapping :map ,k:! pkill py.test but this doesn't work since while the tests are running my input is going to the shell running the test, not vim.
EDIT:
I'm looking for a way to kill the test process that is quicker than ctrl-Z, pkill py.test, fg <cr> to return to editing. Ideally just a hotkey. | 0 | python,shell,unix,vim | 2015-07-13T04:56:00.000 | 1 | 31,375,628 | When you do :! in Vim, you effectively put Vim into background and the running process, in this case py.test, gets the focus. That means you can't tell Vim to kill the process for you since Vim is not getting keystrokes from you.
Ctrl-Z puts Vim into background while running py.test because Vim is the parent process of py.test. Thus the shell goes through the chain then puts all children as well as the parent into background.
I would suggest that you open another terminal window and do all the housekeeping chores there. | 0 | 371 | false | 0 | 1 | kill a shell created by vim when ctrl-C doesn't work | 31,391,726 |
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | ERPNext + Frappe: I need to change the front-end layout (footer & header). I tried to change base.html (frappe/templates/base.html) but nothing happened. This is probably because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | 0 | python,frameworks,erpnext,frappe | 2015-07-13T07:01:00.000 | 0 | 31,377,196 | bench clear-cache will clear the cache. After doing this, refresh and check. | 0 | 1,663 | false | 1 | 1 | Customize frappe framework html layout | 31,379,032 |
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | ERPNext + Frappe: I need to change the front-end layout (footer & header). I tried to change base.html (frappe/templates/base.html) but nothing happened. This is probably because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | 0 | python,frameworks,erpnext,frappe | 2015-07-13T07:01:00.000 | 0 | 31,377,196 | It seems you're not in your bench folder.
When you create a new bench with, for example, bench init mybench, it creates a new folder: mybench.
All bench commands must be run from this folder.
Could you try to run bench --help in this folder? You should see the clear-cache command. | 0 | 1,663 | false | 0 | 1 | Customize frappe framework html layout | 58,808,805
3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | ERPNext + Frappe: I need to change the front-end layout (footer & header). I tried to change base.html (frappe/templates/base.html) but nothing happened. This is probably because the HTML files need to be compiled somehow. Does anyone have info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | 0 | python,frameworks,erpnext,frappe | 2015-07-13T07:01:00.000 | 0 | 31,377,196 | If anyone stumbles on this: the command needed is bench build. That will compile any assets related to the build.json file in the public folder. (NOTE: you usually have to create build.json yourself.) | 0 | 1,663 | false | 1 | 1 | Customize frappe framework html layout | 68,402,268
2 | 2 | 0 | 4 | 4 | 0 | 0.379949 | 1 | I have set up an Import.io bulk extract that works great with say, 50 URLs. It literally zips through all of them in seconds. However, when I try to do an extract of 40,000 URLs, the extractor starts very fast for the first thousand or so, and then progressively keeps getting slower every incremental URL. By 5,000 it literally is taking about 4-5 seconds per URL.
One solution that seems to work is breaking them into chunks of 1,000 URLs at a time and doing a separate bulk extract for each. However, this is very time consuming, and requires splicing back together all of the data at the end.
Has anyone experienced this, and if so do they have a more elegant solution?
Thanks,
Mike | 0 | python,import.io | 2015-07-14T02:36:00.000 | 0 | 31,396,712 | One slightly less elegant solution would be to create a crawler. And before you run it insert the the 10k URLs in the "where to start crawling" box.
Under advanced options set the crawl depth to zero, that way you will only get the pages you put in the where to start crawling input box.
That should do the trick. Plus the crawler has a bunch of other options, like wait between pages, concurrent pages, etc. | 0 | 155 | false | 0 | 1 | Import.io bulk extract slows down when more URLs are in list | 31,427,417
2 | 2 | 0 | 0 | 4 | 0 | 0 | 1 | I have set up an Import.io bulk extract that works great with say, 50 URLs. It literally zips through all of them in seconds. However, when I try to do an extract of 40,000 URLs, the extractor starts very fast for the first thousand or so, and then progressively keeps getting slower every incremental URL. By 5,000 it literally is taking about 4-5 seconds per URL.
One solution that seems to work is breaking them into chunks of 1,000 URLs at a time and doing a separate bulk extract for each. However, this is very time consuming, and requires splicing back together all of the data at the end.
Has anyone experienced this, and if so do they have a more elegant solution?
Thanks,
Mike | 0 | python,import.io | 2015-07-14T02:36:00.000 | 0 | 31,396,712 | Mike, would you mind trying again?
We have worked on the Bulk Extract; it should now be slightly slower at the beginning, but more constant throughout.
Possibly 40k is still too many, in which case you may try to split the list, but I did run 5k+ in a single run.
Let me know how it goes! | 0 | 155 | false | 0 | 1 | Import.io bulk extract slows down when more URLs are in list | 32,215,417 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 1 | is there any way to constantly "listen" to a website and run some code when it updates?
I'm working on earthquake data; specifically, parsing earthquake data from a site that updates and lists earthquake details in real time.
So far, my only (and clunky) solution has been to use Task Scheduler to run the script every 30 minutes, which of course can lag behind an event by up to 29 minutes, depending on when it happens within the 30-minute gap between runs.
I also thought of using some Twitter API, since the site also has an automated Twitter account tweeting details every time an earthquake happens, but again this would require constantly "listening" to the Twitter stream via Python as well.
would appreciate help, thanks. | 0 | python,python-2.7,twitter | 2015-07-14T17:01:00.000 | 0 | 31,412,919 | is there any way to constantly "listen" to a website and run some code when it updates?
If the site offers a feed or stream of updates, use it.
However, if you are looking to scrape a page and trigger code on differences, then you need to poll the site like you are already doing (clumsily) with Task Scheduler; a sketch follows. | 0 | 381 | true | 0 | 1 | constantly "listen" to website using python | 31,413,075
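A bare-bones polling sketch along those lines, assuming the requests library is available (the URL, interval, and update handler are placeholders):

import hashlib
import time
import requests

URL = "http://example.com/earthquakes"   # hypothetical feed page
last_digest = None
while True:
    digest = hashlib.md5(requests.get(URL).content).hexdigest()
    if last_digest is not None and digest != last_digest:
        handle_update()                   # hypothetical: parse and react to new data
    last_digest = digest
    time.sleep(60)                        # poll every minute instead of every 30 minutes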
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | Hi there I'm trying to deploy my python app using Travis CI but I'm running into problems when I run the "travis setup heroku" command in the cmd prompt.
I'm in my project's root directory, there is an existing ".travis.yml" file in that root directory.
I've also installed ruby correctly and travis correcty because when I run:
"ruby -v" I get "ruby 2.2.2p95 (2015-04-13 revision 50295) [x64-mingw32]"
"travis -v" I get "1.7.7"
When I run "travis setup heroku" I get this message "The system cannot find the path specified" then prompts me for a "Heroku API token:"
What's the issue? | 0 | python,ruby,windows,heroku,travis-ci | 2015-07-15T04:55:00.000 | 1 | 31,421,793 | If you hadn't had Heroku Toolbelt setup to the $PATH environment variable during installation, here are some steps to check:
Check if Heroku toolbelt is set in PATH variable. If not, cd to your Heroku toolbelt installation folder, then click on the address bar and copy it.
Go to the Control Panel, then click System and Advanced System Protection.
Go to Environment Variables, then look for $PATH in the System Variables
After the last program in the variable, put a ; then paste in your Heroku CLI folder and click OK. (This requires cmd to be restarted manually)
Login to Heroku CLI
grab the token key from heroku auth:token
run travis setup heroku if the setup goes smoothly, you shouldn't get the command not found and prompt you for heroku auth key. It will ask that you want to encrypt the auth key (highly recommend) and verify the information you provided with the toolbelt and Travis CLI.
commit changes
you should be able to get your app up and running within your tests. | 0 | 456 | false | 1 | 1 | travis setup heroku command on Windows 7 64 bit | 31,445,471 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | In case sentry sends lots of similar messages, is there a way to stop it?
We have a lot of clients and the sentry messages are basically all the same so sentry spams me. | 0 | python,sentry | 2015-07-15T06:56:00.000 | 0 | 31,423,456 | If you're talking about notifications you can disable them per-account or entirely on a project via the integration.
That said, if all the messages are the same, you should look into why they're not grouping. A common case is that you're using a logging integration and there's a variable in the log message itself. | 0 | 1,004 | true | 0 | 1 | Stop sentry from sending messages | 31,424,387
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'd like to have advertisements in an Android app I've written and built using PGS4A. I've done my research and all, but there don't seem to be any online resources that explain how to do that just yet. I don't have much knowledge of Java either, which is clearly why I've written it in Python. Has anyone found a way to achieve that? If not, how difficult would it be to convert the project files into an Android Studio (or even an Eclipse) project? (So one could then just implement the ads following the Java AdMob documentation found everywhere.)
Thank you in advance. | java,android,python,android-studio,admob | 2015-07-15T21:18:00.000 | 0 | 31,441,307 | To access the already-implemented Java version you can use Pyjnius. I tried to use it for something else and I didn't succeed. Well, I gave up pretty quickly because it wasn't necessary for my project.
Otherwise, I am afraid, you will have to implement it yourself from scratch.
I never heard about a finished solution for your problem.
If you have succeeded in using PGU, it wouldn't be so hard.
If not, well, I wish you luck, and put your solution online for others.
There is an Eclipse plug-in for Python. I think that Android studio does not support PGS4A. Never needed it. Console is the queen. | 0 | 199 | false | 1 | 1 | Admob Ads with Python Subset For Android (PGS4A) | 31,441,510 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have written a Python script that is designed to run forever. I load the script into a folder that I made on my remote server, which is running Debian Wheezy 7.0. The code runs, but it will only run for 3 to 4 hours, then it just stops, and I do not have any log information on why it stopped. When I come back and check the running processes, it's not there. Is this a problem with where I am running the Python file from? The script simply has a while loop and writes to an external CSV file. The file runs from /var/pythonscript. The folder is a custom folder that I made. There is no error that I receive, and the only way I know how long the code runs is by the timestamp on the CSV file. I run the .py file by SSHing to the server and running sudo python scriptname. I would also like to know the best place in the Debian directory tree to run Python files from, and any limitations concerning that. Any help would be much appreciated. | 0 | python,debian,remote-server,directory-structure | 2015-07-16T07:30:00.000 | 1 | 31,447,971 | Basically you're stuffed.
Your problem is:
You have a script, which produces no error messages, no logging, and no other diagnostic information other than a single timestamp, on an output file.
Something has gone wrong.
In this case, you have no means of finding out what the issue was. I suggest any of the following:
either adding logging or diagnostic information to the script.
Contacting the developer of the script and getting them to find a way of determining the issue.
Delete the evidently worthless script if you can't do either option 1, or 2, above, and consider an alternative way of doing your task.
Now, if the script does have logging, or other diagnostic data, but you delete or throw them away, then that's your problem and you need to stop discarding this useful information.
EDIT (following comment).
At a basic level, you should print to either stdout, or to stderr, that alone will give you a huge amount of information. Just things like, "Discovered 314 records, we need to save 240 records", "Opened file name X.csv, Open file succeeded (or failed, as the case may be)", "Error: whatever", "Saved 2315 records to CSV". You should be able to determine if those numbers make sense. (There were 314 records, but it determined 240 of them should be saved, yet it saved 2315? What went wrong!? Time for more logging or investigation!)
Ideally, though, you should take a look at the logging module in python as that will let you log stack traces effectively, show line numbers, the function you're logging in, and the like. Using the logging module allows you to specify logging levels (eg, DEBUG, INFO, WARN, ERROR), and to filter them or redirect them to file or the console, as you may choose, without changing the logging statements themselves.
When you have a problem (crash, or whatever), you'll be able to identify roughly where the error occured, giving you information to either increase the logging in that area, or to be able to reason what must have happened (though you should probably then add enough logging so that the logging will tell you what happened clearly and unambiguously). | 0 | 85 | false | 0 | 1 | Where to run python file on Remote Debian Sever | 31,448,678 |
1 | 2 | 0 | 1 | 1 | 1 | 1.2 | 0 | I am new to Python. I read the unittest docs. In the documentation for the tearDown() method, I found the following lines:
"This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state."
What does this statement convey? Can you help me understand it with the help of some good example where the internal state can create havoc?
Thanks in advance.
EDIT :
I got some answers, but they are quite simple. I need some examples where state is involved, like tests involving a database and so on. | python,python-unittest | 2015-07-16T12:08:00.000 | 0 | 31,453,751 | From the OP:
"This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state."
The first thing this conveys is that you can be sure that tearDown is called whatever happens in your test methods. Consequently, this means that you should not have any teardown code in your test method; you should move it into the tearDown method.
However, if you do have an exception in your test method, this may mean that the state of your test instance may be different on different test runs and the teardown method must take this into account, or you must structure your code so that it will always work.
An example may be that your test code involves the creation of tables in the database. If you have an exception, then maybe not all the tables are created, so tearDown should make sure it doesn't try to drop non-existent tables. However, the better way might be for setUp to start a transaction and tearDown to roll back the transaction, as in the sketch below. | 0 | 353 | true | 0 | 1 | teardown method in unittest python | 31,455,230
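A small runnable illustration of the tearDown-runs-anyway point, using an in-memory SQLite database (the schema is made up for the example):

import sqlite3
import unittest

class RecordTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(':memory:')
        self.conn.execute('CREATE TABLE t (id INTEGER)')

    def tearDown(self):
        # Runs even if the test raised halfway through, so it must not
        # assume the insert below ever happened.
        self.conn.rollback()
        self.conn.close()

    def test_insert(self):
        self.conn.execute('INSERT INTO t VALUES (1)')
        count = self.conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
        self.assertEqual(count, 1)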
1 | 1 | 0 | 2 | 1 | 1 | 0.379949 | 0 | I run a Python script from a PHP script, and I want PHP to take the return value from Python (the return value is a list). I use the exec() function to run Python.
How can I do that? Thanks. | php,python,return | 2015-07-16T18:59:00.000 | 0 | 31,462,392 | You can print your result in JSON format and the exec() function will return the string. This string can be used to retrieve your value using any JSON decoder. | 0 | 361 | false | 0 | 1 | Take return value from python script from php | 31,463,247
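On the Python side, that can be as simple as (the list is an example value):

import json
result = ['a', 'b', 'c']
print(json.dumps(result))

In PHP, exec('python script.py', $output) collects the printed lines, and json_decode($output[0], true) turns the first line back into an array.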
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Can I give a user access to a sentry project but not send him all the emails?
Sometimes I want to forward the error message to our mobile developers, so they can see the parameters, but they don't need to get all the other reports. | 0 | python,sentry | 2015-07-17T00:48:00.000 | 0 | 31,466,828 | I found it - found even two different ways:
I can make projects public (under project settings) so everyone (with a sentry account) can access it.
I can give a users access to a project and that user has to opt-out of emails by going to his Account (top right) and then notifications. | 0 | 162 | false | 1 | 1 | Sentry limit notifications to certain users | 31,466,945 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Can I give a user access to a sentry project but not send him all the emails?
Sometimes I want to forward the error message to our mobile developers, so they can see the parameters, but they don't need to get all the other reports. | 0 | python,sentry | 2015-07-17T00:48:00.000 | 0 | 31,466,828 | The best way to deal with this is to make the organization "open membership". Members can then choose to join or leave teams, and opt-out of project-specific notifications when they want. | 0 | 162 | false | 1 | 1 | Sentry limit notifications to certain users | 31,466,990 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have this Python setup using objects that can perform specific tasks over the I2C protocol. Rather than having the script create objects and run a single task when run from the command line, is there a way to have the objects 'stay alive' in the background and somehow feed the program new data from the command line?
The general idea is to have something running on a remote PC and use SSH to send commands (new data) over to the program.
One idea I had was to have the program constantly check (infinite loop) for a data file containing a set of tasks to perform and run those when it exists. But it seems like it could go awry if I were to sftp a new data file over because the program could be reading the one that already exists and cause undesirable effects.
I'm sure there are many better ways to go about a task like this; any help will be appreciated. | python,linux,ssh,sftp | 2015-07-17T15:49:00.000 | 1 | 31,479,763 | You can try to use a client-server or sockets approach. Your remote PC has a server running, listening for commands or data coming in. Your client or local computer can send commands on the port and IP that the remote PC is listening to. The server then parses the incoming data, looks at whatever commands you have defined, and executes them accordingly; a sketch follows. | 0 | 49 | false | 0 | 1 | Running python in the background and feeding data | 31,479,959
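A minimal sketch of that server side, using only the standard library (the port, command name, and I2C helper are placeholders):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 9000))                     # hypothetical port
srv.listen(1)
while True:
    conn, addr = srv.accept()
    command = conn.recv(1024).strip()
    if command == b'read':
        conn.sendall(run_i2c_read())     # hypothetical helper reusing the long-lived objects
    conn.close()

Since the objects live for the lifetime of this loop, they stay initialized between commands, and the remote side can simply connect over the LAN (or SSH port forwarding) and send a command string.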
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | I understand that we use CI to test software after any changes are made to it. It will kick off unit tests and system-level tests as soon as someone checks in.
Now, where do the unit and functional test scripts we wrote fit in here?
Am I right that CI won't have any built-in tests (unit, functional, system)? "We" write all those test scripts but have CI kick them off? | python,unit-testing,continuous-integration,functional-testing | 2015-07-17T21:57:00.000 | 0 | 31,485,333 | CI is intended to be a system which provides a framework for all your unit tests and functional tests. The CI system will kick off the unit tests, build and run functional tests, and take appropriate actions as specified by you. | 0 | 113 | true | 0 | 1 | Continuous Integration vs Software automation test engineer | 31,485,412
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I want to debug a c++ program using gdb. I use the pi and the py commands to evaluate python commands from within gdb, which works fine when I invoke gdb from the command line. However, when I invoke gdb from within emacs using M-x gdb and then gdb -i=mi file_name, the following errors occur:
the pi command correctly opens an interactive Python shell, but any input to this shell yields errors like this:
File "stdin", line 1
-interpreter-exec console "2"
SyntaxError: invalid syntax
the py command works correctly for a single command (like py print 2+2), but not for multiple commands
I can get around those problems by starting gdb with gud-gdb, but then I don't have the support for gdb-many-windows. Maybe the problem is caused by the prompt after typing pi, which is no longer (gdb) but >>> instead? | python,emacs,gdb,gud | 2015-07-20T10:58:00.000 | 1 | 31,514,741 | I am going to go out on a limb and say this is a bug in gud mode. The clue is the -interpreter-exec line in the error.
What happens here is that gud runs gdb in a special "MI" ("Machine Interface") mode. In this mode, commands and their responses are designed to be machine-, rather than human-, readable.
To let GUIs provide a console interface to users, MI provides the -interpreter-exec command, which evaluates a command using some other gdb "interpreter" (which doesn't mean what you may think and in particular has nothing to do with Python).
So, gud sends user input to gdb, I believe, with -interpreter-exec console .... But, in the case of a continuation line for a python command, this is the wrong thing to do.
I tried this out in Emacs and I was able to make it work for the python command when I spelled it out -- but py, pi, and python-interactive all failed. | 0 | 810 | false | 0 | 1 | gdb within emacs: python commands (py and pi) | 31,729,095 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | Pretty new to all this so I apologize if I butcher my explanation. I am using python scripts on a server at work to pull data from our Oracle database. Problem is whenever I execute the script I get this error:
Traceback (most recent call last):
File "update_52w_forecast_from_oracle.py", line 3, in
import cx_Oracle
ImportError: libnnz11.so: cannot open shared object file: No such file or directory
But if I use:
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
If I run this before executing the script, it runs fine, but only for that session. If I log back in again, I have to re-set the path. Is there anything I can do to make this permanent? I'm trying to use cron as well to automate the script once a week. It was supposed to run early Monday morning, but it didn't.
EDIT: Just had to add the path to my .bashrc file in the root directory. | 0 | python,oracle | 2015-07-20T17:30:00.000 | 1 | 31,522,754 | Well, that was pretty simple. I just had to add it to the .bashrc file in my root directory. | 0 | 841 | true | 0 | 1 | cx_Oracle, and Library paths | 31,525,508 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | Using Python 2.7, I need to convert the following curl command to execute in Python.
curl -b /tmp/admin.cookie --cacert /some/cert/location/serverapache.crt --header "X-Requested-With: XMLHttpRequest" --request POST "https://www.test.com"
I am relatively new to Python and am not sure how to use the urllib library, or whether I should use the requests library instead. The curl options are especially tricky for me to convert. Any help will be appreciated. | python,curl | 2015-07-23T08:28:00.000 | 0 | 31,582,012 | Can you stay on the command line?
If yes, try the Python lib named "pexpect". It's pretty useful; it lets you run commands as if on a terminal, from a Python program, and interact with the terminal! | 0 | 205 | false | 0 | 1 | curl and curl options to python conversion | 31,584,263
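For reference, the curl invocation from the question maps fairly directly onto the requests library the asker mentioned (this is an alternative to pexpect; mapping -b to MozillaCookieJar assumes the cookie file is in the usual curl/Netscape format):

import cookielib        # Python 2.7; use http.cookiejar on Python 3
import requests

jar = cookielib.MozillaCookieJar('/tmp/admin.cookie')   # curl -b
jar.load()
resp = requests.post(
    'https://www.test.com',
    headers={'X-Requested-With': 'XMLHttpRequest'},     # curl --header
    cookies=jar,
    verify='/some/cert/location/serverapache.crt')      # curl --cacert
print(resp.status_code)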
1 | 1 | 0 | 0 | 2 | 1 | 0 | 1 | I'm looking to back up a subreddit to disk. So far, it doesn't seem to be easily possible with the way that the Reddit API works. My best bet at getting a single JSON tree with all comments (and nested comments) would seem to be storing them inside of a database and doing a pretty ridiculous recursive query to generate the JSON.
Is there a Reddit API method which will give me a tree containing all comments on a given post in the expected order? | 0 | python,json,reddit | 2015-07-24T00:23:00.000 | 0 | 31,600,249 | The number of comments you get from the API has a hard limit, for performance reasons; to ensure you're getting all comments, you have to parse through the child nodes and make additional calls as necessary.
Be aware that the subreddit listing will only include the latest 1000 posts, so if your target subreddit has more than that, you probably won't be able to obtain a full backup anyways. | 0 | 429 | false | 0 | 1 | Get a JSON tree of all comments of a post? | 31,689,325 |
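For completeness, the praw library (not named in the answer) wraps exactly this parse-the-children loop; a sketch for recent praw versions, with placeholder credentials and post id:

import praw

reddit = praw.Reddit(client_id='...', client_secret='...',
                     user_agent='subreddit-backup')     # hypothetical credentials
submission = reddit.submission(id='abc123')             # hypothetical post id
submission.comments.replace_more(limit=None)            # keep resolving "more comments" stubs
tree = submission.comments                              # nested Comment objects, in order
flat = submission.comments.list()                       # or a flattened list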
1 | 3 | 0 | 1 | 5 | 0 | 0.066568 | 0 | We have been using Django for a long time. Some old code is not being used now. How can I find which code is not being used any more and remove it?
I used coverage.py with unit tests, which works fine and shows which parts of the code are never used, but the test coverage is very low. Is there any way to use it with the WSGI server to find which code has never served any web requests? | python,django,code-coverage | 2015-07-24T03:43:00.000 | 0 | 31,601,820 | On a well-tested project, coverage would be ideal, but with some untested legacy code I don't think there is a magical tool.
You could write a big test loading all the pages and run coverage to get some indication.
Cowboy style:
If it's not critical code and you're fairly sure it's unused (i.e. not handling payments, etc.), comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you get a notification). | 0 | 1,412 | false | 1 | 1 | How to find unused code in Python web site? | 31,603,369
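For the coverage part, the usual incantation looks like this (the runserver variant only records what you actually click through, so treat its report as a lower bound):
coverage erase
coverage run manage.py test          # or: coverage run manage.py runserver
coverage report -m                   # -m lists the line numbers that never ran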
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | What code should I use on the Raspberry Pi, using Python, to implement this? I have done the client part. I have installed Apache/PHP/MySQL on my Pi. What framework should I use here? Also, what part am I missing? I need to read/write to sensors through my Pi. I have made an API hosted on a site, which I have successfully tested in my app. Now I need to implement this on my Pi remotely.
P.S. I don't know anything about my Pi. Should I write all my code in Python (I prefer Java)? What libraries should I be using? | php,android,python,apache,raspberry-pi | 2015-07-25T10:36:00.000 | 0 | 31,625,552 | I'll just add that to use Python with Apache you'll have to enable mod_wsgi or mod_python, or just write a standard CGI or FastCGI script.
I think bottle/flask supports all methods.
There is a JVM build somewhere that was compiled with hardware floating-point support, and that version should work a little better.
But, well, Raspberry Pi and Java aren't exactly friends.
I admit it is a little odd, because we know that Java works perfectly fine on ARM (e.g. Android), but the current state on the RasPi is what it is. | 0 | 80 | false | 0 | 1 | I want to make a remote webserver using raspberry pi contrplled by android app | 31,626,041
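As a taste of the Flask option the answer alludes to, shown here with the built-in development server rather than Apache (the sensor helper and port are placeholders):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/sensor')
def sensor():
    return jsonify(value=read_sensor())   # hypothetical GPIO/sensor helper

app.run(host='0.0.0.0', port=5000)        # reachable from the Android app on the LAN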
2 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path | 0 | python,path,terminal,pycharm | 2015-07-27T02:54:00.000 | 1 | 31,644,298 | I came across this error too in PhpStorm, to fix it simply navigate through to...
Preferences > Tools > Terminal
Under 'Application Settings' click [...] at the end of Shell path and open the .bash profile.
This should grey out the Shell path to '/bin/bash'
You can now launch Terminal. | 0 | 1,300 | false | 0 | 1 | Default values for PyCharm Terminal? | 43,356,885 |
2 | 2 | 0 | 1 | 2 | 0 | 1.2 | 0 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path | 0 | python,path,terminal,pycharm | 2015-07-27T02:54:00.000 | 1 | 31,644,298 | The default value is the value of the $SHELL environment variable, which is normally /bin/bash. | 0 | 1,300 | true | 0 | 1 | Default values for PyCharm Terminal? | 31,661,642 |
2 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | 0 | python-2.7,module | 2015-07-27T04:08:00.000 | 0 | 31,644,834 | To make this work consistently, you can put the module into the lib folder inside the python folder, then you can import it regardless of what directory you are in | 0 | 31 | false | 0 | 1 | Importing a python module I have created | 36,118,903 |
2 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | 0 | python-2.7,module | 2015-07-27T04:08:00.000 | 0 | 31,644,834 | You could create pth file with path to your module and put it into your Python site-packages directory. | 0 | 31 | false | 0 | 1 | Importing a python module I have created | 36,118,957 |
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I'm looking to be able to create an executable with py2exe or something similar that takes information from an excel sheet and returns a word file.
Since my coworkers are technically challenged, I need to create an executable that will take the work out of it for them.
Two questions here:
I have to be able to import something into the python script that represents DataNitro. What module represents DataNitro?
Is this legal? I won't be using a DataNitro license on every machine this exe will run on, besides my own, so if it's even possible, is this a bit shady?
Thank you.
P.S. If I'm not able to do this I will probably have to use xlrd,xlwt,etc. | 0 | python,excel,datanitro | 2015-07-28T19:12:00.000 | 0 | 31,685,165 | The best way to give non-technical users access to DataNitro is to copy the VBA interface: hook the script up to an Excel button and have users press that button to run it. (There's no difference between running a Python script with DataNitro and running VBA code from the user's point of view.)
Each person using the script would need a DataNitro license.
There's no way to make DataNitro work with py2exe, unfortunately.
Source: I'm one of the DataNitro developers. | 0 | 108 | true | 0 | 1 | Can I integrate Datanitro into an executable? | 31,688,486 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | So I have a set of points in a city (say houses or residences) and I want to find the shortest distance between these points and a set of candidate points for a store. I am looking for the best store locations that will minimize the distance to all of the houses in the set. So I will iteratively move the candidate store points and then recompute the distances between each store and house (again using Dijkstra's algorithm). Because of the sheer volume of calculations, I cannot keep hitting the database for each iteration of the optimization algorithm.
I have used pgrouting many times and this would work, however it would be too slow because of the large number of points and the fact that it has to search the disk each time.
Is there a tool where I can load some small OpenStreetMap city map in memory and then calculate the shortest routes in memory? I would need something fast, so preferably in C or Python, but any language is okay as long as it works. | python,postgresql,gis,pgrouting | 2015-07-28T19:19:00.000 | 0 | 31,685,279 | Here's an idea: get the lat/longs of the house and all the stores.
Calculate the geohash of all points (house and stores) with maximum precision (12) and check if the geohash of any store matches that of the house. If it doesn't, calculate the geohash with lower precision (11), then rinse and repeat till you get a store (maybe multiple; I'll get into that later) that matches the geohash of the house.
This is a fuzzy-distance calculation. It will work great and with minimal processing time. But it will fail if you get two or more stores with the same geohash at some precision. So this is what I recommend you do:
Run the geohash loop with decreasing precision. Break when the geohash of one or more stores matches the geohash of the house.
IF (more than one store matches) go for a plain distance calculation, find the closest store, and return it
ELSE return the one store that matches the geohash
Advantage of this method: it changes your strict requirements into a fuzzy probability problem. If you get a single store, great. If you don't, at least you reduce the number of candidates for distance calculation.
Disadvantage of this method: what if all stores land in the same geohash? We introduce the same complexity here.
You'll be banking on the chance that not all (or most) stores come under the same geohash. Realistically speaking, the disadvantage is only a disadvantage in corner cases. So overall you should see a performance improvement. | 0 | 1,546 | false | 0 | 1 | Way to calculate road distances in a city very quickly | 31,685,995
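A sketch of that decreasing-precision loop, using the pygeohash package as one possible geohash implementation (the coordinates and the store_coords dict are made up for the example):

import pygeohash as pgh

house = pgh.encode(40.7128, -74.0060, precision=12)
store_hashes = {name: pgh.encode(lat, lon, precision=12)
                for name, (lat, lon) in store_coords.items()}

for precision in range(12, 0, -1):
    matches = [n for n, gh in store_hashes.items()
               if gh[:precision] == house[:precision]]
    if matches:
        break   # one match: done; several: fall back to plain distance on this short list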
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 1 | I am working for a company that wants me to test and cover every piece of code I have.
My code works properly from the browser. There are no errors, no faults.
Given that my code works properly in the browser and my system is responding properly, do I need to do testing? Is it compulsory to do testing? | python,django,testing | 2015-07-29T05:47:00.000 | 0 | 31,692,090 | Whether it's compulsory depends on the organization you work for. If others say it is, then it is. Just check how tests are normally written in the company and follow existing examples.
(There are a lot of ways a Django-based website can be tested; different companies do it differently.)
Why write tests?
Regression testing. You checked that your code is working, does it still work now? You or someone else may change something and break your code at some point. Running test suite makes sure that what was written yesterday still works today; that the bug fixed last week wasn’t accidentally re-introduced; that things don’t regress.
Elegant code structuring. Writing tests for your code forces you to write code in certain way. For example, if you must test a long 140-line function definition, you’ll realize it’s much easier to split it into smaller units and test them separately. Often when a program is easy to test it’s an indicator that it was written well.
Understanding. Writing tests helps you understand what are the requirements for your code. Properly written tests will also help new developers understand what the code does and why. (Sometimes documentation doesn’t cover everything.)
Automated tests can test your code under many different conditions quickly; sometimes it's not humanly possible to test everything by hand each time a new feature is added.
If there’s the culture of writing tests in the organization, it’s important that everyone follows it without exceptions. Otherwise people would start slacking and skipping tests, which would cause regressions and errors later on. | 0 | 33 | false | 1 | 1 | is testing compulsory if it works fine on realtime on browser | 31,694,536 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which signal, if any, is sent to applications at the time of shutdown/restart. I assumed SIGTERM, but my tests (which are not very good tests) seem to disagree with this. | 0 | python,linux,signals,shutdown | 2015-07-30T19:01:00.000 | 1 | 31,731,980 | When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then, I believe, will try SIGKILL to force them to close if they're not responding to SIGTERM.
Please note, however, that your script may not receive the SIGTERM - init may send this signal to the shell it's running in instead and it could kill python without actually passing the signal on to your script.
Hope this helps! | 0 | 214 | false | 0 | 1 | Handling a linux system shutdown operation "gracefully" | 31,732,143 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a Python file which starts 2 threads: thread 1 is a daemon process, thread 2 does other stuff. What I want is: if thread 2 is stopped, thread 1 should also stop. I was advised to do this with a cron job/runit. I am completely new to these, so can you please help me achieve this goal?
Thanks | python-2.7,cron | 2015-07-31T05:45:00.000 | 1 | 31,738,875 | The best way out is to create this daemon as a daemon child thread, so it automatically gets killed when the parent process exits; a sketch follows. | 0 | 48 | true | 0 | 1 | Killing a daemon process through cron job/runnit | 31,787,179