Available Count (int64: 1–31) | AnswerCount (int64: 1–35) | GUI and Desktop Applications (int64: 0–1) | Users Score (int64: -17–588) | Q_Score (int64: 0–6.79k) | Python Basics and Environment (int64: 0–1) | Score (float64: -1–1.2) | Networking and APIs (int64: 0–1) | Question (stringlengths: 15–7.24k) | Database and SQL (int64: 0–1) | Tags (stringlengths: 6–76) | CreationDate (stringlengths: 23–23) | System Administration and DevOps (int64: 0–1) | Q_Id (int64: 469–38.2M) | Answer (stringlengths: 15–7k) | Data Science and Machine Learning (int64: 0–1) | ViewCount (int64: 13–1.88M) | is_accepted (bool: 2 classes) | Web Development (int64: 0–1) | Other (int64: 1–1) | Title (stringlengths: 15–142) | A_Id (int64: 518–72.2M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I want to check whether a given email address really exists on an SMTP server. Is it possible to check this? If it is possible, please give me a suggestion for how to do it. | 0 | python,email,smtp | 2015-03-16T09:57:00.000 | 0 | 29,073,907 | Short of sending an email and having someone respond to it, it is impossible to verify that an email address exists.
You can verify that the SMTP server has a whois address, but that's it. | 0 | 1,440 | false | 0 | 1 | How to check if an email exists or not in Python | 29,073,968
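The answer above is worth illustrating: the usual trick is an RCPT TO probe over smtplib, but many servers accept every address (catch-all) or block such probes, so a 250 reply is only a weak hint, never proof. A minimal sketch, with a hypothetical SMTP host:
import smtplib

def probe_address(smtp_host, address):
    # RCPT TO probe; a 250 reply means "accepted for delivery", NOT "mailbox exists"
    server = smtplib.SMTP(smtp_host, 25, timeout=10)
    try:
        server.helo()
        server.mail('probe@example.com')
        code, _ = server.rcpt(address)
        return code == 250
    finally:
        server.quit()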
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | A C program is compiled and converted into a .dll using the Cygwin compiler. In a Python script it can be loaded using ctypes and its functions called successfully. But when I import that Python script as a library into the Robot Framework automation tool, it can't load that .dll file and the test case fails.
Is a Cygwin-created DLL file not supported by Robot Framework?
Can anyone suggest another method for this? | 0 | python,c,dll,cygwin,robotframework | 2015-03-16T12:51:00.000 | 0 | 29,077,291 | Given our discussion in the comments: you can't mix and match like this. The format in which Cygwin builds a DLL is different from the format Windows expects a DLL in. You need to build and run all in one environment. | 0 | 406 | true | 0 | 1 | Robot Framework can't load .dll file created by Cygwin into Python script | 29,102,050
1 | 2 | 0 | 4 | 8 | 0 | 0.379949 | 0 | Imagine the following three step process:
I use sympy to build a large and somewhat complicated expression (this process costs a lot of time).
That expression is then converted into a lambda function using sympy.lambdify (also slow).
Said function is then evaluated (fast)
Ideally, steps 1 and 2 are only done once, while step 3 will be evaluated multiple times. Unfortunately the evaluations of step 3 are spread out over time (and different python sessions!)
I'm searching for a way to save the "lambdified" expression to disk, so that I can load and use them at a later point. Unfortunately pickle does not support lambda functions. Also my lambda function uses numpy.
I could of course create a matching function by hand and use that, but that seems inefficient and error-prone. | 0 | python,lambda,sympy | 2015-03-16T14:54:00.000 | 0 | 29,079,923 | The above works well.
In my case with Python 3.6, I needed to explicitly indicate that the saved and loaded files were binary. So modified the code above to:
dill.dump(f, open("myfile", "wb"))
and for reading:
f_new=dill.load(open("myfile", "rb")) | 0 | 2,216 | false | 0 | 1 | Save/load sympy lambdifed expressions | 49,578,768 |
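Pulling the fragments above together, a minimal end-to-end sketch (the expression here is a trivial stand-in for the expensive one; dill's recurse setting helps it serialize lambdified functions):
import dill
import sympy

dill.settings['recurse'] = True

x = sympy.symbols('x')
expr = sympy.sin(x) ** 2 + sympy.cos(x)        # stand-in for the slow-to-build expression
f = sympy.lambdify(x, expr, modules='numpy')

with open('myfile', 'wb') as fh:               # step 2, done once
    dill.dump(f, fh)

with open('myfile', 'rb') as fh:               # step 3, in any later session
    f_new = dill.load(fh)
print(f_new(0.5))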
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | Hi. So I installed Python 2.7.9 and the twitter follow bot from GitHub. I don't really know what I'm doing wrong, but when I try to use a command I get an error. Using this: from twitter_follow_bot import auto_follow_followers_for_user results in this:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
from twitter_follow_bot import auto_follow
File "twitter_follow_bot.py", line 21, in <module>
from twitter import Twitter, OAuth, TwitterHTTPError
ImportError: cannot import name Twitter
Any idea what I did wrong? I have never used Python before, so if you could explain it to me step by step it would be great. Thanks | 0 | python,twitter,bots,twitter-follow | 2015-03-17T23:06:00.000 | 0 | 29,111,388 | In from twitter import Twitter, OAuth, TwitterHTTPError, the name "Twitter" does not exist in "twitter". Try re-downloading, or double check whether that name is even defined within "twitter". | 0 | 249 | false | 0 | 1 | twitter python script generates error | 29,111,422
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I have a text file of size 2.5 GB which contains hash values of some standard known files. My task is to find the hash of all files on my file system and compare it with the hashes stored in the text file. If a match is found I need to print Known on the screen and if no match is found then I need to print unknown on the screen.
Thus the approach for the task is quite simple but the main issue is that the files involved in the process are very huge.
Can somebody suggest how to accomplish this task in an optimized way?
Should I import the text file containing hashes into a database? If yes, please provide some link which could help me accomplish it.
Secondly, what algorithm can I use for searching to speed up the process?
My preferred language is Python. | 0 | python,hash | 2015-03-19T04:05:00.000 | 0 | 29,137,043 | Search on StackOverflow for code to recursively list full file names in Python
Search on StackOverflow for code to return the hash checksum of a file
Then list files using an iterator function. Inside the loop:
Get the hash checksum of the current file in the loop
Iterate through every hash. Inside the loop:
Compare with the current file's checksum
Algorithms? Don't worry about it. If you iterate through each line of the file, it will be fine. Just don't load it all at once, and don't load it into a data structure such as a list or dictionary because you might run out of memory. | 0 | 62 | true | 0 | 1 | Perform searching on very large file programatically in Python | 29,137,224 |
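A rough sketch of those steps, assuming the known-hash list holds one hex digest per line (the hash algorithm, SHA-256 here, is an assumption):
import hashlib
import os

def file_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, 'rb') as fh:
        for block in iter(lambda: fh.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

def is_known(digest, hash_file='hashes.txt'):
    # stream the huge hash list line by line; never load it whole into memory
    with open(hash_file) as fh:
        return any(line.strip().lower() == digest for line in fh)

for root, dirs, files in os.walk('/'):
    for name in files:
        path = os.path.join(root, name)
        try:
            digest = file_digest(path)
        except (IOError, OSError):
            continue                  # unreadable file, skip it
        print(path, 'Known' if is_known(digest) else 'Unknown')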
1 | 5 | 0 | 0 | 2 | 0 | 0 | 0 | I am very interested in using the new service recently released for secret management within Azure. I have found a few example guides walking through how to interact with key vault via powershell cmdlets and c#, however haven't found much at all in regards to getting started with using the rest API.
The thing I am particularly confused by is the handling of OAuth2 with Active Directory. I have written an OAuth2 application listener, built a web application with an AD instance, and can now generate an "access_token". It is very unclear to me how to proceed beyond this, though, as I seem to consistently receive a 401 HTTP response code whenever attempting to use my access_token to perform a Key Vault API call.
Any guides / tips on using azure key vault with python would be greatly appreciated! | 0 | python,azure,oauth-2.0,identity,app-secret | 2015-03-19T16:17:00.000 | 0 | 29,149,806 | When Key Vault returns a 401 response, it includes a www-authenticate header containing authority and resource. You must use both to get a valid bearer token. Then you can redo your request with that token, and if you use the same token on subsequent requests against the same vault, it shouldn't return a 401 until the token expires.
You can know the authority and resource in advance, but it's generally more robust to prepare your code to always handle the 401, especially if you use multiple vaults.
Be sure to only trust a www-authenticate header received over a valid SSL connection, otherwise you might be a victim of spoofing! (A sketch of this round-trip follows below.) | 0 | 5,475 | false | 0 | 1 | Interacting with Azure Key Vault using python w/ rest api | 30,823,957
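A sketch of that 401 round-trip with the requests library; the vault name, secret name and api-version value are hypothetical examples, and the client credentials come from the AD application mentioned in the question:
import re
import requests

def get_secret(vault, name, client_id, client_secret):
    url = 'https://%s.vault.azure.net/secrets/%s?api-version=2015-06-01' % (vault, name)
    resp = requests.get(url)                       # expected 401...
    challenge = resp.headers['WWW-Authenticate']   # ...carrying authority + resource
    authority = re.search(r'authorization="([^"]+)"', challenge).group(1)
    resource = re.search(r'resource="([^"]+)"', challenge).group(1)
    token = requests.post(authority + '/oauth2/token', data={
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
        'resource': resource,
    }).json()['access_token']
    resp = requests.get(url, headers={'Authorization': 'Bearer ' + token})
    resp.raise_for_status()
    return resp.json()['value']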
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have a program written in Perl, and I want to execute this program in MATLAB. In this program I am calling an XML file, but on execution, I am getting XML: DOM error as Error using perl (line 80) System error: Can't locate XML/DOM.pm in @INC, etc. How can I get out of this error?
The program executes fine in Perl on its own... | 0 | python,xml,matlab,xml-parsing | 2015-03-19T16:22:00.000 | 0 | 29,149,900 | MATLAB brings its own Perl installation, located at fullfile(matlabroot, 'sys\perl\win32\bin\'). The additional Perl modules are probably missing there. Navigate to this folder and install the requirements using ppm | 0 | 56 | false | 0 | 1 | XML DOM ERROR when executing Python program in perl | 29,150,179
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I want my users to be able to login using all common OpenIds. But there seems to be a forest of clauses on how to use logos from google, facebook, yahoo and twitter.
Actually I'd prefer a wider button with the text on it, but it seems that all these buttons don't have the same aspect ratio, and some of them I am not allowed to scale.
Also some pages switched to javascript buttons which seems incredibly stupid, because that causes the browser to download all the javascripts and the page looks like 1990 where you can watch individual elements loading.
So finally I surrendered and went for the stackoverflow approach of using square logos with the text "login with" next to them. But now I can't find an official source for a square yahoo icon.
The question: can you recommend any articles, or do you have any tips on how to make OpenID look uniform?
And: Have you seen an official source for a square Icon that I'm allowed to use?
PS: I'm using python-social-auth for Django. | 0 | oauth,openid,python-social-auth | 2015-03-23T11:20:00.000 | 0 | 29,208,955 | You could simply use the yahoo site's favicon. It's already squared and gets as close to official as it can get. | 0 | 21 | false | 1 | 1 | How to make most common OpenID logins look the same? | 30,144,643 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I am working on a customer care chatbot based on aiml using flaskrestful and pyaiml. My problem is when I test it locally it is working, i.e. responding from the aiml knowledge base, when I put it on openshift the python script won't find the aiml file and I don't understand why. | 0 | python,openshift,flask-restful | 2015-03-23T15:51:00.000 | 0 | 29,214,605 | In openshift, if you want your data to be persistent you have to save it in the 'OPENSHIFT_DATA_DIR'. I fixed the problem by placing the knowledge base file in this directory instead and referencing this location in my code. | 0 | 59 | false | 0 | 1 | openshift won't load aiml file | 29,459,242 |
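A minimal sketch of the OPENSHIFT_DATA_DIR fix from the answer above, with pyaiml; the .aiml file name is a hypothetical example:
import os
import aiml

data_dir = os.environ['OPENSHIFT_DATA_DIR']     # the persistent, writable location on OpenShift
kernel = aiml.Kernel()
kernel.learn(os.path.join(data_dir, 'customer_care.aiml'))
print(kernel.respond('HELLO'))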
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | Let's assume I want to set a test case status with (non critical) so that when the generated report which is displayed after execution excludes this test cases from its critical test cases... if i want to do that from the command line I would write ( pybot --noncritical ) but what if I want to do the same in RIDE tool ? | 0 | python-2.7,automated-tests,robotframework | 2015-03-23T16:52:00.000 | 0 | 29,215,850 | In the "Run" Tab, put "--noncritical your_tags" in the "Arguments" box. That should do it. | 0 | 677 | false | 0 | 1 | How to set the criticality of certain test case in RIDE? | 29,216,521 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Hello i want to install these dependencies in OpenShift for my App
yum -y install wget gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel libffi-devel libxslt libxslt-devel libxml2 libxml2-devel openldap-devel libjpeg-turbo-devel openjpeg-devel libtiff-devel libyaml-devel python-virtualenv git libpng12 libXext xorg-x11-font-utils
But I don't know how; is it through rhc? If so, how? | 0 | python,openshift | 2015-03-23T22:44:00.000 | 1 | 29,221,836 | You cannot use the "yum" command to install packages on OpenShift. What specific issue are you having? I am sure that at least some of those packages are already installed in OpenShift Online (such as wget). Have you tried running your project to see what specific errors you get about what is missing? | 0 | 461 | false | 0 | 1 | How to install dependencies in OpenShift? | 29,236,634
1 | 5 | 0 | 3 | 3 | 0 | 0.119427 | 0 | I've installed Python 2.7 (64-bit) on my PC with Win7 (64-bit) without problems, but I'm not able to run *.py scripts via the DOS shell without declaring the full Python path.
Let me explain better:
If I type D:\ myscript.py it doesn't work. The script is opened with WordPad
If I type D:\ C:\Python27 myscript.py it works and runs correctly
I tried to change the default application for *.py files via the Win7 GUI (Control Panel, etc.) but without success.
Python is not present in the list of available software, and in any case even with a manual setting I'm not able to associate python.exe with *.py files.
I've checked my environment variables but found no problem there (the Python path is declared in Path = C:\Python27\;C:\Python27\Scripts).
I've tried also to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command :
old register value "C:\Python27\python.exe" "%1"
new register value "C:\Python27\python.exe" "%1" %*
without success.
Any suggestion?
Thanks | 0 | python,windows,python-2.7 | 2015-03-24T09:42:00.000 | 1 | 29,229,273 | Here is another check to make, which helped me figure out what was going on.
I switched from the 32-bit Anaconda to the 64-bit version. I uninstalled, downloaded, then reinstalled, but several things didn't get cleaned up properly (quick launch stuff, and some registry keys). The problem on my side was that the default installation path changed, from C:\Anaconda to C:\Anaconda2.
I first tried the assoc and ftype tricks; everything was fine there. However, the HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command registry key was pointing to the old Anaconda path. As soon as I fixed this, python.exe showed up when I tried associating with "Open with" and everything went back to normal.
I also added the %* at the end in the registry key. (The same registry fix can be scripted; see the sketch below.) | 0 | 1,995 | false | 0 | 1 | Impossible to set python.exe to *.py scripts on Win7 | 36,078,931
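For reference, the registry edit described above can also be performed from Python 2.7 itself using the standard _winreg module (run elevated; the path and value mirror the answer, not an independent recommendation):
import _winreg

key = _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT,
                      r'Applications\python.exe\shell\open\command',
                      0, _winreg.KEY_SET_VALUE)
# None targets the key's default value; %* forwards script arguments
_winreg.SetValueEx(key, None, 0, _winreg.REG_SZ,
                   r'"C:\Python27\python.exe" "%1" %*')
_winreg.CloseKey(key)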
2 | 5 | 0 | 0 | 3 | 0 | 0 | 0 | i've installed py 2.7 (64bit) on my PC with Win7 (64bit) without problem but I'm not able to run *.py scripts via DOS shell without declare python full path.
Let me better explain :
If I type D:\ myscript.py it doesn't work. The script is open with wordpad
If I type D:\ C:\Python27 myscript.py it works and run correctly
I try to change the default application software for *.py file via Win7 GUI ( control pannel etc etc) but without success.
Python is not present in the list of available sw and in any case also with the manual set I'm not able to associate python.exe at *.py files.
I've checked in my environment variables but I've not found problem (python path is declared in Path = C:\Python27\;C:\Python27\Scripts).
I've tried also to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command :
old register value "C:\Python27\python.exe" "%1"
new register value "C:\Python27\python.exe" "%1" %*
without success.
Any suggestion?
Thanks | 0 | python,windows,python-2.7 | 2015-03-24T09:42:00.000 | 1 | 29,229,273 | @slv 's answer is good and helped me a bit with solving this problem. Anyhow, since I had previous installations of Python before this error occured for me, I might have to add something to this. One of the main problems hereby was that the directory of my python-installation changed.
So, I opened regedit.exe and followed these to steps:
I searched the entire registry for .py, .pyw, .pyx and .pyc (hopefully I did not forget to mention any here). Then, I radically deleted all occurrences I could find.
I searched the entire registry for my old python-installation-path (e.g. C:\Users\Desktop\Anaconda3). Then I replaced this path with my new installation path (e.g. C:\Users\Desktop\Miniconda3). Thereby, I also came across and replaced HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command which @slv mentioned.
Afterwards, it was possible again to connect a .py-file from the Open with...-menu with my python.exe. | 0 | 1,995 | false | 0 | 1 | Impossible to set python.exe to *.py scripts on Win7 | 56,543,680 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | I was trying to write a wrapper for a third party C tool using Java processbuilder. I need to run this process builder millions of times. But, I found something weird about the speed.
I already have a wrapper for this third party tool C tool for python. In python, the wrapper uses the python subprocess.check_output.
So, I ran the Java wrapper 10000 times with the same command. I also ran the Python wrapper 10000 times with the same command.
With python, my 10000 tests ran in about 0.01 second.
With java processbuilder, it ran in 40 seconds.
Can someone explain why I am getting large difference in speed between two languages?
You can try this experiment with a simple command like "time". | 0 | java,python,performance,subprocess,processbuilder | 2015-03-24T22:03:00.000 | 1 | 29,243,748 | It seems that Python doesn't spawn a subprocess, which is why it was faster.
I am sorry for the confusion.
Thank you | 0 | 1,183 | false | 1 | 1 | why is Java Processbuilder 4000 times slower at running commands than Python Subprocess.check_output | 29,289,687
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I have a working C program and now I'm embedding a python script implementing a specific function.
The question is that the argument passed into Python is a complicated (i.e., nested) C structure defined in C, and I hope the solution will be able to handle two-way communication easily:
1.Create structure in C, and pass it into Python. Let the Python do some modification.
2.Create the structure in Python. And pass it back to C.
I have been trying SWIG to generate wrappers for the structure, as well as some helper functions for Python, so that it can return nested parts of the structure and I can access the structure from Python easily.
Will this be a good solution or I may miss some very simple way of solving it? | 0 | python,c,structure,swig,mixed-programming | 2015-03-26T01:12:00.000 | 0 | 29,269,401 | In my experience, SWIG should be able to handle arbitrarily nested structs the way you'd "expect". – brunobeltran0 | 0 | 134 | false | 0 | 1 | How do I pass C structure into Python when embedding a Python module in a C program | 32,625,890 |
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 0 | bash-4.1# yum install python-devel
Loaded plugins: fastestmirror, rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
Loading mirror speeds from cached hostfile
* rpmforge: mirror.smartmedia.net.id
* webtatic-el5: uk.repo.webtatic.com
http://192.168.210.26/centos/6/updates/i386/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package python-devel.x86_64 0:2.6.6-36.el6 will be installed
--> Processing Dependency: python(x86-64) = 2.6.6-36.el6 for package: python-devel-2.6.6-36.el6.x86_64
--> Finished Dependency Resolution
Error: Package: python-devel-2.6.6-36.el6.x86_64 (centos64-x86_64)
Requires: python(x86-64) = 2.6.6-36.el6
Installed: python-2.6.6-37.el6_4.x86_64 (@centos64-updates-x86_64)
python(x86-64) = 2.6.6-37.el6_4
Available: python-2.6.6-36.el6.x86_64 (centos64-x86_64)
python(x86-64) = 2.6.6-36.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Can somebody help me with the above error I am getting?
Just to let everybody know, I am trying to install cx_Oracle on my CentOS system (CentOS release 6.4) and I got this error:
error: command 'gcc' failed with exit status 1
So, I searched and found out to install python-devel and to do that I am getting the above error. | 0 | python,linux,oracle,centos,cx-oracle | 2015-03-26T15:40:00.000 | 1 | 29,282,771 | You have a newer version of python installed than the corresponding source package you're trying to install.
You have python 2.6.6-37 installed but the latest available source package from your repos (that you can successfully connect to) is 2.6.6-36.
But it looks like the python you have installed came from your "updates" repo,
http://192.168.210.26/centos/6/updates/i386/repodata/repomd.xml, which isn't working at the moment.
If that repo also had the corresponding python-devel-2.6.6-37 package, and it worked (didn't throw a PYCURL error), you'd be fine: yum would find that and use it.
So your first step should be fixing your LAN repo / mirror. | 0 | 3,455 | false | 0 | 1 | Installing Python devel on centos | 29,283,004 |
2 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I have a django-rest-framework API, and am trying to understand how sending email from it works. Suppose I'm using django.core.mail.backends.smtp.EmailBackend as email backend to send emails. Sending of an email is quite slow, and I'm wondering, if the django main thread will be blocked somehow during that time so that the other APIs would be unusable during it? Is that true? Would it be a good call to send the email in a background process created by celery for example? | 0 | python,django,multithreading,email,celery | 2015-03-26T16:39:00.000 | 0 | 29,284,106 | I confirm the thread handling the request will be blocked until the email is sent. In a typical Django setup one thread is created per request. | 0 | 822 | false | 1 | 1 | Is django thread blocked while sending email? | 29,284,405 |
2 | 2 | 0 | 3 | 3 | 0 | 1.2 | 0 | I have a django-rest-framework API, and am trying to understand how sending email from it works. Suppose I'm using django.core.mail.backends.smtp.EmailBackend as email backend to send emails. Sending of an email is quite slow, and I'm wondering, if the django main thread will be blocked somehow during that time so that the other APIs would be unusable during it? Is that true? Would it be a good call to send the email in a background process created by celery for example? | 0 | python,django,multithreading,email,celery | 2015-03-26T16:39:00.000 | 0 | 29,284,106 | Yes. Django thread is blocked for that particular user. You might want to use Celery along with Rabbit Mq for sending mail in background. | 0 | 822 | true | 1 | 1 | Is django thread blocked while sending email? | 29,284,457 |
1 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I need to send a certain email to a lot of different addresses, one recipient at a time. It has an attachment.
So far, I've programmed this as such :
1) create a Thread object per mail-address (looping through the recipients list).
2) within each Thread object, create the MIMEMultipart() message.
3) within each Thread object, send the mail through smtplib.SMTP("smtp.gmail.com:587")
It's working fine.
The problem with this approach is that the attachment has to be attached separately for every single email. Is there a way to only attach it once ? A global MIMEMultipart() message is not possible because different threads would have to change it (to change the recipient's address). | 0 | python,multithreading,email,smtp | 2015-03-26T21:47:00.000 | 0 | 29,289,518 | The other approach is to use an outside service, such as icontact or mailchimp to manage your important messages. That can be done on a cloud based service, and can provide legal compliance. With some, you can even determine whether the mail has been read!
They might even be able to set up opt-in SMS for you to use with your clients. | 0 | 2,213 | false | 0 | 1 | Python - sending multiple emails with same attachment on separate threads | 29,312,580 |
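On the question's original efficiency concern: the base64 encoding of the attachment can be done once and the resulting MIME part reused, read-only, by every thread, so only the envelope differs per recipient. A sketch; file name, account and password are placeholders:
import smtplib
import threading
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

with open('report.pdf', 'rb') as fh:
    part = MIMEBase('application', 'octet-stream')
    part.set_payload(fh.read())
    encoders.encode_base64(part)          # the expensive step, done exactly once
    part.add_header('Content-Disposition', 'attachment', filename='report.pdf')

def send_to(recipient):
    msg = MIMEMultipart()
    msg['From'] = 'me@example.com'
    msg['To'] = recipient
    msg['Subject'] = 'Hello'
    msg.attach(MIMEText('See attached.'))
    msg.attach(part)                      # shared part, treated as read-only
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login('me@example.com', 'app-password')
    server.sendmail('me@example.com', [recipient], msg.as_string())
    server.quit()

recipients = ['a@example.com', 'b@example.com']
threads = [threading.Thread(target=send_to, args=(r,)) for r in recipients]
for t in threads: t.start()
for t in threads: t.join()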
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I just got my raspberry pi yesterday and have gone through all the setup and update steps required to enable the camera extension (which took me like 6 hrs; yes, I am a newbie).
I need to create a focus point on the camera's output and use the arrow keys to move this point around the screen.
My question is how to access the pixel addresses in order to achieve this (if that is even how to do it). Please can one of you gurus out there point me in the right direction. | 0 | python,camera,raspberry-pi | 2015-03-26T22:57:00.000 | 0 | 29,290,361 | You do realise that the Pi Camera board has a fixed focus at 1m to infinity ?
& If you want focus <1m you're gonna have to manually twist the lens, after removing the glue that fixates it in the housing. | 0 | 1,081 | false | 0 | 1 | A focus point on the raspberry pi camera | 29,363,197 |
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I just got my raspberry pi yesterday and have gone through all the setup and update steps required to enable the camera extension (which took me like 6 hrs; yes, I am a newbie).
I need to create a focus point on the camera's output and use the arrow keys to move this point around the screen.
My question is how to access the pixel addresses in order to achieve this (if that is even how to do it). Please can one of you gurus out there point me in the right direction. | 0 | python,camera,raspberry-pi | 2015-03-26T22:57:00.000 | 0 | 29,290,361 | If you need to take close-ups or if you are capturing image very close to camera lens then you need to manually set the focus. | 0 | 1,081 | false | 0 | 1 | A focus point on the raspberry pi camera | 38,924,580 |
1 | 1 | 0 | 4 | 2 | 1 | 1.2 | 0 | I have typed ":!python" to launch python shell in vim, but I can't close it.
I have tried ^Z but it has no effect. | 0 | python,shell,vim | 2015-03-27T14:11:00.000 | 0 | 29,302,706 | ^D is what you are actually looking for.
Alternatively, you can use the expression exit() or quit(). | 0 | 182 | true | 0 | 1 | How to exit from ":!python" in Vim? | 29,302,759 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | Context
My app relies on an external service for authentication; its Python API has a function authentiate_request which takes
a request instance as a parameter and returns a result dict:
if auth was successful, dict contains 3 keys:
successful: true
username: alice
cookies: [list of set-cookie headers required to remember user]
if unsuccessful:
successful: false
redirect: url where to redirect user for web based auth
Now, a call to this function is relatively expensive (it does an HTTP POST underneath).
Question
I'm new to Pyramid's security model, and I'm struggling with how to use an existing AuthenticationPolicy (or properly write my own) for my app, so that it uses my auth service and does not call its API more than once per session (in the auth-success scenario).
- write your own authentication policy for Pyramid (I haven't done this)
- write your own middleware to deal with your auth issues, and use the RemoteUserAuthenticationPolicy in Pyramid (I have done this)
For the second, you write some standard wsgi middleware, sort out your custom authentication business in there, and then write to the wsgi env. Pyramid authorization will then work fine, with the Pyramid auth system getting the user value from the wsgi env's 'REMOTE_USER' setting.
I personally like this approach because it's easy to wrap disparate apps in your middleware, and dead simple to turn it off or swap it out. While not really the answer to exactly what you asked, that might be a better approach than what you're trying. (A minimal middleware sketch follows below.) | 0 | 201 | true | 1 | 1 | How to use external Auth system in Pyramid | 32,814,226
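A minimal sketch of that middleware approach; auth_api stands in for the external service's Python API from the question, and its exact call signature is an assumption:
import auth_api   # hypothetical wrapper around the external auth service

class RemoteAuthMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # cache the result per session here to avoid repeating the expensive HTTP POST
        result = auth_api.authentiate_request(environ)
        if result['successful']:
            environ['REMOTE_USER'] = result['username']
        return self.app(environ, start_response)

# wrapped_app = RemoteAuthMiddleware(pyramid_wsgi_app), then configure Pyramid with
# pyramid.authentication.RemoteUserAuthenticationPolicy so it reads REMOTE_USER.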
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I want to create a vehicle routing optimization program. I will have multiple vehicles travelling and delivering items by finding the shortest path from A to B. At first I will simply output results. I may later create a visual representation of the program.
It has been suggested to me that I will find it easiest to do this in Python.
I have to do this task, but it seems very daunting. I am not the best programmer yet, but not a beginner either, I am good at maths and a quick learner.
Any advice on how to break down this task would be really helpful.
Should I use Python?
Any Python modules that will be particularly suited to this task? | 0 | python,optimization,routing | 2015-03-31T21:33:00.000 | 0 | 29,378,991 | I used networkx the other day, and it was superb. Very easy to use, and very quick.
So you'll need to get your data into some kind of usable format, and then run your algorithms through this.
Python is often a good choice for scripting and pulling together pieces of data and analysing them! | 0 | 1,138 | false | 0 | 1 | How to create an automated vehicle routing simulation? | 29,379,456 |
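A tiny networkx sketch of the shortest-path core the answer above recommends (toy road network; edge weights are distances):
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ('A', 'B', 4), ('A', 'C', 2),
    ('C', 'B', 1), ('B', 'D', 5),
])
route = nx.shortest_path(G, 'A', 'D', weight='weight')          # Dijkstra underneath
length = nx.shortest_path_length(G, 'A', 'D', weight='weight')
print(route, length)   # ['A', 'C', 'B', 'D'] 8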
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I've just started using python-twitter with django hosted on OpenShift and need to use Oauth. At the moment it's just on the dev server. Before I put it live, I was wondering if there's a "best" place to store my token / secret info? Right now I just have them in my views.py file but would it be safer to store them in settings.py and access them from there? | 0 | python,django,security,twitter-oauth | 2015-04-03T15:33:00.000 | 0 | 29,435,173 | Many other libraries ask you to put your API Keys in settings.py, this is also useful if you want to use them in different application within your project. | 0 | 136 | false | 1 | 1 | python-twitter - best location for Oauth keys in Django? | 29,502,136 |
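A sketch of the settings.py approach from the answer above, pulling the actual values from environment variables so they stay out of source control (the setting names are illustrative):
# settings.py
import os
TWITTER_CONSUMER_KEY = os.environ.get('TWITTER_CONSUMER_KEY', '')
TWITTER_CONSUMER_SECRET = os.environ.get('TWITTER_CONSUMER_SECRET', '')
TWITTER_ACCESS_TOKEN = os.environ.get('TWITTER_ACCESS_TOKEN', '')
TWITTER_ACCESS_SECRET = os.environ.get('TWITTER_ACCESS_SECRET', '')

# views.py
import twitter
from django.conf import settings

api = twitter.Api(consumer_key=settings.TWITTER_CONSUMER_KEY,
                  consumer_secret=settings.TWITTER_CONSUMER_SECRET,
                  access_token_key=settings.TWITTER_ACCESS_TOKEN,
                  access_token_secret=settings.TWITTER_ACCESS_SECRET)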
1 | 1 | 0 | 2 | 0 | 0 | 0.379949 | 1 | I want to help my friend to analyze Posts on Social Networks (Facebook, Twitter, Linkdin and etc.) as well as several weblogs and websites.
When it comes to the Storing the Data, I have no experience in huge data. Which one is the best for a bunch of thousand post, tweet and article per day: Database, XML file, plain text? If database, which one?
P.S.
The language that I am going to start programming with is Python. | 0 | python,mysql,xml,database | 2015-04-05T12:29:00.000 | 0 | 29,457,275 | That depends on the way you want to work with the data. If you have structured data, and want the exchange it between different programs, xml might be a good choice. If you do mass processing, plain text might be a good choice. If you want to filter the data, a database might be a good choice. | 0 | 124 | false | 0 | 1 | Storing Huge Data; Database, XML or Plain text? | 29,457,336 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | I am doing my bachelor's thesis where I wrote a program that is distributed over many servers and exchaning messages via IPv6 multicast and unicast. The network usage is relatively high but I think it is not too high when I have 15 servers in my test where there are 2 requests every second that are going like that:
Server 1 requests information from servers 3-15 via multicast. Each of 3-15 must respond; if one response is missing after 0.5 s, the multicast is resent, but only the missing servers must respond (so in most cases this is only one server).
Server 2 does exactly the same. If there are missing results after 5 retries the missing servers are marked as dead and the change is synced with the other server (1/2)
So there are 2 multicasts every second and 26 unicasts every second. I think this should not be too much?
Server 1 and 2 are running python web servers which I use to do the request every second on each server (via a web client)
The whole szenario is running in a mininet environment which is running in a virtual box ubuntu that has 2 cores (max 2.8ghz) and 1GB RAM. While running the test, i see via htop that the CPUs are at 100% while the RAM is at 50%. So the CPU is the bottleneck here.
I noticed that after 2-5 minutes (1 minute = 60 * (2+26) messages = 1680 messages) there are too many missing results causing too many sending repetitions while new requests are already coming in, so that the "management server" thinks the client servers (3-15) are down and deregisters them. After syncing this with the other management server, all client servers are marked as dead on both management servers which is not true...
I am wondering if the problem could be my debug outputs? I am printing 3-5 messages for every message that is sent and received. So that are about (let's guess it are 5 messages per sent/recvd msg) (26 + 2)*5 = 140 lines that are printed on the console.
I use python 2.6 for the servers.
So the question here is: Can the console output slow down the whole system that simple requests take more than 0.5 seconds to complete 5 times in a row? The request processing is simple in my test. No complex calculations or something like that. basically it is something like "return request_param in ["bla", "blaaaa", ...] (small list of 5 items)"
If yes, how can I disable the output completely without having to comment out every print statement? Or is there even the possibility to output only lines that contain "Error" or "Warning"? (not via grep, because when grep becomes active all the prints already have finished... I mean directly in python)
What else could cause my application to be that slow? I know this is a very generic question, but maybe someone already has some experience with mininet and network applications... | 0 | python,performance,networking,cpu,mininet | 2015-04-05T19:44:00.000 | 0 | 29,461,480 | I finally found the real problem. It was not because of the prints (removing them improved performance a bit, but not significantly) but because of a thread that was using a shared lock. This lock was shared over multiple CPU cores causing the whole thing being very slow.
It even got slower the more cores I added to the executing VM which was very strange...
Now the new bottleneck seems to be the APScheduler... I always get messages like "event missed" because there is too much load on the scheduler. So that's the next thing to speed up... :) | 0 | 102 | false | 0 | 1 | Console output consuming much CPU? (about 140 lines per second) | 29,502,719 |
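The question also asked how to keep only error/warning output without commenting out every print statement; the standard logging module does exactly that. A minimal sketch:
import logging

logging.basicConfig(level=logging.WARNING)  # only WARNING and above reach the console
log = logging.getLogger('myapp')

log.debug('sent multicast request')            # suppressed; replaces the old prints
log.warning('missing response from server 3')  # still shown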
What is the maximum number of recipients that can be added at a time to the BCC field while sending a bulk e-mail?
I'm using python Django framework and gmail, smtp for sending mail. | 0 | python-2.7,email,smtp,gmail,django-1.6 | 2015-04-06T05:28:00.000 | 0 | 29,465,822 | BCC limit for Gmail: 500 in any 24 hours period.
If you want to send emails in bulk, you will need to send them in batches of 500 per request (see the sketch below). | 0 | 188 | false | 1 | 1 | How many recipients can be added to BCC in python django | 70,995,528
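A sketch of chunking a large recipient list into BCC batches with Django's EmailMessage; the 500 figure is taken from the answer above and may change:
from django.core.mail import EmailMessage

BATCH = 500

def send_bulk(subject, body, sender, recipients):
    for i in range(0, len(recipients), BATCH):
        EmailMessage(subject, body, sender, bcc=recipients[i:i + BATCH]).send()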
1 | 1 | 0 | 0 | 4 | 0 | 0 | 0 | When an exception is raised in the application that is not accounted for (an uncaught/unhandled exception), it should be logged. I would like to test this behaviour in behave.
The logging is there to detect unhandled exceptions so developers can implement handling for these exceptions or fix them if needed.
In order to test this, I think I have to let the code under test raise an exception. The problem is that I cannot figure out how to do that without hard-coding the exception-raising in the production code. This is something I like to avoid as I do not think this test-code belongs in production.
While unit-testing I can easily mock a function to raise the exception. In behave I cannot do this as the application is started in another process.
How can I cause an exception to be raised in behave testing, so it looks as if the production code has caused it, without hard-coding the exception in the production code? | 0 | python-2.7,exception,testing,exception-handling,python-behave | 2015-04-08T13:06:00.000 | 0 | 29,515,509 | Regadless to framework/programming language exception is a state when something went wrong. This issue has to be handled somehow by the application, that's why a good programmer will write exception handling code in places where it needed at most.
Exception handling can be everything. In your case you want to test that exception is logged. Therefore I see the an easy test sequence here:
Execute the code/sequence of actions which will rase the exception
Verify that log file has an entry related to the exception raised in previous step with help of your test automation framework. | 0 | 557 | false | 0 | 1 | How to test uncaught/unhandled exceptions in behave? | 29,516,346 |
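A sketch of those two steps in behave; the trigger (malformed input that reaches production code unchanged), the log path attribute and the exception type are all hypothetical:
from behave import when, then

@when('I send input that the app does not handle')
def step_trigger(context):
    # malformed input causes the unhandled exception without test-only production code
    context.response = context.client.get('/orders/not-a-number')

@then('the unhandled exception is logged')
def step_check_log(context):
    with open(context.log_path) as fh:
        assert 'ValueError' in fh.read()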
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I am writing a python script to get some basic system stats. I am using psutil for most of it and it is working fine except for one thing that I need.
I'd like to log the average cpu wait time at the moment.
from top output it would be in CPU section under %wa.
I can't seem to find how to get that in psutil, does anyone know how to get it? I am about to go down a road I really don't want to go on....
That entire CPU row is rather nice, since it totals to 100 and it is easy to log and plot.
Thanks in advance. | 0 | python,psutil,iowait | 2015-04-09T20:51:00.000 | 1 | 29,548,735 | %wa is giving your the iowait of the CPU, and if you are using times = psutil.cpu_times() or times = psutil.cpu_times_percent() then it is under the times.iowait variable of the returned value (Assuming you are on a Linux system) | 0 | 1,680 | true | 0 | 1 | Get IO Wait time as % in python | 29,548,863 |
2 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries.
However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works.
In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation? | 1 | python,database,unit-testing | 2015-04-10T15:51:00.000 | 0 | 29,565,712 | It seems I got the wrong end of the stick at first; I had a similar problem, and like you, an ORM was not an option.
The way I addressed it was with simple collections of data transfer objects.
So the new code I wrote had no direct access to the db. It did everything with simple lists of objects. All the business logic and UI could be tested without the db.
Then I had another module that did nothing but read from and write to the db, to and from my collections of objects. It was a poor man's ORM, basically: a lot of donkey work.
Testing was: run the db creation script, then some test helper code to populate the db with the data I needed for each test.
Boring but effective, and you can, with a bit of care, refactor it into the code base without too much risk. | 0 | 105 | false | 0 | 1 | Unit testing on implementation-specific database usage | 29,576,807
2 | 2 | 0 | 2 | 1 | 0 | 1.2 | 0 | I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries.
However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works.
In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation? | 1 | python,database,unit-testing | 2015-04-10T15:51:00.000 | 0 | 29,565,712 | Well, to start with, I think this is very much something that depends on the application context, the QA/dev's skill set & preferences. So, what I think is right may not be right for others.
Having said that...
In my case, I have a system where an extremely complex ERP database, which I don't control, is very much in the driver's seat and my code is a viewer/observer rather than a driver of that database. I don't, and can't really, use an ORM layer much; all my added value is in queries that deeply understand the underlying database data model. Note also that I am mostly a viewer of that db; in fact my code has read-only access to the primary db. It does have write access to its own tagging database, which uses the Django ORM, and testing there is different in nature because of my reliance on the ORM.
For me, it had better be tested with the database.
Mock objects? Please, mocking would have guzzled time if there is a lot of legitimate reasons to view/modify database contents with complex queries.
Changing queries. In my case, changing and tweaking those queries, which are the core of my application logic, is very often needed. So I need to make fully sure that they perform as intended against real data.
Multi-platform concerns. I started coding on postgresql, tweaked my connectivity libraries to support Oracle as well. Ran the unit tests and fixed anything that popped up as an error. Would a database abstraction have identified things like the LIMIT clause handling in Oracle?
Versioning. Again, I am not the master of the database. So, as versions change, I need to hook up my code to it. The unit testing is invaluable, but that's because it hits the raw db.
Test robustness. One lesson I learned along the way is to uncouple the test from the test db. Say you want to test a function that flags active customers that have not ordered anything in a year. My initial test approach involved manual lookups in the test database, finding CUST701 to be a match for the condition, then calling my function and testing whether CUST701 is in the result set of customers needing review. Wrong approach. What you want to do is to write, in your test, a query that finds active customers that have not ordered anything in a year. No hardcoded CUST701s at all, but your test query can be as hardcoded as you want; in fact, it should look as little like your application queries as possible, because you don't want your test SQL to replicate what could potentially be a bug in your production code. Once you have dynamically identified a target customer meeting the criteria, then call your code under test and see if the results are as expected. Make sure your coverage tools identify when you've been missing test scenarios, and plug those holes in the test db.
BDD. To a large extent, I am starting to approach testing from a BDD perspective, rather than a low-level TDD. So, I will be calling the url that handles the inactive customer lists, not testing individual functions. If the overall result is OK and I have enough coverage, I am OK, without wondering about the detailed low-level to and fro. So factor this as well in qualifying my answer.
Coders have always had test databases. To me, it seems logical to leverage them for BDD/unit-testing rather than pretending they don't exist. But I am at heart a SQL coder who knows Python very well, not a Python expert who happens to dabble in SQL. (A small sketch of the point-5 pattern follows below.) | 0 | 105 | true | 0 | 1 | Unit testing on implementation-specific database usage | 29,566,319
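The "no hardcoded CUST701" pattern from point 5, sketched against sqlite3; customers_needing_review is a trivial stand-in for the code under test:
import sqlite3

def customers_needing_review(conn):              # stand-in for the production code
    return conn.execute(
        "SELECT id FROM customers "
        "WHERE active = 1 AND last_order < date('now', '-1 years')").fetchall()

def test_inactive_customer_flagging(conn):
    # test-side SQL, written independently of the production query
    row = conn.execute(
        "SELECT id FROM customers "
        "WHERE active = 1 AND last_order < date('now', '-1 years') LIMIT 1"
    ).fetchone()
    assert row is not None, 'test db must contain a qualifying customer'
    flagged = customers_needing_review(conn)
    assert row[0] in [r[0] for r in flagged]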
1 | 2 | 0 | 1 | 0 | 1 | 1.2 | 0 | Can anyone point me to why this error keeps showing up during circleci testing?
Neither Pillow nor PIL could be imported: No module named Image
python manage.py test returned exit code 1
For the record, I followed every resource I had in terms of installation instructions for pillow.
Can anyone PLEASE help me? I'm getting desperate. | 0 | python,django,unit-testing,pillow,circleci | 2015-04-12T21:31:00.000 | 0 | 29,594,889 | Have you specified the Python version in your circle.yml? If Python version is not specified, the virtualenv might not get created for you. | 0 | 287 | true | 1 | 1 | Django circleci and the pillow library | 29,603,469 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | My company has a single git repository which is over 15 years and is really massive with about 60% of it which can be archived. I want to find these scripts (python, perl, ruby, java etc) and create a new git repository with only frequently used scripts. The scripts also have cross dependencies.
One solution that I thought of was to set up inotify to watch the files in the git repo and collect the names of recently accessed scripts, collect data over a few months, and then create the new repo based on that data. I'm not sure how efficient it would be, though.
Another solution I thought of was to use the git commit date for each file and remove files which are over 5 years old.
Could anyone let me know of an efficient solution to clean up this mess? Or any tool similar to New Relic that would monitor the filesystem? | 0 | python,git,newrelic,inotify | 2015-04-15T01:31:00.000 | 0 | 29,640,093 | First, it's not clear what problem you are trying to solve. Is the 15-year git history slowing things down when cloning? If so, maybe just do a shallow git clone instead? (i.e. a shallow clone doesn't download the history.)
As Thilo pointed out, cutting the repo in half isn't going to make things that much faster.
But if the scripts are really that disorganized, it's highly likely that some of them need to be rewritten, documented, etc. If you just move the scripts forward, it's likely you are moving lots of inefficiencies forward too. I'd pick them off one at a time and give them a little love, test them, etc.
One idea: You can use strace -ff -o strace.out ./myscript to figure out what other files a script opens. | 0 | 95 | false | 0 | 1 | Cleanup Huge Git Repository | 29,640,265 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm using a 32-bit Ubuntu machine. I'm trying to create a Whoosh index of a 27GB file. But my system is crashing after index size of 3GB. Is there any size constraint on Whoosh index size? If not then what can be the problem. | 0 | python,indexing,whoosh | 2015-04-15T11:35:00.000 | 1 | 29,649,147 | Is it possible that Whoosh overflows to RAM of your computer by loading the 27Gb file? | 0 | 677 | false | 0 | 1 | Maximum Whoosh Index size? | 29,650,121 |
1 | 1 | 0 | 2 | 3 | 1 | 0.379949 | 0 | I'm developing a small tool which uses mainly NumPy and one SciPy module (scipy.optimize.fsolve). My idea of sharing it with others is that it comes in package with Portable Python so that theoretically everyone can run it.
The whole SciPy package weighs a lot (about 80 MB). Is it possible to compile only one module into a *.pyd and import it like any other module, so that I don't have to include modules that I don't actually need?
Look at the SciPy source code and copy only the files that are needed for the fsolve function. After a quick glance that would be at least, scipy/optimize/optimize.py, scipy/optimize/minpack.py, scipy/optimize/_minpack.so/.pyd (but maybe I missed a couple).
In a simpler approach, remove, folder by folder, the unused parts in the scipy install directory (particularly those that take a lot of space), including scipy/weave, scipy/sparse, scipy/linalg, etc.
Write a simple wrapper around scipy.optimize.fsolve and compile it to C code with Cython; this should produce an independent .pyd/.so.
There should be a Python module to do this automatically; for instance, pyinstaller includes only the required modules in the binary executable it produces. So you would need an equivalent of pyinstaller that produces dynamic libraries. | 0 | 371 | false | 0 | 1 | Compiling single SciPy module to *.pyd | 29,698,782
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I'm about to start working on a project where a Python script is able to remote into a Windows Server and read a bunch of text files in a certain directory. I was planning on using a module called WMI as that is the only way I have been able to successfully remotely access a windows server using Python, But upon further research I'm not sure i am going to be using this module.
The only problem is that these text files are constantly updating, about every 2 seconds, and I'm afraid that the script will crash if it runs into a mutex error when it tries to open a file while it is being rewritten. The only thing I can think of is creating a new directory, copying all the files (via script) into this directory in the state that they are in, and reading them from there, then constantly overwriting these copies with the new ones once it finishes checking all of the old ones. Unfortunately I don't know how to execute this correctly, or efficiently.
How can I go about doing this? Which python module would be best for this execution? | 0 | python,windows,file,wmi,remote-access | 2015-04-17T16:30:00.000 | 1 | 29,704,766 | I've done some work with WMI before (though not from Python) and I would not try to use it for a project like this. As you said WMI tends to be obscure and my experience says such things are hard to support long-term.
I would either work at the Windows API level, or possibly design a service that performs the desired actions access this service as needed. Of course, you will need to install this service on each machine you need to control. Both approaches have merit. The WinAPI approach pretty much guarantees you don't invent any new security holes and is simpler initially. The service approach should make the application faster and required less network traffic. I am sure you can think of others easily.
You still have to have the necessary permissions, network ports, etc. regardless of the approach. E.g., WMI is usually blocked by firewalls and you still run as some NT process.
Sorry, not really an answer as such -- meant as a long comment.
ADDED
Re: API programming, though you have no Windows API experience, I expect you find it familiar for tasks such as you describe, i.e., reading and writing files, scanning directories are nothing unique to Windows. You only need to learn about the parts of the API that interest you.
Once you create the appropriate security contexts and start your client process, there is nothing service-oriented in the, i.e., your can simply open and close files, etc., ignoring that fact that the files are remote, other than server name being included in the UNC name of the file/folder location. | 0 | 693 | false | 0 | 1 | python copying directory and reading text files Remotely | 29,705,179 |
1 | 1 | 0 | 6 | 4 | 1 | 1.2 | 0 | I'm trying to install ipython notebook on my win8 laptop.
I follow the following steps to install ipython.
I installed "pip".
Then I install the pywin32.
Then I used pip to install ipython
"pip install ipython[all]"
But when I test IPython using "iptest",
the test cannot proceed due to the following error:
ERROR: Failure: ImportError (No module named ipython)
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\nose\loader.py", line 420, in loadTestsFromName
addr.filename, addr.module)
File "C:\Python27\lib\site-packages\nose\importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "C:\Python27\lib\site-packages\nose\importer.py", line 79, in importFromDir
fh, filename, desc = find_module(part, path)
ImportError: No module named ipython
The weird thing is that the IPython notebook seems to work fine, but iptest can't test properly. It appears to me that the "nose" module cannot find the path of the "ipython" module. Can anyone help me with this? Thanks. | 0 | python,windows,ipython,nose | 2015-04-19T04:48:00.000 | 0 | 29,725,915 | I guess you created your virtual environment with --system-site-packages.
Try the following steps:
Exit virtual environment: deactivate
Switch to superuser: su root
Install jupyter outside the virtual environment: sudo pip3 install jupyter
Then enter your virtual environment and try again. | 0 | 6,302 | true | 0 | 1 | ipython iptest ImportError( no module named ipython) | 47,507,335 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I need to use a bluetooth remote shutter as the ones that come with many selfie sticks in order to make a push-to-talk button for starting and stopping audio recordings in Python. I have tried using pyBluez module (vs 0.21), but the most I have got is to detect the device and its address, but I cannot see which are its services, or how to create a client-server connection with it.
Checking the Bluetooth connector status, I can see that it detects the device as "i Shutter" and its type as keyboard. The device is already paired. In case it is relevant, I'm using Ubuntu 14.04 and Python 2.7. | 0 | python,ubuntu,bluetooth,shutter | 2015-04-05T19:44:00.000 | 0 | 29,737,928 | Most of the selfie sticks use the HID profile (confirmed by the "type keyboard") and send a Volume Up button press (keyboard report or consumer report). You will need to listen for this key, not for RFCOMM data (see the sketch below). | 0 | 910 | false | 0 | 1 | Detecting events from a bluetooth remote shutter to control audio recording in Python | 29,983,309
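On Linux that key press can be read with the third-party python-evdev package; the event device path is a hypothetical example (list /dev/input/ to find the shutter's node):
from evdev import InputDevice, ecodes

dev = InputDevice('/dev/input/event5')
for event in dev.read_loop():
    if (event.type == ecodes.EV_KEY
            and event.code == ecodes.KEY_VOLUMEUP
            and event.value == 1):            # 1 = key down
        print('shutter pressed: toggle recording here')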
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | Im trying to find a module that will allow me to run a script locally that will:
1. Open a text file on a remote Windows Machine
2. Read the lines of the text file
3. Store the lines in a variable and be able to process the data.
This is absolutely no problem on a Linux machine via SSH, but I have no clue what module to use for a remote Windows machine. I can connect without problems and run commands on a remote Windows machine via WMI, but WMI does not have a way to read/write files. Are there any modules out there that I can install to achieve this process? | 0 | python,windows,text-files,remote-access,readfile | 2015-04-20T18:09:00.000 | 1 | 29,755,274 | You can use PowerShell for this.
First, open PowerShell with admin privileges.
Enter this command:
Enable-PSRemoting -Force
Enter this command too, on both computers, so they trust each other:
Set-Item wsman:\localhost\client\trustedhosts *
Then restart the WinRM service on both PCs with this command:
Restart-Service WinRM
Test it with this command:
Test-WsMan computername
To execute a remote command:
Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND }
-credential USERNAME
To start a remote session:
Enter-PSSession -ComputerName COMPUTER -Credential USER | 0 | 2,454 | false | 0 | 1 | Python: Open and read remote text files on Windows | 29,755,645 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I start a script and I want to start second one immediately after the first one is completed successfully?
The problem here is that the first script can take 10 minutes or 10 hours depending on the case, and I do not want to fix the start time of the second script.
Also, I am using Python to develop the scripts, so a solution with Python controlling the cron would be fine.
Thank you, | 0 | python,cron,cron-task | 2015-04-21T09:43:00.000 | 1 | 29,768,499 | You can use a lock file to indicate that the first script is still running (see the sketch below). | 0 | 45 | false | 0 | 1 | How can I check whether the script started with cron job is completed? | 29,769,403
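A minimal lock-file sketch: the first script holds the lock while it runs, and the cron-started second script proceeds only once the lock is gone (the path and job functions are placeholders):
import os

LOCK = '/tmp/first_script.lock'

def run_with_lock(job):
    fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)  # raises OSError if held
    try:
        job()
    finally:
        os.close(fd)
        os.remove(LOCK)

# second script, scheduled every minute by cron, proceeds only when the lock is gone:
# if not os.path.exists(LOCK): start_second_job()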
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I need to backup on the Google Storage, files having name with russian characters. Is there any solution? I get this kind of error:
'ascii' codec can't encode characters in position 203-213: ordinal not in range(128) | 0 | python,encode,gsutil | 2015-04-22T10:39:00.000 | 0 | 29,794,957 | I assume you're getting this error when trying to upload(copy) your files to the Google Cloud Storage using 'gsutil' tool.
In order to resolve the issue, install the UTF-8 version of your Russian font on your computer. | 0 | 141 | false | 0 | 1 | gsutil api google storage nearline | 29,808,376
2 | 2 | 0 | 3 | 0 | 0 | 0.291313 | 0 | I have a Python program that runs on my Windows 7 computer which communicates with a Raspberry Pi over the internet through a port I opened with a Port Forwarding rule on my internet modem.
I am concerned about a hacker getting through that open port and causing problems.
My question is:
Is there a way to password protect that port so anyone who tries to access that port is required to enter the correct password to get through to my Raspberry Pi?
If not, what other ways could I protect that open port?
Any help/advice is much appreciated. Thanks in advance. | 0 | python,security,port,password-protection,portforwarding | 2015-04-23T13:25:00.000 | 0 | 29,824,873 | You cannot password protect a port. That concept is several layers up in the network stack and not something regular internet gear has anything to do with.
You'll have to add authentication at the service/application layer. Meaning, your Pi will have to demand authentication. Whether that's possible or not depends on what's running on it.
If that's not available, you'll need an intermediary. Either you set up a proxy in front of the Pi that can handle authentication; or you set up a VPN server instead of a simple forwarded port, which would put the authentication at the point of network access. | 0 | 768 | false | 0 | 1 | Port Forwarding Security | 29,825,016 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a Python program that runs on my Windows 7 computer which communicates with a Raspberry Pi over the internet through a port I opened with a Port Forwarding rule on my internet modem.
I am concerned about a hacker getting through that open port and causing problems.
My question is:
Is there a way to password protect that port so anyone who tries to access that port is required to enter the correct password to get through to my Raspberry Pi?
If not, what other ways could I protect that open port?
Any help/advice is much appreciated. Thanks in advance. | 0 | python,security,port,password-protection,portforwarding | 2015-04-23T13:25:00.000 | 0 | 29,824,873 | You can effectively "password protect a port" via an SSH tunnel.
Instead of opening your application's port, you open TCP port 22 and run an SSH daemon. The SSH service can be protected by a password, passphrase, a key file or a combination thereof.
When you connect to SSH from Windows using PuTTY or Plink, you can specify that a local port on your Windows box is mapped to the port on your remote Raspberry Pi.
So rather than connecting to, say 203.0.113.0 on port 1234, you would connect to 127.0.0.1 on your Windows machine on port 1234 after establishing the SSH connection and this will route it through to the machine at the other end of the SSH tunnel. | 0 | 768 | false | 0 | 1 | Port Forwarding Security | 29,844,068 |
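For illustration, a sketch of launching that tunnel from the Python side with Plink (the addresses, ports and user name are the placeholders used in the answer above, and plink.exe is assumed to be on the PATH):
import subprocess

# forward local port 1234 to port 1234 on the Pi through SSH
subprocess.Popen([
    "plink.exe", "-ssh",
    "-L", "1234:127.0.0.1:1234",
    "pi@203.0.113.0",
])
# once the tunnel is up, point the Windows program at 127.0.0.1:1234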
1 | 4 | 0 | 0 | 2 | 0 | 1.2 | 0 | I have a sample Python script: sample.py. The script takes the user name and password as arguments to connect to some remote server. When I run the script (sample.py --username ... --password ...), the password is logged in the Linux messages files. I understand this is Linux behavior, but I am wondering if we can do anything within my script to avoid this logging. One way I can think of is to provide the password interactively. Any other suggestions? | 0 | python,linux,passwords | 2015-04-24T09:28:00.000 | 1 | 29,843,710 | I figured out the best way is to disable it via the sudo command:
Cmnd_Alias SCRIPT =
Defaults!SCRIPT !syslog
The above lines in the sudoers file should prevent the password from being logged to syslog. | 0 | 446 | true | 0 | 1 | plain Password is logged via my python script | 29,938,376
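For the interactive alternative mentioned in the question, the standard library already covers it. A minimal sketch:
import getpass

password = getpass.getpass("Password: ")   # not echoed, never appears in argv or syslog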
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | When I run my python script (content irrelevant for this question, just uses print a couple of times) interactively, it sends the output straight away.
When I use it in a pipe to tee or in an output redirection (./script.py > script.log) there is no output. What am I doing wrong? | 0 | bash,python-2.7 | 2015-04-24T14:03:00.000 | 0 | 29,849,552 | I suspect you're encountering output buffering, where it's waiting to get a certain number of bytes before it flushes. You can look at the unbuffer command if that is undesirable for you. | 0 | 13 | false | 0 | 1 | Python only sends output interactvely | 29,849,608 |
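If buffering is indeed the culprit, it can also be handled from the Python side instead of with unbuffer. A sketch of two common options (written for Python 2.7, matching the question's tag):
# option 1: run the interpreter unbuffered from the shell:
#     python -u ./script.py | tee script.log
# option 2: flush explicitly after each write
import sys
print "progress..."
sys.stdout.flush()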
1 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | When I call an executable in Python using os.system("./mydemo") on Ubuntu, it can't find the .so file (libmsc.so) needed by mydemo. I used os.system("export LD_LIBRARY_PATH=pwd:$LD_LIBRARY_PATH;"), but it still can't find libmsc.so.
The libmsc.so is in the current directory and shouldn't be global. | 0 | python,ubuntu-12.04,dynamic-library | 2015-04-27T06:32:00.000 | 1 | 29,888,716 | If I remember correctly, executing export ... via os.system only sets that shell variable within that single shell's scope, so it's not available in subsequent os.system calls. You should set LD_LIBRARY_PATH in the shell before executing the Python script.
By the way, also avoid setting relative paths… | 0 | 926 | false | 0 | 1 | I use os.system to run executable file, but it need a .so file, it can't find library | 29,888,758
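Alternatively, the variable can be set from within Python for the child process itself. A sketch using subprocess instead of os.system:
import os, subprocess

env = os.environ.copy()
env["LD_LIBRARY_PATH"] = os.getcwd() + ":" + env.get("LD_LIBRARY_PATH", "")
subprocess.call(["./mydemo"], env=env)   # child inherits the modified environment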
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | The compiler works by receiving a file that contains the macro assembler source code then, it generates two files, one is the list and the other one is the hex file.
Everything works alright while offline but I want to make it online.
In this case, the user will provide an MC68HC11 assembly source code file to my server (I already have the server up and running); after this, my server will compile it using the Python script I wrote, and then it will give the user an option to download the list and the hex file. | 0 | python,django,assembly,web,web-applications | 2015-04-27T15:35:00.000 | 0 | 29,900,058 | The simplest solution is to put the two output files into a single zip file, send a Content-Type header indicating it's in zip format, and send the output as raw data.
The alternative is to have the client issue an HTTP request for each file in turn. | 0 | 112 | true | 0 | 1 | What's the best way to make an Assembly compiler in Python available online? | 29,905,544
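A minimal Django-flavoured sketch of the zip approach (the question is tagged django; the view name and output file names are hypothetical):
import io, zipfile
from django.http import HttpResponse

def download(request):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.write("program.lst")   # hypothetical listing file
        zf.write("program.hex")   # hypothetical hex file
    response = HttpResponse(buf.getvalue(), content_type="application/zip")
    response["Content-Disposition"] = "attachment; filename=output.zip"
    return response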
1 | 3 | 0 | 0 | 2 | 0 | 0 | 1 | I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
I don't want to open any ports or punch holes in the firewall
Also, I don't want to export/forward any internal ports outside my network
Time delay less than, say 5 minutes, is acceptable, but closer to real time would be nice, if possible.
1+2 → I need to use a third party accessible by all my nodes. From this it follows that I probably also want to use encryption.
Solutions considered:
Email - by setting up separate or a shared free email accounts (e.g. Gmail) which each client connects to using IMAP/SMTP
Google docs - using a shared online spreadsheet (e.g. Google docs) and some python library for accessing/changing cells using a polling mechanism
XMPP using connections to a third party server
IRC
Renting a cheap 5$ VPS and setting up a Zero-MQ publish-subscribe node (or any other protocol) forwarded over SSH and having all nodes connect to it
Are there any other publicly (freely) accessible message queues available (or platforms that can be misused as a message queue)?
I am aware of the solution of setting up my own message broker (RabbitMQ, Mosquitto, etc.) and making it accessible to my nodes somehow (SSH forwarding to a third host, etc.). But my question is primarily about solutions that don't require me to do that, i.e. solutions that utilize already available/accessible third-party infrastructure. (i.e. are there any public message brokers I can use?) | 0 | python,message-queue,messaging,distributed,distributed-system | 2015-04-27T17:15:00.000 | 1 | 29,902,069 | I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature technology and insanely reliable). ZMQ is an option if you want a single-hop messaging system instead of a brokered messaging system such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to use the message passing: if it is task dispatch, you can use Celery; if you need slightly lower-level access, use Kombu with the librabbitmq transport. | 0 | 1,745 | false | 0 | 1 | Simple way for message passing in distributed system | 29,904,422
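For illustration, the classic minimal RabbitMQ publish with the pika client (a sketch; in the asker's setup "localhost" would be replaced by the broker reached through the SSH tunnel):
import pika   # third-party: pip install pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="messages")
channel.basic_publish(exchange="", routing_key="messages", body="hello nodes")
connection.close()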
I have Python 3.2 installed by default on my Raspbian Linux, but I want Python 3.4 (time.perf_counter, yield from, etc.). Installing Python 3.4 via apt-get is no problem, but when I type python3 in my shell I still get Python 3.2 (since /usr/bin/python3 still links to it). Should I change the symlink, or is there a better way to do this? | 0 | python,python-3.x,debian,apt | 2015-05-05T06:26:00.000 | 1 | 30,043,202 | I'm going to answer my own question, since I have found a solution to my problem. I had previously run apt-get upgrade on my system after setting my Debian release to jessie. This did not replace Python 3.2, though. What did replace it was running apt-get dist-upgrade; after that, apt-get autoremove removed Python 3.2. I doubt that this could be a problem, since I hadn't installed any external libraries. | 0 | 5,897 | false | 0 | 1 | Upgrade Python 3.2 to Python 3.4 on linux | 29,975,297
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | Is there a rational way to serve multiple websites via PHP:Nginx, Python:??? & node.js on the same vps?
And would it be reliable?
The sites are expected to be low in traffic.
I currently have PHP running on Nginx, Ubuntu via Digital Ocean and I would like to stick to Nginx for PHP and any major webserver for Python. | 0 | php,python,node.js,ubuntu,nginx | 2015-04-27T22:10:00.000 | 0 | 29,906,888 | The kind of setup you're describing is straightforward and not complicated. Nginx works fine as a reverse proxy and web server that handles serving static assets.
For PHP, you just need to proxy to php-fpm (running on a TCP port or unix socket).
For Python, you need a WSGI server (something like uWSGI or gunicorn, again using a TCP port or unix socket) to serve the Python app, and have Nginx proxy requests to it.
For your Node.js app, just run the node server on a port like 8000 and have Nginx proxy requests to it.
If you have a bunch of websites, each should have a server block matching a unique server name (i.e. mapped to a virtual host).
The setup is as reliable as your backend services (like php-fpm, wsgi, and Node.js server). As long as those services are up and running (as daemon services) nginx should have no problem proxying to them. I have used all 3 setups on one server and have never experienced problems with any of the above. | 0 | 789 | false | 1 | 1 | Running node, PHP and Python on the same vps | 29,909,359 |
1 | 5 | 0 | 6 | 17 | 0 | 1 | 1 | I am trying to go through the tweets of a particular user and get all replies to each tweet. I found that Twitter's API v1.1 does not directly support this.
Is there a hack or a workaround for getting the replies to a particular tweet? I am using the Python Streaming API. | 0 | python,twitter,tweepy,tweets,twitter-streaming-api | 2015-04-28T19:52:00.000 | 0 | 29,928,638 | Here's a workaround to fetch the replies to a tweet made by "username", using the REST API with tweepy:
1) Find the tweet_id of the tweet for which the replies are required to be fetched
2) Using the api's search method query the following (q="@username", since_id=tweet_id) and retrieve all tweets since tweet_id
3) The results whose in_reply_to_status_id matches tweet_id are the replies to the post (see the sketch below). | 0 | 18,865 | false | 0 | 1 | Getting tweet replies to a particular tweet from a particular user | 31,647,823
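Put together as a rough tweepy sketch (tweepy 3.x era API; api is assumed to be an authenticated tweepy.API instance, username/tweet_id come from steps 1-2, and pagination limits and rate limiting are ignored here):
import tweepy

replies = [status for status in
           tweepy.Cursor(api.search, q="@username", since_id=tweet_id).items()
           if status.in_reply_to_status_id == tweet_id]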
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to set GNU Radio as an audio processor for a little community radio in my town.
I've already installed GNU Radio and it's working, but I'm not a sound engineer, so I need some help.
This is my installation:
MIC & Music Player ----> Mixer ----> GNU Radio ---> FM Emitter
I need to know what filters and modules to set to improve sound in this workflow.
Could any of you give me an outline of what GNU Radio modules to use? | 0 | python,audio,radio,gnuradio | 2015-04-29T00:56:00.000 | 0 | 29,932,535 | You can simply use alsa or pulse audio to configure a "virtual" capture device, use that as the device name in the GNU Radio audio sink, FM modulate the audio signal and send the result to your RF hardware. That's pretty much a typical GNU Radio use case. You might want to have a look at the gr-analog examples :) | 0 | 310 | false | 0 | 1 | GNU Radio on Community radios | 30,705,038 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm trying to set GNU Radio as an audio processor for a little community radio in my town.
I've already installed GNU Radio and it's working, but I'm not a sound engineer, so I need some help.
This is my installation:
MIC & Music Player ----> Mixer ----> GNU Radio ---> FM Emitter
I need to know what filters and modules to set to improve sound in this workflow.
Could any of you give me an outline of what GNU Radio modules to use? | 0 | python,audio,radio,gnuradio | 2015-04-29T00:56:00.000 | 0 | 29,932,535 | Since the aim is to improve sound quality in our little community radio, the right way to achieve it is to use an audio processor software, as @KevinReid said.
For the record, one possible solution is to use this schema with Jack:
MIC & Music Player ----> Mixer ----> PC with audio processor ---> FM Emitter
The PC with audio processor is a GNU/Linux based PC with Jack as sound server and Calf Jack Hub (calf.sourceforge.net) as audio processor.
Steps:
Install jack, qjackctl and calf.
Open qjackctl and start jacks server
Open calf and set filters you want (eq, limiter, compressor, etc.)
Set connections so you take the input, send it through filters, put it into the output (i.e. headset connector or lineout)
That's all. All this can be done by command line, at startup, etc... but this shows the main idea. | 0 | 310 | true | 0 | 1 | GNU Radio on Community radios | 30,032,777 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am using Python3 and mime.multipart to send an attachment. I was able to send attachment successfully. But today, I get an error saying file does not exist, when I can see in WINSCP that it clearly does. Is this a permissions issue? Also when I list the contents of the directory, the file DOES NOT show up. What is going on? | 0 | python,python-3.x,file-upload,email-attachments,mime-message | 2015-04-29T14:25:00.000 | 0 | 29,946,610 | I wasn't closing the stream after writing to the file. So the code couldn't find the file. However when the script finished, the stream would get closed by force and I would see the file in the folder. | 0 | 115 | true | 0 | 1 | Python cannot find file to send as email attachment | 29,947,447 |
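In code, the fix described in this answer amounts to the following (a sketch; file name and contents are placeholders):
data = "generated report contents"    # placeholder
with open("report.txt", "w") as f:    # closed automatically at block exit
    f.write(data)
# only now build the MIME attachment from report.txt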
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I want to include the following python code into php. The name of the python file is hello.py. The contents are
print "Hello World"
I want to call this Python script from PHP and show its output in the webpage. | 0 | php,python | 2015-04-29T15:37:00.000 | 0 | 29,948,415 | Not really possible like that. What you could do is use exec or passthru to run the Python script from the shell and return the results. The best option would be to make a separate call to the Python script and combine the results on the client. | 0 | 50 | false | 0 | 1 | including python code in php | 29,948,514
1 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I'm stumped on this one, please help me oh wise stack exchangers...
I have a function that uses xlrd to read in an .xls file which is a file that my company puts out every few months. The file is always in the same format, just with updated data. I haven't had issues reading in the .xls files in the past but the newest release .xls file is not being read in and is producing this error: *** formula/tFunc unknown FuncID:186
Things I've tried:
I compared the new .xls file with the old to see if I could spot any
differences. None that I could find.
I deleted all of the macros that were contained in the file (older versions also had macros)
Updated xlrd to version 0.9.3 but get the same error
These files are originally .xlsm files. I open them and save them as
.xls files so that xlrd can read them in. This worked just fine on previous releases of the file. After upgrading to xlrd 0.9.3, which supposedly supports .xlsx, I tried saving the .xlsm file as .xlsx and tried to read it in, but got an error with a blank error message
Useful Info:
Python 2.7
xlrd 0.9.3
Windows 7 (not sure if this matters but...)
My guess is that there is some sort of formula in the new file that xlrd doesn't know how to read. Does anybody know what FuncID: 186 is?
Edit: Still no clue on where to go with this. Anybody out there run into this? I tried searching up FuncID 186 to see if it's an Excel function, but to no avail... | 1 | python,windows,excel,python-2.7,xlrd | 2015-04-30T15:01:00.000 | 0 | 29,971,186 | I had the same problem, and I think you have to check that the Excel cells being read are not picked up as empty; that's how I solved it. | 0 | 1,917 | false | 0 | 1 | Python XLRD Error : formula/tFunc unknown FuncID:186 | 30,945,220
1 | 1 | 0 | 3 | 1 | 1 | 1.2 | 0 | I have written a piece of software in Python that does a lot of parsing and a lot of writing files to disk. I am starting to write unit tests, but have no idea how to unit test a function that just writes some data to disk, and returns nothing.
I am familiar with unittest and ddt. Any advice or even a link to a resource where I could learn more would be appreciated. | 0 | python,unit-testing,testing,functional-testing | 2015-04-30T17:01:00.000 | 0 | 29,973,700 | Arguably, the best solution is to split your function into two pieces. One piece to do the parsing, the second to do the writing. Then, you can unit test each piece separately.
For the first piece, give it a file and verify the parsing function returns the proper string, and/or throws the proper exception.
For the second, give it a string to write, and then verify that the file was written and that the contents match your string. It's tempting to skip the test that writes the data, since it's reasonable to assume that the python open and write functions work. However, the unit testing also proves that the data you pass in is the data that gets written (ie: you don't have a bug that causes a fixed string to be written to the file).
If refactoring the code isn't something you can do, you can still test the function. Feed it the data to be parsed, then open the file that it wrote to and compare the result to what you expect it to be. | 0 | 324 | true | 0 | 1 | How to write tests for writers / parsers? (Python) | 29,974,588 |
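A minimal sketch of the writer-side test described above (the module and function names are hypothetical):
import os
import tempfile
import unittest

from mymodule import write_output   # hypothetical writer function

class WriterTest(unittest.TestCase):
    def test_writes_exactly_what_it_is_given(self):
        path = os.path.join(tempfile.mkdtemp(), "out.txt")
        write_output(path, "hello")
        with open(path) as f:
            self.assertEqual(f.read(), "hello")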
2 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | I need pypy to speed up my python code. While the pypy doesn't support a lot of modules I need (e.g. GNU Radio). Could I use pypy to speed up parts of my python code. Could I use pypy to only speed up some of my python files? How can I do that? | 0 | python | 2015-04-30T19:34:00.000 | 0 | 29,976,283 | No, you can't. You can only have one interpreter instance running all of the code in a single program at a time. The exception is if you break out some of your functionality into a totally separate program that communicates with the other part of your code through some form of inter-process communication; then you can run those totally separate programs however you like. But for code that is not separated like that, it's not possible.
It will probably be more straightforward to adapt the entirety of your code to work with PyPy one way or another, instead of trying to break out bits and pieces. If that's absolutely not possible, then PyPy probably can't help you. | 0 | 259 | false | 0 | 1 | Could pypy speed up parts of my python code? | 29,976,372 |
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I need pypy to speed up my python code. While the pypy doesn't support a lot of modules I need (e.g. GNU Radio). Could I use pypy to speed up parts of my python code. Could I use pypy to only speed up some of my python files? How can I do that? | 0 | python | 2015-04-30T19:34:00.000 | 0 | 29,976,283 | No, you can't. And GNU Radio does the signal processing and scheduling in C++, so that's totally opaque to your python interpreter. Also, GNU Radio itself is highly optimized and contains specialized implementations for most of the CPU intense tasks for SSE, SSE4, and some NEON.
I need pypy to speed up my python code.
I doubt that. If your program runs too slow, it's probably nothing your Python interpreter can solve -- you might have to look into what could take so much time, and solve this on a higher level. | 0 | 259 | false | 0 | 1 | Could pypy speed up parts of my python code? | 30,705,404 |
1 | 2 | 0 | 1 | 2 | 0 | 1.2 | 0 | I am using nosetests --with-coverage to test and see code coverage of my unit tests. The class that I test has many external dependencies and I mock all of them in my unit test.
When I run nosetests --with-coverage, it shows a really long list of all the imports (including something I don't even know where it is being used).
I learned that I can use .coveragerc for configuration purposes but it seems like I cannot find a helpful instruction on the web.
My questions are..
1) In which directory do I need to add .coveragerc? How do I specify the directories in .coveragerc? My tests are in a folder called "tests"..
/project_folder
/project_folder/tests
2) It is going to be a pretty long list if I have to add each one to omit= ...
What is the best way to only show the class that I am testing with the unittest in the coverage report?
It would be nice if I could get some beginner level code examples for .coveragerc. Thanks. | 0 | python,unit-testing,nosetests,coverage.py | 2015-04-30T20:00:00.000 | 0 | 29,976,769 | The simplest way to direct coverage.py's focus is to use the source option, usually source=. to indicate that you only want to measure code in the current working tree. | 0 | 1,938 | true | 0 | 1 | how to omit imports using .coveragerc in coverage.py? | 29,985,334 |
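A beginner-level .coveragerc matching that advice (put it in the directory you run nosetests from):
# .coveragerc
[run]
source = .
# or, to exclude paths instead of whitelisting them:
# omit = */tests/*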
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I have a Python script that manages Pushbullet channels for Nexus Android device factory images. It runs on my VPS (cron job that runs every 10 minutes), but my provider has warned that there may be intermittent downtime over the next several days. The VPS is running Ubuntu Server 15.04.
I have a Raspberry Pi that's always on, and I can easily modify the script so that it works independently on both the VPS and the Pi. I would like the primary functionality to exist on the VPS, but I want to fall back to the Pi if the VPS goes down. What would be the best way to facilitate this handoff between the two systems (in both directions)? The Pi is running Raspbian Wheezy.
Additionally, the script uses urlwatch to actually watch the requisite page for updates. It keeps a cache file on the local system for each URL. If the Pi takes over and determines a change is made, it will notify the Pushbullet channel(s) as it should. When the VPS comes back up and takes over, it will have the old cache files and will notify the channel(s) again, which I want to avoid.
So: How can I properly run the script on whichever system happens to be up at the moment (preferring the VPS), and how can I manage the urlwatch caches between the two systems? | 0 | python,pushbullet | 2015-05-03T02:24:00.000 | 1 | 30,009,595 | Could you shut down the script on your VPS, copy the cache files over to the Pi, and run the script there? Then do the reverse when you want to move it back to the VPS.
You could possibly run the script on both systems, but then you'd need to synchronize between them which sounds like a lot of unnecessary work. For instance you could run a third server that you can check with to see if you've sent something yet, but you would need to be able to lock items on there so you don't have a race condition between your two scripts. | 0 | 104 | false | 1 | 1 | Python script fallback to second server | 30,035,305 |
1 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | Given a class object in Python, how can I determine if the class was defined in an extension module (e.g. c, c++, cython), as opposed to being defined in standard python?
inspect.isbuiltin returns True for functions defined in an extension module, and False for functions defined in python, but it unfortunately does not have the same behavior for classes -- it returns False for both kinds of classes.
(The larger goal here is that we've got a system that generates a command line API for a set of classes based on parsing the docstring and signature of their __init__ functions. This system fails for classes defined in cython because inspect.getargspec doesn't work correctly on these classes, so I'm trying to figure out a workaround) | 0 | python,cython,python-internals,python-extensions,function-signature | 2015-05-04T22:55:00.000 | 0 | 30,041,371 | "This system fails for classes defined in cython because X"
Doesn't this mean that the answer you're looking for is X?
To know if a class is of the kind that crashes some function (e.g. inspect.getargspec(X.__init__)), just call inspect.getargspec(X.__init__) inside a try/except block. | 0 | 462 | false | 0 | 1 | Check if class object is defined in extension module | 30,050,402
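A sketch of that try/except guard:
import inspect

def safe_getargspec(cls):
    try:
        return inspect.getargspec(cls.__init__)
    except TypeError:   # raised for builtin/extension __init__ slots
        return None     # treat as "defined in an extension module"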
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I have used seqdiag to generate a sequence diagram, and it generates a 3MB png file. It sounds great, right? But something goes wrong when I open it. When I open the file, appdata/local/temp gains 3GB, and big files named ~PI*.tmp are generated. After I send the png file to others, they can't open the file on their computer. What is the root cause, and how can I send this kind of file to others? | 0 | python,png,appdata | 2015-05-05T02:16:00.000 | 0 | 30,043,202 | The problem is file size. I split the file into ten parts, and each file can be opened. Thx all answers. | 0 | 118 | true | 0 | 1 | python seqdiag file is big and can't be openned in other computer? | 30,076,292
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Basically, I need to disable or turn off a GPIO pin whenever I execute a method in python.
Does anyone know how to disable the pins? | 0 | python,raspberry-pi,gpio | 2015-05-05T06:26:00.000 | 0 | 30,045,659 | There is a built-in function, GPIO.cleanup(), that cleans up all the ports you've used.
For the power and ground pins, they are not under software control. | 0 | 4,582 | false | 0 | 1 | How to disable GPIO pins on the RaspberryPi? | 30,187,490 |
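A minimal RPi.GPIO sketch showing where the cleanup call fits (the pin number is a placeholder):
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
# ... use the pin ...
GPIO.cleanup()   # releases every channel this script configured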
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run IIS server as a domain account that has access on serverA
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
The script is a simple open command, so the problem is not from Python. | 0 | php,python,iis | 2015-05-05T15:11:00.000 | 1 | 30,056,836 | Since you provide no example code or description of what you are doing... there are a few things to consider.
Anything running in the context of a webpage in IIS is running in a different context than a logged in user.
The first part is simply that file-system-level permissions might be different for the IIS user account. The proper way to handle that is by assigning the necessary permissions at the filesystem level for the IIS user. Do not change the IIS user if you do not understand the ramifications of doing that.
The next part is that certain operations cannot be done in the context of the IIS user account (regardless of account permissions), because there are certain things that only a logged in user with access to the console/desktop can do.
Certain operations called from IIS are purposely blocked (shell.execute) regardless of permissions, account used, etc. This occurs in versions of IIS in Windows Server 2008 and later and is done for security. | 0 | 78 | false | 0 | 1 | IIS access to a remote server in same domain | 30,057,339 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run IIS server as a domain account that has access on serverA
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
The script is a simple open command, so the problem is not from Python. | 0 | php,python,iis | 2015-05-05T15:11:00.000 | 1 | 30,056,836 | Resolved.
Uninstall IIS and use XAMPP.
No problem found till now, everything works okay.
So use XAMPP/WAMP! | 0 | 78 | true | 0 | 1 | IIS access to a remote server in same domain | 30,117,742 |
1 | 4 | 0 | 1 | 10 | 1 | 0.049958 | 0 | What's the fastest way to save/load a large list in Python 2.7? I apologize if this has already been asked, I couldn't find an answer to this exact question when I searched...
More specifically, I'm testing out methods for simulating something, and I need to compare the result from each method I test out to an exact solution. I have a Python script that produces a list of values representing the exact solution, and I don't want to re-compute it every time I run a new simulation. Thus, I want to save it somewhere and just load the solution instead of re-computing it every time I want to see how good my simulation results are.
I also don't need the saved file to be human-readable. I just need to be able to load it in Python. | 0 | python,list,python-2.7,io,save | 2015-05-05T15:30:00.000 | 0 | 30,057,240 | I've done some profiling of many methods (except the numpy method), and pickle/cPickle is very slow on simple data sets. The fastest way depends on what type of data you are saving. If you are saving a list of strings and/or integers, the fastest way that I've seen is to write it directly to a file with ','.join(...) and read it back in with .split(','). | 0 | 17,077 | false | 0 | 1 | What's the fastest way to save/load a large list in Python 2.7? | 37,910,499
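A sketch of that plain-text approach for a list of integers (file name and data are placeholders):
values = range(1000000)   # placeholder data

with open("data.txt", "w") as f:
    f.write(",".join(str(x) for x in values))

with open("data.txt") as f:
    loaded = [int(x) for x in f.read().split(",")]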
1 | 3 | 0 | 3 | 2 | 1 | 0.197375 | 0 | Are there any ways to set the camera in raspberry pi to take black and white image?, like using some commands / code in picamera library?
Since I need to compare the relative light intensity of a few different images, I'm worried that the camera will already do some adjustments itself when the object is under different illuminations, so even if I convert the image to black and white later on, the object's 'true' black and white image will have been lost.
thanks
edit: basically what I need to do is to capture a few images of an object when the camera position is fixed, but the position of the light source is changed (and so the direction of illumination is changed as well). Then for each point on the image I will need to compare the relative light intensity of the different images. As long as the light intensity, or the 'brightness', of all the images is relative to the same scale, then it's ok, but I'm not sure if this is the case. I'm not sure whether the camera will adjust something like contrast automatically itself when an image is 'inherently' darker or brighter. | 0 | python,camera,raspberry-pi,camera-calibration | 2015-05-05T21:58:00.000 | 0 | 30,063,974 | What do you mean by "black and white image," in this case? There is no "true" black and white image of anything. You have sensors that have some frequency response to light, and those give you the values in the image.
In the case of the Raspberry Pi camera, and almost all standard cameras, there are red, green and blue sensors that have some response centered around their respective frequencies. Those sensors are laid out in a certain pattern, as well. If it's particularly important to you, there are cameras that only have an array of a single sensor type that is sensitive to a wider range of frequencies, but those are likely going to be considerable more expensive.
You can get raw image data from the raspi camera with picamera. This is not the "raw" format described in the documentation and controlled by format, which is really just the processed data before encoding. The bayer option will return the actual raw data. However, that means you'll have to deal with processing by yourself. Each pixel in that data will be from a different color sensor, for example, and will need to be adjusted based on the sensor response.
The easiest thing to do is to just use the camera normally, as you're not going to get great accuracy measuring light intensity in this way. In order to get accurate results, you'd need calibration, and you'd need to be specific about what the data is for, how everything is going to be illuminated, and what data you're actually interested in. | 0 | 16,954 | false | 0 | 1 | how to set the camera in raspberry pi to take black and white image? | 30,064,269 |
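For reference, grabbing the raw Bayer data mentioned above looks roughly like this with picamera (a sketch; extracting and demosaicing the appended sensor data is then up to you):
import picamera

with picamera.PiCamera() as camera:
    # bayer=True appends the raw sensor data to the JPEG output
    camera.capture("image.jpg", format="jpeg", bayer=True)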
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have an hourly cron job which has been running in Openshift free gear for almost a year and has been no problem. But the past 2 days, cron job stops running automatically. I have been googling around and still cannot find what went wrong. Here are what I have checked/done to date
the service I use to keep the site alive is still up and running as normal. So it is not a case of being idle.
force restart the app. Cron job still not started automatically as it used to.
made fake changes to the cron script file and pushed to OpenShift. This still did not fix it.
the log files look ok
Mon May 4 13:01:07 EDT 2015: START hourly cron run
Mon May 4 13:01:29 EDT 2015: END hourly cron run - status=0
Any advice or pointers as to why it just stopped working when there was no change to the app? Thank you. | 0 | python,django,openshift | 2015-05-06T11:20:00.000 | 1 | 30,075,234 | This issue should be fixed now. Please open a request at help.openshift.com if you continue to have issues with it. | 0 | 567 | false | 0 | 1 | OPENSHIFT Cron job just stop working | 30,107,482
1 | 1 | 1 | 0 | 5 | 0 | 0 | 0 | What I have is a large amount of C code and a bunch of swig wrappers to export all the functions into python. We like using python for testing, it's great, but my problem is there don't seem to be any editors out there that will share tags between python and C.
What I want is to ctrl+click (or whatever shortcut) on a function in a *.py file and have it go to the function definition in a *.c file.
Geany seems to do an alright job of this but it has some limitations (poor gdb support, etc). Eclipse, netbeans, Qt Creator are all good editors for C (creator being my fav) but they don't support cross-language tags. Eclipse in particular supports python quite well in PyDev but a tag in python is totally separate from a tag in C, and I can't seem to find a way to make them share. Vim/emacs probably do due to the somewhat lower level ctags use but I don't like either of them.
Any suggestions? | 0 | python,c++,c,exuberant-ctags | 2015-05-07T21:54:00.000 | 0 | 30,112,333 | I do this using UltraEdit, but UltraEdit is not great if you do not like it :-) It's not really an IDE, more like an editor. However, the way I do it can most likely be ported to e.g. Eclipse.
I generate the ctags file myself and force UE to use the custom-generated ctags file. This works like a charm. | 0 | 290 | false | 0 | 1 | Code editor that supports cross-language (c)tags between C and python | 31,437,333
1 | 1 | 0 | 7 | 4 | 1 | 1.2 | 0 | I am looking for benchmarks that compare regular expression speeds between python and statically typed languages like C, Java or C++. I would also like to hear about Cython performance for regular expressions. | 0 | python,c,regex,performance,cython | 2015-05-08T02:19:00.000 | 0 | 30,114,763 | This is likely to depend more on the individual implementation than the language.
Just for example, some patterns are O(N^2) with some implementations, but ~O(N) with others. Specifically, most RE implementations are based on NFAs (Non-Deterministic Finite State Automata). To make a long story short, this means they can and will backtrack under some circumstances with some patterns. This gives roughly O(N^2) complexity. A Deterministic Finite State Automaton (DFA) matching the same pattern never backtracks--it always has linear complexity. At the same time, the compilation phase for a DFA is typically more complex than for an NFA (and DFAs don't have all the capabilities of NFAs).
Therefore, with many simple patterns that don't involve backtracking anyway, an NFA-based RE engine may easily run faster than a DFA-based engine. But when the NFA-based RE engine is trying to match a pattern that involves backtracking, it can (and will) slow down drastically. In the latter case, the DFA-based engine may easily be many times faster.
Most RE libraries basically start from a regular expression represented as a string. When you do an RE-based search/match, most compile that into a data structure for their NFA/DFA. That compilation step takes some time (not a huge amount, but it can become significant, especially if you're working with a lot of different REs). A few RE engines (e.g., Boost Xpressive) can compile regular expressions statically--that is, the RE is compiled at the same time as the program's source code. This can eliminate the time to compile the RE from the program's execution time, so if your code spends a significant amount of its time on compiling REs, it could gain a substantial improvement from that (but that's independent of just static typing--at least to my knowledge, you can't get the same in Java or C, for example). A few other languages (e.g., D) provide enough capabilities that you could almost certainly do the same with them, but I'm not aware of an actual implementation for them that you can plan on using right now. | 0 | 1,664 | true | 0 | 1 | How much faster are regular expressions processed in C/Java than in Python? | 30,114,902
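A tiny demonstration of the NFA backtracking cost with Python's own re module (a sketch; runtime grows roughly exponentially with the number of a's here, so keep the input short -- this can already take seconds):
import re
import time

start = time.time()
re.match(r'(a+)+$', 'a' * 25 + 'b')    # engine backtracks over every partition
print(time.time() - start)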
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I apologize I couldn't find a proper title, let me explain what I'm working on:
I have a Python IRC bot, and I want to be able to keep track of how long users have been idle in the channel, and allow them to earn things (I have it tied to Skype/Minecraft/my website) each x amount of hours they're idle in the channel.
I already have everything to keep track of each user and have them validated with the site and stuff, but I am not sure how I would keep track of the time they're idle.
I have it capture on join/leave/part messages. How can I get a timer set up when they join, and keep that timer running, along with other times for all of the users who are in that channel, and each hour they've been idle (not all at same time) do something then restart the timer over for them? | 0 | python,timer | 2015-05-08T07:52:00.000 | 0 | 30,118,631 | Two general ways:
Create a separate timer for each user when he joins, do something when the timer fires and destroy it when the user leaves.
Have one timer set to fire, say, every second (or ten seconds) and iterate over all the users when it fires to see how long they have been idle.
A more precise answer would require deeper insight into your architecture, I’m afraid. | 0 | 103 | true | 0 | 1 | Keep track of items in array with timer | 30,118,684 |
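A sketch of the second option above, one periodic timer sweeping over all tracked users (the reward function and the data feed are hypothetical):
import threading
import time

last_seen = {}   # nick -> join/last-activity timestamp, maintained by the bot

def check_idlers():
    now = time.time()
    for nick, ts in last_seen.items():
        if now - ts >= 3600:          # a full hour of idling
            reward(nick)              # hypothetical payout function
            last_seen[nick] = now     # restart that user's clock
    threading.Timer(10, check_idlers).start()   # re-check every 10 seconds

check_idlers()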
1 | 3 | 0 | 0 | 7 | 0 | 0 | 0 | I need to store a massive numpy vector to disk. Right now the vector that I am trying to store is ~2.4 billion elements long and the data is float64. This takes about 18GB of space when serialized out to disk.
If I use struct.pack() and use float32 (4 bytes) I can reduce it to ~9GB. I don't need anywhere near this amount of precision, and disk space is quickly going to become an issue, as I expect the number of values I need to store could grow by an order of magnitude or two.
I was thinking that if I could access the first 4 significant digits I could store those values in an int and only use 1 or 2 bytes of space. However, I have no idea how to do this efficiently. Does anyone have any idea or suggestions? | 0 | python,numpy,scipy | 2015-05-08T18:12:00.000 | 0 | 30,130,277 | Use struct.pack() with the f type code to get them into 4-byte packets. | 1 | 934 | false | 0 | 1 | Binary storage of floating point values (between 0 and 1) using less than 4 bytes? | 30,130,970 |
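A sketch of the struct call, plus a hedged half-precision alternative that gets under 4 bytes (the float16 variant is not part of this answer, and gives only about 3 significant digits):
import struct
import numpy as np

values = np.random.rand(1000)   # placeholder data in [0, 1]

# this answer's suggestion: 4 bytes per value via struct's 'f' code
packed = struct.pack("%df" % len(values), *values)

# 2-byte alternative: numpy half precision
values.astype(np.float16).tofile("vec.f16")
restored = np.fromfile("vec.f16", dtype=np.float16)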
1 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I have a Python script. Let's say http://domain.com/hello.py, which only prints "Hello, World!".
Is it possible to precompile this Python file?
I get around 300 requests per second, and the overhead of compiling is way too high. In Java the server can handle this easily, but for calculations Python is much easier to work with. | 0 | python | 2015-05-09T00:10:00.000 | 0 | 30,134,589 | Via the Python interface, where your Python source file is abc.py:
import py_compile
py_compile.compile('abc.py') | 0 | 12,688 | false | 0 | 1 | Can I pre-compile a python script? | 69,837,261 |
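If several scripts need byte-compiling at once, the standard library also offers compileall (a related aside, not part of the original answer):
python -m compileall .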
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Upon running python in any dir in my amazon EC2 instance, I get the following printout on first line: Python 2.6.9 (unknown, todays_date). Upon going to /usr/bin and running python27, I get this printout on first line: Python 2.7.9 (default, todays_date).
This is a problem because the code that I have only works with Python 2.6.9, and it seems as though my default is Python 2.7.9. I have tried the following things to set default to 2.6:
1) Editing ~/.bashrc and creating an alias for python to point to 2.6
2) Editing ~/.bashrc and exporting the python path
3) Hopelessly scrolling through the /etc folder looking for any kind of file that can reset the default python
What the hell is going on?!?! This might be EC2 specific, but I think my main problem is that upon running /usr/bin/python27 I see that it is default on that first line.
Even upon running python -V, I get Python 2.6. And upon running which python I get /usr/bin/python, but that is not the default that the EC2 instance runs when it attempts to execute my code. I know this because the EC2 prints out Python/2.7.9 in the error log before showing my errors. | 0 | python,python-2.7,amazon-ec2,config,python-2.6 | 2015-05-11T00:29:00.000 | 1 | 30,158,068 | Creating an alias in your ~/bashrc is a good approach.
It sounds like you have not run source ~/.bashrc after you have edited it. Make sure to run this command.
Also keep in mind that when you run sudo python your_script.py it will not use your alias (because you are running as root, not as the ec2-user).
Make sure not to change your default python; it could break several programs in your Linux distribution (again, using an alias in your ~/.bashrc is good). | 0 | 51 | false | 0 | 1 | Python Default Confusion | 31,082,418
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have installed libapache2-mod-wsgi on a Debian 8 64 bit server. Whenever I loaded the domain before this install, the default page for apache2 loaded. But after the install it shows The webpage is not available error, the same error occurs when there is no internet connection on my PC. I have tried a2dismod wsgi to disable it. And then it works again. Can anyone suggest me a workaround? | 0 | python,django,apache2,debian,mod-wsgi | 2015-05-11T09:52:00.000 | 0 | 30,164,528 | >>> a2dismod python did the trick. Mod-wsgi 4.0+ doesn't work with mod-python. | 0 | 256 | true | 1 | 1 | Apache2 doesn't work after installing mod-wsgi | 30,165,117 |
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | How can i run a python script at startup with xubuntu 15.04?
I want to run a script that reminds me of things, like backups, buying things, or calling somebody.
I already have the script, I just need it to start at startup.
(Python 3.4)
As far as I know, Xubuntu 15.04 uses systemd.
All the tutorials I found are for init.d or upstart.
I need one for systemd | 0 | python,linux,python-3.x,xubuntu | 2015-05-11T18:03:00.000 | 1 | 30,174,585 | Found the answer.
You have to go to Configuration/System settings/Session and Startup,
and add the program. | 0 | 517 | true | 0 | 1 | Run Python script when I log into my system? [xubuntu 15.04] | 30,175,033 |
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I am writing a small command line tool in python, which has two subcommands (like: git init or git clone). Those subcommands use a few helper functions, which are not exposed on the command line. When writing tests with py.test, does it make sense to test every helper function separately or only test the two functions for the subcommands (they call all the helpers multiple times). | 0 | python,unit-testing,testing,pytest | 2015-05-13T06:53:00.000 | 0 | 30,207,646 | Testing helper functions makes a lot of sense - in this context, these helper functions are the basic building blocks (read: units) for your application. Having tests that prove that they function properly will allow you to easily change their implementation without worrying about whether you're breaking something else or not. The other direction is also true - suppose you did break one of the helper functions. You'd want a simple test to show you the mistake you made, without having to dig through the complicated implementation of the "public" functions to understand why their tests have been broken. | 0 | 1,214 | true | 0 | 1 | Should I test helper functions or only the main function? | 30,207,881 |
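For example, each helper can get its own focused py.test case (a sketch; the module and helper names are hypothetical):
# test_helpers.py
from mytool.helpers import slugify   # hypothetical helper

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"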
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | How do I set font size for Python console application?
I'm running Debian Wheezy on a Raspberry Pi.
When the program is exited, it should return to the default size. | 0 | python,debian,raspberry-pi | 2015-05-14T07:06:00.000 | 1 | 30,231,345 | You cannot do this.
Font size is determined by the terminal emulator, not by the program's output. | 0 | 153 | false | 0 | 1 | debian wheezy console application set font size | 30,232,512
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I am using gVim with Windows 7.
I am trying to run a python script using the pyfile % command, but every time I do so, I get ImportError: No module named libtcodpy
Checking the location of the libtcodpy, it is indeed in the same folder as the script I am trying to run. Furthermore, running the program with the python IDE works fine.
What am I doing wrong? | 0 | python,vim,python-import | 2015-05-14T18:02:00.000 | 0 | 30,244,222 | Vim does not automatically search the current script's directory for imports, only some configured ones (cp. :help python-_get_paths), and the current working directory.
So, you either need to configure the current script's path, or simply :cd %:h to it (alternatively automatically via :set autochdir). | 0 | 62 | true | 0 | 1 | gVim ImportErrror: module not found, even though module is in same folder as script | 30,245,587 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I created a python script that grabs some info from various websites, is it possible to analyze how long does it take to download the data and how long does it take to write it on a file?
I am interested in knowing how much it could improve running it on a better PC (it is currently running on a crappy old laptop. | 0 | python,linux,performance,ubuntu,analytics | 2015-05-14T23:22:00.000 | 0 | 30,249,063 | If you just want to know how long a process takes to run the time command is pretty handy. Just run time <command> and it will report how much time it took to run with it counted in a few categories, like wall clock time, system/kernel time and user space time. This won't tell you anything about which parts of the system are taking up the amount of time. You can always look at a profiler if you want/need that type of information.
That said, as Barmar said, if you aren't doing much processing of the sites you are grabbing, the laptop is probably not going to be a limiting factor. | 0 | 238 | false | 0 | 1 | Is it possible to see what a Python process is doing? | 30,249,116 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I created a python script that grabs some info from various websites, is it possible to analyze how long does it take to download the data and how long does it take to write it on a file?
I am interested in knowing how much it could improve running it on a better PC (it is currently running on a crappy old laptop. | 0 | python,linux,performance,ubuntu,analytics | 2015-05-14T23:22:00.000 | 0 | 30,249,063 | You can always store the system time in a variable before a block of code that you want to test, do it again after then compare them. | 0 | 238 | false | 0 | 1 | Is it possible to see what a Python process is doing? | 30,249,196 |
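In code, that looks like this (the measured block is a placeholder):
import time

start = time.time()
download_and_write()   # hypothetical block under test
print("took %.2f seconds" % (time.time() - start))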
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a collection of python scripts that import from each other. If I want to use these in a location where the scripts are not physically present, how can I do this. I tried adding the path of the dir with the scripts to my $PATH but got no joy. Any help appreciated, thanks. | 0 | python,unix,path | 2015-05-15T14:12:00.000 | 0 | 30,261,701 | Python doesn't share its own path with the general $PATH, so to be able to do what you're looking for, you must add your scripts in the $PYTHONPATH instead. | 0 | 52 | true | 0 | 1 | Calling python scripts from anywehre | 30,261,822 |
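Two equivalent sketches of that (the directory is a placeholder):
# either point the interpreter at the directory from the shell:
#     export PYTHONPATH=/path/to/scripts:$PYTHONPATH
# or extend the search path from inside a script:
import sys
sys.path.append("/path/to/scripts")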
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | In python, I'm using SendKey module to perform "key down"
My curiosity starts with pressing speed.
When using SendKey, it presses with machine speed. I want to program it to slow down like human speed. Is it possible in Python? | 0 | python,sendkeys | 2015-05-16T06:58:00.000 | 0 | 30,272,764 | It is possible using ctypes.
It can be implemented with the Win32 APIs. | 0 | 86 | true | 0 | 1 | How to program "key down"s speed programmatically?(Python) | 31,356,253
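A rough Windows-only sketch of that idea with the legacy keybd_event call; the sleep length sets the press duration:
import ctypes
import time

KEYEVENTF_KEYUP = 0x0002
VK_A = 0x41   # virtual-key code for 'A'

ctypes.windll.user32.keybd_event(VK_A, 0, 0, 0)                 # key down
time.sleep(0.12)                                                # human-ish hold
ctypes.windll.user32.keybd_event(VK_A, 0, KEYEVENTF_KEYUP, 0)   # key up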
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | My end goal is to update Google Cloud Storage with some JSON data, and I would rather have it run a script than hit a URL endpoint. What would be the proper configuration of cron.yaml if, let's say, the script were to sit right next to app.yaml? | 0 | python,google-app-engine,google-cloud-storage | 2015-05-16T20:32:00.000 | 1 | 30,280,460 | A cron job in GAE has to hit a URL; there's no other way to do it. That's just how the system is designed.
But since you have control over app.yaml anyway, you can easily assign your script to a URL there. | 0 | 622 | false | 1 | 1 | Is there a way to run a Google App engine cron task without hitting an endpoint but rather pointing cron.yaml at a python script? | 30,280,609 |
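A sketch of what that wiring could look like (the URL, schedule and handler names are placeholders, and the exact script: form depends on the App Engine runtime):
# cron.yaml: the cron entry points at a URL...
cron:
- description: push json to cloud storage
  url: /tasks/update_storage
  schedule: every 10 minutes

# ...and app.yaml maps that URL to your script
handlers:
- url: /tasks/update_storage
  script: update_storage.app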
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | I was wondering if there is a way to detect, from a python script, if the computer is connected to the internet using a tethered cell phone?
I have a Python script which runs a bunch of network measurement tests. Ideally, depending on the type of network the client is using (ethernet, wifi, LTE, ...) the tests should change. The challenge is how to get this information from the Python client without asking the user to provide it, and especially how to detect tethering. | 0 | python,wifi,tethering | 2015-05-19T15:58:00.000 | 0 | 30,330,543 | Normally not - from the computer's perspective the tethered cell phone is simply just another wifi router/provider.
You might be able to detect some of the phone carrier networks from the traceroute info to some known servers (DNS names or even IP address ranges of network nodes - they don't change that often).
If you have control over the phone tethering you could also theoretically use the phone's wifi SSID (or even IP address ranges) to identify tethering via individual/specific phones (not 100% reliable either unless you know that you can't get those parameters from other sources). | 0 | 605 | false | 0 | 1 | Detect if connected to the internet using a tethered phone in Python | 30,333,512 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I'm working on a large codebase in an Eclipse project. Our codebase has both Python and C++ code. Since I only worked in Python until now, I created the project as a Python project.
Now I'm about to work heavily with some of our C++ modules, and I would like the benefits of CDT. My Eclipse has CDT (the Eclipse C++ plugin) installed. How do I convert my existing PyDev project to work with C++? | 0 | python,c++,eclipse | 2015-05-20T09:10:00.000 | 0 | 30,345,197 | Basically, you could add the C++ nature to the project. An Eclipse project can have several natures simultaneously, so you will have both Python and C++ goodies.
Right-click on the project.
Select: New -> Other
Under C/C++, select "Convert to a C/C++ project" | 0 | 473 | true | 0 | 1 | Converting a Python project to C++ | 30,345,675 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I am working on a Opencv based Python project. I am working on program development which takes less time to execute. For that i have tested my small program print hello world on python to test the time taken to run the program. I had run many time and every time it run it gives me a different run time.
Can you explain why a simple program takes a different amount of time to execute on each run?
I need my program to be independent of system processes ? | 0 | python,python-2.7 | 2015-05-20T12:58:00.000 | 0 | 30,350,382 | Python gets different amounts of system resources depending upon what else the CPU is doing at the time. If you're playing Skyrim with the highest graphics levels at the time, then your script will run slower than if no other programs were open. But even if your task bar is empty, there may be invisible background processes confounding things.
If you're not already using it, consider using timeit. It performs multiple runs of your program in order to smooth out bad runs caused by a busy OS.
If you absolutely insist on requiring your program to run in the same amount of time every time, you'll need to use an OS that doesn't support multitasking. For example, DOS. | 0 | 38 | false | 0 | 1 | Different time taken by python script every time it is runned? | 30,350,613 |
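A minimal timeit sketch (it runs the snippet many times and reports the total, smoothing out OS noise):
import timeit

print(timeit.timeit("sum(range(1000))", number=10000))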
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | Salutations, all.
I'm working on a Python project and have been tasked with cleaning up the pylint warnings. Thing is, there are specific parts of the code that require indentation or spacing between words that go against Pylint.
Question: is there a way to disable specific pylint warnings on specific files from the rcfile? Like disabling wrong continued indentation on files named *tests.py, without making it affect every file.
I'd just add the pylint comment on files, but I was advised against that.
Thanks in advance. | 0 | python,pylint | 2015-05-20T14:23:00.000 | 0 | 30,352,504 | You can use the ignore option in the configuration file or add # pylint: disable=all on the top of the python file. | 0 | 422 | false | 0 | 1 | On PyLint, disable specific warning on specif files using rcfile | 30,378,706 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a simple python script to send data from a Windows 7 box to a remote computer via SFTP. The script is set to continuously send a single file every 5 minutes. This all works fine but I'm worried about the off chance that the process stops or fails and the customer doesn't notice the data files have stopped coming in. I've found several ways to monitor python processes in a ubuntu/unix environment but nothing for Windows. | 0 | python,windows,monitoring | 2015-05-20T15:52:00.000 | 1 | 30,354,621 | If there are no other mitigating factors in your design or requirements, my suggestion would be to simplify the script so that it doesn't do the polling; it simply sends the file when invoked, and use Windows Scheduler to invoke the script on whatever schedule you need. By relying on a core Windows service, you can factor that complexity out of your script. | 0 | 403 | true | 0 | 1 | How can I monitor a python scrypt and restart it in the event of a crash? (Windows) | 30,358,001 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I have a module in site-packages that uses collection of binary files. Besides hard-coding the absolute paths of these files in the module, is there a better way to handle this module-datafile dependency? | 0 | python,file,dependencies | 2015-05-21T03:58:00.000 | 0 | 30,364,325 | Put the data file in a place relative to the module. Use inspect module to figure out the path of the module and then read the data file using this information. | 0 | 24 | false | 0 | 1 | Best way to handle files used by python module | 30,380,080 |
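A sketch of that idea, here via the module's __file__ attribute, which serves the same purpose as the inspect approach (the data layout is hypothetical):
import os

# resolve the data file relative to this module's own location
HERE = os.path.dirname(os.path.abspath(__file__))
DATA_PATH = os.path.join(HERE, "data", "blob.bin")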
2 | 3 | 0 | 2 | 3 | 0 | 0.132549 | 0 | I was wondering how can I store sent emails
I have a send_email() function in a pre_save() and now I want to store the emails that have been sent so that I can check when an email was sent and if it was sent at all. | 0 | python,django | 2015-05-21T08:22:00.000 | 0 | 30,368,271 | Another way to look at it: send the mail to your backup email account ex: [email protected]. So you can store the email, check if the email is sent or not.
Other than that, having an extra model for logged emails is a way to go. | 0 | 559 | false | 1 | 1 | Python / Django | How to store sent emails? | 30,369,384 |
2 | 3 | 0 | 5 | 3 | 0 | 1.2 | 0 | I was wondering how can I store sent emails
I have a send_email() function in a pre_save() and now I want to store the emails that have been sent so that I can check when an email was sent and if it was sent at all. | 0 | python,django | 2015-05-21T08:22:00.000 | 0 | 30,368,271 | I think the easiest way before messing up with middleware or whatever is to simply create a model for your logged emails and add a new record if send was successful. | 0 | 559 | true | 1 | 1 | Python / Django | How to store sent emails? | 30,368,302 |
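A minimal sketch of such a model, to be saved from send_email() after a successful send (the field names are hypothetical):
from django.db import models

class SentEmail(models.Model):
    recipient = models.EmailField()
    subject = models.CharField(max_length=255)
    sent_at = models.DateTimeField(auto_now_add=True)
    success = models.BooleanField(default=False)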
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am always perplexed with the whole hi-ascii handling in python 2.x. I am currently facing an issue in which I have a string with hiascii characters in it. I have a few questions related to it.
How can a string store hiascii characters in it (not a unicode string, but a normal str in python 2.x), which I thought can handle only ascii chars. Does python internally convert the hiascii to something else ?
I have a CLI which I spawn as a subprocess from my Python code. When I pass this string to the CLI, it works fine, while if I encode this string to utf-8, the CLI fails (this string is a password, so it fails saying the password is invalid).
For the second point, I actually did a bit of research and found the following:
1) In Windows (sucks), the command line args are encoded in mbcs (sys.getfilesystemencoding). The question I still don't get is: if I read the same string using raw_input, it is encoded in the Windows console encoding (on an EN Windows, it was cp437).
I have a different question that I am confused about now regarding Windows encoding. Is the Windows sys.stdin.encoding different from the Windows console encoding?
If yes, is there a pythonic way to figure out what my Windows console encoding is? I needed this because when I read input using raw_input, it's encoded in the Windows console encoding, and I want to convert it to, say, utf-8. Note: I have already set my sys.stdin.encoding to utf-8, but it doesn't seem to have any effect on the read input.
Yes. There are two locale-specific codepages:
the ANSI code page, aka mbcs, used for strings in the Win32 ...A APIs (the ANSI variants, e.g. CreateFileA) and hence for C runtime operations like reading the command line;
the IO code page, used for stdin/stdout/stderr streams.
These do not have to be the same encoding, and typically they aren't. In my locale (UK), the ANSI code page is 1252 and the IO code page defaults to 850. You can change the console code page using the chcp command, so you can make the two encodings match using eg chcp 1252 before running the Python command.
(You also have to be using a TrueType font in the console for chcp to make any difference.)
is there a pythonic way to figure out what my windows console encoding is.
Python reads it at startup using the Win32 API GetConsoleOutputCP and—unless overridden by PYTHONIOENCODING—writes that to the property sys.stdout.encoding. (Similarly GetConsoleCP for stdin though they will generally be the same code page.)
If you need to read this directly, regardless of whether PYTHONIOENCODING is set, you might have to use ctypes to call GetConsoleOutputCP directly.
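A sketch of that ctypes call (Windows only); the GetACP call is added here to show the ANSI code page for comparison and is not part of the original point:

    import ctypes

    kernel32 = ctypes.windll.kernel32
    # the IO code page used for console stdin/stdout/stderr, e.g. 850 or 437
    print("console IO code page: %d" % kernel32.GetConsoleOutputCP())
    # the ANSI/mbcs code page, e.g. 1252
    print("ANSI code page: %d" % kernel32.GetACP())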
Note: I have already set my sys.stdin.encoding to utf-8, but it doesn't seem to have any effect on the read input.
(How have you done that? It's a read-only property.)
Although you can certainly treat input and output as UTF-8 at your end, the Windows console won't supply or display content in that encoding. Most other tools you call via the command line will also be treating their input as encoded in the IO code page, so would misinterpret any UTF-8 sent to them.
You can affect what code page the console side uses by calling the Win32 SetConsoleCP/SetConsoleOutputCP APIs with ctypes (equivalent of the chcp command and also requires TTF console font). In principle you should be able to set code page 65001 and get something that is nearly UTF-8. Unfortunately long-standing console bugs usually make this approach infeasible.
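For completeness, the ctypes equivalent of that chcp call — subject to the same console bugs noted above, and affecting only the current console window:

    import ctypes

    kernel32 = ctypes.windll.kernel32
    kernel32.SetConsoleCP(65001)        # input code page
    kernel32.SetConsoleOutputCP(65001)  # output code page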
windows(sucks)
yes. | 0 | 766 | false | 0 | 1 | hi-ascii characters python string | 30,392,673 |
I am trying to make a program (well, more like two programs that work together): A. The server (host), which is connected to the Apache server and sends commands to it, and B. The client, which is also connected but can't send commands, only receive them.
Example:
If I typed in the server(host). Log time it would send that command to the apache server and the client would grab the time and send me back the details to the (host).
Example 2:
If I typed start keylogger -t 2000 (-t is time and 2000 is every 2000 mseconds) in the server(host) it would start the built in keylogger and start sending logged info every 2000 mseconds.
If I typed
I am not a first-time programmer — I usually write in C#/Ruby, and Python was my first language — so I will understand whatever you have for me; it's just that I have never really used Apache before. Any help would be very appreciated!
Server:
Apache + mod_wsgi for front-end web server (allows for using flask or django); there's not much here, it's a pretty easy configuration.
I would use Django for the web framework.
Client:
Client would poll the server at some interval (1 minute, 10 minutes, etc.); on each poll the client would receive commands from the server that it would then execute (a minimal polling sketch follows this list).
I'm not sure if there is a keylogging module in Python; if there isn't one, I'd have to write one in C for each OS the client is intended to run on.
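A minimal sketch of the polling loop described above. The URL, interval, response format, and the handle() dispatcher are all assumptions — none of them come from the question (Python 2, matching the era of the question):

    import time
    import urllib2

    POLL_URL = "http://example.com/api/commands"  # hypothetical endpoint
    INTERVAL = 60  # seconds between polls

    def handle(command):
        # stub dispatcher -- real code would map commands to actions
        print("would execute: %s" % command)

    while True:
        try:
            command = urllib2.urlopen(POLL_URL).read().strip()
            if command:
                handle(command)
        except urllib2.URLError:
            pass  # server unreachable; try again on the next cycle
        time.sleep(INTERVAL)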
I'm going with pull requests because firewalls rarely do egress filtering; they will always do inbound filtering virtually by default and therefore the way you asked how to set this is up initially will not work. This is actually how most botnets work (take a look at Chrome for example). | 0 | 28 | false | 0 | 1 | Python/ Apache Help needed | 30,408,643 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a set of Python scripts that sets up various parameters/properties for ANT script based on various conditions that are checked in the Python scripts and then invokes/executes the appropriate ANT build script.
I initiate the Python script from my Eclipse IDE. I have the PyDev plug-in installed in Eclipse.
I am able to debug the Python script but cannot debug the ANT scripts, maybe because they aren't invoked via 'Run ANT Script'.
Is there a way to debug an ANT script initiated by Python (or any other mechanism for that matter)? | 0 | python,eclipse,debugging,ant,pydev | 2015-05-23T12:06:00.000 | 1 | 30,412,458 | When dealing with multiple languages, the way to debug is regular debugging in one and remote debugging in the other... I'm guessing the Ant debug uses the regular Java debugger, which supports remote debugging, so you can probably try that (although I haven't tried it here to confirm). | 0 | 112 | false | 1 | 1 | Debugging ANT script initiated via a Python script | 30,913,800
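One concrete way to get the remote-debugging setup the answer above hints at is to pass the JVM's JPDA agent options through Ant's standard ANT_OPTS environment variable, then attach Eclipse's "Remote Java Application" debug configuration to the port (the port number is an arbitrary choice):

    REM Windows (use `export` on Linux); Ant blocks until a debugger attaches
    set ANT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000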
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I'm looking into what it would take to add a feature to my site so this is a pretty naive question.
I'd like to be able to connect buyers and sellers via an email message once the buyer clicks "buy".
I can see how I could do this in JavaScript, querying the user database and sending an email with both parties involved. What I'm wondering is if there's a better way I can do this, playing monkey in the middle so they only receive an email from my site, and then it's automatically forwarded to the other party. That way they don't have to remember to hit reply-all, just reply. Also their email addresses remain anonymous.
Again assuming I generate a unique subject line with the transaction ID I could apply some rules here to just automatically forward the email from one party to the other but is there an API or library which can already do this for you? | 0 | javascript,php,python,email | 2015-05-24T19:12:00.000 | 0 | 30,427,300 | What you're describing would be handled largely by your backend. If this were my project, I would choose the following simple route:
Store the messages the buyers/sellers send in your own database, then simply send notification emails when messages are sent. Have them reply to each other on your own site, like Facebook and eBay do.
An example flow would go like this:
(Gather the user and buyer's email addresses via registration)
Buyer enters a message and clicks 'Send Message' button on seller's page
Form is posted (via AJAX or via POST) to a backend script
Your backend code generates an email message
Sets the 'To' field to the seller
Your seller gets an email alert of the potential buyer which shows the buyer's message
The Seller then logs on to your site (made easy by a URL in the email) to respond
The Seller enters a response message on your site and hits 'Reply'
Buyer gets an email notification with the message body and a link to your site where they can compose a reply.
...and so on. So, replies would have to be authored on-site, rather than as an email 'reply.'
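A hedged sketch of the notification step in the flow above (steps 4–6): the addresses, SMTP host, and message text are placeholders, and any transactional provider's SMTP endpoint could stand in for localhost:

    import smtplib
    from email.mime.text import MIMEText

    def notify_seller(seller_email, buyer_message, listing_url):
        # build the notification; the reply happens on-site, via listing_url
        msg = MIMEText("You have a new message:\n\n%s\n\nReply here: %s"
                       % (buyer_message, listing_url))
        msg["Subject"] = "New buyer message on your listing"
        msg["From"] = "[email protected]"  # hypothetical site address
        msg["To"] = seller_email

        server = smtplib.SMTP("localhost")   # or your provider's SMTP host
        server.sendmail(msg["From"], [seller_email], msg.as_string())
        server.quit()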
If you choose to go this route, there are some simple 3rd party "transactional email" providers, which is the phrase you'd use to find them. I've personally used SendGrid and found them easy to set up and use. They have a simple plug in for every major framework. There is also Mandrill, which is newer but gets good reviews as well. | 0 | 74 | false | 1 | 1 | Email API for connecting in a marketplace? | 30,427,829 |
I work on a system and the hosting guys don't want to use an install script that uses pip. Now we have a large pip requirements file that installs the dependencies. Is there any other way to do it than using pip? Can it be done using yum or apt-get? We are using Linux.
Python packages in Linux distribution repositories are often outdated and may come with quirks that other Python package authors did not plan for. This is especially true for Python packages with compiled code. If a documentation tells you that a certain dependency should be obtained directly from PyPI via pip, then you better follow that requirement. Convince your hosting guys to use the right tools, namely pip combined with virtualenv. The latter will create an isolated environment and make sure that the system will stay clean (really, nobody needs to do a sudo pip install, which probably is the thing your hosting guys are afraid of). | 0 | 55 | true | 0 | 1 | Is there a more efficient way to satisfy project dependencies than pip? | 30,434,337 |
1 | 2 | 0 | -3 | 11 | 1 | -0.291313 | 0 | I recently started Python development on the Raspberry Pi. While reading about .pyc files to speed up startup, I was wondering: if I test a .pyc file on a PC, given that the same Python modules are available on the RPi, will it work directly? Please also explain what happens if the Python version or any module version differs on the target platform.
Thanks in advance. | 0 | python,raspberry-pi | 2015-05-25T17:45:00.000 | 0 | 30,443,490 | Short answer: Yes. Just keep in mind that your code must be OS-aware too.
And use the same version of Python on both platforms. | 0 | 5,632 | false | 0 | 1 | Is .pyc platform independent? | 30,443,607
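One quick way to confirm compatibility up front is to compare the bytecode magic number on both machines — .pyc files are architecture-independent but tied to the interpreter's bytecode version (Python 2 shown, matching the question):

    import binascii
    import imp

    # identical output on the PC and the Pi means their .pyc files are interchangeable
    print(binascii.hexlify(imp.get_magic()))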