Available Count (int64, 1 to 31) | AnswerCount (int64, 1 to 35) | GUI and Desktop Applications (int64, 0 to 1) | Users Score (int64, -17 to 588) | Q_Score (int64, 0 to 6.79k) | Python Basics and Environment (int64, 0 to 1) | Score (float64, -1 to 1.2) | Networking and APIs (int64, 0 to 1) | Question (string, lengths 15 to 7.24k) | Database and SQL (int64, 0 to 1) | Tags (string, lengths 6 to 76) | CreationDate (string, length 23) | System Administration and DevOps (int64, 0 to 1) | Q_Id (int64, 469 to 38.2M) | Answer (string, lengths 15 to 7k) | Data Science and Machine Learning (int64, 0 to 1) | ViewCount (int64, 13 to 1.88M) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | Other (int64, 1 to 1) | Title (string, lengths 15 to 142) | A_Id (int64, 518 to 72.2M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 2 | 1 | 1 | 1.2 | 0 | What are the purposes of *.pyd files in the DLLs directory, header files (*.h) in the include directory, and *.lib files in the libs directory? When I delete them it seems that at least some basic python code works properly. | 0 | python | 2014-08-15T10:35:00.000 | 0 | 25,325,004 | *.pyd are compiled python extensions (on windows). *.lib are library modules used for building and linking with python itself. The *.h are C include files needed when you are creating your own extensions.
Generally these are all quite small and do not consume material disk space. I recommend you leave them alone. Even if you don't need them now, you may want them in the future (when they might be difficult to locate). | 0 | 32 | true | 0 | 1 | Purposes of different files in python installation | 25,325,351 |
1 | 1 | 0 | 10 | 3 | 0 | 1.2 | 0 | I'm testing out a producer consumer example of RabbitMQ using Pika 0.98. My producer runs on my local PC, and the consumer runs on an EC2 instance at Amazon.
My producer sits in a loop and sends up some system properties every second. The problem is that I am only seeing the consumer read every 2nd message, it's as though every 2nd message is not being read. For example, my producer prints out this (timestamp, cpu pct used, RAM used):
2014-08-16 14:36:17.576000 -0700,16.0,8050806784
2014-08-16 14:36:18.578000 -0700,15.5,8064458752
2014-08-16 14:36:19.579000 -0700,15.0,8075313152
2014-08-16 14:36:20.580000 -0700,12.1,8074121216
2014-08-16 14:36:21.581000 -0700,16.0,8077778944
2014-08-16 14:36:22.582000 -0700,14.2,8075038720
but my consumer is printing out this:
Received '2014-08-16 14:36:17.576000 -0700,16.0,8050806784'
Received '2014-08-16 14:36:19.579000 -0700,15.0,8075313152'
Received '2014-08-16 14:36:21.581000 -0700,16.0,8077778944'
The code for the producer is:
import pika
import psutil
import time
import datetime
from dateutil.tz import tzlocal
import logging
logging.getLogger('pika').setLevel(logging.DEBUG)
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='54.191.161.213'))
channel = connection.channel()
channel.queue_declare(queue='ems.data')
while True:
now = datetime.datetime.now(tzlocal())
timestamp = now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
msg="%s,%.1f,%d" % (timestamp, psutil.cpu_percent(),psutil.virtual_memory().used)
channel.basic_publish(exchange='',
routing_key='ems.data',
body=msg)
print msg
time.sleep(1)
connection.close()
And the code for the consumer is:
connection = pika.BlockingConnection(pika.ConnectionParameters(
host='0.0.0.0'))
channel = connection.channel()
channel.queue_declare(queue='hello')
print ' [*] Waiting for messages. To exit press CTRL+C'
def callback(ch, method, properties, body):
print " [x] Received %r" % (body,)
channel.basic_consume(callback,
queue='hello',
no_ack=True)
channel.start_consuming() | 0 | python,rabbitmq,amqp,pika | 2014-08-16T21:38:00.000 | 1 | 25,344,239 | Your code is fine logically, and runs without issue on my machine. The behavior you're seeing suggests that you may have accidentally started two consumers, with each one grabbing a message off the queue, round-robin style. Try either killing the extra consumer (if you can find it), or rebooting. | 0 | 1,694 | true | 1 | 1 | Python RabbitMQ - consumer only seeing every second message | 25,345,174 |
1 | 3 | 0 | 1 | 6 | 1 | 0.066568 | 0 | I would like to access the $PATH variable from inside a python program. My understanding so far is that sys.path gives the Python module search path, but what I want is $PATH the environment variable. Is there a way to access that from within Python?
To give a little more background, what I ultimately want to do is find out where a user has Package_X/ installed, so that I can find the absolute path of an html file in Package_X/. If this is a bad practice or if there is a better way to accomplish this, I would appreciate any suggestions. Thanks! | 0 | python,bash,filesystems,sys | 2014-08-16T23:09:00.000 | 1 | 25,344,841 | sys.path and PATH are two entirely different variables. The PATH environment variable specifies to your shell (or more precisely, the operating system's exec() family of system calls) where to look for binaries, whereas sys.path is a Python-internal variable which specifies where Python looks for installable modules.
The environment variable PYTHONPATH can be used to influence the value of sys.path if you set it before you start Python.
Conversely, os.environ['PATH'] can be used to examine the value of PATH from within Python (or any environment variable, really; just put its name inside the quotes instead of PATH). | 0 | 1,684 | false | 0 | 1 | sys.path vs. $PATH | 64,768,294 |
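A minimal sketch contrasting the two variables (the printed values are hypothetical and depend on your environment):

```python
import os
import sys

# PATH tells the OS where to find executables; sys.path tells Python where
# to find importable modules. The two are unrelated.
print(os.environ.get('PATH'))       # e.g. /usr/local/bin:/usr/bin:...
print(sys.path)                     # e.g. ['', '/usr/lib/python2.7', ...]

# PYTHONPATH (an environment variable) feeds extra entries into sys.path
# at interpreter startup:
print(os.environ.get('PYTHONPATH'))
```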
1 | 1 | 0 | 4 | 3 | 1 | 1.2 | 0 | Disclaimer: I'm still not sure I understand fully what setup.py does.
From what I understand, using a setup.py file is convenient for packages that need to be compiled, or to notify Distutils that the package has been installed and can be used in another program. setup.py is thus great for libraries or modules.
But what about super simple packages that only has a foo.py file to be run? Is a setup.py file making packaging for Linux repository easier? | 0 | python,setup.py | 2014-08-17T00:02:00.000 | 0 | 25,345,130 | Using a setup.py script is only useful if:
Your code is a C extension, and then depends on platform-specific features that you really don't want to define manually.
Your code is pure Python but depends on other modules, in which case dependencies may be resolved automatically.
For a single file or a small set of files that don't rely on anything else, writing one is not worth the hassle. As a side note, your code is likely to be more attractive to people if trying it out doesn't require a complex setup. "Installing" it is then just about copying a directory or a single file into one's project directory. | 0 | 157 | true | 0 | 1 | Should every python program be distributed with a setup.py? | 25,345,187 |
I am looking for an efficient and smart way to send data between a C++ program and a Python script. I have a C++ program which calculates some coordinates in real time at 30 Hz, and I want to access these coordinates with a Python script. My first idea was to simply create a .txt file, write the coordinates to it, and then have Python open the file and read. But I figured that there must be a smarter and more efficient way using the RAM and not the hard drive.
Does anyone have any good solutions for this? The C++ program should write 3 coordinates (x, y, z) to some sort of buffer or file, and the Python program can open it and read them. Ideally the C++ program overwrites the coordinates every time, and there's no problem with reading and writing to the file/buffer at the same time.
Thank you for your help | 0 | python,c++,cross-platform,buffer,communication | 2014-08-19T12:37:00.000 | 0 | 25,383,624 | I think we can use Named Pipes for communication between a python and C++ program if the processes are on the same machine.
However for different machines sockets are the best option. | 0 | 20,489 | false | 0 | 1 | Communication between C++ and Python | 34,848,899 |
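A minimal sketch of the socket route from the Python side; the host, port, and line format are assumptions (the C++ program is imagined to serve "x,y,z\n" lines on localhost:5005):

```python
import socket

# Connect to the (hypothetical) C++ coordinate server.
sock = socket.create_connection(("127.0.0.1", 5005))
stream = sock.makefile("r")

for line in stream:
    # Parse one "x,y,z" line per update; no disk I/O involved.
    x, y, z = (float(v) for v in line.strip().split(","))
    print(x, y, z)
```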
1 | 22 | 0 | 1 | 221 | 1 | 0.009091 | 0 | I've got a python project with a configuration file in the project root.
The configuration file needs to be accessed in a few different files throughout the project.
So it looks something like: <ROOT>/configuration.conf
<ROOT>/A/a.py, <ROOT>/A/B/b.py (when b,a.py access the configuration file).
What's the best / easiest way to get the path to the project root and the configuration file without depending on which file inside the project I'm in? i.e without using ../../? It's okay to assume that we know the project root's name. | 0 | python | 2014-08-19T17:02:00.000 | 0 | 25,389,095 | I used the ../ method to fetch the current project path.
Example:
Project1 (located in D:\projects)
    src
        ConfigurationFiles
            Configuration.cfg
Path="../src/ConfigurationFiles/Configuration.cfg" | 0 | 355,036 | false | 0 | 1 | Python - Get path of root project structure | 61,017,490 |
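An alternative sketch that avoids hard-coded "../" strings by walking up from __file__; the two extra dirname() calls assume the <ROOT>/A/B/b.py layout from the question:

```python
import os

# Resolve the project root relative to this file, not the working directory.
ROOT = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
CONFIG = os.path.join(ROOT, "configuration.conf")
```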
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | okay, So for a school project I'm using raspberry pi to make a device that basically holds both the functions of an ocr and a tts. I heard that I need to use Google's tesseract through a terminal but I am not willing to rewrite the commands each time I want to use it. so i was wondering if i could either:
A: Use python to print commands into the LX Terminal
B: use a type of loop command on the LX terminal and save as a script?
It would also be extremely helpful if I could find out how to make my RPi go straight to my script rather than the Raspbian desktop when it first boots up.
Thanks in advance. | 0 | python,terminal,raspberry-pi,tesseract,raspbian | 2014-08-20T02:22:00.000 | 1 | 25,395,814 | There are ways to do what you asked, but I think you lack some research of your own, as some of these answers are very "googlable".
You can print commands to LX terminal with python using "sys.stdout.write()"
For the boot question:
1 - sudo raspi-config
2 - change the Enable Boot to Desktop to Console
3 - there is more than one way to make your script auto-executable:
-you have the Crontab (which I think will be the easiest, but probably not the best of the 3 ways)
-you can also make your own init.d script (best, not easiest)
-or you can use the rc.local
Also be careful when placing an infinite-loop script in auto-boot.
Make a quick google search and you will find everything you need.
Hope it helps.
D.Az | 0 | 428 | false | 0 | 1 | can I use python to paste commands into LX terminal? | 25,409,597 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | How to change a modification time of file via ftp?
Any suggestions? Thanks! | 0 | python-2.7,ftp | 2014-08-20T04:50:00.000 | 0 | 25,396,898 | This is impossible
"The original FTP specification and recent extensions to it do not include any way to preserve the time and data for files uploaded to a FTP server." | 0 | 318 | false | 0 | 1 | How to set file modification time via ftp with python | 25,397,066 |
1 | 2 | 1 | 0 | 1 | 1 | 0 | 0 | Can python embedded into c++ allow you to run n python scripts concurrently?
I am currently dealing with the dread which is the GIL. My project requires concurrency of at least 2 threads and the easy typing in Python would really help with code simplicity.
Would embedding my Python code in a C++ script which deals with the threading circumvent the problems the GIL causes? | 0 | python,c++,multithreading,cpython,gil | 2014-08-20T08:50:00.000 | 0 | 25,400,493 | Only if you spawn separate interpreters.
The GIL is a one-per-interpreter policy to protect interpreter internals. One interpreter will run only one thread of Python code at a time.
The only other way is to program at least one of your threads in pure C++ and offer a communication queue API to your python script or any way to communicate asynchronously really. | 0 | 1,020 | false | 0 | 1 | Python GIL: concurrent C++ embed | 25,400,635 |
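If staying in pure Python is acceptable, a sketch of the separate-interpreters route via multiprocessing (each worker process gets its own interpreter and therefore its own GIL):

```python
from multiprocessing import Pool

def crunch(n):
    # CPU-bound work; runs truly in parallel across processes.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool(2)
    print(pool.map(crunch, [10 ** 6, 10 ** 6]))
    pool.close()
    pool.join()
```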
1 | 5 | 0 | -1 | 1 | 0 | -0.039979 | 0 | In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark few tests as NOT EXECUTED when it fails due to dependencies.
I'm not sure on how to achieve this. I need expert's advise for me to move ahead. | 0 | python,robotframework | 2014-08-21T09:29:00.000 | 0 | 25,422,847 | Actually, you can SET TAG to run whatever keyword you like (for sanity testing, regression testing...)
Just go to your test script configuration and set tags
And whenever you want to run, just go to Run tab and select check-box Only run tests with these tags / Skip tests with these tags
And click Start button :) Robot framework will select any keyword that match and run it.
Sorry, I don't have enough reputation to post images :( | 0 | 3,561 | false | 1 | 1 | Customized Execution status in Robot Framework | 25,830,228 |
1 | 3 | 0 | 0 | 2 | 0 | 1.2 | 0 | I'm using PySerial library for a project and it works fine. Though, now that my requirements have changed and i need to implement a non blocking serial write option. I went through PySerial official documentation as well as several examples however, couldn't find a suitable option. Now my question is : is non-blocking write possible with PySerial ? If yes, how ?
Thanks in advance. | 0 | python,serial-port,pyserial | 2014-08-21T10:30:00.000 | 0 | 25,424,056 | It turns out. To drain serial output physically, we have serial.drainOutput() function. It works but if you're looking for a real time operation it might not help as python is not a very good language if you're expecting a real time performance.
Hope it helps. | 0 | 5,879 | true | 0 | 1 | PySerial non blocking write? | 26,436,860 |
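A common workaround is to push writes through a background thread so the caller never blocks; the port name and baud rate below are placeholders:

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

import serial

ser = serial.Serial("/dev/ttyUSB0", 115200)
outbox = queue.Queue()

def writer():
    while True:
        data = outbox.get()
        ser.write(data)     # blocking, but only this thread waits

t = threading.Thread(target=writer)
t.daemon = True
t.start()

outbox.put(b"hello")        # returns immediately
```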
2 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I'm currently building an application in my Raspberry Pi. This application have to use I2C bus along serial port. Previously I developed both applications independent between them; for I2C app, I used a python3 module for handling the bus. But for handling serial port I use a python 2 module.
Now I want to build an app that handle the two interfaces. Is this possible? How should I do it?
Thanks. | 0 | python,python-3.x,raspberry-pi,compatibility,python-2.x | 2014-08-21T14:47:00.000 | 0 | 25,429,410 | Search for backports or try to split it up into different processes. | 0 | 71 | false | 0 | 1 | Is it possible to have an application using python 2 and python 3 modules? | 25,429,625 |
2 | 2 | 0 | 0 | 3 | 0 | 1.2 | 0 | I'm currently building an application in my Raspberry Pi. This application have to use I2C bus along serial port. Previously I developed both applications independent between them; for I2C app, I used a python3 module for handling the bus. But for handling serial port I use a python 2 module.
Now I want to build an app that handle the two interfaces. Is this possible? How should I do it?
Thanks. | 0 | python,python-3.x,raspberry-pi,compatibility,python-2.x | 2014-08-21T14:47:00.000 | 0 | 25,429,410 | Finally, I used 2to3 to convert python 2 modules. But due to the module interacts with serial port (pyserial), byte handling were not full correctly converted, so I have to edit the code after conversion using encode/decode functions. | 0 | 71 | true | 0 | 1 | Is it possible to have an application using python 2 and python 3 modules? | 25,550,994 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | How to structure GET 'review link' request from Vimeo API?
New to python and assume others might benefit from my ignorance.
I'm simply trying to upload via the new vimeo api and return a 'review link'.
Are there current examples of the vimeo-api in python? I've read the documentation and can upload perfectly fine. However, when it comes to the HTTP GET I can't seem to figure it out. I'm using python 2.7.5 and have tried the requests library. I'm ready to give up and just go back to PHP because it's documented so much better.
Any python programmers out there familiar? | 0 | python,vimeo,vimeo-api | 2014-08-23T16:56:00.000 | 0 | 25,464,255 | EDIT: Since this was written the vimeo.py library was rebuilt. This is now as simple as taking the API URI and requesting vc.get('/videos/105113459') and looking for the review link in the response.
The original:
If you know the API URL you want to retrieve this for, you can convert it into a vimeo.py call by replacing the slashes with dots. The issue with this is that in Python attributes (things separated by the dots), are syntax errors.
With our original rule, if you wanted to see /videos/105113459 in the python library you would do vc.videos.105113459() (if you had vc = vimeo.VimeoClient(<your token and app data>)).
To resolve this you can instead use python's getattr() built-in function to retrieve this. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459.
I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround. | 0 | 435 | true | 0 | 1 | How to structure get 'review link' request from Vimeo API? | 25,668,644 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am dealing with the following string of bytes in Python 3.
b'\xf9', b'\x02', b'\x03', b'\xf0', b'y', b'\x02', b'\x03', b'S', b'\x00', b't', b'\x00', b'a'
This is a very confusing bunch of bytes for me because it is coming from a microcontroller which is emitting information according to the MIDI protocol.
My first question is about the letters near the end. Most all of the other bytes are true hexadecimal values (i.e. I know the b'\x00' is supposed to be a null character). However, the capital S, which is supposed to be a capital S, appears as such (a b'S'). According to the ASCII / HEX charts I have looked at, Uppercase S should be x53 (which is what b'\x53'.decode('utf-8') returns.
However, in Python when I do b'S'.decode('utf-8') it also returns a capital S (how can it be both?)
Also, some of the bytes (such as b'\xf9') are truly meant to be escaped (which is why they have the \x); however, I am running into issues when trying to decode them. When running [byteString].decode('utf-8') on a longer version of the above string I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf9 in position 0: invalid start byte
Shouldn't those bytes be skipped over or printed as? Thanks | 0 | python,unicode,utf-8,decode,encode | 2014-08-24T03:09:00.000 | 0 | 25,468,191 | They return the same thing because b'\x53' == b'S'. It's the same of other characters in the ASCII table as they're represented by the same bytes.
You're getting a UnicodeDecodeError because you seem to be using a wrong encoding. If I run b'\xf9'.decode('iso-8859-1') I get ù so it's possible that the encoding is ISO-8859-1.
However, I'm not familiar with the MIDI protocol so you have to review it to see what bytes need to be interpreted as what encoding. If decode all the given bytes as ISO-8859-1 it doesn't give me a meaningful string so it may mean that these bytes stand for something else, not text? | 0 | 312 | true | 0 | 1 | Issues decoding a Python 3 Bytes object based on MIDI | 25,468,417 |
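A small demonstration of both points (pure Python, no MIDI required):

```python
# b'S' and b'\x53' are literally the same byte, so both decode to 'S'.
print(b'S' == b'\x53')                 # True

data = b'\xf9\x02\x03\xf0y\x02\x03S\x00t\x00a'

# 0xF9 is not a valid UTF-8 start byte, hence the UnicodeDecodeError.
# Single-byte codecs such as latin-1 map every byte value and never raise:
print(data.decode('latin-1'))

# MIDI bytes are not text at all, so often the numeric values are what you want:
print([hex(b) for b in bytearray(data)])
```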
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl. | 0 | python,perl | 2014-08-26T14:20:00.000 | 1 | 25,508,177 | Every modern Linux or Unixlike system comes with both installed; which you choose is a matter of taste. Just pick one.
I would say that whatever you choose, you should try to write easily-readable code in it, even if you're the only one who will be reading that code. Now, Pythonists will tell you that writing readable code is easier in Python, which may be true, but it's certainly doable in Perl as well. | 0 | 254 | false | 0 | 1 | Python vs Perl for portability? | 25,508,481 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of test harness "just working" for the largest number of current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl. | 0 | python,perl | 2014-08-26T14:20:00.000 | 1 | 25,508,177 | This is more personal, but I always used python and will use it till something shows me it's better to use other. Its simple, very expansible, strong and fast and supported by a lot of users.
The version of python doesn't matter much since you can update it, and most OSes have python 2.5, which expands the compatibility a lot. Perl is also included in Linux and Mac OS, though.
I think that Python will be good, but if you like perl and have always worked with it, just use perl and don't complicate your life. | 0 | 254 | false | 0 | 1 | Python vs Perl for portability? | 25,508,315 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm building a house monitor using a Raspberry Pi and midori in kiosk mode. I wrote a python script which reads out some GPIO pins and prints a value. Based on this value I'd like to specify a javascript event. So when for example some sensor senses it's raining I want the browser to immediately update its GUI.
What's the best way to do this? I tried executing the python script in PHP and accessing it over AJAX, but this is too slow. | 0 | javascript,python,ajax,raspberry-pi,sudo | 2014-08-26T17:25:00.000 | 0 | 25,511,708 | Solved it by spawning my python script from nodejs and communicating in realtime to my webclient using socket.io. | 0 | 556 | true | 1 | 1 | Javascript execute python file | 25,527,346 |
I am trying to install 'fabric'. I tried using 'pip install fabric' and the installation fails when it tries to install 'pycrypto'.
I see it is fetching the 2.6.1 version. I tried installing lower versions and I am getting same error.
'sudo easy_install fabric' also throws same error.
I also have the gmplib installed. I have the lib file in these places
/usr/lib/libgmp.dylib
/usr/local/lib/libgmp.dylib
pip install fabric
Requirement already satisfied (use --upgrade to upgrade): fabric in /Library/Python/2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): paramiko>=1.10.0 in /Library/Python/2.7/site-packages/paramiko-1.14.1-py2.7.egg (from fabric)
Downloading/unpacking pycrypto>=2.1,!=2.4 (from paramiko>=1.10.0->fabric)
Downloading pycrypto-2.6.1.tar.gz (446kB): 446kB downloaded
Running setup.py (path:/private/tmp/pip_build_root/pycrypto/setup.py) egg_info for package pycrypto
Requirement already satisfied (use --upgrade to upgrade): ecdsa in /Library/Python/2.7/site-packages/ecdsa-0.11-py2.7.egg (from paramiko>=1.10.0->fabric)
Installing collected packages: pycrypto
Running setup.py install for pycrypto
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for __gmpz_init in -lgmp... yes
checking for __gmpz_init in -lmpir... no
checking whether mpz_powm is declared... yes
checking whether mpz_powm_sec is declared... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for inttypes.h... (cached) yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking for stdint.h... (cached) yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking wchar.h usability... yes
checking wchar.h presence... yes
checking for wchar.h... yes
checking for inline... inline
checking for int16_t... yes
checking for int32_t... yes
checking for int64_t... yes
checking for int8_t... yes
checking for size_t... yes
checking for uint16_t... yes
checking for uint32_t... yes
checking for uint64_t... yes
checking for uint8_t... yes
checking for stdlib.h... (cached) yes
checking for GNU libc compatible malloc... yes
checking for memmove... yes
checking for memset... yes
configure: creating ./config.status
config.status: creating src/config.h
building 'Crypto.PublicKey._fastmath' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -Wall -Wstrict-prototypes -Wshorten-64-to-32 -fwrapv -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/ -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/_fastmath.c -o build/temp.macosx-10.9-intel-2.7/src/_fastmath.o
src/_fastmath.c:83:13: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = p->ob_size;
~ ~~~^~~~~~~
src/_fastmath.c:86:10: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32]
size = -p->ob_size;
~ ^~~~~~~~~~~
src/_fastmath.c:113:49: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
int size = (mpz_sizeinbase (m, 2) + SHIFT - 1) / SHIFT;
~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~
src/_fastmath.c:1310:12: warning: implicit conversion loses integer precision: 'unsigned long' to 'unsigned int' [-Wshorten-64-to-32]
offset = mpz_get_ui (mpz_offset);
~ ^~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/gmp.h:840:20: note: expanded from macro 'mpz_get_ui'
#define mpz_get_ui __gmpz_get_ui
^
src/_fastmath.c:1360:10: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1373:27: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
rounds = mpz_get_ui (n) - 2;
~ ~~~~~~~~~~~~~~~^~~
src/_fastmath.c:1433:9: warning: implicit conversion loses integer precision: 'unsigned long' to 'int' [-Wshorten-64-to-32]
return return_val;
~~~~~~ ^~~~~~~~~~
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
9 warnings generated.
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
2 warnings generated.
This is the error i get when i execute 'fab'
Traceback (most recent call last):
File "/usr/local/bin/fab", line 5, in
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in
working_set.require(requires)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require
needed = self.resolve(parse_requirements(requirements))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: pycrypto>=2.1,!=2.4 | 0 | python,pycrypto | 2014-08-27T06:25:00.000 | 1 | 25,520,264 | Got it working after installing xcode.
pycrypto was installed by default once xcode was installed and fabric is working now.
(I should have mentioned that I am new to MAC in the question) | 0 | 494 | false | 0 | 1 | Error installing pycrypto in mac 10.9.6 | 25,542,008 |
2 | 3 | 0 | 2 | 1 | 1 | 1.2 | 0 | I am trying to create a virus scanner in Python, and I know that signature based detection is possible, but is heuristic based detection possible in Python, ie. run a program in a safe environment, or scan the program's code, or check what the program behaves like, and then decide if the program is a virus or not. | 0 | python,heuristics,malware,malware-detection | 2014-08-27T18:51:00.000 | 0 | 25,534,894 | Python is described as a general purpose programming language so yes, this is defiantly possible but not necessarily the best implementation. In programming, just like a trade, you should use the best tools for the job.
It could be recommended prototyping your application with Python and Clamd and then consider moving to another language if you want a closed source solution, which you can sell and protect your intellectual property.
Newb quotes:
Anything written in python is typically quite easy to
reverse-engineer, so it won't do for real protection.
I disagree, in fact a lot, but it is up for debate I suppose. It really depends on how the developer packages the application. | 0 | 2,774 | true | 0 | 1 | Is it possible to implement heuristic virus scanning in Python? | 25,545,902 |
2 | 3 | 0 | 2 | 1 | 1 | 0.132549 | 0 | I am trying to create a virus scanner in Python, and I know that signature based detection is possible, but is heuristic based detection possible in Python, ie. run a program in a safe environment, or scan the program's code, or check what the program behaves like, and then decide if the program is a virus or not. | 0 | python,heuristics,malware,malware-detection | 2014-08-27T18:51:00.000 | 0 | 25,534,894 | Yes, it is possible.
...and...
No, it is probably not the easiest, fastest, best performing, or most efficient way to accomplish the task. | 0 | 2,774 | false | 0 | 1 | Is it possible to implement heuristic virus scanning in Python? | 25,535,011 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | I used pip to install all python packages, and the path is:
PYTHONPATH="/usr/local/lib/python2.7/site-packages"
I found all the packages I tried to install were installed under this path, but when I tried to import them, it always said module not found.
MacBook-Air:~ User$ pip install tweepy
Requirement already satisfied (use --upgrade to upgrade): tweepy in /usr/local/lib/python2.7/site-packages
import tweepy
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named tweepy
I tried with tweepy, httplib2, oauth and some others, none of these can work.
Can anyone tell how can I solve this problem?
Thanks!!!! | 0 | python,module,pip | 2014-08-27T19:11:00.000 | 0 | 25,535,192 | Feel like comment has too many things..
So as @Zahir Jacobs said, this problem is because pip is installing all packages in different path. After I moved all the packages to $which python path, I can import these modules now.
But the following question is, if I still want to use pip installation in the future, I have to move it by manually again. is there anyway to change the path for pip?
I tried to move the pip package, but it returned:
MacBook-Air:~ User$ pip install tweepy
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources | 0 | 770 | false | 0 | 1 | Python2.7 package installed in right path but not found | 25,537,762 |
1 | 2 | 1 | 1 | 0 | 0 | 0.099668 | 0 | Edit<<<<<<<
The question is:
-How do you launch C code from python? (say, in a function)
-How do you load Java code into python? (perhaps in a class?)
-Can you simply work with these two in a python program or are there special considerations?
-Will it be worth it, or will integrating cause too much lag?
Being familiar with all three languages (C, Java and Python) and knowing that Python supports C libraries, (and apparently can integrate with Java also) I was wondering if Python could integrate a program using both languages?
What I would like is fast flexible C functions while taking advantage of Java's extensive front-end libraries and coordinating the two in Python's clean, readable syntax.
Is this possible?
EDIT---->
To be more specific, I would like to write and execute python code that integrates my own fast C functions. Then, call Java libraries like swing to create user interface and handle networking. Probably taking advantage of XML as well to aid in file manipulation. | 0 | python,c,language-interoperability | 2014-08-29T23:24:00.000 | 0 | 25,577,470 | For C, you can use ctype module or SWIG.
For Java, Jython is a good choice. | 0 | 152 | false | 1 | 1 | How do you use Python with C and Java simultaneously? | 25,577,983 |
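A minimal ctypes sketch calling a C function without any wrapper code; the library name is platform-specific (this one is typical for Linux):

```python
import ctypes

libm = ctypes.CDLL("libm.so.6")        # standard C math library
libm.cos.restype = ctypes.c_double     # declare the C signature
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))                   # 1.0
```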
1 | 3 | 0 | 0 | 7 | 1 | 0 | 0 | I'm looking through the source of StringIO where it says says some notes:
Using a real file is often faster (but less convenient).
There's also a much faster implementation in C, called cStringIO, but
it's not subclassable.
StringIO is just like an in-memory file object,
why is it slower than a real file object? | 0 | python,stringio,cstringio | 2014-08-30T09:28:00.000 | 0 | 25,580,925 | It is not necessarily obvious from the source, but python file objects are built straight on the C library functions, with likely a small layer of python to present a python class, or even a C wrapper to present a python class. The native C library is going to be highly optimised to read bytes and blocks from disk. The python StringIO library is all native python code - which is slower than native C code. | 0 | 2,767 | false | 0 | 1 | Why is StringIO object slower than real file object? | 25,580,974 |
In Sublime, we have an easy and convenient way to run Python, or almost any language for that matter, using ⌘ + b (or ctrl + b)
Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionality with GitHub's Atom editor? | 0 | python,atom-editor | 2014-08-30T18:23:00.000 | 1 | 25,585,500 | There is a package called "platformio-ide-terminal" that allows you to run Atom code with Ctrl + Shift + B. That's the only package you need (Windows). | 0 | 209,849 | false | 0 | 1 | Running Python from Atom | 69,060,186 |
In Sublime, we have an easy and convenient way to run Python, or almost any language for that matter, using ⌘ + b (or ctrl + b)
Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed.
Is there a way to replicate this functionality with GitHub's Atom editor? | 0 | python,atom-editor | 2014-08-30T18:23:00.000 | 1 | 25,585,500 | To run a python file on Mac:
Open the preferences in the Atom IDE. To open the preferences press 'command + ,'
(⌘ + ,)
Click on the install in the preferences to install packages.
Search for package "script" and click on install
Now open the python file(with .py extension ) you want to run and press 'control + r ' (^ + r) | 0 | 209,849 | false | 0 | 1 | Running Python from Atom | 57,858,735 |
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | How can I efficiently detect collision between image layers and generated shapes?
I need a fast and comprehensive method to detect collision between rotatable image layers and generated shapes.
So far I have just been dividing images into a series of circles that contain the majority of the pixels and then testing each circle against other shapes. To enhance performance I created perimeter circles around each structure and only test these larger circles until two structures are close enough to collide.
The real problem is that it is very difficult to collide a rotatable rectangle, for example, into one of these image structures. Filling a rectangle with circles also just does not seem efficient. Not to mention that I'm getting combinatoric explosions that make looping very complex and poor performance.
Anyone know a better way to handle this type of collision? I write in Java, Python and C. I am willing to use C++ also. | 0 | java,python,c,algorithm,collision-detection | 2014-08-30T22:10:00.000 | 0 | 25,587,180 | Normally, sphere collisions are just filter for real collision tests.
You can either:
decide to limit collisions to that, for example if it's a game.
implement real collisions and do the full math. You're basically intersecting the rotated edges of the two rectangles (16 cases). Intersect two edges as if they were lines; there will be only one point of intersection (unless they're parallel). If that point lies within both segments, there's a collision.
To limit complexity, you can use a quadtree/octree. Divide your space into four rectangles, then those rectangles into four etc ... until they're too small to contain an object or are empty. Put your collidable objects into the most specific part of the tree that will contain them. Two objects can only collide if they are in the same sub rectangle or one is in any parent of the other.
Not sure that helps, but they're ideas. | 0 | 170 | false | 0 | 1 | Collision detection between image layers and shapes | 25,588,001 |
1 | 5 | 0 | 4 | 8 | 0 | 0.158649 | 1 | How to obtain the public ip address of the current EC2 instance in python ? | 0 | python,amazon-web-services,amazon-ec2,boto | 2014-08-31T05:41:00.000 | 0 | 25,589,259 | If you are already using boto you can also use the boto.utils.get_instance_metadata function. This makes the call to the metadata server, gathers all of the metadata and returns it as a Python dictionary. It also handles retries. | 0 | 7,397 | false | 0 | 1 | How to get the public ip of current ec2 instance in python? | 25,592,181 |
I have been coding in static languages like Java and C++ for a very long time, and recently I started coding Python a bit, but one thing that keeps me rather "annoyed" is its lack of static types. I frequently find myself trying to figure out where an object is coming from (if the code is a little old) and its type, to know exactly what I am dealing with in terms of its content and functionality. Is there any reference or suggestion on paradigm or coding style for Python, so I can code better in Python without being slowed down by constantly thinking about objects' types?
thanks | 0 | python | 2014-08-31T20:17:00.000 | 0 | 25,595,912 | The key thing I find useful is writing good quality documentation strings, and using a tool such as pydoc to provide you with automatic documentation on your code. | 0 | 70 | false | 0 | 1 | python coding style due to lack of type | 25,595,929 |
2 | 7 | 0 | 0 | 77 | 0 | 0 | 1 | Recently, I made a code that connect to work station with different usernames (thanks to a private key) based on paramiko.
I never had any issues with it, but today, I have that : SSHException: Error reading SSH protocol banner
This is strange because it happens randomly on any connections. Is there any way to fix it ? | 0 | python,paramiko | 2014-09-01T15:36:00.000 | 0 | 25,609,153 | Well, I was also getting this with one of the juniper devices. The timeout didn't help at all. When I used pyex with this, it created multiple ssh/netconf sessions with juniper box. Once I changed the "set system services ssh connection-limit 10" From 5, it started working. | 0 | 118,288 | false | 0 | 1 | Paramiko : Error reading SSH protocol banner | 71,871,218 |
2 | 7 | 0 | 3 | 77 | 0 | 0.085505 | 1 | Recently, I made a code that connect to work station with different usernames (thanks to a private key) based on paramiko.
I never had any issues with it, but today, I have that : SSHException: Error reading SSH protocol banner
This is strange because it happens randomly on any connections. Is there any way to fix it ? | 0 | python,paramiko | 2014-09-01T15:36:00.000 | 0 | 25,609,153 | paramiko seems to raise this error when I pass a non-existent filename to kwargs>key_filename. I'm sure there are other situations where this exception is raised nonsensically. | 0 | 118,288 | false | 0 | 1 | Paramiko : Error reading SSH protocol banner | 68,010,453 |
1 | 5 | 0 | -1 | 2 | 0 | -0.039979 | 0 | I have a python script that does some updates on my database.
The files that this script needs are saved in a directory at around 3AM by some other process.
So I'm going to schedule a cron job to run daily at 3AM; but I want to handle the case where the file is not available exactly at 3AM, since it could be delayed by some interval.
So I basically need to keep checking whether the file of some particular name exists every 5 minutes starting from 3AM. I'll try for around 1 hour, and give up if it doesn't work out.
How can I achieve this sort of thing in Python? | 0 | python | 2014-09-02T07:08:00.000 | 1 | 25,617,706 | You can use Twisted, and it is reactor it is much better than an infinite loop ! Also you can use reactor.callLater(myTime, myFunction), and when myFunction get called you can adjust the myTime and add another callback with the same API callLater(). | 0 | 6,642 | false | 0 | 1 | 'Listening' for a file in Python | 25,617,956 |
I would like to get the name of the "file.py" that I am executing with IronPython.
I have to read and save data to other files with the same name start.
Thank you very much!
Humano | 0 | python | 2014-09-03T10:16:00.000 | 0 | 25,641,782 | Use __file__.
You can use os.path.basename(__file__) | 0 | 189 | true | 0 | 1 | How to know the filename of script.py while executing it with ironpython | 25,641,891 |
2 | 5 | 0 | 10 | 49 | 0 | 1 | 0 | For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order? | 0 | python,fixtures,pytest | 2014-09-04T07:53:00.000 | 0 | 25,660,064 | I was just having this problem with two function-scoped autouse fixtures. I wanted fixture b to run before fixture a, but every time, a ran first. I figured maybe it was alphabetical order, so I renamed a to c, and now b runs first. Pytest doesn't seem to have this documented. It was just a lucky guess. :-)
That's for autouse fixtures. Considering broader scopes (eg. module, session), a fixture is executed when pytest encounters a test that needs it. So if there are two tests, and the first test uses a session-scoped fixture named sb and not the one named sa, then sb will get executed first. When the next test runs, it will kick off sa, assuming it requires sa. | 0 | 26,370 | false | 0 | 1 | In which order are pytest fixtures executed? | 28,593,102 |
2 | 5 | 0 | 2 | 49 | 0 | 0.07983 | 0 | For an application I'm testing I'd like to create an autouse=True fixture which monkeypatches smtplib.SMTP.connect to fail tests if they try to send an email unexpectedly.
However, in cases where I do expect tests to send emails, I want to use a different fixture logging those emails instead (most likely by using the smtpserver fixture from pytest-localserver and monkeypatching the connect method to use the host/port returned by that fixture)
Of course that can only work if the autouse fixture is executed before the other fixture (loaded as funcarg). Is there any specific order in which fixtures are executed and/or is there a way to guarantee the execution order? | 0 | python,fixtures,pytest | 2014-09-04T07:53:00.000 | 0 | 25,660,064 | IIRC you can rely on higher scoped fixtures to be executed first. So if you created a session scoped autouse fixture to monkeypatch smtplib.SMTP.connect then you could create a function-scoped fixture which undoes this monkeypatching for one test, restoring it afterwards. I assume the easiest way to do this is create your own smtpserver fixture which depends on both the disallow_smtp fixture as well as the smtpserver fixture from pytest-localserver and then handles all setup and teardown required to make these two work together.
This is vaguely how pytest-django handles its database access btw; you could try and look at the code there, but it is far from a simple example and has many of its own weird things. | 0 | 26,370 | false | 0 | 1 | In which order are pytest fixtures executed? | 25,709,928 |
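A sketch of that layering (the fixture bodies are stubs; the point is only the dependency, which guarantees disallow_smtp runs before smtp_allowed):

```python
import pytest

@pytest.fixture(autouse=True, scope="session")
def disallow_smtp():
    # imagine this monkeypatches smtplib.SMTP.connect to fail loudly
    yield

@pytest.fixture
def smtp_allowed(disallow_smtp, monkeypatch):
    # depends on disallow_smtp, so it always runs after it and can
    # re-patch SMTP to point at a local test server for this one test
    yield
```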
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to use abbrev-mode in python-mode.
Abbrevs work fine in python code but doesn't work in python comments.
How can I make them work in python comments too?
Thanks. | 0 | python,emacs,comments,abbreviation,python-mode | 2014-09-05T07:58:00.000 | 0 | 25,681,118 | IIRC this was a bug in python.el and has been fixed in more recent versions. Not sure if the fix is in 24.3 or in 24.4.
If it's still in the 24.4 pretest, please report it as a bug. | 0 | 67 | false | 0 | 1 | Emacs: how to make abbrevs work in python comments? | 25,692,694 |
I would like to know if there are IDEs or tools that can help to code and debug modules of OpenERP 7. I tried the Python tools in Eclipse, but it is still slow and useless.
Are there some powerful tools dedicated to OpenERP developers?
My laptop is Win 7 but I use VirtualBox and run an Ubuntu VM with Eclipse with Pydev and postgres in the VM. I find debugging works well and performance is pretty good allowing for the fact it is a laptop, certainly good enough for most things. The biggest hold up is usually adding/removing columns to tables with a lot of rows as Postgres actually creates a new table in the background and copies the rows across into the new table, but this is a Postgres effect and will be the same no matter what.
There are no dedicated tools that I am aware of. I think most people work like this but there is always pycharm or komodo or even go the pure editor way with sublime. | 0 | 131 | false | 0 | 1 | Tools for Coding and Debbugging Modules of Open ERP | 25,716,447 |
1 | 2 | 0 | -2 | 3 | 1 | -0.197375 | 0 | My situation is as follows:
I have a locally installed version of python. There exists also a global one, badly installed, which I do not want to use. (I don't have admin privileges).
On /usr/local/lib/site-packages there is a x.pth file containing a path to a faulty installation of numpy
My PYTHONPATH does not have any of those paths. However, some admin generated script adds /usr/local and /usr/local/bin to my PATH (this is an assumption, not a known fact).
This somehow results in the faulty-numpy-path added to my sys.path. When I run python -S, it's not there.
site.PREFIXES does not include /usr/local. I have no idea why the aforementioned pth file is loaded.
I tried adding a pth file of my own, to the local installation's site-packages dir, doing import sys; sys.path.remove('pth/to/faulty/numpy') This fails because when that pth file is loaded, the faulty path is not yet in sys.path.
Is there a way for me to disable the loading of said pth file, or remove the path from sys.path before python is loaded?
I've tried setting up virtualenv and it does not suit my current situation. | 0 | python,python-2.7,sys.path | 2014-09-07T13:32:00.000 | 0 | 25,710,724 | There is no way to delete a pth file.
Standard python will search for pth files in /usr/lib and /usr/local/lib.
You can create an isolated python via virtualenv though. | 0 | 961 | false | 0 | 1 | prevent python from loading a pth file | 25,710,953 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I use Python api to insert message into the RabbitMQ,and then use go api to get message from the RabbitMQ.
Key 1: RabbitMQ ACK is set to false because of performance.
I insert about 100,000,000 messages into RabbitMQ via the python api, but when I use the go api to get
messages, I find that the number of inserted messages isn't equal to the number retrieved. The insert action and the
get action are concurrent.
Key 2: The lost-message rate isn't over 1 in 1,000,000.
The insert action has a log; the python api shows that all inserted messages were successful.
The get action has a log; the go api shows that all retrieved messages were successful.
But the numbers aren't equal.
Question 1: I don't know how to find the place where the messages are lost. Could anyone give me a suggestion on how to find where the messages are lost?
Question 2: Is there any strategy to ensure messages are not lost? | 0 | python,go,rabbitmq | 2014-09-09T16:21:00.000 | 0 | 25,749,566 | In order for you to test that all of your messages are published, you may do it this way:
Stop consumer.
Enable acknowledgements in the publisher. In python you can do it by adding an extra line to your code: channel.confirm_delivery(). basic_publish will then basically return a boolean indicating whether the message was published. Optionally you may want to use the mandatory flag in basic_publish.
Send as many messages as you want.
Make sure that all of the basic_publish() methods returnes True.
Count number of messages in Rabbit.
Enable Ack in consumer by doing no_ack = False
Consume all the messages.
This will give you an idea where your messages are getting lost. | 0 | 2,544 | true | 0 | 1 | RabbitMQ message lost | 25,749,840 |
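A sketch of steps 2-4 with pika's BlockingConnection; the exact return behavior of basic_publish differs between pika versions (older versions return a boolean once confirms are enabled, newer ones raise on failure):

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
channel.queue_declare(queue="ems.data")
channel.confirm_delivery()                 # enable publisher confirms

ok = channel.basic_publish(exchange="",
                           routing_key="ems.data",
                           body="payload",
                           mandatory=True)  # report unroutable messages
if not ok:
    print("message was not confirmed by the broker")
```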
Does python require header files like C/C++?
What are the differences between including header files and importing packages?
Instead, we use "docstrings" in Python to make it easier to find and use our interfaces (with the built-in help() function). | 0 | 30,600 | true | 0 | 1 | Does python have header files like C/C++? | 25,764,175 |
1 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I have recently installed the FiPy package onto my Macbook, with all the dependencies, through MacPorts. I have no troubles calling FiPy and NumPy as packages in Python.
Now that I have it working, I want to go through the examples. However, I cannot find the "base directory" or FiPy directory on my computer.
How can I find the base directory?
Do I even have the base directory if I have installed all this via Macports?
As a note I am using Python27.
Please, help! Thanks. | 0 | python,python-2.7,fipy | 2014-09-10T22:08:00.000 | 0 | 25,775,880 | Since I had the same issue with Ubuntu I post this here.
If you have installed it using miniconda or anaconda, it will be:
/home/username/miniconda<version>/envs/<name of the environemnt you installed fipy in>
if you get the error that the fipy module is not found, you don't need to export the path; you just need to:
conda activate <nameOfEnvironment you installed fipy there> | 0 | 458 | false | 0 | 1 | Where is the FiPy "base directory"? | 56,643,433 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I'm using Python to send mass emails. It seems that I send too many and too fast that I am getting SMTPRecipientsRefused(senderrs) errors.
I used
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
Any ideas? Thanks! | 0 | python,smtp | 2014-09-11T02:20:00.000 | 0 | 25,778,029 | It turns out that I was sending to empty recepients. | 0 | 361 | false | 0 | 1 | Getting SMTPRecipientsRefused(senderrs) because of sending too many? | 25,808,100 |
Is it possible to let a web server, for example my Raspberry Pi, start a script when a specific QR code gets scanned by my mobile device connected to my home network?
e.g.: I want the Pi to delete specific data from a database if the QR code gets scanned by a mobile device inside my network. | 0 | python,flask,raspberry-pi,qr-code | 2014-09-11T06:22:00.000 | 0 | 25,780,445 | First of all, QR codes aren't magic. All they contain is a string of text. That text could say "Hello", or be a phone number, email address, or URL.
It is up to the QR scanner to decide what to do with the text it encounters.
For example, you could build a QR scanner which tells your Pi to delete data when it scans the text "123abc".
Or, you could have a URL like http://192.168.0.34/delete?data=abc123 where the IP address is the internal network address of your Pi. | 0 | 602 | false | 1 | 1 | Starting Script on Server when QR-Code gets scanned | 25,781,952 |
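A sketch of the URL approach with Flask (the route, parameter name, and delete logic are all placeholders); you would encode something like http://192.168.0.34:5000/delete?data=abc123 into the QR code:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/delete")
def delete():
    key = request.args.get("data", "")
    # delete_from_database(key)   # your real cleanup would go here
    return "deleted %s" % key

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```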
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I need to test whether several .py scripts (all part of a much bigger program) work after updating python. The only thing I have is their paths. Is there any intelligent way to find out from which other scripts these are called? Brute-force grepping wasn't as good as I expected. | python,bash,shell | 2014-09-11T16:09:00.000 | 1 | 25,792,285 | I combined two things:
ran automated tests with the old and new version of python and compared results
used snakefood to track the dependencies and ran the parent scripts
Thanks for the os.walk and os.getppid suggestion, however, I didn't want to write/use any additional code. | 0 | 497 | true | 0 | 1 | How to find out from where is a Python script called? | 25,974,955 |
I want to disable mod_python errors, tracebacks, module cache details and any other warning notices!
I am using the plain mod_python module on an Ubuntu server with Apache, without any framework (Django etc...).
I made a lot of searches on Google but no one speaks about this :)
I want an alternative to PHP's error_reporting(0); or any config changes on the server side.
Thank you in advance. | 0 | python,apache,ubuntu | 2014-09-12T18:04:00.000 | 0 | 25,814,206 | Heyy,
For ubuntu, this is what you can do:
sudo a2dismod mod_python
sudo /etc/init.d/apache2 restart
Hope that helps :) | 0 | 505 | false | 0 | 1 | Disable mod_python error and traceback | 25,820,393 |
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | I have a python program, stored on Dropbox, which runs via cron on a couple of different machines. For some reason, recently one of the .pyc files is being created with root as the owner, which means that Dropbox doesn't have permission to sync it anymore.
Why would it do that, and how do I change it? | 0 | python,cron,crontab | 2014-09-15T16:09:00.000 | 0 | 25,852,330 | That would happen if you're running the python program as root (which would happen if you're using root's crontab).
To fix it, just remove it with sudo rm /path/to/file.pyc, and make sure to run the program as your user next time. If you want to keep using root's crontab, you could use su youruser -c yourprogram, but the cleanest way would be simply to use your user's crontab | 0 | 935 | false | 0 | 1 | Python pyc files created with root as owner | 25,852,354 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | Does anybody have any experience rendering web pages in weasyprint that are styled using twitter Bootstrap? Whenever I try, the html renders completely unstyled as if there was no css applied to it. | 0 | python,twitter-bootstrap,pdf,flask,weasyprint | 2014-09-17T22:48:00.000 | 0 | 25,901,582 | I figured out what the problem was. When I declared the style sheet i set media="screen", I removed that tag element and it seemed to fix it. Further research indicated I could also declare a separate stylesheet and set media="print". | 0 | 512 | true | 1 | 1 | Weasprint and Twitter Bootstrap | 25,938,231 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | In my project, I have bridged C++ module with python using plain (windows) sockets with proto buffer as serializing/ de-serializing mechanism. Below are requirements of this project:-
1) C++ module will have 2 channels. Through one it will accept request and send appropriate reply to python module. Through other it will send updates which it gets from backend to python side.
2) Today we are proposing it for 100 users ( i.e request/reply for 100 users + updates with each messages around 50 bytes ).
But I want to make sure it works fine later even with 100 K users. Also, I am planning to use ZMQ with this but I don't know much about its performance / latency / bottlenecks.
Can anyone please suggest me if its appropriate choice OR are there better tools available.
Thanks in advance for you advice. | 0 | python,c++ | 2014-09-18T04:32:00.000 | 0 | 25,904,218 | Instead of doing IPC you can simply call Python from C++. You can do this either using the Python C API, or perhaps easier for some people, Boost.Python. This is referred to as "embedding" Python within your application, and it will allow you to directly invoke functions and pass data around without undue copying between processes. | 0 | 114 | true | 0 | 1 | C++ and python communication | 25,904,243 |
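Should the sockets route be kept instead of embedding, the Python end of a ZMQ request/reply channel looks roughly like this (pyzmq assumed; port and payload handling are illustrative):

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind('tcp://*:5555')
    while True:
        msg = sock.recv()          # serialized protobuf bytes from the C++ side
        sock.send(b'ack:' + msg)   # reply on the same channel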
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 0 | I am close to finishing an ORM for RethinkDB in Python and I got stuck writing tests, particularly those involving save(), get() and delete() operations. What's the recommended way to test whether my ORM does what it is supposed to do when saving, deleting or getting a document?
Right now, for each test in my suite I create a database, populate it with all tables needed by the test models (this takes a lot of time, almost 5 seconds/test!), run the operation on my model (e.g.: save()) and then manually run a query against the database (using RethinkDB's Python driver) to see whether everything has been updated in the database.
Now, I feel this isn't quite right; maybe there is another way to write these tests, or maybe I can design the tests without running that many queries against the database. Any ideas on how I can improve this, or a suggestion on how this should really be done? | 1 | python,unit-testing,testing,orm,rethinkdb | 2014-09-18T15:32:00.000 | 0 | 25,916,839 | You can create all your databases/tables just once for all your tests.
You can also use the raw data directory:
- Start RethinkDB
- Create all your databases/tables
- Commit it (keep the resulting data directory as a template).
Before each test, copy the data directory, start RethinkDB on the copy, then when your test is done, delete the copied data directory. | 0 | 484 | true | 1 | 1 | Testing an ORM for RethinkDB | 25,922,468 |
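A sketch of that copy-per-test pattern (paths, ports and cleanup are simplified; the --directory flag tells RethinkDB which data directory to serve):

    import shutil, subprocess, tempfile

    def start_fresh_db(template_dir):
        work_dir = tempfile.mkdtemp(suffix='-rethink')
        shutil.rmtree(work_dir)                          # copytree wants a fresh path
        shutil.copytree(template_dir, work_dir)          # clone the template data dir
        proc = subprocess.Popen(['rethinkdb', '--directory', work_dir])
        return proc, work_dir                            # kill proc and rmtree afterwards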
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I have 3 machines (Raspberry Pis). One has a database of sensor readings on it; the other two are 'slave' devices that read/run various sensors. What is the best solution to allow the 'master' Pi to access sensor readings on the 'slave' Pis, so it can save the values to the database?
All the Pis are on the same internal network, and will never be on the internet.
The 'slave' Pis return integers to the master Pi; that is all.
It has to be Python 3 (because the software that queries the sensors is).
What is the very simplest way?
Some kind of web service? I've so far failed to get pysimplesoap and CherryPy to work on Python 3.
Something else? Pyro? It seems a bit complicated just to get back 2 integers.
Roll my own with sockets (that can't be the easiest way?!)
Give up and put a MySQL database on each Pi, then make the 'sensor-value-reporting' website stretch across 3 databases/hosts. | 0 | python,raspberry-pi | 2014-09-19T22:28:00.000 | 0 | 25,943,256 | I would have a MySQL database on the master only and have the slaves write their own tables to that database using the cymysql Python 3 module
(pip3 install cymysql) | 0 | 198 | false | 0 | 1 | Python machine-communication on raspberry pi | 30,958,127 |
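A sketch of a slave pushing one reading to the master (cymysql's API mirrors pymysql; host, credentials and schema are placeholders):

    import cymysql

    conn = cymysql.connect(host='192.168.1.10', user='pi',
                           passwd='secret', db='sensors')
    cur = conn.cursor()
    cur.execute("INSERT INTO readings (sensor, value) VALUES (%s, %s)",
                ('temp1', 23))     # the integer this slave just measured
    conn.commit()
    conn.close()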
1 | 1 | 0 | 3 | 2 | 0 | 0.53705 | 1 | Say I am hosting a website, say www.mydomain.com, on an EC2 instance; then Apache would be running on port 80. Now, if I want to host a RESTful API (say mydomain.com/MyAPI) using a Python script (the web.py module), how can I do that? Wouldn't running a Python script cause a port conflict? | 0 | python,web-services,api,amazon-web-services,amazon-ec2 | 2014-09-20T13:39:00.000 | 0 | 25,949,382 | No.
Apache is your doorman. Your Python scripts are the workers inside the building. The public comes to the door and talks to the doorman. The doorman hands everything to the workers inside the building, and when the work is done, hands it back to the appropriate person.
Apache manages the coming and going of individual TCP/IP messages, and delegates the work that each request needs to do to your script. If the request asks for the API it hands it to the api script; if the request asks for the website it hands it to the website script. Your script passes back the response to apache, which handles the job of giving it to the client over port 80.
As @Lafada comments: you can have a backdoor (another port), but Apache is still the doorman. | 0 | 607 | false | 0 | 1 | Amazon AWS EC2 : How to host an API and a website on EC2 instance | 25,949,451
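In practice the doorman delegates with a reverse proxy. A sketch for Apache (requires mod_proxy/mod_proxy_http; the port and paths are placeholders):

    # in httpd.conf or a VirtualHost: forward /MyAPI to a local web.py app on 8080
    ProxyPass        /MyAPI http://127.0.0.1:8080/
    ProxyPassReverse /MyAPI http://127.0.0.1:8080/

The web.py script listens on 8080 (no conflict with Apache's port 80), and Apache relays /MyAPI requests to it.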
1 | 1 | 0 | 1 | 2 | 0 | 0.197375 | 1 | How can you search Instagram by hashtag and filter it by searching again using its location?
I checked the Instagram API, but I don't see a solution. | 0 | python,geolocation,instagram,hashtag | 2014-09-21T09:24:00.000 | 0 | 25,957,739 | If you can't find a ready-made solution for this in the API, try searching by location first.
As you receive the location search results, you can manually filter them by hashtag.
The idea is that every photo has tags and location attributes you can filter by. But everything depends on your specific task here. | 0 | 1,078 | false | 0 | 1 | Search hashtag in instagram then filter by location | 26,262,484 |
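A sketch of that manual filtering step (the shape of the media dicts is illustrative, not the exact API response):

    def filter_by_tag(media_items, tag):
        # keep only location-search results that also carry the hashtag
        return [m for m in media_items if tag in m.get('tags', [])]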
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | The subject is package imports and the __init__ file:
There is one quote in the specification that says
these files serve to prevent directories with common names from
unintentionally hiding true modules that appear later on the module
search path. Without this safeguard, Python might pick a directory
that has nothing to do with your code, just because it appears nested
in an earlier directory on the search path.
Can you give me practical examples of this? | 0 | python,python-3.x,module,directory | 2014-09-25T11:34:00.000 | 0 | 26,037,592 | If you allow any directory to be seen as a package, then trying to import a module that exists both as a directory and as a module on the search path could pick the directory over the module.
Say you have an images directory and an images.py module. import images would find the images directory if it appeared earlier on the search path.
By requiring a __init__.py to mark packages you make it possible to include such data directories next to your Python code without having to worry about masking genuine modules with the same name. | 0 | 158 | false | 0 | 1 | Hide true modules | 26,037,677 |
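An illustrative layout (names are made up) where this would bite:

    project/
        images/        # a plain data directory full of pictures, no __init__.py
        images.py      # the real module
        main.py        # wants "import images" to load images.py

If bare directories counted as packages, import images inside main.py could resolve to the images/ directory instead of images.py, depending on search-path order; requiring __init__.py keeps the data directory invisible to the import system.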
1 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I am using a Python script with the pysftp module to connect to an SFTP server.
Apparently I have to pass the SFTP password explicitly, which I don't like.
My question: how could I replace the password with an encrypted one so it isn't visible to the world? | 0 | python,encryption,sftp,pysftp | 2014-09-25T21:47:00.000 | 0 | 26,048,682 | Do you mean you do not want the password to be visible in the code or configuration file? There's no way to encrypt something in a way that allows automatic decryption without the key being just as accessible. All you can do is obfuscate the password, not really encrypt it. If anyone can access your code, they can always reverse-engineer your "encryption".
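One common mitigation is to keep the password out of the source entirely and read it from the environment — a sketch, with host, user and variable name as placeholders:

    import os
    import pysftp

    password = os.environ['SFTP_PASSWORD']       # exported outside the script
    with pysftp.Connection('sftp.example.com', username='user',
                           password=password) as sftp:
        sftp.put('report.csv')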
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | Various outlets, along with Apple, are assuring OS X users that they are not at particular risk from the Shellshock bash exploit. However, I use Python frequently on my system and wonder if that would increase my risk; and whether there is anything I can do to mitigate it (short of installing a different bash).
I use Apple's (2.7.5) Python and bash, on OS X 10.9.5. | 0 | security,python-2.7,osx-mavericks,shellshock-bash-bug | 2014-09-26T11:50:00.000 | 1 | 26,058,888 | Using Python does not increase your risk of being exposed to Shellshock.
The best way to protect your computer is to make sure it has up-to-date patches and updates installed.
Microsoft, Apple and various Linux distributions have all released updates/patches which are supposed to fix the problem. | 0 | 61 | false | 0 | 1 | Does using Python on OS X expose me to Shellshock? | 26,074,861
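For reference, the widely circulated one-liner for checking whether a given bash binary is vulnerable to CVE-2014-6271 — it prints "vulnerable" on unpatched versions:

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"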
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I would like to run python files from terminal without having to use the command $ python. But I would still like to keep the ability of using '$ python to enter the python interpreter. For example if I had a file named 'foo.py', I can use $ foo.py rather than $ python foo.py to run the file.
How can I do this? Would I need to change the bash file or the paths? And is it possible to have both commands available, so I can use both $ foo.py and $ python foo.py?
I am using ubuntu 14.04 LTS and my terminal/shell uses a '.bashrc' file. I have multiple versions of python installed on my computer, but when running a python file I want the default version to be the latest version of 2.7.x. If what I am asking is not possible or not recommended, I want to at least shorten the command $ python to $ py.
Thank you very much for any help! | 0 | python,bash,shell,python-2.7,command | 2014-09-26T17:28:00.000 | 1 | 26,065,129 | sharkbyte,
It's easy to insert '#!/usr/bin/env python' at the top of all your Python files. Just run this sed command in the directory where your Python files live:
sed -i '1 i\#! /usr/bin/env python\n' *.py
The -i option tells sed to do an in-place edit of the files, the 1 means operate only on line 1, and the i\ is the insert command. I put a \n at the end of the insertion text to make the modified file look nicer. :)
If you're paranoid of stuffing up, copy some files to an empty directory first & do a test run.
Also, you can tell sed to make a backup of each original file, eg:
sed -i.bak '1 i\#! /usr/bin/env python\n' *.py
for MS style, or
sed -i~ '1 i\#! /usr/bin/env python\n' *.py
for the usual Linux style. | 0 | 85 | false | 0 | 1 | How can I eliminate the need of the command 'python' when running a python file in terminal? | 26,069,546 |
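Two extra steps the shebang alone doesn't cover (standard Unix behaviour, not specific to this answer): the files need the execute bit, and the directory must be on PATH to drop the ./ prefix:

    chmod +x *.py      # make the scripts executable
    ./foo.py           # now runs via the shebang
    # add the directory to PATH (e.g. in ~/.bashrc) to run a plain "foo.py"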
2 | 2 | 0 | 0 | 2 | 0 | 0 | 1 | server ←— telnet —→ device
   ↑
   | SSH
   ↓
localhost (me)
I have a device that is connected to one server computer, and I want to talk to the device by SSHing into the server computer and sending telnet commands to my device. How do I set things up in Python to make this happen? | 0 | python,ssh,telnet,forwarding | 2014-09-30T03:11:00.000 | 0 | 26,112,179 | Use tunneling by setting up an SSH session that tunnels the Telnet traffic:
So ssh from localhost to server with the option -L xxx:deviceip:23
(xxx is a free port on localhost, deviceip is the IP address of the device, 23 is the telnet port).
Then open a telnet session on your localhost to localhost:xxx. SSH will tunnel it to the device, and the response is sent back to you. | 0 | 2,315 | false | 0 | 1 | How to run telnet over ssh in python | 44,082,308
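Concretely (2323 is an arbitrary free local port; hostnames are placeholders):

    ssh -L 2323:deviceip:23 user@server   # forward localhost:2323 to the device's telnet port
    telnet localhost 2323                 # now talks to the device through the tunnel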
2 | 2 | 0 | 3 | 2 | 0 | 1.2 | 1 | server ←— telnet —→ device
   ↑
   | SSH
   ↓
localhost (me)
I have a device that is connected to one server computer, and I want to talk to the device by SSHing into the server computer and sending telnet commands to my device. How do I set things up in Python to make this happen? | 0 | python,ssh,telnet,forwarding | 2014-09-30T03:11:00.000 | 0 | 26,112,179 | You can use Python's paramiko package to launch a program on the server via SSH. That program would in turn receive commands (perhaps via stdin) and return results (via stdout) to the controlling program. So basically you'll use paramiko.SSHClient to connect to the server and run a second Python program which itself uses e.g. telnetlib to talk to the device. | 0 | 2,315 | true | 0 | 1 | How to run telnet over ssh in python | 26,112,392
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a problem in which I have to pass input to a Python script that must not terminate. Therefore I need to pass arguments to the Python script while it is running. The Python script will also not be executed in the same PHP script, so it cannot incorporate any sort of expect script, since expect requires that you spawn the process in the same script! I have full access to the PHP and Python scripts, so I can modify them at will. I would also prefer to avoid hacky ways like making PHP write to a text file and having the Python script read commands from the file. Is there any less hacky way to do this? | 0 | php,python | 2014-09-30T13:06:00.000 | 0 | 26,121,676 | You can use the screen command to run the Python script, then connect to the screen session any time later from PHP with expect. | 0 | 53 | true | 0 | 1 | PHP Passing Input to Runing Python Script | 26,121,763
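A sketch of the screen half of that idea (session name and input are placeholders; PHP could issue the second command via shell_exec):

    screen -dmS analyzer python analyze.py        # start a detached session named "analyzer"
    screen -S analyzer -X stuff $'myfile.txt\n'   # inject a line into the script's stdin

The $'...\n' form is bash's ANSI-C quoting, used here so the injected line ends with a real newline.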
1 | 2 | 0 | 2 | 8 | 0 | 0.197375 | 0 | As the title says I want to programmatically check if a DNS response for a domain are protected with DNSSEC.
How could I do this?
It would be great, if there is a pythonic solution for this.
UPDATE:
changed request to response, sorry for the confusion | 0 | python,dns,dnssec | 2014-10-01T08:19:00.000 | 0 | 26,137,036 | To see if a particular request is protected, look at the DO flag in the request packet. Whatever language and library you use to interface to DNS should have an accessor for it (it may be called something else, like "dnssec").
The first answer is correct but incomplete if you want to know if a certain zone is protected. The described procedure will tell you if the zone's own data is signed. In order to check that the delegation to the zone is protected, you need to ask the parent zone's name servers for a (correctly signed) DS record for the zone you're interested in. | 0 | 6,398 | false | 0 | 1 | Programmatically check if domains are DNSSEC protected | 26,139,060 |
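A sketch of that DS check with dnspython (the resolve() call is the dnspython 2.x API; older versions use dns.resolver.query):

    import dns.resolver

    try:
        answer = dns.resolver.resolve('example.com', 'DS')
        print('signed delegation:', [r.to_text() for r in answer])
    except dns.resolver.NoAnswer:
        print('no DS record: the delegation is not DNSSEC-signed')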
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I've uploaded my .html chat file to my Ubuntu VPS; all that remains is to run the Python script pywebsock.py, which starts the Python server. I've uploaded pywebsock.py to /bin/www and now want to run it, but I have no idea where to start.
When I run the pywebsock.py on my desktop it opens up a terminal saying "waiting connection".
This is what I've done so far to try and run it:
Downloaded Putty
Downloaded WinSCP
Installed version of Python according to .py (2.7)
Any ideas? | 0 | python,ubuntu,vps | 2014-10-04T19:06:00.000 | 1 | 26,196,165 | Running a script on the server is the same as running locally.
python script.py
You may first check whether Python is installed using which python (it should return the Python location).
If not, get it using sudo apt-get install python
After that, go to the www directory and run it. | 0 | 2,380 | true | 0 | 1 | Executing/Running python script on ubuntu server | 26,196,763
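A common follow-up: to keep the server alive after the SSH session closes (the path is taken from the question):

    cd /bin/www
    nohup python pywebsock.py &    # keeps running after logout; output goes to nohup.out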
I'm using HostGator shared hosting as a production environment, and I had problems installing some Python modules after running:
pip install MySQL-python
pip install pillow
results in:
unable to execute gcc: Permission denied
error: command 'gcc' failed with exit status 1
server limitations
no root access
sudo doesnt work (sudo: effective uid is not 0, is sudo installed setuid root?)
no gcc
questions
Is there an alternative package for Pillow? I want it so I can use Django's ImageField (just as PyMySQL is an equally capable alternative to MySQL-python).
I have modules like MySQL-python and PIL installed at the root level, i.e. pip freeze without any virtualenv lists these modules. But I cannot install my other required modules in this root environment, and in my virtualenv I cannot install MySQL-python and PIL. Can something be done? Can packages installed at the root level somehow be imported/used in a virtualenv?
Is HostGator shared hosting only good for PHP and not for Python/Django web apps? We have limited traffic, which is why we use HostGator shared hosting. Should we avoid HostGator or shared hosting in general? Aren't they good enough for Python/Django (I have never had problems hosting static/PHP sites)? Are there too many problems, limitations or performance issues (FCGI)? If yes, what are the alternatives? | 0 | python,django,gcc,hosting,virtualenv | 2014-10-05T09:00:00.000 | 0 | 26,201,141 | You need root access to install the necessary packages to run your Python application.
PaaS offerings like Heroku are another option, but the free tier at Heroku is only good for developing your application; it is not intended for hosting it once you get traffic and users.
I strongly suggest you get a VPS at DigitalOcean.com. For $5 per month you will get root access and more power. You will also control your full stack. I use Nginx + Gunicorn to host about 10 Django projects on DigitalOcean right now. | 0 | 4,250 | false | 0 | 1 | installing python modules that require gcc on shared hosting with no gcc or root access | 27,670,939
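On the PyMySQL point from question 1: it is pure Python, so it installs without gcc and can impersonate MySQLdb for Django — a sketch, assuming pip install pymysql has been run:

    import pymysql
    pymysql.install_as_MySQLdb()   # call before Django loads its MySQL backend, e.g. in manage.py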
1 | 5 | 0 | 1 | 28 | 0 | 0.039979 | 1 | I'm wondering if it's possible to combine coverage.xml files into one file to see a global report in HTML output.
I've got my unit/functional tests running as one command and my integration tests as a second command. That means the coverage data from one run is overridden by the other.
It would be great to have a solution for that problem, ideally by combining those files into one. | 0 | python,unit-testing,code-coverage,coverage.py,python-coverage | 2014-10-06T10:10:00.000 | 0 | 26,214,055 | I had a similar case: multiple packages, each with its own tests run by its own test runner. I could combine all the coverage XML by following these steps.
Individually generate the coverage reports.
You need to navigate to each package and generate the report in that package. This creates a .coverage file. You can also add [run] parallel=True in your .coveragerc to create coverage files suffixed with the machine name and process ID.
Aggregate all the reports.
You need to copy all the .coverage files for these packages into a separate folder. You might want to run a batch or sh script to copy all the coverage files.
Run combine.
Now navigate to the folder where you have all the data files and run coverage combine. This deletes the individual coverage files and combines them into one .coverage file. Now you can run coverage html and coverage xml. | 0 | 19,392 | false | 0 | 1 | combine python coverage files? | 63,373,136
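The combine-and-report sequence, run from the folder holding all the copied data files:

    coverage combine      # merges the .coverage.* files into one .coverage
    coverage xml          # single combined XML report
    coverage html         # single combined HTML report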
1 | 2 | 0 | 5 | 4 | 0 | 1.2 | 1 | Is there a way to change your Twitter password via the python-twitter API, or the Twitter API in general? I have looked around but can't seem to find this information... | 0 | python,twitter,python-twitter | 2014-10-06T20:21:00.000 | 0 | 26,224,243 | Fortunately not! Passwords are highly confidential information that only Twitter itself wants to handle.
Think about a third-party developer suggesting to change your Twitter password:
would you trust them and let them see your new password?
how could you make sure they are really going to set your new password and not another one? | 0 | 1,863 | true | 0 | 1 | Password Change with the Twitter API | 26,224,375 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | on my beaglebone i have installed hostapd, iscdhcp and lighttpd so i can build a router and webserver.
let say my network is not secured so every on can be connected to it, and will get an ip-address from the beaglebone-server.
After the person is connected to the network and he starts a browser he should be redirected to a homepage to give his password, if he is not authenticated yet.
do i need a webframe-work (like django) for this purpose?? if not what else??
i am familiar with programming in python, php, html and java.
thanks | 0 | php,python,django,web-frameworks | 2014-10-07T14:14:00.000 | 0 | 26,238,097 | You can use any of these framework since the have authetications nativly implemented but you don't need to, you can build your own authentication system: You will need a database/file to store credentails and a authentication program verifying credentials against this given storage in any language your webserver can use.
However I would strongly recommend you to use an established framework for authetication. | 0 | 50 | true | 1 | 1 | authentication on webserver | 26,238,910 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am trying to save a whole slew of emails as HTML in Outlook Express, but the program only lets me save them individually. If I click "Save As" on more than one email, Outlook only lets me save them all in one .txt file. I was able to drag all of the files into a folder outside of Outlook, but their file type is Outlook Item. Thus, I am trying to find a way to save all of these files as HTML. I have looked at a number of programs, such as SAFE PST Backup, but they didn't seem to have this functionality.
If there were a way to do this in either Python or JavaScript that would be awesome, but I'm not sure where to start. | 0 | javascript,python,html,email,outlook | 2014-10-07T21:18:00.000 | 0 | 26,245,476 | Outlook Express, unlike real Outlook, does not provide any API to enumerate its messages. The only API exposed by OE is Simple MAPI. | 0 | 123 | true | 1 | 1 | How can I save every Outlook Express Email as an html without clicking "Save As" for every email | 26,245,689 |
1 | 1 | 0 | 0 | 3 | 1 | 0 | 0 | Is there a way to run a Ruby program with iruby? I want to run a script instead of entering my code in iruby notebook console.
I assume that iruby would be the same as ipython. | 0 | python,ruby,ipython,ipython-notebook | 2014-10-09T00:38:00.000 | 1 | 26,268,586 | require './your_program' works well for me. | 0 | 435 | false | 0 | 1 | Run a Ruby or Python script from iruby or ipython notebook? | 33,757,780
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear currently written text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"? | 0 | python,multithreading,multiprocessing,ipython,interpreter | 2014-10-09T02:27:00.000 | 1 | 26,269,366 | One option is to set a variable (e.g. an environment variable or a command-line option) when debugging. | 0 | 47 | false | 0 | 1 | Handling SIGINT (ctrl+c) in script but not interpreter? | 26,269,376
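A sketch of gating the SIGINT handler on such a variable (the DEV_IGNORE_SIGINT name is made up):

    import os
    import signal

    def install_sigint_handler(cleanup):
        if os.environ.get('DEV_IGNORE_SIGINT'):
            signal.signal(signal.SIGINT, signal.SIG_IGN)   # workers shrug off Ctrl+C
        else:
            signal.signal(signal.SIGINT, lambda sig, frame: cleanup())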
3 | 3 | 0 | 1 | 0 | 0 | 1.2 | 1 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission. | 0 | python,ssl,network-programming,network-security | 2014-10-09T02:50:00.000 | 0 | 26,269,514 | What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security)
It depends on your threat model, but intrasite VPN is sometimes used to tunnel traffic like this.
If you want to move up in the protocol stack, then mutual authentication with the client pinning the server's public key would be a good option.
In contrast, I used to perform security architecture work for a US investment bank. They did not use anything - they felt the leased line between data centers provided enough security. | 0 | 697 | true | 0 | 1 | Most secure server to server connection | 26,291,779 |
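A client-side sketch of that mutual-TLS-with-pinning option, using Python 3's ssl module (certificate files, host and port are placeholders):

    import socket
    import ssl

    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile='pinned_server_ca.pem')
    ctx.load_cert_chain('client.pem', 'client.key')   # our side of mutual auth

    with socket.create_connection(('server.example.com', 8443)) as sock:
        with ctx.wrap_socket(sock, server_hostname='server.example.com') as tls:
            tls.sendall(b'payload')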
3 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 1 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission. | 0 | python,ssl,network-programming,network-security | 2014-10-09T02:50:00.000 | 0 | 26,269,514 | Transmission and encryption need not happen together. You can get away with just about any delivery method, if you encrypt PROPERLY!
Encrypting properly means using large, randomly generated keys, using HMACs (INSIDE the encryption!) and checking for replay attacks. There may also be denial-of-service attacks, timing attacks and so forth, though these may also apply to any encrypted connection. Check for data coming in out of order, late, or more than once. There is also the possibility (again, depending on the situation) that your "packets" will leak data (e.g. transaction volumes, etc.).
DO NOT, UNDER ANY CIRCUMSTANCES, MAKE YOUR OWN ENCRYPTION SCHEME.
I think that public key encryption would be worthwhile; that way if someone collects copies of the encrypted data, then attacks the sending server, they will not have the keys needed to decrypt the data.
There may be standards for your industry (e.g. banking industry), to which you need to conform.
There are VERY SERIOUS PITFALLS if you do not implement this sort of thing correctly. If you are running a bank, get a security professional. | 0 | 697 | false | 0 | 1 | Most secure server to server connection | 26,269,821 |
3 | 3 | 0 | 0 | 0 | 0 | 0 | 1 | I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received.
Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data?
Sorry, I should also add SFTP probably will not cut it since this Python app must fire when it is pinged from the other server with a secure data transmission. | 0 | python,ssl,network-programming,network-security | 2014-10-09T02:50:00.000 | 0 | 26,269,514 | There are several details to be considered, and I guess the question is not detailed enough to provide a single straight answer. But yes, I agree, the VPN option is definitely a safe way to do it, provided you can set up a VPN. If not, the SFTP protocol (not FTPS) would be the next best choice, as it is PCI-DSS compliant (secure enough for banking) and HIPAA compliant (secure enough to transfer hospital records), and — unlike FTPS — the SFTP protocol is a subsystem of SSH and it only requires a single open TCP port on the server side (22). | 0 | 697 | false | 0 | 1 | Most secure server to server connection | 26,320,472
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have an application using Python 2.7 on OpenShift and am trying to copy a file using ftplib.
When I try it in a local virtualenv everything is OK, but after deployment on OpenShift I get a 500 on the website. Removing the code related to ftplib makes it work (it is enough to comment out import ftplib).
It looks like OpenShift is missing ftplib. Anybody with a similar problem? How do I get it there? | 0 | python,python-2.7,openshift,ftplib | 2014-10-09T19:37:00.000 | 0 | 26,286,604 | Have you added it to your application's dependencies?
Python now supports the use of requirements.txt to handle your dependencies, although Python handles things a little differently from PHP/Perl. Your requirements.txt can be in your app's root directory. If both your setup.py and requirements.txt exist within your repo, then both will be processed. | 0 | 128 | false | 0 | 1 | ftplib is missing in openshift | 26,286,904
1 | 1 | 0 | 4 | 1 | 1 | 1.2 | 0 | I'm using PyDev in Eclipse. When I create a new .py file, file info (author, creation date, etc.) is generated like below:
"""
Created on Fri Oct 10 13:50:18 2014
@author: XXXX
"""
How do I change the format? | 0 | python,pydev | 2014-10-10T06:22:00.000 | 0 | 26,293,138 | Go to Window > Preferences > PyDev > Editor > Templates and change your "Empty" template. | 0 | 2,065 | true | 0 | 1 | auto-generation of python file info(author, create date etc.) | 26,293,169
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I have a temperature logger that measures and records temperature values at specified time intervals. Currently I push these to a Google spreadsheet, but I would like to display the values automatically on a web page.
I have no experience with anything to do with web pages, except setting up a few WordPress sites, but am reasonably comfortable with C++, Python, Matlab and Java.
A complicating factor is that the machine is in a VPN, so to access it via SSH I need to join the VPN.
I suspect the best way is to have a Python script that periodically sends an up-to-date file to the web server via FTP, and then some script on the server that plots this.
My initial thought was to use Python via something like CGI to read the data and create a plot on the server. However, I have no idea what the best approach on the server side would be. Is it worth learning some PHP? Or should I write a Java applet? Or is CGI the way to go?
Thank you for your help | 0 | php,python,wordpress,ssh,webpage | 2014-10-10T19:58:00.000 | 1 | 26,307,163 | Have a program monitor the file, either locally or via SSH. Have that program push updates into your web backend, via HTTP API or such. | 0 | 856 | false | 1 | 1 | Best way to display real-time data from an SSH accesible file on webpage | 26,307,516 |
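A sketch of the monitor-and-push idea (the endpoint and file name are placeholders; requests is a third-party package installed via pip):

    import time
    import requests

    with open('temps.log') as f:
        f.seek(0, 2)                       # jump to end of file; watch only new lines
        while True:
            line = f.readline()
            if line:
                requests.post('https://example.com/api/readings',
                              data={'row': line.strip()})
            else:
                time.sleep(5)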
1 | 2 | 0 | 3 | 2 | 0 | 1.2 | 1 | I'm building an installation that will run for several days and needs to get notifications from a GMail inbox in real time. The Gmail API is great for many of the features I need, so I'd like to use it. However, it has no IDLE command like IMAP.
Right now I've created a Gmail API implementation that polls the mailbox every couple of seconds. This works great, but times out after a while (I get "connection reset by peer"). So, is it reasonable to shut down the session and restart it every half an hour or so to keep it active (like with IDLE)? Is that a terrible, terrible hack that will have Google busting down my door in the middle of the night?
Would the proper solution be to log in with IMAP as well and use IDLE to notify my GMail API module to start up and pull in changes when they occur? Or should I just suck it up and create an IMAP only implementation? | 0 | python,imap,long-polling,gmail-api | 2014-10-10T20:06:00.000 | 0 | 26,307,256 | Would definitely recommend against IMAP, note that even with the IMAP IDLE command it isn't real time--it's just polling every few (5?) seconds under the covers and then pushing out to the connection. (Experiment yourself and see the delay there.)
Querying history.list() frequently is quite cheap and should be fine. If this is for a sizeable number of users you may want to do a little bit of optimization like intelligent backoff for inactive mailboxes (e.g. every time there's no updates backoff by an extra 5s up to some maximum like a minute or two)?
Google will definitely not bust down your door or likely even notice unless you're doing it every second with 1M users. :)
Real push notifications for the API is definitely something that's called for. | 0 | 1,369 | true | 0 | 1 | Long polling with GMail API | 26,307,963 |
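The history-polling call itself, with the Gmail API Python client (service is assumed to be an authorized googleapiclient resource; last_id comes from a previous response):

    resp = service.users().history().list(userId='me',
                                          startHistoryId=last_id).execute()
    for record in resp.get('history', []):
        handle(record)                       # hypothetical handler for new changes
    last_id = resp.get('historyId', last_id)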
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have deployed my local Python web service project to Heroku. I have all my dependencies in a requirements.txt file, but one module, psycopg2, is not getting installed properly and I get an installation error. So I removed it from requirements.txt and thought I would push everything to Heroku first, and then manually copy the psycopg2 module folder into the /app/.heroku/python/lib/python2.7/site-packages folder. But I don't know how to access this folder!
Can you please help? | 0 | python,heroku,importerror,psycopg2 | 2014-10-11T15:34:00.000 | 0 | 26,316,273 | You could keep the psycopg2 directory in the same directory as your app, but that's really a hack; you should instead try to fix the psycopg2 installation on Heroku. | 0 | 248 | false | 0 | 1 | How to manually copy folder to /app/.heroku/python/lib/python2.7/site-packages/? | 26,317,020
1 | 1 | 0 | 2 | 4 | 1 | 1.2 | 0 | I am using Z3 Python bindings to create an And expression via z3.And(exprs) where exprs is a python list of 48000 equality constraints over boolean variables. This operation takes 2 seconds on a MBP with 2.6GHz processor.
What could I be doing wrong? Is this an issue with z3 Python bindings? Any ideas on how to optimize such constructions?
Incidentally, in my experiments, constructing such expressions takes more time than solving the resulting formulae :) | 0 | python,z3,z3py | 2014-10-15T06:21:00.000 | 0 | 26,375,763 | Using Z3 from Python is generally pretty slow. It includes parameter checks and marshaling (_coerce_expr among others).
For scalability you will be better off using one of the other bindings or bypassing the Python runtime where possible. | 0 | 283 | true | 0 | 1 | Why is z3.And() slow? | 26,385,910
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | So I have 2 processing units, one runs on Python and the other on C++. The first one will generate a set of data of around 3-5 values, either as a list of ints or a string. I want these values to be passed to C++; what is the best method? Do I have to create a file in Python and then load it in C++, or is there another way? This process repeats every second, so I need the transmission to be fast enough. | 0 | python,c++ | 2014-10-15T09:59:00.000 | 0 | 26,379,658 | I'd suggest using Python's struct module to pack your values in Python so they can be viewed as a struct in C++, then send them from Python to C++ using ZeroMQ. | 0 | 76 | false | 0 | 1 | Transmitting data from Python to C++ | 26,379,890
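A sketch of the Python side of that suggestion (port and field layout are placeholders; the C++ receiver would read a matching struct of three little-endian int32s):

    import struct
    import zmq

    sock = zmq.Context().socket(zmq.PUSH)
    sock.connect('tcp://127.0.0.1:5556')
    payload = struct.pack('<3i', 10, 20, 30)   # pack three ints into 12 bytes
    sock.send(payload)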
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | I would like to make use of a couple of C++ libraries from within Python to provide a Python API for scripting a C++ application. So I am wondering if there are any performance implications of passing a Python tuple in place of a C++ struct to a C++ function? Also, are the two data structures the same? If the two are the same, how can I prevent a tuple with incompatible member types from being passed? Would that impact the performance? If that impacts performance, what are the best practices for dealing with such situations in Python. Please note that I am clearly not the author of the libraries. | 0 | python,c++,performance,struct,tuples | 2014-10-15T17:52:00.000 | 0 | 26,388,938 | There's almost no relationship between a Python tuple and
a struct in C++. The elements of a tuple are neither named
nor typed, and must be accessed (in C++) through functions in
the Python C API (PyTuple_GetItem, etc.). Internally
(although you don't have access to it directly), a tuple is an
array of pointers to objects. Even for types like int and
float.
Because of the function calls and the added levels of
indirection, using a Python tuple will be slower than using
a struct. The "best practice" is to write a wrapper function,
which extracts the information from the tuple, doing the dynamic
type checking, etc., and use it to initialize the struct which
you then send on the the C++ function. There's really no other
way. | 0 | 79 | true | 0 | 1 | Are there any performance implications of passing a Python tuple in place of a C++ struct function argument? | 26,389,438 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | We're running a Linux server with Apache2 and mod_python. One mod_python script inserts an entry into a database logging table. The logging table is large and can be a point of disk-write contention, or it can be temporarily unavailable during database maintenance. We would like to spin off the logging as an asynchronous background task, so that the user request can complete before the logging is done.
Ideally there would be a single background process. The web handler would pass its log request to the background process. The background process would write log entries to the database. The background process would queue up to a hundred requests and drop requests when its queue is full.
Which techniques are available in Python to facilitate communicating with a background process? | 0 | python,background-process,interprocess,mod-python | 2014-10-16T13:02:00.000 | 1 | 26,405,171 | You can use Celery with Redis as the task queue. | 0 | 76 | false | 0 | 1 | How can you message a background process from mod_python? | 26,405,300
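A minimal Celery sketch of that setup (broker URL and task body are placeholders; a worker process must be started separately with celery -A tasks worker):

    from celery import Celery

    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task
    def write_log_entry(message):
        pass                       # insert into the logging table here

    # inside the web handler: enqueue and return immediately
    # write_log_entry.delay('user did X')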
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have created Python classes from an XML schema file using generateDS 2.12a. I am using these classes to create XML files. My module works well in a Python 2.7 environment.
Now, for some reason, my environment has changed to Python 3.0.0. When I try to export the XML object it throws the following error:
Function : export(self, outfile, level, namespace_='', name_='rootTag', namespacedef_='', pretty_print=True)
Error : s1 = (isinstance(inStr, basestring) and inStr or
NameError: global name 'basestring' is not defined
Is there a change I need to make to export XML in Python 3.0.0, or a new version of generateDS to be used with Python 3.0.0? | 0 | python,python-3.x,xml,python-generateds | 2014-10-16T16:36:00.000 | 0 | 26,409,544 | You could run generateDS to get your Python file, then run, e.g.,
"2to3 -w your_python_file.py" to generate a Python 3 version of your generateDS file.
I went through the same process and had luck with this. (The NameError occurs because basestring was removed in Python 3; 2to3 rewrites such usages.) It seems to work just fine. | 0 | 866 | false | 0 | 1 | What version of generateDS is to be used for Python 3.0.0? | 26,551,274
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a Python script which takes a config file on the command line and produces an output.
I am trying to see how I can use nosetests to run all these files.
I read through the nosetests info on Google, but I could not work out how to run them with the config file.
Any ideas on where I could get started? | 0 | python,testing,config,nose | 2014-10-16T18:38:00.000 | 0 | 26,411,571 | I didn't have to do any of that. nosetests by itself executes any test file whose name begins with "test_". Make sure you use "--exe" if the files are executable; if not, you can skip that option. The nosetests help page on the wiki really helps. | 0 | 1,036 | true | 0 | 1 | running nose tests with a regular python script | 26,785,951
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm working on developing a test automation framework. I need to start a process (a C++ application) on a remote Linux host from a Python script. I use the Python module "paramiko" for this. However, my C++ application takes some time to run and complete the task assigned to it, so until the application completes processing, I cannot close the connection to the paramiko client. I was thinking I could do something like "the C++ application executes a callback (or some kind of signalling mechanism) and informs the script on completion of the task". Is there a way I can achieve this?
I'm new to python, so any help would be much appreciated.
thanks!
Update: Is it not possible to have an event.wait() / event.set() mechanism between the C++ application and the Python script? If yes, can somebody explain how it can be achieved?
thanks in advance! | 0 | python,c++,linux,signals,paramiko | 2014-10-16T20:39:00.000 | 0 | 26,413,505 | The best way I can think of to do this is to run both of them in a web server. Use something like Windows Web Services for C++ or a native CGI implementation and use that to signal the python script.
If that's not a possibility, you can use COM to create COM objects on both sides, one in Python, and one in C++ to handle your IPC, but that gets messy with all the marshalling of types and such. | 0 | 331 | false | 0 | 1 | Can a "C++ application signal python script on completion"? | 26,413,679 |
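As an aside — not the answer's approach — if the Python side only needs to know when the remote process finishes, paramiko can simply block on the remote command's exit status (hostnames, credentials and the remote command are placeholders):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('remote-host', username='user', password='secret')
    stdin, stdout, stderr = client.exec_command('/opt/app/long_task')
    status = stdout.channel.recv_exit_status()   # blocks until the C++ app exits
    print('remote task finished with code', status)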
1 | 2 | 0 | 1 | 5 | 0 | 0.099668 | 0 | I am trying to learn Python by making a simple program which generates a typical type of practice problem organic chemistry students usually face on exams: the retrosynthesis question.
For those unfamiliar with this type of question: the student is given the initial and final species of a series of chemical reactions, then is asked to determine which reagents/reactions were performed on the initial reactant to obtain the final product.
Sometimes you are only given the final product and asked to list the reactions necessary to synthesize it, given some parameters (start only with a compound that has 5 carbons or less, only use alcohol, etc.).
So far, I've done some research, and I think RDKit with Python is a good place to start. My plan is to use the SMILES format for reading molecules (since I can manipulate it as I would a string), then define functions for each reaction; finally, I'll need a database of chemical species from which the program can randomly select species (for the initial and final species in the problem). The program then selects a random species from the database, applies a bunch of reactions to it (3-5, specified by the user), then displays the final product. The user then solves the question himself, and the program shows the path it took (using images of the intermediates and printing the reagents used to obtain them). Simple. In principle.
But once I started actually coding the functions I ran into some problems. First, it is very tedious to write a function for every single reaction. Second, while SMILES can handle virtually all molecular complications thrown at it (stereochemistry, geometry, etc.), it has multiple forms for certain molecules, and I'm having trouble keeping the reactions specific. Third, I'm using the "replace" method to manipulate the SMILES strings, and this gets me into trouble when I have regiospecific reactions that I want to make universal.
For example: SN2 reactions react well with primary alkyl halides, but not at all with tertiary ones (steric hindrance); how would I create a function for this reaction?
Another problem: I want the reactions to be tagged by their respective reagents, so I've taken to naming the functions by the reagents used. But this becomes problematic when there are reagents which can take many different forms (Grignard reagents, for example).
I feel like there is a better, less repetitive and tedious way to tackle this thing. Looking for a nudge in the right direction. | 0 | python,functional-programming,chemistry | 2014-10-17T15:41:00.000 | 0 | 26,428,639 | It might be helpful to look for free (or, if possible for you, commercial) software written in Python that solves the same or a similar problem; learn its functionality and problem-solving approach, and if possible obtain its source code. I find this to be helpful in many ways. | 0 | 2,635 | false | 0 | 1 | How to make a python organic chemistry retro-synthesis generator? | 29,393,699
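One concrete pointer beyond the answer: RDKit can encode reactions as reaction SMARTS, which matches substructures (so regiochemistry like primary-vs-tertiary is handled structurally) instead of relying on string replacement. A rough sketch, with a deliberately simplified SN2 pattern:

    from rdkit import Chem
    from rdkit.Chem import AllChem

    # hypothetical, simplified SN2: a CH2 bearing a bromide becomes an alcohol;
    # a tertiary carbon (no two hydrogens) simply would not match this pattern
    rxn = AllChem.ReactionFromSmarts('[CH2:1][Br]>>[CH2:1][OH]')
    products = rxn.RunReactants((Chem.MolFromSmiles('CCCBr'),))
    print(Chem.MolToSmiles(products[0][0]))   # 1-propanol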
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I made a python program, and froze it to make an executable. The only problem I can see, it that it cannot read/write the contents of several support files. I know that this is permission error because the Program Files (x86) folder is protected. I would prefer to keep my supporting files in the same folder as my executable, so that the users cannot alter them, and so my python program can look locally for them.
I have tried changing the permissions, but I'm not sure which one controls whether my executable can read/write to the local folder. | 0 | python,windows,permissions,file-permissions | 2014-10-19T23:29:00.000 | 1 | 26,456,543 | If I understand correctly, you want your program to both run and edit files in its current folder. Programs that users invoke run using the user credentials by default.
If you want to prevent users from editing those application config files there are a few tricks:
Wrap your app in a DOS batch file. In that batch file use "runas" to start your app using a different account that has both execute and write permissions to those configs. Ensure the invoking user does not have write permissions. That should solve your problem.
Instead of flat text files for configs, how about using SQLite? Or encrypt the file. Either way, the result is the same: the user may be able to open the file but won't know what they are looking at in a typical text editor. | 0 | 1,108 | false | 0 | 1 | Executable read permissions for "Program Files (x86)" | 26,456,745
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a Python script (analyze.py) which takes a filename as a parameter and analyzes it. When it is done with the analysis, it waits for another filename. What I want to do is:
Send file name as a parameter from PHP to Python.
Run analyze.py in the background as a daemon with the filename that came from PHP.
I can pass the parameter from PHP as a command-line argument to Python, but I cannot send a parameter to a Python script that is already running in the background.
Any ideas? | 0 | php,python,python-2.7,background-process,python-daemon | 2014-10-20T18:53:00.000 | 1 | 26,472,868 | The obvious answer here is to either:
Run analyze.py once per filename, instead of running it as a daemon.
Pass analyze.py a whole slew of filenames at startup, instead of passing them one at a time.
But there may be a reason neither obvious answer will work in your case. If so, then you need some form of inter-process communication. There are a few alternatives:
Use the Python script's standard input to pass it data, by writing to it from the (PHP) parent process. (I'm not sure how to do this from PHP, or even if it's possible, but it's pretty simple from Python, sh, and many other languages, so …)
Open a TCP socket, Unix socket, named pipe, anonymous pipe, etc., giving one end to the Python child and keeping the other in the PHP parent. (Note that the first one is really just a special case of this one—under the covers, standard input is basically just an anonymous pipe between the child and parent.)
Open a region of shared memory, or an mmap-ed file, or similar in both parent and child. This probably also requires sharing a semaphore that you can use to build a condition or event, so the child has some way to wait on the next input.
Use some higher-level API that wraps up one of the above—e.g., write the Python child as a simple HTTP service (or JSON-RPC or ZeroMQ or pretty much anything you can find good libraries for in both languages); have the PHP code start that service and make requests as a client. | 0 | 1,520 | true | 0 | 1 | Send Parameter to a Python Script Running at Background From PHP | 26,473,069 |
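A sketch of the pipe option with a named FIFO, which PHP can write to like an ordinary file (the path and the analyze() function are placeholders):

    import os

    FIFO = '/tmp/analyze.fifo'
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)

    while True:
        with open(FIFO) as f:              # blocks until PHP opens it for writing
            for filename in f:
                analyze(filename.strip())  # the script's existing analysis entry point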
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | In myModule.py I am importing environ from os, like
from os import environ, since I am only using environ. But when I do dir(myModule) it shows environ as publicly visible. However, should it be imported as protected, assuming some other project may also have its own environ function?
If you do import os, it's os.environ.
So depending on your needs, the second option might be better. The first will look better and read easier, whereas the second avoids namespace pollution. | 0 | 114 | true | 0 | 1 | when importing functions from inside builtins like os or sys is it good practice to import as protected? | 26,480,045 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am using suds-jurko==0.6, but it's very slow when I try to connect to a remote SOAP server with caching and a local WSDL.
Can anyone suggest a faster, more actively maintained SOAP client for Python? | 0 | python,soap-client,suds | 2014-10-21T13:20:00.000 | 0 | 26,487,774 | While I couldn't find an alternative library, I was able to run suds over multiprocessing using the pathos.multiprocessing package. | 0 | 342 | false | 1 | 1 | python '14 fast SOAP Client | 26,581,414
1 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | So I have a list of public IP addresses and I'd like to see if they are a public IP that is associated with our account. I know that I can simply paste each IP into the search box in the AWS EC2 console. However I would like to automate this process via a Python program.
I'm told anything you can do in the console, you can do via CLI/program, but which function do I use to simply either return a result or not, based on whether it's a public IP that's associated with our account?
I understand I may have to do two searches, one of instances (which would cover non-EIP public IPs) and one of EIPs (which would cover disassociated EIPs that we still have).
But how? | 0 | python,amazon-web-services,amazon-ec2,boto | 2014-10-21T18:00:00.000 | 0 | 26,493,207 | Here's the method I have come up with:
To look up all IPs to see if they are EIPs associated with our AWS account
Get a list of all our EIPs
Get a list of all instances
Build list of all public IPs of instances
Merge lists/use same list
Check desired IPs against this list.
Comments welcome. | 0 | 1,555 | false | 1 | 1 | How to tell if my AWS account "owns" a given IP address in Python boto | 26,495,168 |
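A boto 2 sketch of those steps (matching the question's boto tag; region and the IPs to check are placeholders):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    known = {a.public_ip for a in conn.get_all_addresses()}            # our EIPs
    known |= {i.ip_address for r in conn.get_all_instances()
              for i in r.instances if i.ip_address}                    # instance public IPs

    for ip in ['54.1.2.3', '52.9.8.7']:
        print(ip, 'ours' if ip in known else 'not ours')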
1 | 1 | 0 | 0 | 4 | 1 | 0 | 0 | I am using the python package mailbox, and I am trying to extract the messages and clean the data. I am running into the problem that for large databases, I can call the constructor with my sample file, but when I try to print any messages my program hangs. I assume it is because the file I am trying to read is over 7GB. How can I deal with this problem? | 0 | python,mbox | 2014-10-22T01:32:00.000 | 0 | 26,499,051 | Consider splitting the mailbox manually. The format is fairly easy to process (as long as you only need read-only access) by reading it line-per-line; and you can use the existing classes for the actual parsing of individual messages.
Look up the definition of the mbox format: lines beginning with "From " (note the trailing space) start a new mail. You can split the huge file at these markers, then use the mailbox package to read only one file at a time. | 0 | 654 | false | 0 | 1 | Python mailbox on large mbox datasets | 26,501,853
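A naive splitting sketch (it ignores the ">From " escaping that real mbox files use inside message bodies):

    def iter_messages(path):
        lines = []
        with open(path) as f:
            for line in f:
                if line.startswith('From ') and lines:   # separator: new message
                    yield ''.join(lines)
                    lines = []
                lines.append(line)
        if lines:
            yield ''.join(lines)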
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I downloaded the Snappy library sources for working with compression, and everything was great on one machine, but it didn't work on another machine. They have exactly the same hardware/OS configurations plus Python 2.7.3.
All I did was run "./configure && make && make install".
There were 0 errors during any of these steps and it installed successfully to the default lib directory, but Python can't see it at all. help('modules') and pip freeze don't show snappy on the second machine, and as a result I can't import it.
I even tried to 'break' the structure and install it into different lib directories, but even that didn't work. I don't think it's related to system environment variables, since Python should have exactly the same configuration on both machines (Amazon EC2).
Does anyone know how to fix this issue? | 0 | python,python-2.7,amazon-ec2,module,snappy | 2014-10-23T10:43:00.000 | 0 | 26,526,365 | I just found python-snappy on GitHub and installed it via Python. (Note that ./configure && make && make install only builds the C library; the separate python-snappy bindings are what make it importable from Python.) Not a permanent solution, but at least something. | 0 | 95 | true | 0 | 1 | Python cant see installed module | 26,531,727
1 | 2 | 0 | 3 | 3 | 0 | 1.2 | 1 | Is there a way to create a Kerberos ticket in Python if you know the username/password?
I have MIT Kerberos in place and I can do this interactively through kinit, but I want to do it from Python. | 0 | python,kerberos,mit-kerberos | 2014-10-23T17:57:00.000 | 0 | 26,534,348 | From what I learned when working with Kerberos (although in my work I used C), you can hardly replace kinit. There are two ways you can simulate kinit behaviour programmatically: calling the kinit shell command from Python with the appropriate arguments, or (as I did) calling one method that pretty much does everything:
krb5_get_init_creds_password(k5ctx, &cred, k5princ, password, NULL, NULL, 0, NULL, NULL);
So this is a C primitive, but you should find one for Python (I assume) that does the same. Basically this method receives the Kerberos context, a principal (built from the username) and the password. In order to fully replace kinit behaviour you need a bit more than this, though (start the session, build the principal, etc.). Sorry, since I did not work with Python my answer may not be exactly what you want, but I hope it sheds some light. Feel free to ask any conceptual question about how kerberized applications work. | 0 | 9,619 | true | 0 | 1 | How can I get a Kerberos ticket in Python | 26,548,064
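For the first route — driving kinit itself — a Python sketch that feeds the password on stdin (principal and password are placeholders; MIT kinit accepts the password this way when stdin is not a terminal):

    import subprocess

    p = subprocess.Popen(['kinit', 'user@EXAMPLE.COM'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate(b'secret-password\n')
    if p.returncode != 0:
        raise RuntimeError('kinit failed: ' + err.decode())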
1 | 2 | 0 | 2 | 3 | 1 | 1.2 | 0 | I know the best practise is to hash user passwords, and I do that for all my other web apps, but this case is a bit different.
I'm building an app that sends email notifications to a company's employees.
The emails will be sent from the company's SMTP servers, so they'll need to give the app email/password credentials for an email account they allocate for this purpose.
Security is important to me, and I'd rather not store password that we can decrypt, but there seems like no other way to do this.
If it makes any difference, this is a multi-tenant web app.
What's the best way in python to encrypt these passwords since hashing them will do us no good in trying to authenticate with the mail server?
Thanks!
Note: The mailserver is not on the same network as the web app | 0 | python,email,password-encryption | 2014-10-23T18:28:00.000 | 0 | 26,534,848 | I've faced this issue before as well. I think that ultimately, if you are stuck being able to produce a plain-text password inside your app, then all of the artifacts to produce the password must be accessible by the app.
I don't think there is some encryption-magic to do here. Rely on file-system permissions to prevent anyone from accessing the data in the first place. Notice that your SSH private key isn't encrypted in your home dir. It is just in your home dir and you count on the fact that Linux won't let just anyone read it.
So, make a user for this app and put the passwords in a directory that only that user can access. | 0 | 603 | true | 0 | 1 | Safest way in python to encrypt a password? | 26,679,300 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | So I'm relatively new to USB and PyUSB. I am trying to communicate with a bluetooth device using PyUSB. To initialize it, I need to send a command and read back some data from the device. I do this using dev.write(0x02, msg) and ret = dev.read(0x81, 0x07). I know the command structure and the format of a successful response. The response should have 7 bytes, but I only get back 2.
There is a reference program for this device that runs on windows only. I have run this and used USBPcap/wireshark to monitor the traffic. From this I can see that after my command is sent, the device responds several times with a 2 byte response, and then eventually with the full 7 byte response. I'm doing the python work on a Raspberry Pi, so I can't monitor the traffic as easily.
I believe the issue is due to the read expecting 7 bytes and then returning the result after the default timeout is reached, without receiving the follow up responses. I can set the bytes to 2 and do multiple readings, but I have no way of knowing if the message had more bytes that I am missing. So, what I'm looking for is a way to check the length of the message in the buffer before requesting the read so I can specify the size.
Also, is there a buffer or anything to clear to make sure I am reading the next message coming through? It seems that no matter how many times I run the read command, I get the same response back. | 0 | python,python-3.x,usb,pyusb | 2014-10-23T21:45:00.000 | 1 | 26,538,004 | Solved my own issue. After running my code on a full Linux machine, capturing the data and comparing it to the Wireshark trace I took of the Windows application, I realized the length of the read was not the issue. The results were very similar, and the Windows app was actually requesting 4096 bytes back instead of 2 or 7; the device was just giving back whatever it had. The problem actually had to do with my Tx message not being in the correct format before it was sent out. | 0 | 2,500 | true | 1 | 1 | PyUSB read multiple frames from bulk transfer with unknown length | 26,553,765
1 | 1 | 0 | 6 | 6 | 0 | 1 | 0 | I've been learning Python using version 3.4. I recently started learning Web.py, so I have been using Python 2.7 for that, since web.py is not supported in Python 3.4. I have the nose 1.3.4 module installed for both Python 3.4 and 2.7. I need to run the nosetests command on some Python 2.7 code that uses the Web.py module. However, when I type the nosetests command it automatically uses Python 3.4, so it throws an error because it is unable to import the Web.py module in my code. Is there a way to force nosetests to use Python 2.7? MacBook Pro running OS X Yosemite. | 0 | python,python-2.7,python-3.x,web.py,nosetests | 2014-10-27T00:42:00.000 | 0 | 26,579,670 | As @dano suggested in his comment:
Try python2.7 -m nose instead of running nosetests. | 0 | 2,718 | false | 0 | 1 | Force Nosetests to Use Python 2.7 instead of 3.4 | 26,580,687 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I have a survey that went out to 100 recipients via the built-in email collector. I am trying to design a solution in Python that will show only the list of recipients (email addresses) who have not responded (neither "Completed" nor "Partial"). Is there any way to get this list via the SurveyMonkey API?
One possible solution is to store the original list of 100 recipients in my local database, get the list of recipients who have already responded using the get_respondent_list API, and then match the two to find the people who have not responded. But I would prefer not to approach it this way, since it involves storing the original list of recipients locally.
Thanks for the help! | 0 | python,api,email,automation,surveymonkey | 2014-10-27T20:37:00.000 | 0 | 26,596,396 | There is currently not a way to do this via the SurveyMonkey API - it sounds like your solution is the best way to do things.
I think your best bet is to go with your solution and email [email protected] to ask about the feasibility of adding this functionality in the future. It sounds like what you need is a get_collector_details method that returns specific details from a collector, which doesn't currently exist. (A sketch of the matching approach follows.) | 0 | 165 | true | 0 | 1 | Getting list of all recipients in an email collector for a survey via SurveyMonkey API | 26,597,085
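A sketch of that matching approach; both helpers below are placeholders standing in for the local-database lookup and the get_respondent_list call (they are not real SurveyMonkey client methods):

def load_recipients_from_db():
    # Placeholder: return the original list of 100 recipient emails
    # stored when the email collector was created.
    return ['[email protected]', '[email protected]']

def fetch_respondent_emails():
    # Placeholder: call get_respondent_list via the SurveyMonkey API
    # and return an email for each Completed/Partial respondent.
    return ['[email protected]']

responded = set(fetch_respondent_emails())
non_responders = [e for e in load_recipients_from_db() if e not in responded]
print(non_responders)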
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I wrote a web scraper in Scrapy that I call with scrapy crawl scrape_site and a twitter bot in Twython that I call with python twitter.py. I have all the proper files in a directory called ScrapeSite. When I execute these two commands on the command line while in the directory ScrapeSite they work properly.
However, I would like to move the folder to a server and have a job run the two commands every fifteen minutes. I've looked into using a cron job to do so, but the cron jobs live in a different parent directory, and I can only call scrapy from a directory with Scrapy files (e.g. ScrapeSite). Can I make a cron job run a file in the ScrapeSite directory that in turn calls the two commands at the proper level?
How can I programmatically execute command line commands at a different leveled directory at a certain time interval? | 0 | python,command-line,cron,scrapy,twython | 2014-10-29T02:00:00.000 | 0 | 26,621,636 | Use a cron job. This allows you to run a command line command at a time interval. You can combine commands with && and as a result can change directories with the normal bash command cd.
So in this case you can call cd /directory/folder && scrapy crawl scrape_site && python twitter.py
To run this every fifteen minutes, set the cron schedule to */15 * * * *
So the full cron job would look like */15 * * * * cd /directory/folder && scrapy crawl scrape_site && python twitter.py | 0 | 190 | true | 1 | 1 | How can I programmatically execute command line commands at a different leveled directory at a certain time interval? | 26,708,022 |
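One hedged refinement on top of that line, since cron runs with a minimal PATH: use absolute interpreter paths and redirect output to a log file so silent failures become visible (the /usr/local/bin and /usr/bin paths below are guesses; confirm yours with which scrapy and which python):

*/15 * * * * cd /directory/folder && /usr/local/bin/scrapy crawl scrape_site && /usr/bin/python twitter.py >> /var/log/scrapesite.log 2>&1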
1 | 2 | 0 | 6 | 4 | 0 | 1.2 | 1 | I work on a project in which I need a Python web server. This project is hosted on Amazon EC2 (Ubuntu).
I have made two unsuccessful attempts so far:
run python -m SimpleHTTPServer 8080. It works if I launch a browser on the EC2 instance and head to localhost:8080 or <ec2-public-IP>:8080. However, I can't access the server from a browser on a remote machine (using <ec2-public-IP>:8080).
create a Python class which allows me to specify both the IP address and port to serve files. Same problem as in attempt 1.
There are several questions on SO concerning Python web servers on EC2, but none seems to answer my question: what should I do in order to access the Python web server remotely?
One more point: I don't want to use a Python web framework (Django or anything else): I'll use the web server to build a kind of REST API, not for serving HTML content. | 0 | python,amazon-web-services,amazon-ec2,webserver | 2014-10-29T08:37:00.000 | 0 | 26,625,845 | You should open port 8080 and relax the IP limitation in the instance's security group, for example:
All TCP (Protocol: TCP, Port Range: 0 - 65535, Source: 0.0.0.0/0)
The last item, the 0.0.0.0/0 source, means this server will accept requests from any IP on any of those ports; you can also tighten the rule to just port 8080 and a specific source range for better security. | 0 | 1,858 | true | 1 | 1 | Accessing Python webserver remotely on Amazon EC2 | 26,626,875
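Besides the security-group rule, it is worth confirming the server binds to all interfaces rather than only localhost. python -m SimpleHTTPServer already listens on 0.0.0.0, but if you write your own server class, a minimal Python 2 sketch with an explicit bind might look like this (the handler body is just an illustration):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Tiny JSON response, standing in for a real REST endpoint.
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write('{"status": "ok"}')

# '0.0.0.0' binds every interface, so remote clients can connect
# once the security group allows TCP 8080.
HTTPServer(('0.0.0.0', 8080), ApiHandler).serve_forever()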
1 | 1 | 0 | -2 | 4 | 0 | -0.379949 | 1 | I am connecting to a host for the first time using its private key file. Do I need to call the load_host_keys function before connecting to the host, or can I just skip it? I have AutoAddPolicy set for missing host keys, but how does Python know the location of the host key file?
Hence my question: when should the function load_host_keys be used? | 0 | python,ssh,paramiko | 2014-10-29T09:09:00.000 | 0 | 26,626,347 | load_host_keys loads host keys from a local host-key file. Host keys read with this method will be checked after keys loaded via load_system_host_keys, but will be saved back by save_host_keys (so they can be modified). The missing-host-key policy AutoAddPolicy adds keys to this set and saves them when connecting to a previously unknown server.
This method can be called multiple times. Each new set of host keys will be merged with the existing set (new replacing old if there are conflicts). When automatically saving, the last hostname is used.
load_system_host_keys, by contrast, reads a file of known SSH host keys in the format used by OpenSSH. This type of file unfortunately doesn't exist on Windows, but on POSIX it will usually be found at os.path.expanduser("~/.ssh/known_hosts"). | 0 | 3,777 | false | 0 | 1 | When and why to use load_host_keys and load_system_host_keys? | 26,708,001
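To make the distinction concrete, a minimal Paramiko sketch combining the two (hostname, username, and key path are placeholders): the system known_hosts is loaded first, AutoAddPolicy handles hosts not yet in it, and the private key is passed explicitly via key_filename:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # reads ~/.ssh/known_hosts on POSIX, if present
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example.com', username='deploy',
               key_filename='/path/to/private_key')  # placeholders
stdin, stdout, stderr = client.exec_command('uptime')
print(stdout.read())
client.close()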
1 | 1 | 0 | -1 | 0 | 0 | -0.197375 | 0 | I am interfacing with a WebSphere MQ system, using Python/pymqi. From time to time, I will need to:
check the status of an MQ channel
start/restart a channel that is not running
How can I achieve the above?
The pymqi documentation doesn't appear to cover this, despite having very good coverage of working with MQ queues. | 0 | python,ibm-mq,pymqi | 2014-10-29T10:57:00.000 | 0 | 26,628,565 | You should never need to check the status of a channel, or start one, from an application. This is probably why the documentation has little coverage of it: it is not something an application is expected to do.
Instead, your channel should be configured to automatically start when messages need to be moved across it, so it is always running when you need it to be. This is known as Triggering.
If there is an issue with the network connection the channel is using, the channel will retry and re-establish the connection, so you do not need to check the channel's status from your application. | 0 | 195 | false | 0 | 1 | How can I restart an MQ channel using pymqi? | 26,640,062
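That said, if you genuinely must inspect or start a channel from code, pymqi exposes MQ's PCF administration interface. A hedged sketch, assuming PCF commands are permitted on your queue manager (connection details and channel name are placeholders; on some pymqi versions the channel name must be a bytes object):

import pymqi
from pymqi import CMQCFC

qmgr = pymqi.connect('QM1', 'SVRCONN.CHANNEL', 'mqhost(1414)')  # placeholders
pcf = pymqi.PCFExecute(qmgr)
try:
    # Inquire the channel's status; raises MQMIError when there is
    # no active status instance (i.e. the channel is not running).
    pcf.MQCMD_INQUIRE_CHANNEL_STATUS(
        {CMQCFC.MQCACH_CHANNEL_NAME: 'MY.CHANNEL'})
except pymqi.MQMIError:
    # Not running (or not found): try to start it.
    pcf.MQCMD_START_CHANNEL({CMQCFC.MQCACH_CHANNEL_NAME: 'MY.CHANNEL'})
qmgr.disconnect()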