Dataset columns (name: dtype, observed range):
Available Count: int64, 1 to 31
AnswerCount: int64, 1 to 35
GUI and Desktop Applications: int64, 0 to 1
Users Score: int64, -17 to 588
Q_Score: int64, 0 to 6.79k
Python Basics and Environment: int64, 0 to 1
Score: float64, -1 to 1.2
Networking and APIs: int64, 0 to 1
Question: string, lengths 15 to 7.24k
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 76
CreationDate: string, lengths 23 to 23
System Administration and DevOps: int64, 0 to 1
Q_Id: int64, 469 to 38.2M
Answer: string, lengths 15 to 7k
Data Science and Machine Learning: int64, 0 to 1
ViewCount: int64, 13 to 1.88M
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
Other: int64, 1 to 1
Title: string, lengths 15 to 142
A_Id: int64, 518 to 72.2M
1
2
0
1
1
0
1.2
1
This might be bad practice, so forgive me, but when Python exits on an exception that does not come from telnetlib or from paramiko (SSH), will the SSH or Telnet connection close automatically? Also, will sys.exit() close all connections that the script is using?
0
python,ssh,telnet,paramiko
2015-10-08T13:54:00.000
0
33,017,847
Yes, the system (Linux and Windows alike) keeps track of all the resources your process uses: files, mutexes, sockets, anything else. When the process dies, all of those resources are freed. It doesn't really matter which programming language you use or how you terminate your application. There are a few subtle exceptions to this rule, such as the TIME_WAIT state for server sockets or resources held by zombie processes, but in general you can assume that whenever your process is terminated, all of its resources are freed. UPD: As was mentioned in the comments, the OS cannot guarantee that the resources were released gracefully. For network connections this means there is no guarantee that a FIN packet was sent, so although everything is cleaned up on your machine, the remote endpoint can still be waiting for data from you, theoretically forever. So it is always better practice to use a "finally" clause to notify the other endpoint that you are closing the connection.
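The closing advice above can be sketched in a few lines. FakeConnection is a made-up stand-in for a paramiko or telnetlib session, used only to show that a finally clause runs even when the script dies on an unrelated exception:

```python
class FakeConnection:
    """Hypothetical stand-in for an SSH/Telnet session."""
    def __init__(self):
        self.closed = False

    def close(self):
        # A real client would send FIN / a protocol-level goodbye here.
        self.closed = True

conn = FakeConnection()
try:
    raise RuntimeError("some unrelated failure")
except RuntimeError:
    pass  # the script would normally die here
finally:
    # Runs on normal exit and on exceptions alike.
    conn.close()
```

The same pattern applies unchanged to a real paramiko SSHClient or telnetlib.Telnet object.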
0
702
true
0
1
Does python automatically close SSH and Telnet
33,018,073
1
1
0
2
0
1
1.2
0
I have a large text file stored in a shared directory on a server that several other machines can access. I'm running various analyses on this text file without changing or updating it. I'd like to know whether I can run different Python scripts on different machines, all of them reading that large text file at the same time. None of the scripts makes any change to the file; they only need to read it.
0
python
2015-10-09T01:12:00.000
0
33,028,390
You should be able to do multiple read access, but it might get really, really slow compared to reading the same file from several scripts on the same computer (how much slower will depend heavily on how much reading you are doing). You may want to copy the file over before processing.
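Concurrent read-only access is safe at the file level; a minimal local demonstration with two independent handles on one file (standing in for two scripts on two machines):

```python
import os
import tempfile

# Create a sample text file standing in for the shared file.
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, "w") as f:
    f.write("shared data\n")

# Two independent read-only handles, opened as two separate
# scripts would open the shared file.
f1 = open(path)
f2 = open(path)
a, b = f1.read(), f2.read()
f1.close()
f2.close()
os.remove(path)
```

Both readers see the same contents; the slowness the answer warns about comes from network transfer, not from any locking conflict.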
0
42
true
0
1
Python: Can I read a large text file from different scripts?
33,028,435
1
1
0
0
1
0
1.2
0
We have a Python script that downloads files over FTP. However, the target folder's contents are deleted when the source is empty. How do you avoid deleting the files in the target dir when the source is empty? We are using shutil.copy2 -- can that be the cause? Are there alternatives that preserve metadata?
0
python
2015-10-09T14:30:00.000
0
33,040,867
Turns out a boolean was being evaluated against a string. This evaluated to True, so the code went into the if block and purged the target dir.
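The bug described, a string evaluated where a boolean was expected, is easy to reproduce: any non-empty string is truthy, including the string "False".

```python
source_empty = "False"   # read as text (e.g. from config), not as a bool

# Buggy check: a non-empty string is always truthy, so this branch
# (the one that purged the target dir) always ran.
buggy_triggered = bool(source_empty)

# Safer check: compare the parsed value explicitly.
fixed_triggered = source_empty == "True"
```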
0
57
true
0
1
Python: shutil.copy2 empties target dir
33,045,585
1
1
0
1
1
1
0.197375
0
I have an application where data must be read very fast from a COM port. The data arrives at 10 kHz (1.25 MBaud) in 8-byte packages, so capturing the data (getting it out of the COM port buffer) and processing it must be as fast as possible. I think my code is quite optimised, but I still sometimes lose data packages because the serial buffer overflows. Because of this I thought of porting the pyserial package (or at least the parts I use) to Cython. Is it possible to port pyserial to Cython? And more importantly: would there be a speed improvement if the code were written in Cython? Are there other, possibly easier, methods to improve the performance?
0
python,cython,pyserial
2015-10-10T00:55:00.000
0
33,049,137
Doubtful that porting would remedy the problem you are encountering. The problem with using a UART is the relatively small OS-provided buffer for the incoming data. As an alternative, you might try one of the Ethernet/serial converters to allow serial I/O through an Ethernet port; the advantage of this approach is the network driver's much larger buffer. If your application can't readily ingest the data at the rate it's arriving, though, no amount of buffering will help. In that case, if you can't accept some packet loss, you should try to lower the data rate.
0
610
false
0
1
Is it possible and useful to port pyserial to cython
33,049,637
1
3
0
0
1
0
0
0
I want to do the following: I want a button on an HTML page so that once it is pressed, a message is sent to a Python script I'm running. For example, once the button is pressed some boolean, call it bool_1, becomes true. Then that boolean is sent to my Python code, or written to a text file. Then in my Python code I want to do something depending on that value. Is there a way to do this? I've been looking at many things but they haven't worked. I know that in JavaScript you can't write text files because of security issues. My Python code is constantly running, computing live values from sensors.
0
javascript,python,html,raspberry-pi,raspberry-pi2
2015-10-11T00:05:00.000
0
33,060,256
Maybe you can try to create a Node.js script that creates a websocket. You can connect to the websocket with Python, and so you are able to send data from your website to Node.js, and from Node.js to Python, in real time. Have a nice day.
0
1,183
false
1
1
Sending a message between HTML and python on the raspberry pi
33,235,245
1
3
0
0
3
1
0
0
I have read numerous articles about running code in the Atom code editor; however, I cannot seem to understand how it is done. Could anyone explain it in simpler terms? I want to run my Python code in it, and I have downloaded the 'python-tools-0.6.5' and 'atom-script-2.29.0' packages from the Atom website; I just need to know how to get them working.
0
python,windows,atom-editor
2015-10-11T18:15:00.000
0
33,068,445
From Atom > Preferences > Install: search for the atom-runner package and install it. Close the Atom editor and reopen it; this lets Atom pick up the right path and should solve the issue. If this doesn't help, manually copy the path of the Python installation directory and add it to the system PATH. This will solve the issue.
0
17,470
false
0
1
Run Code In Atom Code Editor
43,814,862
1
3
0
-3
0
0
-0.197375
0
I have very simple iOS and Android applications that download a txt file from a web server and present it to the user. I'm looking for a way that only the application will be able to read the file, so no one else can download and use it. I would like to take the file on my computer, encrypt it somehow, upload the result to the server, and have the client know how to read the file when it downloads it. What is the simplest way to do this kind of thing? Thanks a lot!
0
javascript,python,ios,swift,encryption
2015-10-12T11:32:00.000
0
33,080,063
It's a very broad question and there is a variety of ways to do this. First of all, you need to choose a method of encryption and the purpose for which you encrypt the data. There are three main approaches: 1. Symmetric encryption. Encrypter and decrypter have access to the same key. It's quite easy, but it has one big drawback: the key needs to be shared, so if you put it in the client it can be stolen and abused. As a workaround you would need some other method to deliver the key and encrypt it on the client. 2. Asymmetric encryption. Here the situation is different: to encrypt data you use a public/private key pair. The public key is usually used to encrypt data, but decryption is possible only with the private key. So you can hand out the public key to your clients, and they will be able to send encrypted traffic back to you. But you still need to be sure you are talking to the right public key so your encryption is not abused. This is what TLS (SSL), SSH, update signing, etc. use. 3. Hashing (not really encryption). It's the simplest: hashing produces a string that can't be reverted, with the rule that the same data always produces the same hash. Just pick the most suitable method and find an appropriate package in the language you use.
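Method 3 above is the simplest to demonstrate with the standard library: identical input always produces the identical digest, and the digest cannot be reversed back into the input.

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256: same input -> same hash; no way to recover the input.
    return hashlib.sha256(data).hexdigest()

h1 = digest(b"secret file contents")
h2 = digest(b"secret file contents")   # identical input, identical hash
h3 = digest(b"different contents")     # different input, different hash
```

For the symmetric and asymmetric methods, a vetted third-party package (e.g. the cryptography library) is the usual choice rather than hand-rolled code.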
0
1,327
false
1
1
encrypting data on server and decrypting it on client
33,080,475
1
2
1
2
2
0
0.197375
0
I'm interested in making games in the future, and I've heard that my favourite game's engine is written in C++ but has Python embedded in it. I have little programming experience, but I understand well how object orientation works.
0
python,c++,scripting,embedded-language
2015-10-12T18:44:00.000
0
33,088,102
Why would someone need or want to embed a scripting language into a programming language? The main reason, obviously, is to allow extensions to the game engine without needing to recompile the entire game executable; the extensions are loaded and interpreted at run time instead. Many game engines provide such a feature for extensibility. "... but I greatly understand how object orientation works." Object orientation comes in with the interfaces declared for interacting with the particular scripts. Python is itself an object-oriented language that supports OOP principles quite well. Integrating a non-OOP scripting language, e.g. Lua (also often used for extensions), makes that harder, but not impossible after all.
0
159
false
0
1
What is the purpose of embedding a scripting language into a programming language in a game engine?
33,088,189
1
1
0
0
1
1
1.2
0
In my current project I have modules communicating using a simple request/reply form of RPC (remote procedure calls). I want to automatically retry failed requests if and only if there is a chance that a new attempt might be successful. The vast majority of errors are permanent, but errors like timeouts and I/O errors are not. I have defined two custom exceptions, RPCTransientError and RPCPermanentError, and currently I map all errors to one of these two; if in doubt, I choose the transient one. I do not want to reinvent the wheel. My question is: is there any existing resource classifying the standard exceptions into transient and permanent errors? I'm using Python 3.3+ with the new OS- and IO-related exception hierarchy (PEP 3151), which I like a lot; I don't care about previous versions.
0
python-3.x,error-handling
2015-10-13T20:50:00.000
0
33,112,374
No, there is not, for the simple reason that an error may be transient for you while someone else sees it as permanent, depending on the usage. "If in doubt, I choose the transient one." If even you can't be sure about the errors from software you designed, how could anyone make that choice universally? One approach is to deal with only the most basic subclasses. You say you want to retry on I/O errors and timeouts; since Python 3.3, IOError and some other exceptions like socket.error have been merged into OSError. So you can simply check for OSError and it will cover those and many other subclasses like TimeoutError, ConnectionError, FileNotFoundError... You can see the affected classes with OSError.__subclasses__() in a Python shell (apply it recursively to find all of them).
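The recursive subclass walk suggested above takes only a few lines, and confirms that catching OSError covers the timeout and connection errors the asker wants to treat as transient:

```python
def all_subclasses(cls):
    # Walk cls.__subclasses__() recursively, as suggested above.
    found = set()
    for sub in cls.__subclasses__():
        found.add(sub)
        found |= all_subclasses(sub)
    return found

# Every exception in this set is caught by `except OSError`.
os_error_family = all_subclasses(OSError)
```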
0
359
true
0
1
Is there any classification of standard Python exceptions to transient and permanent errors?
33,115,056
3
6
0
1
17
0
0.033321
1
I am running a cron job which executes a Python script that reads Gmail (at a 2-minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday; suddenly it's throwing the error imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure), and sometimes I get raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error. When I run the same script on a different machine, it works fine. I am assuming the host has been blacklisted or something like that. What are my options? I can't generate credentials (Gmail API), as this is under a company domain account.
0
python,authentication,gmail-api,imaplib
2015-10-14T07:53:00.000
0
33,119,667
Thanks guys, it's working now. The issue was that Google blocked our network because of multiple attempts. I tried the unlock URL from a different machine and it didn't work; the catch is that you have to open that URL on the machine where you are trying to run the script. Hope it may help someone :)
0
23,571
false
0
1
reading gmail is failing with IMAP
33,492,711
3
6
0
2
17
0
0.066568
1
I am running a cron job which executes a Python script that reads Gmail (at a 2-minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday; suddenly it's throwing the error imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure), and sometimes I get raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error. When I run the same script on a different machine, it works fine. I am assuming the host has been blacklisted or something like that. What are my options? I can't generate credentials (Gmail API), as this is under a company domain account.
0
python,authentication,gmail-api,imaplib
2015-10-14T07:53:00.000
0
33,119,667
Got the same error, and it was fixed by generating a new Google app password. Maybe this will work for someone.
0
23,571
false
0
1
reading gmail is failing with IMAP
41,952,132
3
6
0
24
17
0
1
1
I am running a cron job which executes a Python script that reads Gmail (at a 2-minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday; suddenly it's throwing the error imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure), and sometimes I get raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error. When I run the same script on a different machine, it works fine. I am assuming the host has been blacklisted or something like that. What are my options? I can't generate credentials (Gmail API), as this is under a company domain account.
0
python,authentication,gmail-api,imaplib
2015-10-14T07:53:00.000
0
33,119,667
Some apps and devices use less secure sign-in technology, so we need to enable the "Less secure app access" option on the Gmail account. Steps: log in to Gmail, go to Google Account, navigate to the Security section, and turn on "Less secure app access". Following these steps resolves the issue.
0
23,571
false
0
1
reading gmail is failing with IMAP
60,630,329
1
1
0
0
0
0
0
0
I have a Python script which uses pygsm as the library for sending and receiving SMS. I want to put this script on auto-run when the Raspberry Pi boots, but the Huawei modem usually takes a while to connect, which causes the startup shell script to skip that step. How do I make the boot sequence confirm that the modem is connected before running the Python script?
0
python,serial-port
2015-10-15T05:55:00.000
0
33,140,894
This is not a Python-related question; try a Linux forum on boot-sequence timeouts.
0
65
false
0
1
Python Auto-Start pyGSM Huawei
33,141,447
2
3
0
0
0
1
0
0
I'm using the command testFile = open("test.txt") to open a simple text file and received the following: IOError: [Errno 2] No such file or directory: 'Test.txt'. Do such errors occur because of the version of Python one uses?
0
python
2015-10-15T17:02:00.000
0
33,154,410
The error is independent of the version, but it is not clear what you want to do with your file. If you want to read from it and you get such an error, it means that your file is not where you think it is. In any case you should write a line like testFile = open("test.txt", "r"). If you want to create a new file and write to it, use testFile = open("test.txt", "w"). Finally, if your file already exists and you want to append to it, use testFile = open("test.txt", "a") (after having moved the file to the correct place). If your file is not in the script's directory, you will need to locate it and open it by its path.
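The three modes described can be exercised in a few lines; a temporary directory stands in for the script's directory so the example is self-contained:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.txt")

with open(path, "w") as f:      # "w": create or overwrite
    f.write("first line\n")
with open(path, "a") as f:      # "a": append to the end
    f.write("second line\n")
with open(path, "r") as f:      # "r": read (the default mode)
    contents = f.read()
```

Note that open(path) with no mode behaves exactly like open(path, "r"), which is why the asker's original call failed only because the file was missing, not because of the mode.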
0
43
false
0
1
python 2.7: Testing text file in idle
33,154,751
2
3
0
0
0
1
0
0
I m using the command testFile = open("test.txt") to open a simple text file and received the following: Does such errors occur due to the version of python one uses? IOError: [Errno 2] No such file or directory: 'Test.txt
0
python
2015-10-15T17:02:00.000
0
33,154,410
The syntax for opening a file is: file_object = open(file_name[, access_mode][, buffering]). As you have not specified access_mode (optional), the default is read. But if the file 'test.txt' does not exist in the folder where you are executing the script, it will throw the error you got. To correct it, either use access_mode "a+" or give the full file path, e.g. C:\test.txt (assuming a Windows system).
0
43
false
0
1
python 2.7: Testing text file in idle
33,154,742
1
1
0
3
3
0
0.53705
0
I'm using the Vim editor to edit my Python scripts on some remote clusters. I'm sure Vim on both clusters has syntax highlighting on; however, on one cluster all Python keywords are highlighted, while on the other only some are, and builtins such as "range", "open" and "float" are not highlighted. Is there anything I can put in the .vimrc file so that all Python keywords are highlighted on that machine? I don't know if this is related to the Vim version: the machine that does not highlight all Python keywords runs Vim 7.2, while the other runs Vim 7.4. Thanks.
0
python,vim
2015-10-15T21:23:00.000
0
33,158,779
This has been solved by adding "let python_highlight_builtins=1" in the .vimrc file.
0
62
false
0
1
Vim only highlights part of the python syntax
33,159,164
1
3
0
0
4
0
0
0
Using APScheduler version 3.0.3. Services in my application internally use APScheduler to schedule and run jobs. I also created a wrapper class around the actual APScheduler (just a façade, which helps in unit tests). For unit-testing these services I can mock this wrapper class, but I have a situation where I would really like APScheduler to actually run the job during the test. Is there any way to force-run the job?
0
python,apscheduler
2015-10-19T10:14:00.000
1
33,211,867
Another approach: you can put the logic of your job in a separate function. Then you will be able to call that function from your scheduled job as well as from anywhere else. I'd say this is a more explicit way to do what you want.
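A sketch of that suggestion; the function and job names are made up, and the APScheduler wiring is shown only in a comment since it depends on the asker's wrapper:

```python
def sync_report():
    # The actual job logic lives in a plain function...
    return "report synced"

# ...so the scheduled job is just a thin registration around it, e.g.
#   scheduler.add_job(sync_report, "interval", minutes=10)
# while a unit test can bypass the scheduler and call it directly:
result = sync_report()
```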
0
1,592
false
0
1
Can we force run the jobs in APScheduler job store?
43,736,075
1
1
0
0
0
0
0
0
I have an assignment every week in the heuristic problem solving course. The assignments are taking up at least 3-4 days of my week (I want to reduce this time). The questions are computationally intensive and we need to give our best answer within a program execution time of 2 minutes. I started doing the assignments in C++ for good runtime performance. Fine. But I would end up using pointers etc. so as not to create copies of data everywhere, and that usually meant more debugging time. So I switched to Java for my next assignment: a little lower performance than C++, but it saves my weekends. I profiled my Java program and saw that a single function was taking up 95% of the CPU time. In this context I want to ask: if I write my assignment solution in Python, profile it, find the functions using the most CPU time, and implement those as C modules, can I do any better? I could decrease my development time (because I personally find development in Python faster), and since the functions taking up 95% of the CPU time would be C modules, I should not take a big performance hit. Is this something I can try? I could just try it out (Python + C modules) and see for myself without asking here, but if it fails I might not have time to re-implement the whole assignment in C++ or Java.
0
java,python,c++,python-module
2015-10-20T05:45:00.000
0
33,228,905
If performance is what you are looking for, you should know that pure Python is 10 to 100 times slower than C++. Depending on how much performance you need, you may get there by optimizing your code or by using third-party number-crunching libraries such as SciPy. Using Cython would be an option worth considering, but weren't you trying to decrease development time by using Python? Writing C modules would introduce more complexity, and Cython has a different syntax anyway.
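Finding the 95% hotspot the asker mentions is the same workflow in Python; a minimal cProfile sketch, where the workload function is made up for illustration:

```python
import cProfile
import io
import pstats

def hot_function(n):
    # Hypothetical CPU-bound workload standing in for the real hotspot.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
total = hot_function(100000)
profiler.disable()

# Print the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Whatever function dominates the report is the candidate for a C module or Cython rewrite.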
0
51
false
0
1
Performance of Python with c modules
33,229,221
1
2
0
0
0
0
0
0
OS: Fedora 21; Python: 2.7.6. If I run a Python script as root or with sudo it runs fine, but if I run it as a regular user I get the following: Traceback (most recent call last): File "/home/user/dev_ad_list.py", line 12, in import ldap ImportError: No module named ldap. SELinux is disabled -- what other security could be preventing a user from running a Python script that imports ldap?
0
python,python-2.7,fedora-21
2015-10-20T20:52:00.000
1
33,246,572
The path to Python was different for the non-root user: that user's path pointed to Canopy.
0
51
false
0
1
Python Script not Running - Has to be something simple
33,246,962
1
1
0
1
1
0
0.197375
0
I cannot open a .py file through the Google VM SSH console. KWrite is installed, and so is xvfb (sudo apt-get install xvfb). My command: kwrite test.py. I get the following error: kwrite: Cannot connect to X server. Do I need to change the command or install additional software? Thanks.
0
python,ssh,google-compute-engine
2015-10-21T16:05:00.000
1
33,264,119
X Windows (X11 nowadays) is a client-server architecture. You can forward connections to your X server with the -X (uppercase) option to ssh (i.e. $ ssh -X user@host). This should work if everything is installed correctly on the server (apt-get usually does a good job of this, but I don't have a lot of experience with kwrite). EDIT, from the ssh man page: "X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring. For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Please refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information." And the relevant -Y option: "-Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls."
0
72
false
0
1
Cannot open .py file in Google Virtual Machine SSH Terminal
33,264,941
1
1
0
0
0
1
1.2
1
How do I list the names of the users who tweeted a given keyword, along with the count of tweets from each of them? I am using Python and tweepy. I used tweepy to dump the JSON results to a file via filter(track=["keyword"]), but I don't know how to list the users who tweeted the keyword.
0
python,api,twitter,tweepy,twitter-streaming-api
2015-10-22T05:22:00.000
0
33,273,885
Once your data has been loaded into JSON format, you can access the username by calling tweet['user']['screen_name'], where tweet is whatever variable you have assigned to hold the JSON object for that specific tweet.
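With the field access shown above, counting tweets per user is one Counter away; the sample tweets below are made up stand-ins for the parsed JSON objects:

```python
from collections import Counter

tweets = [  # hypothetical parsed tweet objects from the stream
    {"user": {"screen_name": "alice"}, "text": "keyword one"},
    {"user": {"screen_name": "bob"},   "text": "keyword two"},
    {"user": {"screen_name": "alice"}, "text": "keyword again"},
]

# Map each screen name to its tweet count.
counts = Counter(t["user"]["screen_name"] for t in tweets)
```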
0
578
true
0
1
how to list all users who tweeted a given keyword using twitter api and tweepy
33,290,201
1
1
0
2
3
0
1.2
0
What is the best way to set a .py file to run at one specific time in the future? Ideally, I'd like to do everything within a single script. Details: I often travel for business, so I built a program that automatically checks me in to my flights 24 hours prior to takeoff so I can board earlier. Currently I edit the script to input my confirmation number and then set up a cron job to run the script at the specified time. Is there a better way to do this? Options I know of: • the current method; • put code in the script to delay until time x: run the script immediately after booking the flight, let it stay open until the specified time, then check me in and exit. This would prevent me from shutting down my computer, though, and my machine is prone to overheating. Ideal method: input my confirmation number and flight date, run the script, have it set up whatever cron entry is needed automatically, and be done with it. Whatever method I use must not involve keeping a script open and running in the background.
0
python,windows,cron,crontab,job-scheduling
2015-10-22T12:13:00.000
1
33,280,783
cron is best for jobs that you want to repeat periodically. For one-time jobs, use at or batch.
0
50
true
0
1
Methods to schedule a task prior to runtime
33,283,524
1
1
0
1
3
0
0.197375
0
I use Debian + Nginx + Django + uWSGI. One of my functions calls fork() in view.py (the fork works well), then immediately returns render(request, ...). After the fork() the page loads for a long time, and then the browser shows the error "Web page not available". On the other hand, the error doesn't occur if I reload the page while it is loading (because then I don't launch the fork() again). The uWSGI documentation says: "uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy-apps option. This will instruct uWSGI to load the applications after each worker's fork(). Beware as there is an older option named lazy that is way more invasive and highly discouraged (it is still here only for backward compatibility)." I do not understand all of it, so I added the lazy-apps option to my uwsgi.yaml configuration: lazy-apps: 1. It does not help; am I doing something wrong? What should I do about this problem? P.S. Other options besides fork() do not fit my case. P.P.S. Sorry, I used Google Translate.
0
python,django-views,fork,uwsgi
2015-10-22T21:17:00.000
1
33,290,927
use lazy-apps = true instead of 1
0
2,374
false
1
1
How to enable the lazy-apps in uwsgi to use fork () in the code?
58,931,038
2
4
0
1
2
0
0.049958
0
I need to call a Python script from LabVIEW; does someone know the best method to do that? I've tried LabPython, but it is not supported on the newest versions of LabVIEW and I'm not able to use it on LabVIEW 2014. Basically, I'm looking for advice about Python integration. I know these two solutions: 1) LabPython: a good solution, but obsolete; 2) executing the Python script with the shell_execute block in LabVIEW. I don't think that's the best solution, because it is very hard to get the output of the Python script.
0
python,labview
2015-10-23T12:51:00.000
1
33,302,773
Why not use the System Exec.vi in Connectivity->Libraries and Executables menu? You can execute the script and get the output.
0
2,470
false
0
1
Python and Labview
33,306,025
2
4
0
0
2
0
0
0
I need to call a Python script from LabVIEW; does someone know the best method to do that? I've tried LabPython, but it is not supported on the newest versions of LabVIEW and I'm not able to use it on LabVIEW 2014. Basically, I'm looking for advice about Python integration. I know these two solutions: 1) LabPython: a good solution, but obsolete; 2) executing the Python script with the shell_execute block in LabVIEW. I don't think that's the best solution, because it is very hard to get the output of the Python script.
0
python,labview
2015-10-23T12:51:00.000
1
33,302,773
You can save the Python script as a large string constant (or load it from a text file) within the LabVIEW VI so that it can be manipulated within LabVIEW, then save it out to a text file and execute it from the command line in LabVIEW: python yourscript, then Enter.
0
2,470
false
0
1
Python and Labview
55,798,097
1
2
0
0
0
0
0
1
I am using Python v2.7 on a Windows 7 PC. I have my robot connected to the computer, and COM4 shows up in Device Manager. My plan is to send API commands to the robot through COM4. Here is the question: how can Python identify which serial port belongs to which device? So far I can list all the available ports in Python, but I need to talk specifically to COM4 to communicate with the robot. As a newbie, any help would be appreciated.
0
python
2015-10-26T20:28:00.000
0
33,354,977
ser = serial.Serial(3) # open COM 4 (pyserial port indices are zero-based; you can also pass the name 'COM4') print ser.name # check which port was really used; 'ser' is the serial object. This is the Python code to open a specific serial port.
0
64
false
0
1
Serial Port Identity in Python
33,374,589
1
2
0
1
0
0
0.099668
0
I am new to Rundeck, so I apologize if I ask a question that probably has an obvious answer I'm overlooking. I've installed Rundeck on my Windows PC, and I have a couple of Python scripts that I want to execute via Rundeck. The scripts run fine when executed manually. I created a job in Rundeck with a single step (the script-file option) to test a Python script. The job failed after six seconds. When I checked the log, it was because Rundeck was executing the file line by line rather than letting Python run it as an entire script. How do I fix this?
0
python,rundeck
2015-10-30T00:34:00.000
1
33,427,081
Okay, so I changed the step type to a command rather than a script file and it worked. I guess my understanding of what a script file is was off.
0
10,632
false
0
1
Rundeck :: Execute a python script
33,427,554
2
3
0
2
1
1
0.132549
0
I have Python 2.7 installed on my Windows machine. I've downloaded and installed Immunity Debugger, but when I try to import the immlib or immutils module in Python, it says there is no such module. How do I install these modules? Using pip, it says there is no such repository. Please help.
0
python,debugging,python-2.x
2015-10-30T15:11:00.000
0
33,439,189
Type "system settings" in the Cortana/Start search box, click "View advanced system settings", then in the bottom right corner of the Advanced tab click "Environment Variables". Create a new variable named PYTHONPATH and add these two directories to it: "C:\Program Files (x86)\Immunity Inc\Immunity Debugger" and "C:\Program Files (x86)\Immunity Inc\Immunity Debugger\Libs".
0
1,919
false
0
1
How to install immlib module in python?
40,102,123
2
3
0
1
1
1
1.2
0
I have Python 2.7 installed on my Windows machine. I've downloaded and installed Immunity Debugger, but when I try to import the immlib or immutils module in Python, it says there is no such module. How do I install these modules? Using pip, it says there is no such repository. Please help.
0
python,debugging,python-2.x
2015-10-30T15:11:00.000
0
33,439,189
It ships with Immunity Debugger: install Immunity Debugger and immlib.py will be there.
0
1,919
true
0
1
How to install immlib module in python?
34,432,268
1
1
0
1
0
0
0.197375
1
I have made a simple IRC bot for myself in Python which works great, but now some friends have asked if the bot can join their IRC channels too. Their channels are very active (Twitch chat, an IRC wrapper), which means a lot of messages. I want them to use my bot, but I have no idea how it will perform; this is the first bot I've made. Right now my code is like this: connect to the IRC server and channel; while true: receive data from the socket (4096, the max data to be received at once); do something with the data received. What changes should I make for it to perform better? 1. Should I have a sleep function in the loop? 2. Should I use threads? 3. Any general dos and don'ts? Thank you for reading my post.
0
python,performance,scaling,bots,irc
2015-10-31T01:13:00.000
0
33,447,032
Threading is one option, but it doesn't scale beyond a certain point (google the Python GIL limitation). Depending on how much scaling you want to do, beyond that you need to go multi-process (launch multiple instances). One pattern is to have a pool of worker threads that process a queue of things to do; there's a lot of overhead to creating and destroying threads in most languages.
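The worker-pool pattern mentioned can be sketched with the standard library: a fixed pool of threads is created once at startup, and incoming messages go onto a shared queue instead of spawning a thread each.

```python
import queue
import threading

messages = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        msg = messages.get()
        if msg is None:       # sentinel: shut this worker down
            break
        with lock:
            results.append(msg.upper())   # stand-in for real handling
        messages.task_done()

# A small fixed pool, created once, instead of a thread per message.
pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()

for line in ["privmsg one", "privmsg two", "privmsg three"]:
    messages.put(line)

for _ in pool:                # one sentinel per worker
    messages.put(None)
for t in pool:
    t.join()
```

The receive loop in the bot would put raw lines on the queue and go straight back to the socket, keeping the read path fast.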
0
186
false
0
1
IRC Bot, performance and scaling
33,447,076
1
2
0
1
5
1
0.099668
0
I am using Sublime Text Editor for Python development. If I create a new file by Ctrl + N, the default language setting for this file is Plain Text, so how to change the default language setting for the new file to be Python ? Another question :If I write some code in the new file and have not save it to disk, it is impossible to run it and get the running result, is there a solution to remove this restriction so that we can run code in the new file without saving it to disk first?
0
python,sublimetext2,sublimetext3,sublimetext,sublime-text-plugin
2015-10-31T09:20:00.000
0
33,449,969
The second question, about running without saving to disk: 1) press Ctrl+Shift+P, 2) type "Install Package" and run it, 3) search for "Auto Save" and install it, 4) go to Preferences > Package Settings > Auto-save > Settings Default and copy all the settings, 5) go to Preferences > Package Settings > Auto-save > Settings User, paste them, and change the first setting from "auto_save_on_modified": false to "auto_save_on_modified": true. Good luck!
0
2,066
false
0
1
Sublime Text: run code in a new file without saving to disk and the default language setting for a new file
55,436,658
1
1
0
2
1
0
0.379949
0
I have a program that runs when the Pi is booted. The print statements are not displaying on additional terminal sessions. I can only get the print statements when I kill the auto-booted process and restart the program. Is there a method to broadcast print messages to all users - like the message displayed then typing 'Halt'? Thx
0
python,raspberry-pi
2015-10-31T12:20:00.000
0
33,451,504
You are looking for the wall BSD utility. NAME: wall -- write a message to users. SYNOPSIS: wall [-g group] [file]. DESCRIPTION: The wall utility displays the contents of file or, by default, its standard input, on the terminals of all currently logged in users.
0
71
false
0
1
Raspberry Pi & Python 2.7 - trying to print to all users
33,451,582
1
1
0
1
0
0
0.197375
0
I am trying to utilise mailer.py script to send mails after a SVN Commit. In mailer.py svn module has been used. I think the svn module is present in /opt/CollabNet_Subversion/lib-146/svn-python/svn and I tried to append it to the sys path using sys.path.append. For once it is getting appended and when I do sys.path I can see the appended path but after that the path is removed and I am getting import error: No Module named SVN. Am I missing something?
0
python,linux,svn
2015-11-03T07:02:00.000
1
33,493,178
Like setting an environment variable in bash, the change disappears when you close the session. Just call sys.path.append at the start of your script; it adds the path at runtime, every time the script runs.
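A minimal sketch of doing this at the top of the script (the CollabNet directory below is taken from the question; `add_search_path` is just an illustrative helper):

```python
import sys

# Location of the CollabNet svn-python bindings, as given in the question.
SVN_PYTHON_DIR = "/opt/CollabNet_Subversion/lib-146/svn-python"

def add_search_path(path):
    """Prepend a directory to the import search path for this process only.

    The change lasts for the lifetime of the interpreter; it is not
    persisted anywhere, which is why it seems to disappear between runs."""
    if path not in sys.path:
        sys.path.insert(0, path)

add_search_path(SVN_PYTHON_DIR)
# import svn  # would now be found, provided the directory actually exists
```

Because the path is added inside the script, no root access and no shell configuration are needed.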
0
192
false
0
1
Is it possible to add the module path to the python environment variable in linux with out root access?
33,493,376
1
1
0
8
9
0
1.2
1
OK, I am trying to use client certificates to authenticate a python client to an Nginx server. Here is what I tried so far: Created a local CA openssl genrsa -des3 -out ca.key 4096 openssl req -new -x509 -days 365 -key ca.key -out ca.crt Created server key and certificate openssl genrsa -des3 -out server.key 1024 openssl rsa -in server.key -out server.key openssl req -new -key server.key -out server.csr openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt Used similar procedure to create a client key and certificate openssl genrsa -des3 -out client.key 1024 openssl rsa -in client.key -out client.key openssl req -new -key client.key -out client.csr openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt Add these lines to my nginx config server { listen 443; ssl on; server_name dev.lightcloud.com; keepalive_timeout 70; access_log /usr/local/var/log/nginx/lightcloud.access.log; error_log /usr/local/var/log/nginx/lightcloud.error.log; ssl_certificate /Users/wombat/Lightcloud-Web/ssl/server.crt; ssl_certificate_key /Users/wombat/Lightcloud-Web/ssl/server.key; ssl_client_certificate /Users/wombat/Lightcloud-Web/ssl/ca.crt; ssl_verify_client on; location / { uwsgi_pass unix:///tmp/uwsgi.socket; include uwsgi_params; } } created a PEM client file cat client.crt client.key ca.crt > client.pem created a test python script import ssl import http.client context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) context.load_verify_locations("ca.crt") context.load_cert_chain("client.pem") conn = http.client.HTTPSConnection("localhost", context=context) conn.set_debuglevel(3) conn.putrequest('GET', '/') conn.endheaders() response = conn.getresponse() print(response.read()) And now I get 400 The SSL certificate error from the server. What am I doing wrong?
0
python,ssl,nginx
2015-11-03T16:50:00.000
0
33,504,746
It seems that my problem was that I did not create the CA properly and wasn't signing keys the right way. A CA cert needs to be signed, and if you act as a top-level CA you self-sign your CA cert. openssl req -new -newkey rsa:2048 -keyout ca.key -out ca.pem openssl ca -create_serial -out cacert.pem -days 365 -keyfile ca.key -selfsign -infiles ca.pem Then you use the ca command to sign requests: openssl genrsa -des3 -out server.key 1024 openssl req -new -key server.key -out server.csr openssl ca -out server.pem -infiles server.csr
0
16,933
true
0
1
Doing SSL client authentication is python
33,526,221
1
1
0
0
1
0
0
1
I'm having trouble importing tweepy. I've looked through so many previous questions and still can't find a correct solution. I think it has something to do with how tweepy is being downloaded when I install but I'm not sure. I get an import error saying that "tweepy is not a package". I have tweepy library connected to the interpreter and all that but, it is saved as a compressed EGG file instead of a file folder like the rest of my packages. I think that has something to do with it but I'm not too sure. Also, tweepy works in my command line but not in eclipse.
0
python,eclipse,tweepy
2015-11-03T20:35:00.000
0
33,508,572
If you recently installed the package maybe just reconfigure your pydev (Window->Preferences->PyDev->Interpreters->Python Interpreter: Quick Auto Configure).
0
321
false
0
1
Python import error in eclipse (Package works in command line but not eclipse)
33,510,764
1
1
0
0
0
0
0
0
I've got several MR-3020's that I have flashed with OpenWRT and mounted a 16GB ext4 USB drive on it. Upon boot, a daemon shell script is started which does two things: 1) It constantly looks to see if my main program is running and if not starts up the python script 2) It compares the lasts heartbeat timestamp generated by my main program and if it is older than 10 minutes in the past kills the python process. #1 is then supposed to restart it. Once running, my main script goes into monitor mode and collects packet information. It periodically stops sniffing, connects to the internet and uploads the data to my server, saves the heartbeat timestamp and then goes back into monitor mode. This will run for a couple hours, days, or even a few weeks but always seems to die at some point. I've been having this issue for nearly 6 months (not exclusively) I've run out of ideas. I've got files for error, info and debug level logging on pretty much every line in the python script. The amount of memory used by the python process seems to hold steady. All network calls are encapsulated in try/catch statements. The daemon writes to logread. Even with all that logging, I can't seem to track down what the issue might be. There doesn't seem to be any endless loops entered into, none of the errors (usually HTTP request when not connected to internet yet) are ever the final log record - the device just seems to freeze up randomly. Any advice on how to further track this down?
0
python,linux
2015-11-05T17:25:00.000
1
33,550,976
It could be related to many things. Things I had to fix on similar setups: check the router's external power supply, which needs to be stable. USB drives can draw more current than the port can handle; a simple fix is an externally powered USB hub, or adding capacitors (maybe 1000uF) in parallel across the power line at the USB port where the drive is connected.
0
61
false
0
1
Crashing MR-3020
36,415,642
1
1
0
1
2
0
1.2
1
I have access to a S3 bucket. I do not own the bucket. I need to check if new files were added to the bucket, to monitor it. I saw that buckets can fire events and that it is possible to make use of Amazon's Lambda to monitor and respond to these events. However, I cannot modify the bucket's settings to allow this. My first idea was to sift through all the files and get the latest one. However, there are a lot of files in that bucket and this approach proved highly inefficient. Concrete questions: Is there a way to efficiently get the newest file in a bucket? Is there a way to monitor uploads to a bucket using boto? Less concrete question: How would you approach this problem? Say you had to get the newest file in a bucket and print it's name, how would you do it? Thanks!
0
python,api,amazon-web-services,amazon-s3,boto
2015-11-05T17:34:00.000
0
33,551,143
You are correct that AWS Lambda can be triggered when objects are added to, or deleted from, an Amazon S3 bucket. It is also possible to send a message to Amazon SNS and Amazon SQS. These settings needs to be configured by somebody who has the necessary permissions on the bucket. If you have no such permissions, but you have the ability to call GetBucket(), then you can retrieve a list of objects in the bucket. This returns up to 1000 objects per API call. There is no API call available to "get the newest files". There is no raw code to "monitor" uploads to a bucket. You would need to write code that lists the content of a bucket and then identifies new objects. How would I approach this problem? I'd ask the owner of the bucket to add some functionality to trigger Lambda/SNS/SQS, or to provide a feed of files. If this wasn't possible, I'd write my own code that scans the entire bucket and have it execute on some regular schedule.
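The "scan the bucket and pick the newest" approach can be sketched as follows. Note the assumptions: this uses boto3 (the question names the older boto library, so treat the library choice as mine), and `newest_object` / `newest_key_in_bucket` are illustrative names. The selection logic is kept as a pure function so it can be sanity-checked without AWS credentials.

```python
def newest_object(objects):
    """Return the object dict with the latest LastModified, or None.

    `objects` is a list of dicts shaped like boto3's list_objects_v2
    'Contents' entries: {'Key': ..., 'LastModified': ...}."""
    if not objects:
        return None
    return max(objects, key=lambda o: o["LastModified"])

def newest_key_in_bucket(bucket_name):
    """Scan a whole bucket (1000 keys per page) and return the newest key.

    Requires boto3 and valid AWS credentials; a sketch of the
    list-everything-and-pick-the-max approach described above."""
    import boto3
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    newest = None
    for page in paginator.paginate(Bucket=bucket_name):
        candidate = newest_object(page.get("Contents", []))
        if candidate and (newest is None
                          or candidate["LastModified"] > newest["LastModified"]):
            newest = candidate
    return newest["Key"] if newest else None
```

As the answer says, this is O(number of objects) per poll, which is why event notifications configured by the bucket owner are the better solution.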
0
4,530
true
1
1
How to monitor a AWS S3 bucket with python using boto?
33,590,521
1
1
0
3
3
1
1.2
0
I like Dronekit for controlling my copter, and I like Mission Planner for monitoring my copter during a flight. I'd really like to have both sets of functionality. Is there a way to connect Dronekit and Mission Planner to the Pixhawk at the same time? I'm using the 3DR radio set to connect from a laptop on the ground. If that's not possible, is there a way to relay the connection through Dronekit to Mission Planner?
0
python,dronekit-python,dronekit
2015-11-09T16:43:00.000
0
33,613,948
Use mavproxy to make the initial connection, and then fork it to DK and MP. mavproxy.py --master=/dev/ttyUSB0 --out=127.0.0.1:14550 --out=127.0.0.1:14551 Connect mission planner to UDP port 14550. Connect DK to port 14551.
0
1,931
true
0
1
Connect Dronekit and Mission Planner simultaneously to Pixhawk over 3DR radio
33,615,308
1
1
0
1
1
0
1.2
0
I added a new template file to my project. Now I don't know how to make the languages update against the new template file. I've read that 2.5 has update_against_templates but it's not in 2.7. How do I update my languages?
0
python,django,pootle,translate-toolkit
2015-11-10T12:36:00.000
0
33,630,137
Template updates now happen outside of Pootle. The old update_against_template had performance problems and could get Pootle into a bad state. To achieve the same functionality as update_against_templates do the following. Assuming your project is myproject and you are updating language af: sync_store --project=myproject --language=af pot2po -t af template af update_store --project=myproject --language=af You can automate that in a script to iterate through all languages. Use list_languages --project=myproject to get a list of all the active languages for that project.
0
75
true
1
1
what is update_against_templates in pootle 2.7?
38,104,213
1
2
0
0
3
0
0
0
I want to check if a python script is running with admin permissions on windows, without using the ctypes module. It is important for me not to use ctypes for some reasons. I have looked with no luck. Thanks.
0
python
2015-11-11T14:41:00.000
1
33,652,909
If you install the pywin32 package, the function shell.IsUserAnAdmin() from win32com.shell can be used to see if you are a member of the administrators group.
0
97
false
0
1
How to know if a python script is running with admin permissions in windows?
33,656,984
1
1
0
0
0
0
0
0
I am unable to connect to my Pixhawk drone with 3DR radios and Dronekit 2 and Python code. I am able to connect with a USB cable attached to Pixhawk. I suspect the problem is the baud rate with the radios are too high. There seems to be no way to change the baud rate with the radios in the connect command. Please advise. Windows 8.1 Thank you!
0
dronekit-python
2015-11-11T18:16:00.000
0
33,657,161
Found out from another website that there was a bug in the releases prior to Dronekit 2.0.0.rc9: they forgot to put in any way to adjust the baud rate! The latest release, Dronekit 2.0.0.rc10, has the fix.
0
204
false
1
1
No way to adjust baud rate in connect() with Dronekit 2 and 3DR Radios and Pixhawk?
33,693,815
1
1
0
1
1
1
0.197375
0
I run my Python scripts with the python -m unittest discover command. I'd like to pass the current time yyyymmddhhmmss as a parameter into the execution command. How can I do that?
0
python,python-2.7,parameters,arguments,python-unittest
2015-11-12T07:51:00.000
0
33,666,572
If you want to sample the current time on the system running the Python code, you can use datetime.datetime.now() (local time) or datetime.datetime.utcnow() (UTC). Then you can format it as a string using .strftime("%Y%m%d%H%M%S").
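Put together, that looks like the sketch below (`timestamp_now` is an illustrative name; the shell line in the comment is one possible way to feed the value in from outside, not the only one):

```python
import datetime
import re

def timestamp_now(utc=False):
    """Current time formatted as yyyymmddhhmmss."""
    now = datetime.datetime.utcnow() if utc else datetime.datetime.now()
    return now.strftime("%Y%m%d%H%M%S")

stamp = timestamp_now()
print(stamp)  # 14 digits, e.g. something like 20151112075100

# Alternatively, generate the stamp in the shell and pass it via the
# environment when launching the tests (illustrative):
#   TEST_RUN_ID=$(date +%Y%m%d%H%M%S) python -m unittest discover
```

Inside the tests, the value could then be read back with os.environ.get("TEST_RUN_ID").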
0
1,036
false
0
1
Passing currenttime as parameter into Python command line
33,666,696
1
2
0
1
0
0
1.2
1
I have 10+ test cases at the moment and am planning to create several more. That being said, is there a way to modify the URL variable once and have that change the variable in all my other scripts? I have this in all of my test scripts: class TestCase1(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.implicitly_wait(30) self.base_url = "http://URL" self.verificationErrors = [] self.accept_next_alert = True I want to be able to modify self.base_url = "http://URL", but I don't want to have to do that 10+ times.
0
python,unit-testing,url,selenium
2015-11-12T18:09:00.000
0
33,678,351
There are 2 good approaches to doing this: 1) using a Configuration Manager, a singleton object that stores all your settings, or 2) using a base test, a single base test class that all your tests inherit from. My preference is towards a Configuration Manager. Within that configuration manager, you can put your logic for grabbing the base URL from configuration files, system environment variables, command line params, etc.
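A minimal sketch of the Configuration Manager idea (the class and its keys are illustrative, not a library API; the URL placeholder comes from the question):

```python
class Config(object):
    """A tiny configuration-manager singleton.

    All test cases read shared settings from the single instance,
    so the base URL is changed in exactly one place."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(Config, cls).__new__(cls)
            # Defaults; real code might load these from a file,
            # os.environ, or command line arguments instead.
            cls._instance.settings = {"base_url": "http://URL"}
        return cls._instance

    def get(self, key):
        return self.settings[key]

    def set(self, key, value):
        self.settings[key] = value

# Every test's setUp would then do:
#   self.base_url = Config().get("base_url")
```

Because every `Config()` call returns the same object, setting `base_url` once (say, in a test runner bootstrap) updates it for all 10+ test cases.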
0
1,381
true
0
1
Python Selenium Having to Change URL Variable On All of My Test Cases
36,340,048
2
2
0
1
0
0
0.099668
0
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
1
mysql,python-2.7,phpmyadmin,raspberry-pi2
2015-11-14T01:29:00.000
0
33,704,183
Well, from what I understand, you'd like to save the sensor data arriving in your Raspberry Pi to a database and access it from another machine. What I suggest is, install a mysql db instance and phpmyadmin in your Raspberry Pi and you can access phpmyadmin from another machine in the network by using the RPi's ip address. Hope this is what you wanted to do.
0
1,407
false
0
1
Push sensor data from raspberry pi to local host phpmyadmin database
33,705,227
2
2
0
0
0
0
0
0
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
1
mysql,python-2.7,phpmyadmin,raspberry-pi2
2015-11-14T01:29:00.000
0
33,704,183
Sure, as long as they're on the same network and you have granted proper permission, all you have to do is use the proper hostname or IP address of the MySQL server (what you call the local machine). In whatever utility or custom script you have that writes data, use the networked IP address instead of 127.0.0.1 or localhost for the database host. Depending on how you've installed MySQL, you may not have a user that listens for non-local connections, incoming MySQL connections may be blocked at the firewall, or your MySQL server may not listen for incoming network connections. You've asked about using phpMyAdmin from the Pi, accessing your other computer, which doesn't seem to make much sense to me (I'd think you'd want to run phpMyAdmin on your desktop computer, not a Pi), but if you've got a GUI and compatible web browser running on the Pi then you'd just have phpMyAdmin and the webserver run on the same desktop computer that has MySQL and access that hostname and folder from the Pi (such as http://192.0.2.15/phpmyadmin). If you're planning to make the MySQL server itself public-facing, you should really re-think that decision unless you know why that's a bad idea and how to properly secure it (but that may not be a concern; for instance I have one at home that is available on the local network, but my router blocks any incoming connections from external sources).
0
1,407
false
0
1
Push sensor data from raspberry pi to local host phpmyadmin database
33,716,584
1
3
0
0
1
1
0
0
I am on a Raspberry Pi, and by default the following symbolic links were created in /usr/bin: /usr/bin/python -> /usr/bin/python2.7 /usr/bin/python2 -> /usr/bin/python2.7 /usr/bin/python3 -> /usr/bin/python3.2 Most of my work is done in Python 3, so I decided to recreate /usr/bin/python to point to /usr/bin/python3.2 instead. Does this have any negative consequences when I install packages or run pip? Are there utilities that depend on the alias python in the search path and end up doing the wrong things?
0
python,python-3.x,raspberry-pi,raspbian
2015-11-14T08:31:00.000
1
33,706,579
Yes, there are many applications and scripts that is written for python 2, and they usually come pre-installed in your linux distribution. Those applications expect python binary to be version 2. And they will most likely break if you force them to run on python 3.
0
392
false
0
1
Is the symbolic link python important?
33,706,723
1
1
0
1
2
0
0.197375
0
How can I configure my Django server to run tests from tests.py when starting the server with python manage.py runserver? Right now, I have to run tests through python manage.py test articles. (Note: I am using Django 1.8)
0
python,django
2015-11-14T21:18:00.000
0
33,713,481
Answer from @limelights: create a bash alias or run them in sequence. I've adapted that answer to this line of code (for bash, all on one line): alias runserver="sudo python ~/testsite/manage.py test articles; sudo python ~/testsite/manage.py runserver 192.168.1.245:90" Using runserver runs the test suite and then starts the server. An added perk is that I can run it from any location without having to go into the ~/testsite directory.
0
189
false
1
1
django - Run tests upon starting server
33,714,252
1
1
0
0
0
0
1.2
0
I have a custom library that is in a different location from the test suite. Meaning the test suite is in "C:/Robot/Test/test_suite.txt" and my library is in "C:/Robot/Lib/library.py". The library has 2 different classes and I need to import both of them. I have tried to import it by "Library | ../Lib/library.py" but I got an error saying that the library contains no keywords. I also tried to import it by "Library | ../Lib/library.Class1" but got a syntax error. Is there any way to do it without changing the PYTHONPATH? Thank you!
0
python,robotframework
2015-11-15T16:17:00.000
0
33,721,893
You have two choices for importing: 1) importing a library via PYTHONPATH, or 2) importing a library based on the file path to the library. In the first case you can import each class separately. In the second case, it's not possible to import multiple classes from a single file: if you give a path to a Python file, that file must contain keywords. It can also include classes, but Robot won't know about those classes.
0
1,092
true
1
1
Robot Framework - Import library with 2 classes from different location
33,723,660
3
5
0
6
0
0
1.2
1
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
0
python,git,github
2015-11-16T06:16:00.000
0
33,729,454
Split them out into a configuration file that you don't include, or replace them with placeholders and don't commit the actual values, using git add -p. The first option is better. The configuration file could be a basic .py file, credentials.py, in which you define the needed private credentials in whatever structure you consider best (a dictionary would probably be the most suitable). You can use the sensitive information by importing the structure in this file and accessing its contents; other users of your code should be advised to do the same. The hiding of this content is then done with your .gitignore file: simply add the filename to exclude it from being uploaded to your repository.
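A minimal sketch of the load-from-an-ignored-file approach. Everything here is illustrative: the filename, the JSON format (a plain Python credentials.py works just as well), and the key names are assumptions, not part of the Twitter API.

```python
import json
import os

CREDENTIALS_FILE = "credentials.json"  # add this filename to .gitignore

def load_credentials(path=CREDENTIALS_FILE):
    """Load secret keys from a local file that is never committed.

    The file is a plain JSON object, e.g.:
        {"consumer_key": "...", "consumer_secret": "..."}"""
    if not os.path.exists(path):
        raise RuntimeError(
            "Missing %s; copy credentials.example.json and fill in your keys"
            % path)
    with open(path) as f:
        return json.load(f)
```

A common companion trick is to commit a credentials.example.json with dummy values, so other users know exactly what shape the real file should have.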
0
1,206
true
0
1
GitHub public repo with sensitive information?
33,729,510
3
5
0
1
0
0
0.039979
1
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
0
python,git,github
2015-11-16T06:16:00.000
0
33,729,454
The twitter API keys are usually held in a JSON file. So when your uploading your repository you can modify the .gitignore file to hide the .json files. What this does is it will not upload those files to the git repository. Your other option is obviously going for private repositories which will not be the solution in this case.
0
1,206
false
0
1
GitHub public repo with sensitive information?
33,729,574
3
5
0
4
0
0
0.158649
1
I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo?
0
python,git,github
2015-11-16T06:16:00.000
0
33,729,454
No. Instead, load the secret information from a file and add that file to .gitignore so that it will not be a part of the repository.
0
1,206
false
0
1
GitHub public repo with sensitive information?
33,729,502
1
1
0
0
0
0
1.2
0
Example: let's say I have a Python script test.py. When I run python test.py, pylint should execute first; if pylint executes successfully it should execute test.py, else it should show the pylint errors.
0
python
2015-11-17T08:57:00.000
0
33,752,729
@falsetru: Thank you for the answer. Another solution is to write a wrapper script that runs both commands: it runs pylint test.py, and if the code rating (from pylint's output) is greater than some threshold x (say x = 8) it runs python test.py, otherwise it shows the pylint errors. So instead of python test.py we run my_script test.py, where my_script is the script containing the logic above.
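The wrapper idea can be sketched in Python itself. The score-parsing is split into a pure function so it can be checked without pylint installed; `run_if_clean` is an illustrative name and assumes pylint's usual "rated at X/10" report line.

```python
import re
import subprocess
import sys

def parse_pylint_score(output):
    """Extract the 'rated at X/10' score from pylint's report text.

    Returns None if no score line is present."""
    m = re.search(r"rated at (-?[\d.]+)/10", output)
    return float(m.group(1)) if m else None

def run_if_clean(script, threshold=8.0):
    """Run pylint on `script`; execute the script only if the score passes.

    Requires pylint to be installed; the threshold of 8 matches the
    example above and is arbitrary."""
    proc = subprocess.run(["pylint", script],
                          capture_output=True, text=True)
    score = parse_pylint_score(proc.stdout)
    if score is not None and score >= threshold:
        subprocess.run([sys.executable, script])
    else:
        print(proc.stdout)  # show the pylint errors instead
```

Invoked as `python my_script.py test.py` with a small argv shim, this gives the "lint gate" behaviour the question asks for.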
0
58
true
0
1
Can we run pylint while executing python script, such that when the pylint passes the code will execute else it will show pylint errors?
33,778,025
1
1
0
1
1
0
0.197375
0
I'm curious what a good automated workflow could look like for the process of pulling issue/touched-file lists into a Confluence page. I describe my current idea here: Get all issues matching my request from JIRA using REST (DONE) Get all touched files related to the matching issues using the Fisheye REST API Create an .adoc file with the content Render it to a Confluence page using asciidoctor-confluence I'm implementing this in Python (using requests etc.) and I wonder how I could produce proper .adoc for the Ruby-based Asciidoctor. I'm planning to use Asciidoctor because it has an option to render directly to Confluence using asciidoctor-confluence. So, is there anybody who can kindly elaborate on my idea?
0
python,automation,jira,confluence,asciidoctor
2015-11-17T12:06:00.000
0
33,756,512
I did something similar - getting info from Jira and updating confluence info. I did it in a bash script that ran on Jenkins. The script: Got Jira info using the Jira REST API Parsed the JSON from Jira using jq (wonderful tool) Created/updated the confluence page using the Confluence REST API I have not used python but the combination of bash/REST/jq was very simple. Running the script from Jenkins allowed me to run this periodically, so confluence is updated automatically every 2 weeks with the new info from Jira.
0
649
false
1
1
Programmatically create confluence content from jira and fisheye
33,761,564
1
1
0
6
4
1
1.2
0
I want to create a Python package that has multiple subpackages. Each of those subpackages contain files that import the same specific module that is quite large in size. So as an example, file A.py from subpackage A will import a module that is supposedly named LargeSizedModule and file B.py from subpackage B will also import LargeSizedModule. Similarly with C.py from subpackage C. Does anyone know how I can efficiently import the same exact module across multiple subpackages? I would like to reduce the 'loading' time that comes from those duplicate imports.
0
python
2015-11-17T15:47:00.000
0
33,761,192
By doing import LargeSizedModule everywhere you need it. Python will only load it once.
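This caching behaviour is easy to demonstrate: the first import pays the full load cost, and every later import (in any subpackage) just fetches the cached module object from sys.modules. Here `json` stands in for LargeSizedModule:

```python
import sys

# First import pays the full load cost...
import json                 # stand-in for LargeSizedModule

# ...subsequent imports, anywhere in the package tree, just fetch
# the already-loaded module object from the sys.modules cache.
import json as json_again

assert json is json_again           # the very same object
assert "json" in sys.modules        # cached in the module registry
```

So duplicating `import LargeSizedModule` across A.py, B.py, and C.py costs essentially nothing after the first import; there is no extra loading time to reduce.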
0
1,283
true
0
1
How to efficiently import the same module into multiple sub-packages in python
33,761,211
1
1
0
2
0
0
0.379949
0
Using win32com.client in python 3.x, I'm able to access email stored in Outlook 2013. I'm able to access all of the information I need from the emails, except for the email address of the recipients of the email (to, cc, and bcc). I'm able to access the names of the recipients, but not their email addresses. For example, I can see that an email was sent to "John Smith", but not that the email was sent to "[email protected]". Is there a way to access this information?
0
python,outlook
2015-11-17T21:56:00.000
0
33,767,792
Instead of reading the MailItem.To/CC/BCC properties, loop through all items in the MailItem.Recipients collection and read the Recipient.Address property. You might also need Recipient.Type property (olTo, olCC, olBCC) and Recipient.Name.
0
3,567
false
0
1
Accessing email recipient addresses from outlook using python
33,768,124
1
1
0
0
1
1
0
0
I followed Zed Shaw's instructions in his book, "Learn Python the Hard Way, 3rd Edition". In my Windows PowerShell, nosetests does nothing; I just see the cursor blinking until the end of the world. Why is that? How can I solve this?
0
python-2.7,powershell,nosetests
2015-11-19T10:48:00.000
0
33,801,732
According to the author, the cause of the issue was ... trivial : Darn I'm so silly hahaha, I ran nosetests on the wrong directory. Thank you for your answer :) It takes time to run, my Avast will do scan, maybe 15-20 seconds. – mdominic 1 hour ago
0
456
false
0
1
Nosetests on Windows Powershell do nothing?
33,804,412
1
1
0
1
1
0
1.2
0
I have deployed a python-flask web app on the worker tier of AWS. I send some data into the associated SQS queue and the daemon forwards the request data in a POST request to my web app. The web app takes anywhere between 5 mins to 6 hours to process the request depending upon the size of posted data. I have also configured the worker app into an auto scaling group to scale based on CPU utilization metrics. When I send 2 messages to the queue in quick succession, both messages start showing up as in-flight. I was hoping that the daemon will forward the first message to the web app and then wait for it to be processed before pulling the second message out. In the meantime, auto scaling will spin up another instance (which it is but since the second message is also in-flight, it is not able to pull that message) and the new instance will pull and process the second message. Is there a way of achieving this?
0
python,amazon-web-services,flask,amazon-sqs,worker
2015-11-21T17:29:00.000
1
33,846,425
Set the HTTP Connection setting under Worker Configuration to 1. This should prevent each server from receiving more than 1 message at a time. You might want to look into changing your autoscaling configuration to monitor your SQS queue depth or some other SQS metric instead of worker CPU utilization.
0
179
true
1
1
AWS worker daemon locks multiple messages even before the first message is processed
33,846,596
1
1
0
2
4
0
1.2
0
I'm working on a Python project and I'm looking for a nice module to do the following : Draw some bezier curves, on an existing JPEG image, from a given list of points. Then use this image to present it in a PDF. I have to be able to draw shapes, fill them, and set the opacity of the fill color. I also have to be able to draw images (from existing files) inside the JPEG. A module that allows me to use drawing paths would be great. I started using Wand, but I'm not satisfied with the results (quality loss on the areas of the image containing the drawn curves, and filling a path doesn't work as I expected (It draws horizontal lines but doesn't entirely fill the shape), or maybe I didn't use it correctly ?). I think I'm going to use ReportLab for the PDF part. ReportLab can be used to draw bezier curves, but I would prefer generating the images with the curves before including them inside the PDF. There are a lot of modules for drawing using Python out there, but it's not easy to determine which module is the best for what I want. I just started looking into pyCairo, but if you know of any other module that can achieve what I want, please feel free to share.
0
python,drawing,reportlab,pycairo,wand
2015-11-23T10:26:00.000
0
33,868,684
Pygame has some decent drawing capabilities; I'd suggest looking at that and playing with the pygame.draw module. PyCairo is more featureful however, and seems to be the more popular choice. Python Imaging Library (PIL) also might be worth looking into.
0
1,376
true
0
1
Good python module to draw on images
33,868,968
1
1
0
0
0
0
0
0
Is it possible to create a hyperlink/button that calls a bash/python script on the user/local machine. I did search on the topic but there is a lot of discussion about the opening a port to a server (even the local port) but I don't want to open a port but execute everything locally. Is this even possible? Thanks
0
python,html,bash,jinja2
2015-11-23T19:05:00.000
1
33,878,648
No. That is not possible. Nor desirable, due to the security implications.
0
41
false
0
1
create a hyperlink that executes a bash/python on the user machine
33,879,045
1
1
0
0
0
0
0
1
I'm trying to do some home/life automation scripting and I'd like to be able to get the location of my Android phone through some API. This would be useful to do things such as turn on my home security camera, or to route my home calls to my phone if I'm away. This would preferably a RESTful one, or an API with good Python interop. However, I'm not averse to using any tool to get the job done. I considered checking my router to see if my phone was connected, which will work for some things, but it would hinder me in implementing other things. I know I could probably write an Android app that would phone home to do this, but I wanted to see if there were any alternatives first. My Google-Fu came up short on this one (if it exists). Thanks in advance!
0
android,python
2015-11-23T21:37:00.000
0
33,881,172
A hint from mobile development: what does Google Chrome use? Google uses three approaches, and the third approach is your option. Either find a site/service that does (or allows) triangulation using cell tower maps, or serve a mobile web page that gets the location via Google's geolocation services, and then hook that into your Python script on the server side.
0
58
false
0
1
Android Phone Location API
33,881,298
1
1
0
0
0
1
0
0
I am trying to install tweepy for python 3.4 on my raspberry pi, but when I run pip install tweepy, tweepy installs, but only for python 2.7. What is the command/procedure for installing tweepy for python 3.4? Any help will be appreciated. Thanks.
0
linux,python-3.x,raspberry-pi,pip
2015-11-24T22:38:00.000
0
33,905,229
Try pip3.4 install tweepy. pip provides versioned commands (pip{version}) so you can install packages for a specific Python version.
0
360
false
0
1
How to install python34 module tweepy onto raspberry pi with pip
33,905,290
2
3
0
7
2
0
1.2
0
I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001. I'd like to stop it without having to force quit Terminal. What's the keystrokes required to stop it?
0
python,simplehttpserver
2015-11-25T07:11:00.000
1
33,910,489
CTRL + C is usually the right way to kill the process and leave your terminal open.
0
10,625
true
0
1
How do you stop a python SimpleHTTPServer in Terminal?
33,910,508
2
3
0
3
2
0
0.197375
0
I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001. I'd like to stop it without having to force quit Terminal. What's the keystrokes required to stop it?
0
python,simplehttpserver
2015-11-25T07:11:00.000
1
33,910,489
Use CTRL+C to stop the server and return to your shell prompt.
0
10,625
false
0
1
How do you stop a python SimpleHTTPServer in Terminal?
33,910,517
1
1
0
1
0
0
0.197375
0
I'm using Shed Skin to convert a python file (that is dependent on numpy) to a C++ file. When executing through the command prompt I get the error. Any ideas on what might be the problem?
0
python,c++,numpy,module,shedskin
2015-11-26T04:15:00.000
0
33,930,490
I have found an answer, from the Shed Skin documentation ("Library Limitations"): programs to be compiled with Shed Skin cannot freely use the Python standard library; only about 17 common modules are currently supported. Note that Shed Skin can be used to build an extension module, so the main program can use arbitrary modules (and of course all Python features!); see "Compiling an Extension Module". In general, programs can only import functionality that is defined in the Shed Skin lib/ directory. The following modules are largely supported at the moment: bisect, collections, ConfigParser, copy, datetime, fnmatch, getopt, glob, math, os (some functionality missing under Windows), os.path, random, re, socket, string, sys, time.
0
611
false
0
1
Shedskin can't locate module numpy
35,816,211
1
1
0
0
0
1
1.2
0
I'm using Tornado as a coroutine engine for a periodic process, where the repeating coroutine calls ioloop.call_later() on itself at the end of each execution. I'm now trying to drive this with unit tests (using Tornado's gen_test), where I'm mocking the ioloop's time with a local variable t: DUT.ioloop.time = mock.Mock(side_effect= lambda: t) (DUT <==> Device Under Test). Then in the test, I manually increment t and yield gen.moment to kick the ioloop. The idea is to trigger the repeating coroutine after various intervals so I can verify its behaviour. But the coroutine doesn't always trigger, or perhaps it yields back to the testing code before completing execution, causing failures. I think I should be using stop() and wait() to synchronise the test code, but I can't see concretely how to use them in this situation. And how does this whole testing strategy work if the DUT runs in its own ioloop?
0
python,unit-testing,testing,asynchronous,tornado
2015-11-26T13:56:00.000
0
33,940,518
In general, using yield gen.moment to trigger specific events is dicey; there are no guarantees about how many "moments" you must wait, or in what order the triggered events occur. It's better to make sure that the function being tested has some effect that can be asynchronously waited for (if it doesn't have such an effect naturally, you can use a tornado.locks.Condition). There are also subtleties to patching IOLoop.time. I think it will work with the default Tornado IOLoops (where it is possible without the use of mock: pass a time_func argument when constructing the loop), but it won't have the desired effect with e.g. AsyncIOLoop. I don't think you want to use AsyncTestCase.stop and .wait, but it's not clear how your test is set up.
0
196
true
0
1
Unit-testing a periodic coroutine with mock time
33,975,984
1
1
0
1
0
0
0.197375
1
I have these values to connect to an IMAP sever: hostname username password I want to auto-detect the details with Python: port ssl or starttls If the port is one of the well-known port numbers there should be not too many possible combinations. Trying all would be a solution, but maybe there is a better way?
0
python,imap
2015-11-26T20:42:00.000
0
33,946,694
You pretty much have to brute force it, but there's really only three setups, and it only requires two connects: First connect on port 993 with regular SSL/TLS. If this works: 993/TLS. If this fails: Connect to port 143, and check if CAPABILITY STARTTLS exists. If it does: try StartTLS. If this works: 143/STARTTLS. Else: See if you can log in on port 143. if this fails, no good configuration. This wouldn't be secure anyway, so should be discouraged. SMTP is a bit more complex: You can try 587 with StartTLS, 465 with TLS, or 25 with StartTLS, plain, or no authentication at all. Note: autodetecting STARTTLS is dangerous, as it allows a MITM attack, where the attacker hides the STARTTLS capability so that you attempt to login without it. You may want to ask the user if they wish to connect insecurely, or provide a 'disable security' setting that must be opted into.
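The probe order described above can be sketched with just the standard-library imaplib. This is a sketch, not production code: the timeout keyword needs Python 3.9+, the function name is invented, and real code should surface the STARTTLS caveat to the user rather than silently downgrading:

```python
import imaplib

def detect_imap(hostname, timeout=5):
    """Probe an IMAP server: try 993/TLS first, then 143 with STARTTLS."""
    try:
        conn = imaplib.IMAP4_SSL(hostname, 993, timeout=timeout)
        conn.logout()
        return {"port": 993, "security": "ssl"}
    except (OSError, imaplib.IMAP4.error):
        pass  # 993 closed or TLS handshake failed; fall through to 143
    try:
        conn = imaplib.IMAP4(hostname, 143, timeout=timeout)
        if "STARTTLS" in conn.capabilities:
            conn.starttls()
            result = {"port": 143, "security": "starttls"}
        else:
            # A plaintext login would work but is insecure; just flag it
            result = {"port": 143, "security": "insecure"}
        conn.logout()
        return result
    except (OSError, imaplib.IMAP4.error):
        return None  # no usable configuration found
```

A host that cannot be resolved or reached simply yields None, so the caller can report "no good configuration" as described above.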
0
237
false
0
1
Autodetect IMAP connection details
33,947,171
1
3
0
9
11
0
1.2
1
Is there any way to have Boto look for its configuration files in a location other than the default, which is ~/.aws?
0
python,python-3.x,boto,boto3
2015-11-27T06:43:00.000
0
33,951,619
It's not clear from the question whether you are talking about boto or boto3. Both allow you to use environment variables to tell it where to look for credentials and configuration files but the environment variables are different. In boto3 you can use the environment variable AWS_SHARED_CREDENTIALS_FILE to tell boto3 where your credentials file is (by default, it is in ~/.aws/credentials. You can use AWS_CONFIG_FILE to tell it where your config file is (by default, it is in ~/.aws/config. In boto, you can use BOTO_CONFIG to tell boto where to find its config file (by default it is in /etc/boto.cfg or ~/.boto.
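As a sketch, these variables can be set from Python before the import happens (the paths here are made-up examples):

```python
import os

# Point boto3 at non-default config locations; these paths are examples.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/opt/myapp/aws/credentials"
os.environ["AWS_CONFIG_FILE"] = "/opt/myapp/aws/config"

# For legacy boto (not boto3), the single config file is set with:
os.environ["BOTO_CONFIG"] = "/opt/myapp/boto.cfg"

# import boto3  # imported after this point, it will read the paths above
```

The same effect can of course be had by exporting the variables in the shell before starting Python.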
0
13,555
true
1
1
Boto3: Configuration file location
33,959,240
2
2
0
0
0
1
0
0
While surfing stack exchange, I have seen many people mentioning speed efficiency in their answers. How is one code faster than the other one which does the same function? What makes the code run faster? Less lines? Does importing mean loss of performance? What things should I keep in mind to write performance efficient code? Why do I need a performance efficient code? I have also seen people writing Loose on speed to gain on beauty? Why are beautiful codes slow?
0
python,performance
2015-11-27T18:05:00.000
0
33,962,771
Picking a better algorithm can often lead to remarkable speed improvements. For example, replacing bubble sort with quicksort; same job, better algorithm, much faster. The faster algorithm may be harder to understand, and there are other costs in time to write and maintain the code; but I have seen a job which took 4 days to run reduced to 25 seconds by a better algorithm. For more detail, search on "Big O Notation" - a way of describing the asymptotic performance of algorithms.
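To see the gap yourself with only the standard library, here is a rough timing of a handwritten bubble sort against the built-in sorted() (sizes kept small so it finishes quickly; absolute numbers will vary by machine):

```python
import random
import timeit

def bubble_sort(seq):
    a = list(seq)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.randrange(10_000) for _ in range(1_000)]
slow = timeit.timeit(lambda: bubble_sort(data), number=3)
fast = timeit.timeit(lambda: sorted(data), number=3)
print(f"bubble sort: {slow:.4f}s, built-in sorted: {fast:.4f}s")

# Same result, very different running time: O(n^2) vs O(n log n)
assert bubble_sort(data) == sorted(data)
```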
0
131
false
0
1
Write speed efficient code in python
33,962,888
2
2
0
3
0
1
1.2
0
While surfing stack exchange, I have seen many people mentioning speed efficiency in their answers. How is one code faster than the other one which does the same function? What makes the code run faster? Less lines? Does importing mean loss of performance? What things should I keep in mind to write performance efficient code? Why do I need a performance efficient code? I have also seen people writing Loose on speed to gain on beauty? Why are beautiful codes slow?
0
python,performance
2015-11-27T18:05:00.000
0
33,962,771
I'll try to give you a concrete answer. Your question is too broad and involves a lot of things that are not trivial to explain. How is one code faster than the other one which does the same function? For example, suppose you have to find an element in an array. You can look at every cell until the end, or stop as soon as you have found it. This is the most trivial example I can give you, but I think it gives a good initial idea. From there, look at "bubble sort" and "quicksort" as examples of algorithms for sorting arrays. They do the same job, but the second is much faster. What makes the code run faster? Less lines? Not really. You may be interested in learning something about the complexity of algorithms. Search for "Big O notation", which is a way of describing the asymptotic performance of algorithms. Does importing mean loss of performance? It depends on the context. If you need to save resources because of the hardware where your code will run, maybe yes. In other situations the performance difference is so small that you don't have a real problem. What things should I keep in mind to write performance efficient code? You have to learn different algorithmic techniques (backtracking, divide and conquer, dynamic programming, greedy algorithms, branch and bound...). Before that, you will usually write brute-force algorithms, which makes your code less efficient. Why do I need performance efficient code? Your CPU is not God. If the majority of the code actually running on your computer is inefficient, you will have a problem: everything will be slower, and in some cases impossible to maintain. Loose on speed to gain on beauty? Beautiful code is usually code that is easy to understand. Often the more efficient algorithm is harder to understand, and there are other costs in time to write and maintain the code. But you can reduce that effect if you get used to writing Clean Code.
Your code will be much better, more beautiful and more efficient in a lot of ways. It's very important to try to make your code understandable, but if you can't, add some comments or documentation about what you are doing in those lines. I hope this helps you.
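The first point above ("stop when you have found it", and pick a structure with the right complexity) can be made concrete with the same membership question answered by a list (linear scan) and a set (hash lookup), using only the standard library:

```python
import timeit

items = list(range(100_000))
as_list = items          # 'in' on a list is O(n): scans cell by cell
as_set = set(items)      # 'in' on a set is O(1) on average: one hash lookup

needle = 99_999          # worst case for the list: it sits at the very end
t_list = timeit.timeit(lambda: needle in as_list, number=200)
t_set = timeit.timeit(lambda: needle in as_set, number=200)
print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
```

Both containers give the same answer; only the time to get it differs, and the gap grows with the size of the data.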
0
131
true
0
1
Write speed efficient code in python
33,963,532
1
1
0
2
3
0
1.2
0
When indexing and searching for query words in Whoosh, does the program re-index every time it is run? I am making a web interface with it so it can display certain results to the user. To do so I am using PHP to call the Python file from the HTML. I have 1GB of data to index, so will it take a long time every time I run the file, or will only the first run be long and the rest significantly faster, since the program won't need to re-index all the documents from scratch?
0
python,indexing,whoosh
2015-11-30T13:46:00.000
0
34,000,107
In your Python code, you should separate the Indexer from the Searcher. Configure your PHP file to call the Searcher only; run the Indexer manually from time to time, when new data is added or old data is altered. The key idea is to index only when you really need it, not on every search operation.
0
637
true
0
1
Whoosh indexing
34,003,883
2
2
0
0
1
1
0
0
I have a handler (subclass of RequestHandler) which handles GET, POST, PUT and DELETE requests. The class also has independent functions which operates on DB. I am writing unit test for the class, but I am not able to initialize the class since it requires 2 arguments. How can I do it? Note: I don't have issues while testing rest calls.
0
python-2.7,tornado
2015-12-01T06:24:00.000
0
34,013,905
The two arguments are the tornado.web.Application and the tornado.httputil.HTTPServerRequest. Normally, rather than constructing a RequestHandler directly, Tornado applications are tested via tornado.testing.AsyncHTTPTestCase which will create the handlers as needed. (You could construct the application and request by hand, but I wouldn't recommend it) Do the functions you want to test need the application or request objects? If not, you could move them out of the subclass of RequestHandler to test them in isolation. If they do need either of these objects, then AsyncHTTPTestCase is the simplest way to get them.
0
239
false
0
1
Python - Tornado - How to write unit test for independent functions?
34,033,888
2
2
0
1
1
1
1.2
0
I have a handler (subclass of RequestHandler) which handles GET, POST, PUT and DELETE requests. The class also has independent functions which operates on DB. I am writing unit test for the class, but I am not able to initialize the class since it requires 2 arguments. How can I do it? Note: I don't have issues while testing rest calls.
0
python-2.7,tornado
2015-12-01T06:24:00.000
0
34,013,905
I solved the issue by making my test case class a subclass of the class under test.
0
239
true
0
1
Python - Tornado - How to write unit test for independent functions?
34,308,339
1
3
0
0
0
0
0
0
I recently switched from Matlab to Numpy and love it. However, one really great thing I liked about Matlab was the ability to complete commands. There are two ways that it does this: 1) tab completion. If I have a function called foobar(...), I can do 'fo' and it will automatically fill in 'foobar' 2) "up-button" completion (I'm not sure what to call this). If I recently entered a command such as 'x = linspace(0, 1, 100); A = eye(50);' and then I wish to quickly type in this same command so that I can re-evaluate it or change it slightly, then I simply type 'x =' then press up and it will cycle through all previous commands you typed that started with 'x ='. This was an awesome awesome feature in Matlab (and if you have heard of Julia, it has done it even better by allowing you to automatically re-enter entire blocks of code, such as when you are defining functions at the interactive prompt) Both of these features appear to not be present in the ordinary python interactive shell. I believe tab autocomplete has been discussed before and can probably be enabled using the .pythonrc startup script and some modules; however I have not found anything about "up-button" completion. Python does have rudimentary up-button functionality that simply scrolls through all previous commands, but you can't type in the beginning of the command and have that narrow down the range of commands that are scrolled through, and that makes a huge difference. Anyone know any way to get this functionality on the ordinary python interactive shell, without going to any fancy things like IPython notebooks that require separate installation?
0
python,matlab,python-interactive,interactive-shell
2015-12-01T22:43:00.000
0
34,031,623
Use IPython or some other Python shell; there are plenty. You may even write your own that will do whatever you want.
0
191
false
0
1
python "up-button" command completion, matlab/julia style
34,031,693
1
2
0
3
1
1
0.291313
0
I have a program in Python that takes in several command line arguments and uses them in several functions. How can I use cProfile (within my code) to obtain the running time of each function? (I still want the program to run normally when it's done.) Yet I can't figure out how; for example, I cannot use cProfile.run('loadBMPImage(sys.argv[1])') to test the run time of the function loadBMPImage. I cannot use sys.argv[1] as an argument. Any idea how I can use cProfile to test the running time of each function and print to stdout, if each function depends on command line arguments? Also the cProfile must be integrated into the code itself. Thanks
0
python,profiler,cprofile
2015-12-01T23:17:00.000
0
34,032,055
I run the Python program with -m cProfile. Example: python -m cProfile <myprogram.py>. This requires zero changes to myprogram.py.
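If the profiling really has to live inside the code (so that sys.argv keeps working as usual), one way is to drive a cProfile.Profile object manually. In this sketch, load_bmp_image is just a stand-in for the real loadBMPImage:

```python
import cProfile
import io
import pstats
import sys

def load_bmp_image(path):
    # Stand-in for the real image-loading work
    return path.upper()

def main(argv):
    profiler = cProfile.Profile()
    profiler.enable()
    result = load_bmp_image(argv[1])   # command line arguments work as usual
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    print(buf.getvalue())              # per-function timings go to stdout
    return result

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv)
```

The enable()/disable() pair scopes the profiling to just the calls you care about, and pstats formats the per-function statistics for printing.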
0
2,680
false
0
1
How can I use the cProfile module to time each function in Python?
59,113,102
1
2
0
0
0
1
1.2
0
I'm working on an application that needs to scan files from 3rd parties and process them. Sometimes these are compressed, so I've created a function that checks the file extension (tar.gz, gz, zip) and uncompresses accordingly. Some of the .zip files return this error: NotImplementedError: compression type 12 (bzip2). Is there a better way for me to identify the the compression type other than the file extension?
0
python,python-2.7,zip,bzip2
2015-12-03T23:50:00.000
0
34,078,498
Turns out the zipfile module in Python 2.7 doesn't support a later version of PKZIP that has bzip2 support. Switching to Python 3.3 and using zipfile module works fine.
0
288
true
0
1
Python Bzip2 File Hiding as a Zip file
34,079,563
1
1
0
0
0
0
0
0
I am working on an algorithm in Python for a problem which takes multiple hours to finish. I want to accept some details from the user using HTML/PHP and then use those to run the algorithm in Python. Even when the user closes the browser, I want the Python script to keep running on the server side, and when the user logs in again, it should display the result. Is this possible using an Apache server and PHP? Can a server created using Node.js be a solution? Please help; any help would be appreciated.
0
php,python,node.js,apache
2015-12-04T17:27:00.000
0
34,094,063
Write your processing code as completely independent software, not tied to the web server at all. The web server application will only add tasks to some database and return immediately. Your processing program will run as a service, polling the database for new tasks, executing them, and pushing in-progress updates and final results back to the database. The web server application can then see that processing has started and display in-progress and final results just by looking them up in the database, which is fast.
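A toy sketch of that pattern, with the standard-library sqlite3 standing in for "some database" (table and column names are invented; in real life the worker would be a separate long-running process, not inline code):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, "
           "status TEXT DEFAULT 'pending', result TEXT)")

# Web request handler: enqueue the task and return immediately
db.execute("INSERT INTO tasks (payload) VALUES (?)", ("some-user-input",))
db.commit()

# Background worker (a separate service in real life): poll and process
row = db.execute(
    "SELECT id, payload FROM tasks WHERE status = 'pending' LIMIT 1"
).fetchone()
if row is not None:
    task_id, payload = row
    result = payload.upper()          # stand-in for hours of real work
    db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
               (result, task_id))
    db.commit()

# Web app, when the user logs in again: just read the status/result
status, result = db.execute(
    "SELECT status, result FROM tasks WHERE id = 1").fetchone()
print(status, result)   # -> done SOME-USER-INPUT
```

The web tier never blocks on the long computation; it only reads and writes rows, which is fast.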
0
84
false
1
1
Running python script on the apache server end
34,103,974
2
2
0
2
1
1
0.197375
0
I am trying to deploy my first Python project onto a Raspberry Pi 2 B via Visual Studio 2015 Community Edition. The Pi runs Windows 10 Core IoT and I can connect to it via SSH and the web interface with no problem. My issue is that I get Error DEP6200 during deployment. VS asks for a PIN during deployment that I cannot provide. It's not the OS's login password nor any standard PIN you might expect (eg 0000 or 1234). Any hint is appreciated.
0
python,visual-studio,deployment,raspberry-pi,windows-10-iot-core
2015-12-04T20:40:00.000
0
34,097,042
Try changing Authentication to None in the app properties while deploying/debugging. It can be found under Debug -> Properties -> Authentication (select 'None').
0
476
false
0
1
PIN required - Cannot deploy Python code out of Visual Studio 2015 onto Raspberry Pi 2 B
34,219,439
2
2
0
0
1
1
0
0
I am trying to deploy my first Python project onto a Raspberry Pi 2 B via Visual Studio 2015 Community Edition. The Pi runs Windows 10 Core IoT and I can connect to it via SSH and the web interface with no problem. My issue is that I get Error DEP6200 during deployment. VS asks for a PIN during deployment that I cannot provide. It's not the OS's login password nor any standard PIN you might expect (eg 0000 or 1234). Any hint is appreciated.
0
python,visual-studio,deployment,raspberry-pi,windows-10-iot-core
2015-12-04T20:40:00.000
0
34,097,042
I was still getting errors after switching the authentication to "None". I had to add the port number after the remote machine name, e.g. MyPi3:8116. I also noticed that msvcmon.exe was running as administrator on the Pi. I switched it to run as the DefaultAccount; I'm not sure if this was necessary, and I'll switch it back to administrator later. You can shut it down from the processes link; then, on the debugging link, start it back up and check the box for DefaultAccount.
0
476
false
0
1
PIN required - Cannot deploy Python code out of Visual Studio 2015 onto Raspberry Pi 2 B
42,187,331
1
1
0
0
1
0
1.2
0
In my project, I use pytest to write unit test cases for my program. But later I found there are many db operations and ORM stuff in my program. I know unit tests should run fast, but what is the difference between unit testing and automated integration testing, besides speed? Should I just use the database fixture instead of mocking them?
0
python,unit-testing,integration-testing,pytest-django
2015-12-05T09:15:00.000
0
34,103,210
The main difference between unit tests and integration tests is that integration tests deal with the interactions between two or more "units". A unit test doesn't particularly care what happens with the code surrounding it, just as long as the code within the unit test operates as it's designed to. As for your second question: if you feel the database and fixtures in your unit test suite are taking too long to run, mocking is a great solution.
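A minimal illustration of the mocking route with the standard-library unittest.mock; the UserService/repo names are invented for the example:

```python
from unittest import mock

class UserService:
    def __init__(self, repo):
        self.repo = repo          # in production: a real ORM/db wrapper

    def greeting(self, user_id):
        user = self.repo.get(user_id)
        return f"Hello, {user['name']}!"

# Unit test: no database needed, the repo is a fast in-memory fake
repo = mock.Mock()
repo.get.return_value = {"name": "Ada"}

service = UserService(repo)
assert service.greeting(42) == "Hello, Ada!"
repo.get.assert_called_once_with(42)   # verify the interaction, not the db
```

The same service class can then be exercised against the real database in a (slower) integration test, keeping the unit suite fast.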
0
653
true
1
1
Django testing method, fixture or mock?
34,551,957
1
2
0
1
2
0
0.099668
0
I've got a script that accepts a path as an external argument and I would like to attach it to Total Commander. So I need a script/plugin for TC that would pass a path of opened directory as an argument and run a Python script which is located for example at C:/Temp How can I achieve this? Best Regards, Marek
0
python,total-commander
2015-12-05T17:45:00.000
1
34,108,753
You can add a new button to the button bar. Right-click on an existing icon and copy it by choosing "copy" from the drop-down menu. Paste it into the button bar by right-clicking on it and choosing "paste" from the menu. Right-click on the copied icon and choose "modify" (or similar). This opens a window that allows you to choose a program and a parameter. Note: My version is set to a different language, so the names of the menu items might be a bit different.
0
1,275
false
0
1
Run Python script with path as argument from total commander
34,111,757
1
2
0
1
2
0
1.2
0
Pure tones in Psychopy are ending with clicks. How can I remove these clicks? Tones generated within psychopy and tones imported as .wav both have the same problem. I tried adding 0.025ms of fade out in the .wav tones that I generated using Audacity. But still while playing them in psychopy, they end with a click sound. Now I am not sure how to go ahead with this. I need to perform a psychoacoustic experiment and it can not proceed with tone presentation like that.
0
python,psychopy
2015-12-06T03:50:00.000
0
34,113,812
Clicks in the beginning and end of sounds often occur because the sound is stopped mid-way so that the wave abruptly goes from some value to zero. This waveform can only be made using high-amplitude high-frequency waves superimposed on the signal, i.e. a click. So the solution is to make the wave stop while on zero. Are you using an old version of psychopy? If yes, then upgrade. Newer versions add a Hamming window (fade in/out) to self-generated tones which should avoid the click. For the .wav files, try adding (extra) silence in the end, e.g. 50 ms. It might be that psychopy stops the sound prematurely.
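The fade-in/fade-out idea can be sketched in pure Python: a raised-cosine (Hann-style) ramp applied to the first and last few milliseconds of a sine tone, so the waveform starts and ends at zero. The sample rate and durations below are example values:

```python
import math

RATE = 44_100                       # samples per second
FREQ = 440.0                        # tone frequency in Hz
DUR = 0.2                           # tone length in seconds
RAMP = 0.01                         # 10 ms fade in and fade out

n = int(RATE * DUR)
ramp_n = int(RATE * RAMP)

def envelope(i):
    if i < ramp_n:                  # fade in
        return 0.5 * (1 - math.cos(math.pi * i / ramp_n))
    if i >= n - ramp_n:             # fade out
        return 0.5 * (1 - math.cos(math.pi * (n - 1 - i) / ramp_n))
    return 1.0

tone = [envelope(i) * math.sin(2 * math.pi * FREQ * i / RATE)
        for i in range(n)]

print(tone[0], tone[-1])   # both ~0: no abrupt jump, hence no click
```

Because the first and last samples sit at zero, stopping or starting the sound at its boundaries cannot produce the abrupt step that is heard as a click.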
0
481
true
1
1
Pure tones in Psychopy end with unwanted clicks
34,122,112
1
1
0
0
1
0
0
0
I am working on a Python WebSocket server. I initiate it by running the python server.py command in Terminal. After this, the server runs fine and actually pretty well for what I'm using it for. The server runs on port 8000. My question is, if I keep the server.py file outside of my localhost directory or any sub-directory, can the Python file be read and the code viewed by anyone else? Thanks.
0
python,websocket,server
2015-12-08T05:34:00.000
1
34,148,739
It is hard to give a definite yes or no answer, because there are a million ways in which your server may expose the .py file. The crucial point is though, that your server needs to actively expose the file to the outside world. A computer with no network-enabled services running does not expose anything on the network, period. Only physical access to the computer would allow you access to the file. From this absolute point, it's a slow erosion of security with every additional service that offers a network component. Your Python server itself (presumably) doesn't expose its own source code; it only offers the services it's programmed to offer. However, you may have other servers running on the machine which actively do offer the file for download, or perhaps can be tricked into doing so. That's where an absolute "No" is hard to give, because one would need to run a full audit of your machine to be able to give a definitive answer. Suffice it to say that a properly configured server without gaping security holes will not enable users to download the underlying source code through the network.
0
123
false
0
1
Can Python server code be read?
34,150,245
1
1
0
0
0
0
0
0
I have the following system setup: a client app running on my computer, a server app running on my computer, a publisher Raspberry Pi unit, and a subscriber Raspberry Pi unit. The client app sends a message to the server, which sends a message to the publisher, which forwards the message to the subscriber, which then returns the message back to the server app. I am trying to measure the elapsed time in seconds using time.time() or timeit.default_timer(); both returned the same results. I measure time at 4 points: 1. Message arriving from client at server app. 2. Message arriving at publisher from server. 3. Message arriving at subscriber from publisher. 4. Message arriving at server app from the publisher. What happens is that the first and the last time make sense; however, both timestamps on the publisher and subscriber happen before the first timestamp on the server, which makes no sense, unless that Raspberry Pi traveled back in time. These are the times measured: [1449606796.36039, 1449606784.0, 1449606784.0, 1449606804.49233]. When I check time.time() on the different machines manually, everything seems to be in sync. Any idea what's going wrong here?
0
python-2.7,time,raspberry-pi2
2015-12-08T20:48:00.000
0
34,165,743
I don't know what the problem was, but I switched to Python's datetime.datetime.now() and everything seems to work fine; no weird times now.
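As a side note, for elapsed time measured on a single machine, time.monotonic() is usually the safer tool, since it cannot jump backwards the way wall-clock time can (timestamps compared across machines still need NTP-style clock synchronization):

```python
import time

start = time.monotonic()
time.sleep(0.05)                    # stand-in for the message round-trip
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f}s")   # never negative, immune to clock steps
```

This only helps for intervals measured on one machine; it does not fix skew between the server and the Raspberry Pis.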
0
401
false
0
1
Raspberry Pi (Python) Measuring time between units
34,198,704
1
3
0
2
1
0
1.2
0
When I start uwsgi 2.0.11.2 under pyenv 2.7.11 I get: ImportError: /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder. Also, uwsgi prints: Python version: 2.7.10 (default, May 30 2015, 13:57:08) [GCC 4.8.2]. Not sure how to fix it.
0
python,uwsgi,pyenv
2015-12-08T22:48:00.000
1
34,167,557
I had the same (or better: a similar) problem with uwsgi when upgrading Python from 2.7.3 to 2.7.10: the module that I tried to import was socket (socket.py), which in turn tried to import _socket (_socket.so), and the unresolved symbol was _PyInt_AsInt. The problem is a mismatch of internal functions between Python minor-minor releases (which doesn't break any backward compatibility, BTW). Let me detail. Build time: when your uwsgi was built, the build was against Python 2.7.10 (as you specified). Python could have been compiled/built: statically (most likely) - the Python library (from now on I am going to refer to it as PYTHONCORE, as it's named by its creators), in this case libpython2.7.a, is a static lib which is included in the python executable, resulting in a huge ~6MB executable; or dynamically - PYTHONCORE (libpython2.7.so) is a dynamic library which the python executable (~10KB this time) uses at runtime. Run time: the above uwsgi must run in a Python 2.7.11 environment. Regardless of how Python is compiled, the following happened: between 2.7.10 and 2.7.11 some internal functions were added/removed (in our case added) from both PYTHONCORE and the dynamic (or extension) modules written in C - the .so files located in ${PYTHON_LIB_DIR}/lib-dynload (e.g. /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload); any dynamic module (.so) is a client of PYTHONCORE. So, basically it's a version mismatch (encountered at runtime): in 2.7.10 (which uwsgi was compiled against), PYTHONCORE doesn't export _PyCodecInfo_GetIncrementalEncoder, and _io.so (obviously) doesn't use the exported func (so, no complaints at import time); in 2.7.11 (which uwsgi is run against), PYTHONCORE (still 2.7.10, as it was "embedded" in uwsgi at compile (build) time) doesn't export _PyCodecInfo_GetIncrementalEncoder, but _io.so uses/needs it, resulting in a situation where a Python 2.7.11 dynamic module was used against a Python 2.7.10 runtime, which is unsupported.
In conclusion, make sure that your uwsgi build machine is in sync (from a Python point of view) with the run machine, or, in other words, build uwsgi with the same Python version you intend to run it with!
0
23,361
true
0
1
uwsgi fails under pyenv/2.7.11 with _io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder
34,168,578
1
2
0
1
0
0
0.099668
0
I'm working on an IoT App which will do majority of the basic IoT operations like reading and writing to "Things". Naturally, it only makes sense to have an event-driven server than a polling server for real-time updates. I have looked into many options that are available and read many articles/discussions too but couldn't reach to a conclusion about the technology stack to use for the backend. Here are the options that i came across: Meteor Python + Tornado Node.js + Socket.io Firebase PubNub Python + Channel API (Google App Engine) I want to have as much control on the server as possible, and of course at the best price. What options do i have? Am i missing something out? Personally, i prefer having a backend in Python from my prior experience.
0
python,firebase,backend,iot,real-time-data
2015-12-09T11:02:00.000
1
34,177,156
You're comparing apples to oranges here in your options. The first three are entirely under your control, because, well, you own the server. There are many ways to get this wrong and many ways to get this right, depending on your experience and what you're trying to build. The last three would fall under Backend-As-A-Service (BaaS). These let you quickly build out the backend of an application without worrying about all the plumbing. Your backend is operated, maintained by a third party so you lose control when compared to your own server. ... and of course at the best price AWS, Azure, GAE, Firebase, PubNub all have free quotas. If your application becomes popular and you need to scale, at some point, the BaaS options might end up being more expensive.
0
586
false
1
1
Real-time backend for IoT App
34,178,035
1
2
1
0
0
0
0
0
I have a Python script and a C++ program running at the same time, both accessing the GPIO pins (not the same ones, though) in this order: C++ Python C++ The access of the C++ program worked (I used wireless transmitters and received the message). After that the Python access (light up an LED) worked as well. But when I tried to send another message using the wireless transmitters with C++, nothing happened, I don't receive messages anymore. Is there a way to find out, whether the GPIO pins are blocked or something?
0
python,c++,raspberry-pi,gpio
2015-12-10T10:58:00.000
0
34,200,159
If both scripts clean up the GPIO state they use, it should be possible; otherwise it won't work. In Python you can clean up by calling GPIO.cleanup(); then it should work again, because the pins are released for your C++ code.
0
250
false
0
1
Is it possible to access GPIO pins from a Python script and a C++ program at the same time?
40,125,816
1
4
0
1
1
0
0.049958
0
I've got a Django project running on an Ubuntu server. There are other developers who have the ability to ssh into the box and look at files. I want to make it so that the mysql credentials and api keys in settings.py are maybe separated into a different file that's only viewable by the root user, but also usable by the django project to run. My previous approach was to make the passwords file only accesible to root:root with chmod 600, but my settings.py throws an ImportError when it tries to import the password file's variables. I read about setuid, but that doesn't seem very secure at all. What's a good approach for what I'm trying to do? Thanks.
0
python,mysql,linux,django,passwords
2015-12-11T01:49:00.000
0
34,214,908
I'd place the password file in a directory owned by the Django user with 700 permissions (directories need the execute bit to be entered, so 600 would lock the Django user out too). The Django user would be able to do what it needs to, and nobody else would even be able to look in the directory (except root and Django). Another thing you could do would be to store it in a database and set it so that the root user and the Django user in the DB have unique passwords; that way only someone with those passwords could access it, i.e. system root is no longer the same as DB root.
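As a minimal sketch of the file-permissions approach on a Unix-like system (the filename and key/value format here are placeholders, not a real Django convention):

```python
import os
import stat

# Hypothetical filename for illustration; in production this would live
# outside the project tree and be owned by the Django user.
SECRETS_PATH = "secrets.txt"

# Write the credentials and restrict the file to its owner only.
with open(SECRETS_PATH, "w") as f:
    f.write("DB_PASSWORD=s3cret\n")
os.chmod(SECRETS_PATH, 0o600)  # rw-------: owner read/write, no group/other access

# Confirm the permission bits actually took effect.
mode = stat.S_IMODE(os.stat(SECRETS_PATH).st_mode)
print(oct(mode))

# settings.py could then load the values at startup instead of importing them:
with open(SECRETS_PATH) as f:
    creds = dict(line.strip().split("=", 1) for line in f if "=" in line)
print(creds["DB_PASSWORD"])
```

Reading the file at startup rather than importing it as a module also sidesteps the ImportError the question describes.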
0
885
false
1
1
Linux/Python: How can I hide sensitive information in a Python file, so that developers on the environment won't be able to access it?
34,215,479
1
2
0
0
1
0
0
0
I'm trying for ages to access a 32bit C compiled lib within an 64bit Ubuntu. I'm using python and CDLL lib in order to make it happen but with no success so far. I can easily open the same 32bit lib on a 32bit OS, and the 64bit version on a 64bit OS. So, what I'm asking is if anyone knows a way to encapsulate/sandbox/wrap the lib so I can achieve my goal. That way I can use a single 64bit server to access the 32 and 64bit versions of those libs. If someone knows another python lib that can make the trick please let me know.
0
python,ubuntu,debian,32bit-64bit,ctypes
2015-12-11T10:36:00.000
1
34,221,468
I am not sure you can do this within a single process; we are talking about pointer arithmetic here: 32-bit pointers are simply not the same as 64-bit pointers, so trying to mix the two in one address space means referencing memory that is not supposed to be accessed (I would guess a segmentation fault?). The only solution I can think of is to have a separate 32-bit Python instance that runs in its own process. Then, with some form of IPC, you can call the 32-bit instance from your 64-bit instance.
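A rough sketch of that IPC design, delegating the call to a child interpreter over stdin/stdout. Here sys.executable is used as a stand-in; on a real system you would point this at an actual 32-bit Python binary (a hypothetical path, since it varies per distro) that loads the 32-bit library with ctypes:

```python
import json
import subprocess
import sys

# The worker is what would run inside the 32-bit interpreter. The ctypes
# call in the comment is the hypothetical real workload; here we just sum
# the arguments so the sketch is runnable anywhere.
WORKER_CODE = r"""
import json, sys
request = json.loads(sys.stdin.read())
# in reality: result = ctypes.CDLL("lib32.so").some_call(*request["args"])
result = sum(request["args"])
print(json.dumps({"result": result}))
"""

proc = subprocess.run(
    [sys.executable, "-c", WORKER_CODE],  # swap in the 32-bit python here
    input=json.dumps({"args": [1, 2, 3]}),
    capture_output=True,
    text=True,
)
response = json.loads(proc.stdout)
print(response["result"])
```

JSON over pipes keeps the two processes decoupled; for large data you would likely switch to a socket or shared memory, but the structure stays the same.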
0
285
false
0
1
Accessing a 32bit with python on a Debian 64bit with CDLL lib (or other)
34,221,649
1
2
0
5
2
0
1.2
0
I have this Python script; on one line I have a 1000-character-long string. I have syntax highlighting on, and Vim hangs on this line. If I change the file extension to c++ then it works. I suspect problems with the syntax highlighting plugin are causing the hang. Can this be fixed somehow? I'm using Vim version 7.4.52
0
python,vim,syntax-highlighting,vim-syntax-highlighting
2015-12-11T11:59:00.000
0
34,223,068
Overly long lines can dramatically slow down Vim's syntax highlighting; usually, this is a fault of the syntax script, and you should inform its author (found in the $VIMRUNTIME/syntax/python.vim script header). Vim 7.4 includes the :syntime command, which greatly helps with troubleshooting and finding the problematic regular expression. It might help to :set synmaxcol=... to a value lower than the default 3000.
0
995
true
0
1
Vim python syntax highlighting hangs for very long lines
34,225,084
1
2
0
1
1
1
0.099668
0
I am studying in machine learning now and I want to build up a recommender system. First, I would like to make a top-N recommendation using two existing methods and they are both written in C++ code. As the file are huge and complex, I want to call them with Python, instead of adding code on that. Which tool is suitable for my case? Thank you in advance!
0
python,c++
2015-12-11T16:55:00.000
0
34,228,646
You can use the standard Python C API, Cython, or Boost.Python. It is much easier to work with Boost.Python. You have to add very little code to your C++ library and compile it as a module library, which you can then call from Python. With Boost you can easily expose your classes and their methods. Additionally, you can expose a vector of objects, which makes it easier to pass data to Python and back to your library. I recommend Boost.Python, but you can decide for yourself; there are a lot of tutorials on both Cython and Boost.Python if you google it.
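For a quick, zero-boilerplate taste of calling compiled code from Python, the standard library's ctypes can load an existing shared library directly; this sketch uses the system C math library (the fallback name assumes a typical Linux system), whereas Boost.Python would be the better fit for exposing whole C++ classes:

```python
import ctypes
import ctypes.util

# Locate the C math library; "libm.so.6" is a Linux-specific fallback.
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

# Always declare argument and return types; ctypes assumes int otherwise,
# which silently corrupts doubles.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # cos(0) == 1.0
```

The same pattern applies to your own library: compile it as a shared object, CDLL it, and declare the signatures of the functions you call.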
0
200
false
0
1
How can I call C++ code using Python?
34,230,076
1
3
0
0
4
1
0
0
I'm working on a school project. I've written a lot of Python scripts before and I was wondering if I could import Python in HTML like JavaScript? How should I do it? An example is importing time: I want to show a clock in my webpage from a Python script.
0
python,html,python-2.7
2015-12-13T13:24:00.000
0
34,251,551
It is not possible to import Python code into HTML the way you import JavaScript code. JavaScript is executed by the client's browser, and browsers don't include a Python interpreter. You have to do it with JavaScript if you want to run it on the client side.
0
281
false
1
1
Running Python Script in HTML
34,251,582
1
1
0
1
2
0
1.2
0
As the title says, I'm trying to run Flask alongside a PHP app. Both of them are running under Apache 2.4 on Windows. For Flask I'm using wsgi_module. The Flask app is actually an API. The PHP app controls user login and therefore user access to the API. Keep in mind that I cannot drop the PHP app, because it controls much more than the logging functionality [invoicing, access logs etc]. The flow is: User logs in via the PHP app. PHP stores user data in a database [user id and a flag indicating if the user is logged in]. User makes a request to the Flask API. Flask checks if the user's data is in the database: if not, it redirects to the PHP login page, otherwise it lets the user use the Flask API. I know that between steps 2 and 3, PHP has to share a session variable-cookie [user id] with Flask so that the Flask app can check whether the user is logged in. Whatever I try fails. I cannot pass PHP session variables to Flask. Has anyone tried something similar? What kind of user login strategy should I implement for the above setup?
1
php,python,session,flask
2015-12-14T11:37:00.000
0
34,266,083
I'm not sure this is the answer you are looking for, but I would not try to have the Flask API access session data from PHP. Sessions and APIs do not go well together; a well-designed API does not need sessions, it is instead 100% stateless. What I'm going to propose assumes both PHP and Flask have access to the user database. When the user logs in to the PHP app, generate an API token for the user. This can be a random sequence of characters, a uuid, whatever you want, as long as it is unique. Write the token to the user database, along with an expiration date if you like. The login process should pass that token back to the client (over https://, of course). When the client needs to make an API call, it has to send that token in every request. For example, you can include it in the Authorization header, or you can use a custom header as well. The Flask API gets the token and searches the user database for it. If it does not find the token, it returns 401. If the token is found, it now knows who the user is, without having to share sessions with PHP. For the API endpoints you will be looking up the user from the token on every request. Hope this helps!
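A rough sketch of that token scheme (the in-memory user_db dict here is a placeholder for the shared user table; the "Bearer" header format is just one common convention):

```python
import hmac
import secrets

# Stand-in for the user table that both PHP and Flask can see.
user_db = {}

def login(user_id):
    """PHP side, after a successful login: mint and store a token."""
    token = secrets.token_hex(32)  # 64 hex chars, cryptographically random
    user_db[user_id] = token
    return token

def authenticate(auth_header):
    """Flask side, on every API request: map the token back to a user."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return None  # -> respond 401
    token = auth_header[len("Bearer "):]
    for user_id, stored in user_db.items():
        if hmac.compare_digest(stored, token):  # constant-time comparison
            return user_id
    return None  # unknown token -> respond 401

token = login("alice")
print(authenticate("Bearer " + token))   # alice
print(authenticate("Bearer deadbeef"))   # None
```

In the real setup the token column would be indexed and looked up directly; the linear scan here is only to keep the sketch self-contained.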
0
1,770
true
1
1
Run Flask alongside PHP [sharing session]
34,272,457
1
1
1
2
2
0
1.2
0
There are multiple questions about "how to" call C C++ code from Python. But I would like to understand what exactly happens when this is done and what are the performance concerns. What is the theory underneath? Some questions I hope to get answered by understanding the principle are: When considering data (especially large data) being processed (e.g. 2GB) which needs to be passed from python to C / C++ and then back. How are the data transferred from python to C when function is called? How is the result transferred back after function ends? Is everything done in memory or are UNIX/TCP sockets or files used to transfer the data? Is there some translation and copying done (e.g. to convert data types), do I need 2GB memory for holding data in python and additional +-2GB memory to have a C version of the data that is passed to C function? Do the C code and Python code run in different processes?
0
python,c++,c,language-binding
2015-12-15T08:39:00.000
1
34,284,421
You can call between C, C++, Python, and a bunch of other languages without spawning a separate process or copying much of anything. In Python basically everything is reference-counted, so if you want to use a Python object in C++ you can simply use the same reference count to manage its lifetime (e.g. to avoid copying it even if Python decides it doesn't need the object anymore). If you want the reverse, you may need to use a C++ std::shared_ptr or similar to hold your objects in C++, so that Python can also reference them. In some cases things are even simpler than this, such as if you have a pure function in C or C++ which takes some values from Python and returns a result with no side effects and no storing of the inputs. In such a case, you certainly do not need to copy anything, because you can read the Python values directly and the Python interpreter will not be running while your C or C++ code is running (because they are all in a single thread). There is an extensive Python (and NumPy, by the way) C API for this, plus the excellent Boost.Python for C++ integration including smart pointers.
0
397
true
0
1
How does calling C or C++ from python work?
34,284,538
1
1
0
1
1
0
1.2
0
I am trying to run a script from a udev rule after any USB drive has been plugged in. When I run the script manually, after the USB is mounted normally, it will run fine. The script calls a python program to run and the python program uses a file on the USB drive. No issues there. If I make the script to simply log the date in a file, that works just fine. So I know my UDEV rule and my script work fine, each on their own. The issue seems to come up when udev calls the script, then script calling the python program and the python program does not run right. I believe it to be that the USB drive has not finished mounting before the python script runs. When watching top, my script begins to run, then python begins to run, they both end, and then I get the window popup of my accessing the files on my USB drive. So I tried having script1.sh call script2.sh call python.py. I tried having script.sh call python1.py call python2.py. I tried adding sleep function both in the script.sh and python.py. I tried in the rule, RUN+="/home/pi/script.sh & exit". I tried exit in the files. I tried disown in the files. What else can I try?
0
python,bash,raspberry-pi2,usb-drive,udev
2015-12-15T17:04:00.000
1
34,295,198
Well, you have probably described your problem yourself: the mount process is too slow. You can mount the USB device from your script.sh instead. You will probably also need to disable automatic USB mounting for your system, or for the specific device. If you add a symlink to your udev rule, e.g. SYMLINK+="backup", then you can mount the device with: mkdir -p /path/to/foo && mount -t ext4 /dev/backup /path/to/foo
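Alternatively, the Python side can simply wait until the mount point appears before touching the files; this is a generic "poll until the path exists" helper (the half-second timer below only simulates the slow mount so the sketch is runnable):

```python
import os
import tempfile
import threading
import time

def wait_for_path(path, timeout=10.0, interval=0.1):
    """Poll until `path` exists (e.g. the USB mount point), or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Demo: simulate the slow mount by creating the path from another thread.
mount_point = os.path.join(tempfile.mkdtemp(), "usb")
threading.Timer(0.5, os.mkdir, args=[mount_point]).start()

found = wait_for_path(mount_point, timeout=5.0)
print(found)  # True once the "mount" shows up
```

In the real script you would call wait_for_path on the actual mount point (and bail out with an error if it never appears) before running the rest of the program.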
0
1,205
true
0
1
Run script with udev after USB plugged in on RPi
34,300,132
3
3
0
1
0
1
0.066568
0
So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me. Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework. Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else?
0
python
2015-12-16T01:51:00.000
0
34,302,705
This isn't a very high-quality question, but the answer is quite simple. Always yes. The more languages you learn, the more you'll find similarities between them. It will eventually be a matter of applying different algorithms and data structures to get work done instead of choosing programming languages, for general purpose programming, anyway. When you get into application-specific things, such as embedded programming (imagine, cars, planes, military), then "learning that specific language" will become inescapable and valuable as employable skills. Also, ancient languages such as COBOL apparently fetch a pretty penny. Enjoy the life of computer science and/or software engineering, kid!
0
74
false
0
1
After learning one language, are other languages easier?
34,302,742
3
3
0
1
0
1
0.066568
0
So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me. Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework. Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else?
0
python
2015-12-16T01:51:00.000
0
34,302,705
Definitely, it certainly helped in my experience, where my first language was Liberty BASIC, and then Python. I found learning Python easier than learning LB. It's really more to do with how you think about your programs: your logical thinking and problem-solving skills.
0
74
false
0
1
After learning one language, are other languages easier?
34,302,725
3
3
0
1
0
1
0.066568
0
So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me. Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework. Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else?
0
python
2015-12-16T01:51:00.000
0
34,302,705
Absolutely. Knowing one language always helps when learning a new one, especially if they are similar. Since you have learned Python, I'd suggest moving to Java. Avoid C or C++ for now; they are very theoretical and much harder to learn without a strict teacher and must-do homework. Be aware that Python is much less strict than other languages, so you will need to work a bit harder, but yeah, do it, now! Learning programming takes a little time and A LOT of practice, but it is surely not impossible. I would suggest watching some wonderful online lectures on Udacity or Coursera. Good luck ;)
0
74
false
0
1
After learning one language, are other languages easier?
34,302,757
1
1
0
0
0
0
0
1
I am using the Python requests module (requests 2.7.0) and tracking URL requests. Most of these URLs are supposed to trigger a 301 redirect; however, for some, the domain changes as well. For the URLs where the 301 causes a domain name change (i.e. x.y.com ends up as a.b.com) I get a "certificate verify failed" error. However, I have checked and the cert on that site is valid, and it is not a self-signed cert. For the others, where the domain remains the same, I do not get any errors, so it does not seem to be linked to SSL directly, else the others would fail as well. Also, what is interesting is that if I run the same script using curl instead of requests, I do not get any errors. I know I can suppress the request errors by setting verify=False, but I am wondering why the failure occurs only when there is a domain name change. Regards, AB
0
python-3.x,python-requests
2015-12-17T18:08:00.000
0
34,341,396
This seems to work now. I believe the issue was linked to an old version of openssl. Once I upgraded even the 301 for a different domain goes through with no errors and that was with verify set to True.
0
188
false
0
1
Python requests module results in SSL error for 301 redirects to a different domain
34,343,323
1
1
0
2
2
1
1.2
0
VS2015 with the latest and greatest PTVS looks great. But any non-trivial project runs about 20-50 times slower under the debugger (F5) than without one (Ctrl-F5), which makes it totally unusable for debugging. Any idea why? Is there any way to speed up the debugger?
0
python,performance,debugging,visual-studio-2015,ptvs
2015-12-19T09:24:00.000
0
34,369,173
As a workaround, try mixed-mode debugging - it is significantly faster (but also more limited).
0
1,501
true
0
1
Why is PTVS so slow?
34,369,441
1
3
0
0
0
0
0
0
This is my first post here. I am a very big fan of Stack Overflow. This is the first time I could not find an answer to one of my questions. Here is the scenario: In my Linux system, I am not an admin or root. When I run a Python script, the output appears in the original folder, however when I run the same Python script as a Cron job, it appears in my accounts home folder. Is there anything I can do to direct the output to a desired folder? I do have the proper shebang path. Thank you!
0
python,linux,path,cron
2015-12-21T14:09:00.000
1
34,397,628
Thanks for the responses. After further searching, I found this crontab entry that worked: */1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
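The underlying issue is that cron starts jobs in the user's home directory, so relative output paths resolve there rather than next to the script. Besides redirecting in the crontab, the script itself can anchor its paths to its own location (the output.log filename here is just for illustration):

```python
import os

# Resolve paths relative to the script file, not the current working
# directory cron happened to start us in.
here = os.path.dirname(os.path.abspath(__file__)) if "__file__" in globals() else os.getcwd()
output_path = os.path.join(here, "output.log")

with open(output_path, "a") as f:
    f.write("hello from cron\n")

with open(output_path) as f:
    print(f.read().strip())
```

With this pattern the output lands next to the script no matter where cron (or a shell) launched it from.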
0
120
false
0
1
Cron job output in wrong Linux folder
34,400,781