Dataset schema (column name, dtype, value range or string length):

Q_Id                                int64    337 to 49.3M
CreationDate                        string   length 23 to 23
Users Score                         int64    -42 to 1.15k
Other                               int64    0 to 1
Python Basics and Environment       int64    0 to 1
System Administration and DevOps    int64    0 to 1
Tags                                string   length 6 to 105
A_Id                                int64    518 to 72.5M
AnswerCount                         int64    1 to 64
is_accepted                         bool     2 classes
Web Development                     int64    0 to 1
GUI and Desktop Applications        int64    0 to 1
Answer                              string   length 6 to 11.6k
Available Count                     int64    1 to 31
Q_Score                             int64    0 to 6.79k
Data Science and Machine Learning   int64    0 to 1
Question                            string   length 15 to 29k
Title                               string   length 11 to 150
Score                               float64  -1 to 1.2
Database and SQL                    int64    0 to 1
Networking and APIs                 int64    0 to 1
ViewCount                           int64    8 to 6.81M
26,047,185
2014-09-25T20:07:00.000
-1
0
1
0
python,anaconda
26,047,701
3
false
0
0
You should try starting IDLE with the Anaconda interpreter instead. AFAIK IDLE is too primitive an IDE to let you configure which interpreter it uses. So if Anaconda doesn't ship one, use a different IDE instead, such as PyCharm, PyDev, Eric, Sublime Text 2, Vim, or Emacs.
2
6
1
I installed numpy, scipy, matplotlib, etc through Anaconda. I set my PYTHONPATH environment variable to include C://Anaconda; C://Anaconda//Scripts; C://Anaconda//pkgs;. import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages are installed on Anaconda. But import numpy in IDLE gives me the error No module named numpy. Suggestions? How do I tell IDLE where to look for modules/packages installed via Anaconda? I feel like I'm missing something obvious but I can't find an answer on any previous Overflow questions.
import anaconda packages to IDLE?
-0.066568
0
0
7,062
26,047,185
2014-09-25T20:07:00.000
2
0
1
0
python,anaconda
26,067,690
3
false
0
0
You need to add those directories to PATH, not PYTHONPATH, and it should not include the pkgs directory.
2
6
1
I installed numpy, scipy, matplotlib, etc through Anaconda. I set my PYTHONPATH environment variable to include C://Anaconda; C://Anaconda//Scripts; C://Anaconda//pkgs;. import sys sys.path shows that IDLE is searching in these Anaconda directories. conda list in the command prompt shows that all the desired packages are installed on Anaconda. But import numpy in IDLE gives me the error No module named numpy. Suggestions? How do I tell IDLE where to look for modules/packages installed via Anaconda? I feel like I'm missing something obvious but I can't find an answer on any previous Overflow questions.
import anaconda packages to IDLE?
0.132549
0
0
7,062
26,048,682
2014-09-25T21:47:00.000
0
1
0
0
python,encryption,sftp,pysftp
26,053,268
3
false
0
0
Do you mean you do not want the password to be visible in code or configuration file? There's no way to encrypt anything in a way that allows automatic decryption. All you can do is to obfuscate the password, but not really encrypt it. If anyone can access your code he/she can always reverse-engineer your "encryption".
1
0
0
I am using a Python script with the pysftp module to connect to an SFTP server. Apparently I can pass an SFTP password explicitly, which I don't like. My question: how could I replace the password with an encrypted one so that it isn't visible to the world?
How to encrypt password in Python pysftp?
0
0
0
2,397
26,049,674
2014-09-25T23:20:00.000
0
0
1
1
python,pip,homebrew
50,116,996
2
false
0
0
A Linux command must be in a bin directory, so you have no problem. Pip should be stored in /usr/local/bin/pip, not in /usr/local/share/python/pip.
2
0
0
I installed homebrew and which python gives /usr/local/bin/python. However, when I type which pip I get /usr/local/bin/pip and not the desired /usr/local/share/python/pip. How do I fix this?
Python Homebrew Error with Pip
0
0
0
140
26,049,674
2014-09-25T23:20:00.000
0
0
1
1
python,pip,homebrew
26,049,920
2
false
0
0
For my python 2.7.6 installation: My pip is installed to: /usr/local/bin/pip What makes you think it should be installed to /usr/local/share/python/pip? Are you seeing a problem when you try and use pip install (whatever)?
2
0
0
I installed homebrew and which python gives /usr/local/bin/python. However, when I type which pip I get /usr/local/bin/pip and not the desired /usr/local/share/python/pip. How do I fix this?
Python Homebrew Error with Pip
0
0
0
140
26,050,414
2014-09-26T00:53:00.000
1
0
0
1
python-2.7,wxpython
26,051,264
1
false
0
1
I don't know if Cloud9 supports it, but normally to run a remote GUI application you would have ssh forward the X11 communication over the ssh connection via a tunnel. So basically the application is running on the remote system and it is communicating with a local X11 server which provides you with the display and handling of the mouse and keyboard. If you run ssh with the -X parameter then it will attempt to set up the X11 tunnel and set $DISPLAY in the remote shell so any GUI applications you run there will know how to connect to the X11 tunnel. Be aware, however, that this is something that can be turned off on the remote end, so ultimately it is up to Cloud9 whether they will allow you to do this.
1
0
0
Does anyone know if it is possible to run python-gui apps, like wxPython, on a c9.io remote server? I have my home server set up with c9 via SSH, and no issues logging in and running apps in the terminal on the VM. However, when I try to run GUI apps, I get the following error message. Unable to access the X Display, is $DISPLAY set properly? After searching and searching, I can't seem to find a guide or anything in the docs that detail how to set $DISPLAY in the script. X display is installed and active on my server, but I don't know how to configure the c9 script to access it properly. Any assistance would be appreciated!
Running Python GUI apps on C9.io
0.197375
0
0
686
26,050,645
2014-09-26T01:24:00.000
0
0
0
1
python,jenkins
26,064,841
2
false
0
0
There's no easy way I know of to store the "variable" in Jenkins. Your best bet is to use some other place to store this MAC address. A file would be a good place, but it probably needs to be on a shared file server somewhere.
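The file-based option (b) from the question could be sketched like this. The file name, key, and properties-style format are assumptions; in a real job the path would live in the workspace (or on a shared file server, as suggested above):

```python
import tempfile
from pathlib import Path

# Sketch of option (b): checkoutDevice.py writes the MAC address to a
# small properties file, and a later build step (or a plugin such as
# EnvInject) reads it back.  File name and key are made up.
def save_mac(mac, path):
    Path(path).write_text("DEVICE_MAC=%s\n" % mac)

def load_mac(path):
    for line in Path(path).read_text().splitlines():
        key, _, value = line.partition("=")
        if key == "DEVICE_MAC":
            return value
    return None

# Demo on a throwaway file instead of a real Jenkins workspace:
with tempfile.TemporaryDirectory() as d:
    props = Path(d) / "device.properties"
    save_mac("aa:bb:cc:dd:ee:ff", props)
    mac = load_mac(props)

print(mac)  # aa:bb:cc:dd:ee:ff
```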
1
3
0
I have a Jenkins job which executes a Python script (checkoutDevice.py) via shell. The checkoutDevice script connects to an inventory server and checks out an available unit, whose MAC address is then available to return to the Jenkins job. I would like to return the unit's MAC address from the Python script to the Jenkins job, so the Jenkins job can pass that MAC address to another Python script. a. How would I store the unit's MAC address in a Jenkins environment variable so I can pass it to another Python script in the same job? b. Another solution I am looking at is to write the MAC address to a text file during execution of the checkoutDevice script; Jenkins would then read the MAC address from the text file, store it in a variable, and pass it to another Python script.
Return value from Python script to Jenkins variable
0
0
0
6,670
26,054,851
2014-09-26T08:04:00.000
0
0
0
1
c#,python,linux
39,438,068
1
false
0
0
For Linux you will find Statvfs in Mono.Unix.Native, in the Mono.Posix assembly.
1
2
0
I have a question. In my c# application I need to get the free space of a directory. According to my research, GetdiskfreespaceEx is proper and it works for me in my windows xp. Now I'm wondering if it works the same in a linux system. Since I wrote my c# program according to a python one, and for this function the developer of the python one made 2 cases: the windows os or else. For the windows case "ctypes.windll.kernel32.GetDiskFreeSpaceExW" is used while in else case "os.statvfs(folder)" is used. I did some more research but haven't found anything saying if GetdiskfreespaceEx could be used for linux. Anyone could tell me that? If no, any ways to get free disk space for linux in c#? Thanks in advance!
Get free space of a directory in linux
0
0
0
854
26,058,888
2014-09-26T11:50:00.000
0
1
0
1
security,python-2.7,osx-mavericks,shellshock-bash-bug
26,074,861
1
false
0
0
Using Python does not increase your risk of being exposed to Shellshock. The best way to protect your computer is to make sure it has up-to-date patches and updates installed. Microsoft, Apple and various Linux distributions have all released updates/patches which are supposed to fix the problem.
1
1
0
Various outlets, along with Apple, are assuring OS X users that they are not at particular risk from the Shellshock bash exploit. However, I use Python frequently on my system and wonder if that would increase my risk; and whether there is anything I can do to mitigate it (short of installing a different bash). I use Apple's (2.7.5) Python and bash, on OS X 10.9.5.
Does using Python on OS X expose me to Shellshock?
0
0
0
61
26,065,129
2014-09-26T17:28:00.000
0
1
0
1
python,bash,shell,python-2.7,command
26,069,546
2
false
0
0
sharkbyte, it's easy to insert '#!/usr/bin/env python' at the top of all your Python files. Just run this sed command in the directory where your Python files live: sed -i '1 i\#! /usr/bin/env python\n' *.py The -i option tells sed to do an in-place edit of the files, the 1 means operate only on line 1, and the i\ is the insert command. I put a \n at the end of the insertion text to make the modified file look nicer. :) If you're worried about stuffing up, copy some files to an empty directory first and do a test run. Also, you can tell sed to make a backup of each original file, e.g.: sed -i.bak '1 i\#! /usr/bin/env python\n' *.py for MS style, or sed -i~ '1 i\#! /usr/bin/env python\n' *.py for the usual Linux style.
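For completeness, the same insertion can be done from Python itself; this is only a sketch (the file names are made up), and after adding the shebang you would still need chmod +x and the script's directory on PATH for $ foo.py to work:

```python
import tempfile
from pathlib import Path

# Sketch: prepend the shebang line from Python rather than sed.
SHEBANG = "#!/usr/bin/env python\n\n"

def add_shebang(path):
    text = path.read_text()
    if not text.startswith("#!"):  # don't insert twice
        path.write_text(SHEBANG + text)

# Demo on a throwaway file instead of your real scripts:
with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "foo.py"
    script.write_text("print('hi')\n")
    add_shebang(script)
    first_line = script.read_text().splitlines()[0]

print(first_line)  # #!/usr/bin/env python
```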
1
0
0
I would like to run python files from terminal without having to use the command $ python. But I would still like to keep the ability of using '$ python to enter the python interpreter. For example if I had a file named 'foo.py', I can use $ foo.py rather than $ python foo.py to run the file. How can I do this? Would I need to change the bash file or the paths? And is it possible to have both commands available, so I can use both $ foo.py and $ python foo.py? I am using ubuntu 14.04 LTS and my terminal/shell uses a '.bashrc' file. I have multiple versions of python installed on my computer, but when running a python file I want the default version to be the latest version of 2.7.x. If what I am asking is not possible or not recommended, I want to at least shorten the command $ python to $ py. Thank you very much for any help!
How can I eliminate the need of the command 'python' when running a python file in terminal?
0
0
0
85
26,068,324
2014-09-26T21:07:00.000
1
0
1
1
python,python-2.7,pygtk,centos6.5,tryton
26,068,565
1
true
0
0
Unfortunately packages that are installed with one minor version of Python are not able to be used with another minor version (as an example, version 2.7.8 is major version 2, minor version 7, micro version 8). Different micro versions are compatible with one another, so packages installed with 2.7.3 will work with 2.7.8, for example. So, while it may seem redundant, anything that you have with 2.6 you'll have to reinstall with 2.7 in order to work with it under 2.7. This is due to changes in the ABI from version to version, and other "under the hood" differences.
1
1
0
I have installed python 2.7 beside 2.6 on a CentOS 6.5 os. The particular application I want to install needs 2.7, but it also needs pygtk (as well as other stuff). If I start an interpreter with 2.6, it imports pygtk fine. But if I start an interpreter with 2.7 it can not find what it needs [pygtk]. There are plenty of helpful posts that address installing duplicate versions of python on CentOS 6, but could someone please help me make the python 2.7 find the other stuff [pygtk]? Why else would I want to install python 2.7 beside python 2.6 on CentOS if I didn't want to use a bunch of the standard things in both?
centos 6.5 python 2.7 can not find pygtk that python 2.6 sees fine
1.2
0
0
1,310
26,069,257
2014-09-26T22:36:00.000
0
0
1
0
python,python-2.7,python-3.x,wxpython,ipython
26,082,455
2
false
0
0
Check out the wxPython demo, I would probably use the wx.lib.masked.numctrl for that.
1
0
0
Hi I am a beginner on Python and was wondering how you can make a user input a number that contains two decimal places or less
Python How to force user input to be to 2dp
0
0
0
130
26,070,040
2014-09-27T00:16:00.000
12
0
0
0
python,postgresql,psycopg2,python-multiprocessing
26,072,257
1
true
0
0
You can't sanely share a DB connection across processes like that. You can sort-of share a connection between threads, but only if you make sure the connection is only used by one thread at a time. That won't work between processes because there's client-side state for the connection stored in the client's address space. If you need large numbers of concurrent workers, but they're not using the DB all the time, you should have a group of database worker processes that handle all database access and exchange data with your other worker processes. Each database worker process has a DB connection. The other processes only talk to the database via your database workers. Python's multiprocessing queues, fifos, etc offer appropriate messaging features for that.
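A minimal sketch of the database-worker pattern described above, with the psycopg2 connection faked by a dict so the example runs standalone; only the worker process ever touches the "connection":

```python
import multiprocessing as mp

# Sketch: one process owns the (single) database connection; other
# processes send requests over a queue and read replies back.
FAKE_DB = {"users": 42}  # stands in for one real psycopg2 connection

def handle_query(table):
    # Real code would call cursor.execute(...) on the owned connection.
    return FAKE_DB.get(table, 0)

def db_worker(task_q, result_q):
    while True:
        table = task_q.get()
        if table is None:  # sentinel tells the worker to shut down
            break
        result_q.put(handle_query(table))

def run_demo():
    task_q, result_q = mp.Queue(), mp.Queue()
    worker = mp.Process(target=db_worker, args=(task_q, result_q))
    worker.start()
    task_q.put("users")      # another process asks for data...
    answer = result_q.get()  # ...and reads the reply
    task_q.put(None)
    worker.join()
    return answer

if __name__ == "__main__":
    print(run_demo())  # 42
```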
1
4
0
I have a Python script running as a daemon. At startup, it spawns 5 processes, each of which connects to a Postgres database. Now, in order to reduce the number of DB connections (which will eventually become really large), I am trying to find a way of sharing a single connection across multiple processes. And for this purpose I am looking at the multiprocessing.sharedctypes.Value API. However, I am not sure how I can pass a psycopg2.connection object using this API across processes. Can anyone tell me how it might be done? I'm also open to other ideas in order to solve this issue. The reason why I did not consider passing the connection as part of the constructor to the 5 processes is mutual exclusion handling. I am not sure how I can prevent more than one process from accessing the connection if I follow this approach. Can someone tell me if this is the right thing to do?
Share connection to postgres db across processes in Python
1.2
1
0
5,128
26,071,795
2014-09-27T06:00:00.000
0
0
1
0
python,anaconda
26,105,194
1
false
0
0
You don't run the files in site-packages directly to use the modules. You should start a Python console in Spyder or the IPython notebook and use import matplotlib to access matplotlib.
1
1
0
I'd like to run any example .py script that uses matplotlib or pyqtgraph to see how it looks. The problem is, none of the examples included with the Anaconda3-2.0.1-Windows-x86 install work. For example, I open Spyder3.4 that was installed with Anaconda, but when I open/run any of the .py files ( C:\Anaconda3\Lib\site-packages\matplotlib) nothing happens... No error message, nothing in the output tab. I've installed this on a WinXp virtual machine that also has Qt5 creator/designer and PyQt installed. Is there any known conflict or path issues having PyQt installed alongside Anaconda? How can I tell if Anaconda and its packages (namely pyqtgraph, matplotlib and pyqt) installed correctly?
Running Anaconda Python Examples
0
0
0
2,516
26,076,796
2014-09-27T16:33:00.000
2
0
0
0
python,xml
26,076,987
1
true
0
0
An XML file can't run a Python script, since XML is a descriptive language used to represent data. However, you can do the opposite: use Python to read the XML file and do something. By the way, when you say "run the A code of this XML file", I'm not sure you're doing it right, since XML is not supposed to contain "active code".
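The workable direction described above, Python reading the XML and choosing a branch, might look like this sketch (the tag and attribute names are made up):

```python
import xml.etree.ElementTree as ET

# Sketch: Python reads the XML and decides which branch to run.
doc = ET.fromstring('<job result="A"><a>run A</a><b>run B</b></job>')

branch = doc.get("result")  # pretend this value came from the checker
chosen = doc.find("a" if branch == "A" else "b").text
print(chosen)  # run A
```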
1
0
0
I was wondering if in an xml file I could launch a python file and based on what python returns run different parts of the xml code. Something like IN THE XML FILE: run the python file if the file retuns A, run the A code of this XML file else run the B code of this XML file thanks!
launch python file from an xml file
1.2
0
1
633
26,079,533
2014-09-27T21:48:00.000
0
0
1
0
python,ipython-notebook
26,085,782
1
false
0
0
The documentation was hard to parse, but I stumbled on the solution, which wasn't obvious to me. On OS X, using ipython notebook, the magic command %matplotlib osx causes any external graphics windows to be interactive. That is, I don't have to type show() and I don't have to close the graphics window to continue working in the notebook.
1
0
0
I am trying to work out the best way to work with figures in ipython notebook. This is a problem on os x but not in windows. I have worked out how to put figures inline and in separate windows. What I'd like to do is put a figure in a separate window, leave it open and keep working on the notebook. But it seems I have to close the figure window to continue working on the notebook. How can I continue working on the notebook without closing the figure window?
working with figures in python notebook in os x
0
0
0
34
26,080,694
2014-09-28T01:13:00.000
1
0
0
0
javascript,python,django,forms,security
26,081,085
1
false
1
0
"I have a django app which needs to receive post requests from another site using html+javascript." You don't have to! You can build an API instead ;) You create an API call, and the small site calls the API of the main site. In that situation, the form is handled by a view on the small site and the API is called via the server. Check out Django REST Framework. Note: that solution wouldn't stop you from using AJAX, but it would avoid cross-domain issues.
1
0
0
I have a django app which needs to receive post requests from another site using html+javascript. For example, I have mydjangosite.com and smallhtmlsite.com. What I want is: a user visits smallhtmlsite.com and fills in a form; when he pushes the submit button, mydjangosite.com receives the request and creates objects (form-saving models, actually). So I will have a view which will handle these requests. But how can this be done securely?
How to post a form to django site, securely and from another site
0.197375
0
0
389
26,081,164
2014-09-28T02:55:00.000
0
0
1
1
python,wing-ide
26,086,990
1
false
0
0
Open the file in Wing and then select Evaluate File in Python Shell from the Source menu. After that, functions/etc defined in the file can be accessed from the shell. You do need to redo that if you've edited the file. Or you may want to use the Active Range feature that's new in Wing 5.0.9: Select a range of code, and press the + icon in the Python Shell (top right) to make it the active range. Then you can easily reevaluate that range in the Python Shell by pressing the cog icon (or bind a key to the command python-shell-evaluate-active-range). Another option is to set a breakpoint, debug to it, and then use the Debug Probe, which is a shell that lets you interact with the runtime state in the currently selected debug stack frame.
1
1
0
I have a file called tweet.py located on my desktop which contains numerous functions that I would like to use in the wing-IDE. How do I include the file so I can use the functions in the python shell? I looked online but did not find anything to help me. Thanks guys. I'm using ubuntu 14.04, if that helps.
How to include file in wing-IDE
0
0
0
95
26,081,637
2014-09-28T04:32:00.000
2
0
0
0
python,logging,heroku,flask
26,085,702
1
true
1
0
Both outputs (print and logger) end up in a file; the only difference is that a logger typically allows for some filtering of the output and tries to add as little overhead as possible when messages are filtered out. This means it's not possible to say without testing, i.e. comparing the two. You should be able to use the timeit module to time how long it takes to execute a for loop that prints a thousand or a million messages. However, the other issue is concurrency: if your flask app is run by separate clients, the performance impact of log vs print may be different depending on how the log is designed vs the capture of print statements. This is harder to test, but my guess is you could create a script that uses the multiprocessing module to run a wad of requests in parallel to a flask URL that has just one print/log message, and measure how many requests your test client is able to make in a given amount of time. The final consideration is that the amount of processing in your flask view/render may be significantly larger than the time it takes to either log or print. E.g. if your render without any prints takes 100ms, and with print it takes 110ms, then what does it matter that log is twice as fast as print (i.e. 105ms instead)? As long as you remain frugal in your use of print/log, it won't matter much. This always applies, in any app, not just web.
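A rough sketch of the timeit comparison suggested above, timing many print() calls against many filtered-out logger calls; the counts are arbitrary and the absolute numbers are machine-dependent, so only the relative cost matters:

```python
import contextlib
import io
import logging
import timeit

# Set up a logger whose DEBUG messages are filtered out, mimicking the
# "log call that costs almost nothing" case.
logger = logging.getLogger("demo")
logger.addHandler(logging.NullHandler())
logger.setLevel(logging.WARNING)

# Capture print output so the timing loop doesn't spam the console.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    t_print = timeit.timeit(lambda: print("msg"), number=10000)
t_log = timeit.timeit(lambda: logger.debug("msg"), number=10000)

print("print: %.4fs, filtered log: %.4fs" % (t_print, t_log))
```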
1
1
0
I've deployed a flask app on heroku and since it's relatively small I never bothered to set up the logger. I have found that by using a print statement I can see everything I need in one of heroku's add-on loggers. What I'm wondering is: does that negatively affect performance versus having just used the built-in logger in flask? Does the print statement add unnecessary overhead to a python program?
When using a flask application does using print negatively affect performance?
1.2
0
0
349
26,081,880
2014-09-28T05:16:00.000
0
0
0
0
python-3.x,data-dump
26,089,798
2
false
0
0
Use one of the Python XML modules to parse the .xml file. Unless you have much more than 27 GB of RAM, you will need to do this incrementally, so limit your choices accordingly. Use the csv module to write the .csv file. Your real problem is this: CSV files are lines of fields; they represent a rectangular table. XML files, in general, can represent more complex structures: hierarchical databases, and/or multiple tables. So your real problem is to understand the data dump format well enough to extract records to write to the .csv file.
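An incremental sketch along these lines: iterparse() streams the XML so a 27 GB dump never has to fit in RAM, and each <row> becomes one CSV line. The tiny in-memory sample and its attribute names stand in for the real dump format:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Tiny stand-in for the real dump; attribute names are assumptions.
SAMPLE = b'<posts><row Id="1" Score="10"/><row Id="2" Score="3"/></posts>'

def xml_rows_to_csv(xml_stream, csv_stream, fields):
    writer = csv.writer(csv_stream)
    writer.writerow(fields)  # header line
    for _event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "row":
            writer.writerow(elem.get(f, "") for f in fields)
            elem.clear()  # drop the element to keep memory use flat

out = io.StringIO()
xml_rows_to_csv(io.BytesIO(SAMPLE), out, ["Id", "Score"])
print(out.getvalue())
```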
1
2
1
I have stack overflow data dump file in .xml format,nearly 27GB and I want to convert them in .csv file. Please somebody tell me, tools to convert xml to csv file or python program
How to convert xml file of stack overflow dump to csv file
0
0
1
704
26,090,117
2014-09-28T22:02:00.000
8
0
1
0
python,django,python-3.x
26,090,123
1
true
1
0
Just make a list of the result by doing list(zip(...)).
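For example (the list names mirror the question's Django view):

```python
# list(zip(...)) restores the Python 2 behaviour: a concrete, indexable
# list instead of a one-shot iterator, so the Django template lookup
# my_zipped_list.0.1 keeps working.
list1 = ["a", "b", "c"]
list2 = [1, 2, 3]

my_zipped_list = list(zip(list1, list2))
print(my_zipped_list)        # [('a', 1), ('b', 2), ('c', 3)]
print(my_zipped_list[0][1])  # 1 -- what {{ my_zipped_list.0.1 }} shows
```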
1
6
0
I migrated from Python 2.7 to Python 3.3 and zip() does not work as expected anymore. Indeed, I read in the doc that it now returns an iterator instead of a list. So, how I am supposed to deal with this? Can I use the "old" zip() in my Python3 code? Find bellow the way it worked before in a Django project: in views.py: my_zipped_list = zip(list1, list2) in file.html: {{ my_zipped_list.0.1 }} Maybe another solution would be to keep "new" zip() behaviour and change template instead. Thanks for help!
How can I get the "old" zip() in Python3?
1.2
0
0
4,304
26,090,711
2014-09-28T23:39:00.000
0
0
1
0
python,string,letters
26,090,772
4
false
0
0
You can try iterating over the characters of the string and, if a character equals 's', taking it to build a new string. At the end the string would be "sss" if there were three 's' characters in the original.
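That idea, spelled out on the question's own example string:

```python
# Walk the string character by character, collecting every 's'.
text = "43 Lobsters and 3 Crabs"

result = ""
for ch in text:
    if ch == "s":
        result += ch

print(result)  # sss
```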
1
0
0
If the string is "43 Lobsters and 3 Crabs" I want to get only the s's from this string. There are three s's so in my new string I must have just "sss".
How do you get x amounts of y in one string and put it in another string in python?
0
0
0
40
26,090,938
2014-09-29T00:18:00.000
0
0
1
0
python,windows,ide
26,090,969
3
true
0
0
The "Edit with IDLE" option is actually placed there by IDLE itself. What you could do is change the default program you want Python files to be opened with (by right-clicking, choosing "Open with" and choosing a default program). The additional entries in the context menus are specified in the registry, so I think if you want to manually add an "Edit with PyCharm" option you would have to edit the registry yourself. But I wouldn't recommend doing that at all.
2
2
0
When I right-click on a .py file, I am given the option to "Edit with IDLE". How can I replace IDLE with another IDE, such as PyCharm? ie when I right-click on any .py file, I would like there to be the option to "Edit with PyCharm". I am using Windows by the way.
How do I change the default editor for a python file?
1.2
0
0
3,080
26,090,938
2014-09-29T00:18:00.000
0
0
1
0
python,windows,ide
26,091,010
3
false
0
0
Assuming you have Windows 7/8 and have the program (in this case PyCharm) installed already, you would: Right-click any Python file (typically ending with .py). Click "Open With". Either select PyCharm or the desired program from the list of programs, OR click "Choose Default Program", then 'find more options' or 'choose program from list', find PyCharm in the directory you installed it in, and select the program. Hope this helps! Be sure to specify if you meant a different OS than Windows.
2
2
0
When I right-click on a .py file, I am given the option to "Edit with IDLE". How can I replace IDLE with another IDE, such as PyCharm? ie when I right-click on any .py file, I would like there to be the option to "Edit with PyCharm". I am using Windows by the way.
How do I change the default editor for a python file?
0
0
0
3,080
26,095,180
2014-09-29T08:03:00.000
1
0
0
0
javascript,python,django,node.js,evercookie
26,095,512
1
true
1
0
You will have to break down your application, as it is a humongous task to create something like GA. You will have to track many user activities (clicks, time spent, etc.), so you can do that in plain JS or use a lightweight cross-platform library (Angular.js) that can make your life a little easier. Since you will have to send large sets of traced data to your database with minimum latency, use Node.js in this scenario. A simple Ajax call would also work, but it would be very slow. Now comes your database. Prefer NoSQL since it suits your requirement of unstructured data, preferably MongoDB, which can help you with its own mapReduce and large storage capacity. Since there will be a lot of calculation involved, you can use your Python knowledge, which can help you process data a lot faster. You can use other languages as well (e.g. Go). Your processed data and results can then be stored in Redis (which acts as a caching layer). You can use a sophisticated graphics library like d3.js or Highcharts.js for displaying graphical data on the client side. There are a lot of factors that can be involved; this is just a very basic outline of what you could do.
1
0
0
I want to create my own application for monitoring traffic of my website without using any third party tools like google analytics. In which I want to log screenshots, user details, page details and cookies. So what technology should I opt so as to achieve this goal and which will be best suited and what work flow should I follow. I've never done this kind of work previously so I'm new to this. Any help would be greatly appreciated. The technologies I know are : javascript, nodejs, django(python).
Want to create a new application for website traffic monitoring ( analysis )
1.2
0
0
153
26,096,209
2014-09-29T09:06:00.000
3
0
1
0
python,scipy
26,099,751
1
false
0
0
nmax refers to the maximum number of internal steps that the solver will take. The default is 500. You can change it with the nsteps argument of the set_integrator method. E.g. ode(f).set_integrator('dopri5', nsteps=1000) (The Fortran code calls this NMAX, and apparently the Fortran name was copied to the error message in the python code for the "dopri5" solver. In the ode class API, all the solvers ("dopri5", "vode", "lsoda", etc) consistently call this solver parameter nsteps, so scipy should change the error message used in the python code to say nsteps.)
1
3
1
While using scipy 0.13.0, ode(f).set_integrator('dopri5'), I get the error message - larger nmax is needed I looked for nmax in the ode.py but I can't see the variable. I guess that the number call for integration exceeds the allowed default value. How can I increase the nmax value?
python scipy ode dopri5 'larger nmax needed'
0.53705
0
0
2,600
26,096,607
2014-09-29T09:27:00.000
2
0
1
0
python,maya,pymel
26,104,345
1
true
0
0
@beardedBerry's solution usually works; the main likely problem is that some kinds of history will screw up vert colors when the mesh is split. You can use an extra UV channel in the same fashion, the UV's tend to be more survivable then the vertex colors, although (rarely) they too can go bad. Sigh. If you want an analytical method, you can do it more or less like this: Grab every vertex's world space position. Convert these to a hash value (you can probably just use the hash of the xyz tuple for the vert. Loop through all the faces in the complete object, mapping faces to the 3 hashes for its vertices (you'll want to sort the 3 hashes to make sure they are in the same order) Split the object Repeat the process in 1 and 2 Compare the dictionaries: each new object will be a subset of the original, and you'll be able to say 'face 7 in object 3 is face 250 in the original'
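The hashing idea can be sketched in plain Python, leaving Maya out; the sorted hashes of a face's vertex positions act as an order-independent key, so the same face gets the same key before and after the mesh is split (coordinates here are made up):

```python
# A face is keyed by the sorted hashes of its vertices' world-space
# positions, so the key survives vertex re-ordering and mesh splitting.
def face_key(face_verts):
    # face_verts: iterable of (x, y, z) position tuples for one face
    return tuple(sorted(hash(v) for v in face_verts))

original_faces = {
    0: [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    1: [(1, 0, 0), (1, 1, 0), (0, 1, 0)],
}
key_to_face = {face_key(v): f for f, v in original_faces.items()}

# After separating, a piece carries the same positions, maybe reordered:
piece_face = [(0, 1, 0), (1, 0, 0), (0, 0, 0)]
print("face %d in the original" % key_to_face[face_key(piece_face)])
```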
1
0
0
So I have a list of faces from a single mesh. That mesh (let's call it "A") is the result of combining an unknown number of meshes (you can assume that the list of faces equals one or more whole meshes that make up "A"). I then use separate on the mesh, which results in a lot of separate meshes. What would be the best way to know which of these meshes corresponds to the original set of faces?
How to identify the resulting meshes after separating an object
1.2
0
0
205
26,098,710
2014-09-29T11:21:00.000
12
0
0
0
python,pandas,csv
54,617,685
9
false
0
0
The solution can be improved as data.rename(columns={0: 'new column name'}, inplace=True) when the file is read with header=None, since then the first column's label really is the integer 0 and there is no need to use 'Unnamed: 0'. Note, though, that rename matches column labels, not positions, so with a normal header row the key must be the actual label, which is 'Unnamed: 0' in this case.
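A small sketch of the label-vs-position point, using an in-memory CSV whose first header cell is empty (the other column names are made up):

```python
import io
import pandas as pd

# A CSV whose first header cell is empty comes in as 'Unnamed: 0',
# and rename() matches that label, not the position.
df = pd.read_csv(io.StringIO(",a,b\n0,1,2\n1,3,4\n"))
print(df.columns.tolist())  # ['Unnamed: 0', 'a', 'b']

df = df.rename(columns={"Unnamed: 0": "id"})
print(df.columns.tolist())  # ['id', 'a', 'b']
```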
2
53
1
My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''.
Rename unnamed column pandas dataframe
1
0
0
116,651
26,098,710
2014-09-29T11:21:00.000
5
0
0
0
python,pandas,csv
58,729,636
9
false
0
0
Try the below code: df.columns = ['A', 'B', 'C', 'D']
2
53
1
My csv file has no column name for the first column, and I want to rename it. Usually, I would do data.rename(columns={'oldname':'newname'}, inplace=True), but there is no name in the csv file, just ''.
Rename unnamed column pandas dataframe
0.110656
0
0
116,651
26,102,680
2014-09-29T14:49:00.000
1
0
0
0
python,flask,windows-firewall
28,746,633
2
false
1
0
When running as a service the program running the service is not python.exe but rather pythonservice.exe. You will have to add that to the allowed programs in the Windows Firewall Setup. In my case it is located under C:\Python33\Lib\site-packages\win32\pythonservice.exe.
1
3
0
We have developed an app in python and are using flask to expose its api via http requests. all this on WINDOWS - Everything works ok and we have tested in-house with no problems and we are now trying to use the app in the real world - we have gotten our IT dept to give us a public facing ip/port address (forwarded through a firewall ??) and now we can't access the server/app at all. After a bit of digging, we've found the problem has something to do with the Windows Firewall configuration, when its on it won't work, when its off everything is fine. the flask app code is run like so: app.run(debug=False, host='0.0.0.0', port=8080) the port 8080 is setup in the Firewall Exceptions as is python.exe in the Program Exceptions netstat -a shows the app is sitting there awaiting connection. If I try to access the site though chrome I get the error: ERR_CONNECTION_TIMED_OUT. With the firewall on i'm never seeing any "hits" come through to the app at all. Is there some other configuration I'm missing? Many thanks.
how do i configure python/flask for public access with windows firewall
0.099668
0
0
8,285
26,103,566
2014-09-29T15:34:00.000
1
0
1
0
python,input,time,format,strptime
26,103,693
1
true
0
0
This is where the try statement is useful as it allows you to catch exceptions and deal with them. I am guessing that ValueError is the exception being thrown. try: time.strptime(whatever) except ValueError: # deal with it
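A minimal sketch of that pattern; the "%Y-%m-%d" format string is an assumption:

```python
import time

# Wrap strptime() in try/except and report whether the input matched
# the expected format; on failure the caller can re-prompt the user.
def parse_or_none(text, fmt="%Y-%m-%d"):
    try:
        return time.strptime(text, fmt)
    except ValueError:
        return None

print(parse_or_none("2014-09-29") is not None)  # True
print(parse_or_none("not a date") is not None)  # False
```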
1
0
0
In my program I take user input and then parse it using strptime(). This works mostly, but will throw an error if the input isn't formatted correctly. Is there a way I can check the format before parsing it so I can ask the user to re-enter?
Check user input formatting in python for strptime()
1.2
0
0
294
26,104,434
2014-09-29T16:21:00.000
0
0
0
0
python,regression,weka,raster,landsat
26,111,537
2
false
0
0
I have had some experience using LandSat Data for the prediction of environmental properties of soil, which seems to be somewhat related to the problem that you have described above. Although I developed my own models at the time, I could describe the general process that I went through in order to map the predicted data. For the training data, I was able to extract the LandSat values (in addition to other properties) for the spatial points where known soil samples were taken. This way, I could use the LandSat data as inputs for predicting the environmental data. A part of this data would also be reserved for testing to confirm that the trained models were not overfitting to training data and that it predicted the outputs well. Once this process was completed, it would be possible to map the desired area by getting the spatial information at each point of the desired area (matching the resolution of the desired image). From there, you should be able to input these LandSat factors into the model for prediction and the output used to map the predicted depth. You could likely just use Weka in this case to predict all of the cases, then use another tool to build the map from your estimates. I believe I whipped up some code long ago to extract each of my required factors in ArcGIS, but it's been a while since I did this. There should be some good tutorials out there that could help you in that direction. I hope this helps in your particular situation.
2
0
1
I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do: My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each representing the reflectance values in different Landsat bands, say [B1.tif, B2.tif, ..., B7.tif] that I want to use as my independent variables to predict lake depth. For my training data, I have a shapefile of ~6000 points of known lake depth. To create a tree, I extracted the corresponding reflectance values for each of those points, then exported that to a table. I then used that table in weka, a machine-learning software, to create a 600-branch regression tree that would predict depth values based on the set of reflectance values. But because the tree is so large, I can't write it in python manually. I ran into the python-weka-wrapper module so I can use weka in python, but have gotten stuck with the whole raster part. Since my data has an extra dimension (if converted to array, each independent variable is actually a set of ncolumns x nrows values instead of just a row of values, like in all of the examples), I don't know if it can do what I want. In all the examples for the weka-python-wrapper, I can't find one that deals with spatial data, and I think this is what's throwing me off. To clarify, I want to use the training data (which is a point shapefile/table right now but can- if necessary- be converted into a raster of the same size as the reflectance rasters, with no data in all cells except for the few points I have known depth data at), to build a regression tree that will use the reflectance rasters to predict depth. Then I want to apply that tree to the same set of reflectance rasters, in order to obtain a raster of predicted depth values everywhere. I realize this is confusing and I may not be doing the best job at explaining. 
I am open to other options besides just trying to implement weka in python, such as sklearn, as long as they are open source. My question is, can what I described be done? I'm pretty sure it can, as it's very similar to image classification, with the exception that the target values (depth) are continuous and not discrete classes but so far I have failed. If so, what is the best/most straight-forward method and/or are there any examples that might help? Thanks
method for implementing regression tree on raster data - python
0
0
0
709
26,104,434
2014-09-29T16:21:00.000
0
0
0
0
python,regression,weka,raster,landsat
26,123,179
2
false
0
0
It sounds like you are not using any spatial information to build your tree (such as information on neighboring pixels), just reflectance. So, you can apply your decision tree to the pixels as if the pixels were all in a one-dimensional list or array. A 600-branch tree for a 6000 point training data file seems like it may be overfit. Consider putting in an option that requires the tree to stop splitting when there are fewer than N points at a node or something similar. There may be a pruning factor that can be set as well. You can test different settings till you find the one that gives you the best statistics from cross-validation or a held-out test set.
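To illustrate treating the pixels "as if they were all in a one-dimensional list": a sketch (with tiny made-up bands) of flattening equally-sized band rasters into per-pixel feature rows that any tabular learner (weka, sklearn) can consume; predicted depths can later be reshaped back to nrows x ncols to build the output raster.

```python
def rasters_to_rows(bands):
    """Flatten a list of equally-sized 2D band rasters into per-pixel
    feature rows: row i holds pixel i's value from every band."""
    nrows, ncols = len(bands[0]), len(bands[0][0])
    rows = []
    for r in range(nrows):
        for c in range(ncols):
            rows.append([band[r][c] for band in bands])
    return rows

# Two toy 2x2 "bands" -> 4 pixel rows with 2 features each
b1 = [[0.1, 0.2], [0.3, 0.4]]
b2 = [[10, 20], [30, 40]]
rows = rasters_to_rows([b1, b2])
```

The same row order (row-major) is what lets you map predictions back to pixel positions afterwards.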
2
0
1
I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do: My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each representing the reflectance values in different Landsat bands, say [B1.tif, B2.tif, ..., B7.tif] that I want to use as my independent variables to predict lake depth. For my training data, I have a shapefile of ~6000 points of known lake depth. To create a tree, I extracted the corresponding reflectance values for each of those points, then exported that to a table. I then used that table in weka, a machine-learning software, to create a 600-branch regression tree that would predict depth values based on the set of reflectance values. But because the tree is so large, I can't write it in python manually. I ran into the python-weka-wrapper module so I can use weka in python, but have gotten stuck with the whole raster part. Since my data has an extra dimension (if converted to array, each independent variable is actually a set of ncolumns x nrows values instead of just a row of values, like in all of the examples), I don't know if it can do what I want. In all the examples for the weka-python-wrapper, I can't find one that deals with spatial data, and I think this is what's throwing me off. To clarify, I want to use the training data (which is a point shapefile/table right now but can- if necessary- be converted into a raster of the same size as the reflectance rasters, with no data in all cells except for the few points I have known depth data at), to build a regression tree that will use the reflectance rasters to predict depth. Then I want to apply that tree to the same set of reflectance rasters, in order to obtain a raster of predicted depth values everywhere. I realize this is confusing and I may not be doing the best job at explaining. 
I am open to other options besides just trying to implement weka in python, such as sklearn, as long as they are open source. My question is, can what I described be done? I'm pretty sure it can, as it's very similar to image classification, with the exception that the target values (depth) are continuous and not discrete classes but so far I have failed. If so, what is the best/most straight-forward method and/or are there any examples that might help? Thanks
method for implementing regression tree on raster data - python
0
0
0
709
26,106,430
2014-09-29T18:27:00.000
0
0
0
1
python,windows,python-2.7
26,106,567
2
false
0
0
Check whether you are the specific user and, if not, do not run the Python script. This can be done, at least in the Unix way, by checking the uid; on Windows you can use wmic useraccount get name,sid to get a user's security identifier, or in batch use %USERNAME%.
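A hedged sketch of that guard-clause check (the expected account name is an assumption; on Unix you could compare os.getuid() instead):

```python
import getpass

def running_as(expected_user):
    """Return True if the script is running under `expected_user`'s
    login name (works on both Unix and Windows)."""
    return getpass.getuser() == expected_user

# Guard clause at the top of the script, for illustration:
# if not running_as("svc_account"):
#     raise SystemExit("Refusing to run: wrong user")
```

Note this only refuses to run as the wrong user; actually switching users on Windows requires launching the process under different credentials (e.g. via the "runas" mechanism), which Python cannot do in-process the way os.setuid does.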
1
2
0
I have a Python script that does some file processing and needs to run as a specific user. It seems that on Unix this can be done by using os.setuid. How do I do that in Python on Windows?
Execute script as a specific user
0
0
0
1,406
26,107,730
2014-09-29T19:49:00.000
0
0
1
1
python,osx-mavericks,enthought,canopy
26,110,135
2
true
0
0
Quit Canopy. Delete the file ~/.canopy/locations.cfg (you can do this by opening a Terminal window and typing: rm ~/.canopy/locations.cfg). Restart Canopy, and you'll be prompted again for your environment path.
2
0
0
I want to change the default environment (the folder where my Canopy files are stored), which I previously set to 'Documents' folder. But now I want to change the folder. If I just delete the folders, Canopy creates them again automatically, indicating that somewhere inside its logs it has saved the default location address, and I want to change this default location address.
Changing default environment (default folder) in Canopy on Mac OSX
1.2
0
0
1,380
26,107,730
2014-09-29T19:49:00.000
1
0
1
1
python,osx-mavericks,enthought,canopy
33,323,045
2
false
0
0
You can change the root path for the file browser by modifying the variable "root_paths" in the file ./canopy/preferences.ini, under the section [file_browser].
2
0
0
I want to change the default environment (the folder where my Canopy files are stored), which I previously set to 'Documents' folder. But now I want to change the folder. If I just delete the folders, Canopy creates them again automatically, indicating that somewhere inside its logs it has saved the default location address, and I want to change this default location address.
Changing default environment (default folder) in Canopy on Mac OSX
0.099668
0
0
1,380
26,108,160
2014-09-29T20:19:00.000
0
0
0
0
python,mysql,sql,csv
26,108,522
2
true
0
0
The csv module can easily give you the column names from the first line, and then the values from the other ones. The hard part will be to guess the correct column types. When you load a csv file into an Excel worksheet, you only have a few types: numeric, string, date. In a database like MySQL, you can define the size of string columns, and you can give the table a primary key and possibly other indexes. You will not be able to guess that part automatically from a csv file. At its simplest, you can treat all columns as varchar(255). It is really uncommon to have fields in a csv file that do not fit in 255 characters. If you want something more clever, you will have to scan the file twice: the first time to check the maximum size for each column, and at the end you could take the minimum power of 2 greater than that. The next step would be to check whether any column contains only integers or floating point values. It begins to be harder to do that automatically, because the representation of floating point values may be different depending on the locale. For example 12.51 in an English locale would be 12,51 in a French locale. But Python can give you the locale. The hardest thing would be possible date or datetime fields, because there are many possible formats, either purely numeric (dd/mm/yyyy or mm/dd/yy) or using plain text (Monday, 29th of September). My advice would be to define a default mode, for example all strings, or just integers and strings, and use configuration parameters or even a configuration file to finely tune conversion per column. For the reading part, the csv module will give you everything you need.
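A minimal sketch of the two-pass idea with the csv module. The type guessing here is deliberately simple (int, float, else VARCHAR(255), ignoring locale and date detection as warned above), and the table name and sample data are made up:

```python
import csv
import io

def guess_sql_type(values):
    """Pick the narrowest of three SQL types that fits every value."""
    try:
        for v in values:
            int(v)
        return "INT"
    except ValueError:
        pass
    try:
        for v in values:
            float(v)
        return "DOUBLE"
    except ValueError:
        return "VARCHAR(255)"

def create_table_sql(csv_text, table="imported"):
    """Build a CREATE TABLE statement using the header row as column names."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = []
    for i, name in enumerate(header):
        col_type = guess_sql_type([r[i] for r in data])
        cols.append("`%s` %s" % (name, col_type))
    return "CREATE TABLE `%s` (%s);" % (table, ", ".join(cols))

sql = create_table_sql("id,price,city\n1,12.5,Paris\n2,7,Lyon\n")
```

The generated statement can then be executed through whatever MySQL driver you use, followed by INSERTs built from the same parsed rows.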
1
0
0
I have CSV files that I want to make database tables from in mysql. I've searched all over and can't find anything on how to use the header as the column names for the table. I suppose this must be possible. In other words, when creating a new table in MySQL do you really have to define all the columns, their names, their types etc in advance. It would be great if MySQL could do something like Office Access where it converts to the corresponding type depending on how the value looks. I know this is maybe a too broadly defined question, but any pointers in this matter would be helpful. I am learning Python too, so if it can be done through a python script that would be great too. Thank you very much.
create database by load a csv files using the header as columnnames (and add a column that has the filename as a name)
1.2
1
0
3,067
26,109,520
2014-09-29T21:52:00.000
2
0
1
0
rethinkdb,rethinkdb-python
26,109,631
1
true
0
0
The process that appends new entries to your json file should probably also run a query to insert the same entries into RethinkDB. Or you can have a cron job that gets the last entry saved from RethinkDB, reads your json file for new entries, and inserts the new entries.
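A stdlib sketch of the cron-job variant: remember the byte offset reached on the previous run, and on each run parse only the lines appended since then. The actual RethinkDB insert is left as a comment since it depends on your connection setup, and the demo log content is made up:

```python
import json
import os
import tempfile

def read_new_entries(path, last_offset):
    """Return (entries, new_offset): the JSON objects on lines
    appended after byte offset `last_offset`."""
    with open(path, "rb") as f:
        f.seek(last_offset)
        chunk = f.read()
        new_offset = f.tell()
    lines = chunk.decode("utf-8").splitlines()
    entries = [json.loads(line) for line in lines if line.strip()]
    return entries, new_offset

# Demo against a throwaway log file
path = os.path.join(tempfile.mkdtemp(), "log.json")
with open(path, "w") as f:
    f.write('{"event": "start"}\n{"event": "tick"}\n')
first, offset = read_new_entries(path, 0)        # everything so far
with open(path, "a") as f:
    f.write('{"event": "stop"}\n')
second, offset = read_new_entries(path, offset)  # only the new line
```

A real cron job would persist `offset` between runs (in a small state file, or in RethinkDB itself) and hand the new entries to something like r.table('logs').insert(entries).run(conn) — a hypothetical call, adjust to your driver setup.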
1
1
0
I have a log file by the name log.json. A simple insert into RethinkDB works perfectly. Now this json file gets updated every second; how do I make sure that RethinkDB gets the new data automatically? Is there a way to achieve this, or do I simply have to use the API to insert into the db as well as log to the file if I want to? Thanks.
Insert json log files in rethinkdb?
1.2
0
0
167
26,111,811
2014-09-30T02:27:00.000
4
0
1
0
python,c,multithreading,user-interface,graph
26,111,839
2
false
0
0
igraph is good for graph visualisation. If you mean plotting points, use matplotlib. The threading module is built in to the Python standard library.
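The alternating-flag design described in the question can be sketched with the standard threading module; threading.Event replaces polling a raw bool, so each thread blocks until it is flagged and then flags the other. Worker names and the round count are illustrative:

```python
import threading

data = []
turn_a = threading.Event()
turn_b = threading.Event()
turn_a.set()  # thread A takes the first turn

def worker(name, my_turn, their_turn, rounds=3):
    for _ in range(rounds):
        my_turn.wait()       # block until it is our turn
        my_turn.clear()
        data.append(name)    # "operate on the shared data"
        their_turn.set()     # hand the turn to the other thread

a = threading.Thread(target=worker, args=("A", turn_a, turn_b))
b = threading.Thread(target=worker, args=("B", turn_b, turn_a))
a.start(); b.start()
a.join(); b.join()
```

Because each thread clears its own event before flagging the other, the appends are strictly interleaved A, B, A, B, ... regardless of scheduling.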
1
3
0
I am well experienced in the C languages, but for a project that I have been assigned in one of my classes, I am considering using it as a project to introduce me into Python. I have never used Python before, but I have done some research, and it is very clear that GUI design and Visualizations come very easy to Python; however, I have been unable to find any libraries on threads. My project is going to have two threads checking an object/class to see if a bool is set so that it can operate on some data and once it's turn is finished it will flag the other thread to operate on that same data. There will be several hundred logged data points on a graph that I will need to display graphically. My questions are the following: Does Python have specific libraries that could help me display graphs with hundreds of points easily? If so, which would you recommend for a beginner in Python? Which thread libraries are best for beginners in Python? Thank you all for any help in this matter.
Looking into getting into Python specifically for a project that involves threads and graphs
0.379949
0
0
53
26,112,179
2014-09-30T03:11:00.000
0
1
0
0
python,ssh,telnet,forwarding
44,082,308
2
false
0
0
Use tunneling by setting up an SSH session that tunnels the Telnet traffic. So ssh from localhost to the server with the option -L xxx:deviceip:23 (xxx is a free port on localhost, deviceip is the IP address of "device", 23 is the telnet port). Then open a telnet session from your localhost to localhost:xxx. SSH will tunnel it to the device, and the response is sent back to you.
2
2
0
Setup: localhost (me) --SSH--> server <--telnet--> device. I have a device that is connected to a server computer, and I want to talk to the device by ssh'ing into the server computer and sending telnet commands to my device. How do I set things up in Python to make this happen?
How to run telnet over ssh in python
0
0
1
2,315
26,112,179
2014-09-30T03:11:00.000
3
1
0
0
python,ssh,telnet,forwarding
26,112,392
2
true
0
0
You can use Python's paramiko package to launch a program on the server via ssh. That program would then in turn receive commands (perhaps via stdin) and return results (via stdout) from the controlling program. So basically you'll use paramiko.SSHClient to connect to the server and run a second Python program which itself uses e.g. telnetlib to talk to the device.
2
2
0
Setup: localhost (me) --SSH--> server <--telnet--> device. I have a device that is connected to a server computer, and I want to talk to the device by ssh'ing into the server computer and sending telnet commands to my device. How do I set things up in Python to make this happen?
How to run telnet over ssh in python
1.2
0
1
2,315
26,113,419
2014-09-30T05:33:00.000
2
0
0
1
python,sockets,udp,padding
26,326,206
1
true
0
0
This is pretty much impossible without playing around with the Linux drivers. This isn't the best answer, but it should guide anyone else looking to do this in the right direction. Type sudo ethtool -d eth0 to see if your driver has short-packet padding enabled.
1
0
0
I am trying to remove null padding from UDP packets sent from a Linux computer. Currently it pads the size of the packet to 60 bytes. I am constructing a raw socket using AF_PACKET and SOCK_RAW. I created everything from the ethernet frame header, ip header (in which I specify a packet size of less than 60) and the udp packet itself. I send over a local network and the observed packet in wireshark has null padding. Any advice on how to overcome this issue?
Removing padding from UDP packets in python (Linux)
1.2
0
1
1,669
26,116,350
2014-09-30T08:39:00.000
5
0
0
0
python,text,plot,formatting,pyqtgraph
26,129,313
2
true
0
1
It is not documented, but the method you want is TextItem.setHtml().
1
1
0
When creating a TextItem to be added to a plotItem in PyQtGraph, I know it is possible to format the text using html code, however I was wondering how to format the text (i.e. change font size) when updating the text through TextItem.setText()? or do I need to destroy/re-create a TextItem?
How to set font size using TextItem.setText() in PyQtGraph?
1.2
0
0
1,991
26,121,676
2014-09-30T13:06:00.000
0
1
0
0
php,python
26,121,763
1
true
0
0
You can use the screen command to run the Python script, then connect to the screen session any time later from PHP with expect.
1
0
0
I have a problem in which I have to pass input to a Python script that must not terminate. Therefore I need to pass arguments to the Python script while it is running. The Python script will also not be executed in the same PHP script, so it cannot incorporate any sort of expect script, since expect requires that you spawn the process in the same script! I have full access to the PHP and Python scripts, so I can modify them at will. I would also prefer to avoid hacky ways like having PHP write to a text file and the Python script read the commands from the file. Is there any less hacky way to do this?
PHP Passing Input to Runing Python Script
1.2
0
0
53
26,123,939
2014-09-30T14:56:00.000
2
0
1
0
python,django
26,124,013
2
false
0
0
There's no huge difference, except for two things: you have to give your manage.py file execute permission (chmod +x manage.py), and the file has to contain a shebang line, for example #!/usr/bin/python, at the top.
1
4
0
I'm wondering if there is any difference between using the following constructions: python manage.py [something] or ./manage.py [something]. Maybe there is a preferred command for one statement like runserver and another for syncdb, for example?
Is there any difference in using: python manage.py [something] or ./manage.py [something]
0.197375
0
0
570
26,128,828
2014-09-30T19:43:00.000
0
0
0
0
python-3.x,pyqt,pyside
26,194,380
2
false
0
1
The default vertical size policy of QLineEdit is QSizePolicy.Fixed. That means it will always have the same height, and other widgets may take the available space in the layout. So you might have to adjust the size policies of your widgets, as Fenisko already demonstrated. However, if you have two or more expanding widgets in your layout, all will have equal size. You can use QGridLayout.setRowStretch(int row, int stretch) and QGridLayout.setColumnStretch(int column, int stretch) of your layout to adjust the proportions of rows and columns. Alternatively, use QSizePolicy.setVerticalStretch(int stretch) and QSizePolicy.setHorizontalStretch(int stretch) on your widgets' size policies.
1
1
0
If I have a QGridLayout and stack two widgets on top of each other, then they take up 50% of the layout each. However, if one of the widgets is a QLineEdit widget, then that widget takes up a much smaller portion of the layout than 50%. How do I add two widgets to a QGridLayout, then set the initial portions to something other than 50%/50%?
What's the proper way to resize widgets in a layout?
0
0
0
3,186
26,129,755
2014-09-30T20:43:00.000
1
0
1
0
python,python-2.7,pycharm,pyinstaller
26,133,713
1
false
0
0
By default, PyInstaller generates a one-folder bundle containing an executable; it also creates this executable with a console window for standard input/output. I'm just guessing, but your script doesn't have a GUI, right? In any case, the better way to work is to create a one-file bundle: pyinstaller -F myfile.py. In this way, you only have to execute one file. If, after generating the bundle, the application behaves in the same way, adding the -d option will help you find out what is going wrong with your generated executable. Also, running your application from a pre-existing CMD window is recommended, since those windows do not close themselves after your application exits.
1
0
0
I am trying to use PyInstaller to generate an .exe from a Python 2.7 file. In the CMD window, I run pyinstaller myfile.py. It creates a build and a dist folder, both of which have a number of files, including an application file. When I click either application file, a CMD box pops up and very quickly disappears, despite my file requiring input from the user. What am I missing here? Which file can I distribute to be a usable copy?
PyInstaller Issue with Application File Python 2.7
0.197375
0
0
697
26,133,396
2014-10-01T02:53:00.000
0
0
0
0
python,selenium
26,134,128
1
false
0
0
Let's pretend that it isn't a human user who enters the information and submits it, but that it is all done automatically with some delay. How would you handle it then? I guess that you would wait for a certain element to appear on the page, or something similar. The same procedure works with a human user; only the wait times may be a bit longer.
1
0
0
I want to write a simple Python script using Selenium to scrape information from a website, but in collaboration with a (standing by) human user who at some point will provide information in the browser. How do I get the following behaviour from the script: wait until the human user enters information (such as login details) and submits it, then (and only then) do something with the page loaded after the human's submission.
Selenium to interact with human user to provide login information
0
0
1
151
26,135,477
2014-10-01T06:38:00.000
1
0
1
0
python,xmodmap
26,135,997
1
false
0
0
Yes, it is possible to implement xmodmap in pure Python by using a Python X11 client implementation such as python-xlib and re-implementing the functionality of xmodmap itself (i.e. updating the X11 server's keyboard mappings with all the appropriate X11 functions, e.g. XSetModifierMapping, to name but one). The xmodmap source code is available, so 'all' you need to do is translate it into equivalent Python code. However, the real question is why go to all that trouble when X11 provides a utility written to do just that (xmodmap)? There is little point in re-inventing the wheel when the easiest and most reliable approach is to run xmodmap.
1
0
0
Is it possible to do the magic which xmodmap does with python? Use Case: I want to add a new layer to the current keyboard layout which gets entered with the CapsLock key. I want to avoid to call xmodmap via subprocess.
xmodmap in Python
0.197375
0
0
333
26,136,042
2014-10-01T07:18:00.000
0
0
1
0
python,path-finding,breadth-first-search
26,190,251
1
true
0
0
I found the answer. The split lists were a bad idea; I used linked nodes instead. You can create a class called "Node" and then connect the nodes yourself. One of the properties of the node is its parent node. With the parent node, you can track all the way back up the tree. Look into linked lists.
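A minimal sketch of the parent-pointer idea on an adjacency-dict graph (the graph itself is made up for illustration): each node records which node enqueued it, so once GOAL is dequeued the path is rebuilt by walking the parent links back to the start, with no per-branch action lists needed.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search; each node remembers its parent so the
    full path can be rebuilt by walking back up once goal is found."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk parent links back to start
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in parents:    # doubles as the visited set
                parents[nbr] = node
                queue.append(nbr)
    return None  # goal unreachable

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["GOAL"]}
```

The parents dict plays the role of the Node.parent attribute described above; an explicit Node class works the same way.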
1
0
0
So I understand breadth first search, but I am having trouble with the implementation. When there are multiple branches, you are expanding the edges or successors of each branch on every iteration of the function. But when one of your nodes returns GOAL, how do you get all of the previous moves to that location? Do you create a list of actions you took, and if there is a fork in the path do you create two new lists with the actions you took to get to the branch, plus the moves you will take while on the branch? This is a pain in the butt to implement, especially in Python, to which I'm fairly new. Is there a better way to organize this and get the path I need?
Breadth first search how to find a particular path out of several branches
1.2
0
0
111
26,136,643
2014-10-01T07:53:00.000
1
0
0
1
python,google-app-engine,cmd
26,136,777
1
true
1
0
dev_appserver doesn't "run a file" at all. It launches the GAE dev environment, and routes requests using the paths defined in app.yaml just like the production version. If you need to route your requests to a specific Python file, you should define that in app.yaml.
1
0
0
In order to start the Google App Engine dev server through the cmd I use dev_appserver.py --port=8080, but I can't figure out how it picks a file to run from the current directory. My question is: is there an argument specifying a file for the server to run, out of all possible files in the directory?
use dev_appserver.py for specific file in a directory
1.2
0
0
52
26,136,844
2014-10-01T08:06:00.000
1
0
0
1
java,python,weblogic,weblogic-10.x,wlst
26,177,888
1
true
1
0
Try again without the options, just stopApplication(appName). This is what the admin console does: it kills all existing sessions and drags the application to the prepared state. You are trying to stop it gracefully, hence the delay. You had mentioned, "when one of these applications fails to deploy for X/Y reason, I just want to force stop this application and pass to the other one." If an application fails to deploy, you should not have to stop it. If the app runs, it's successful, correct?
1
1
0
I am currently working with WebLogic, and I deploy several applications on my WebLogic server. Sadly, when one of these applications fails to deploy for X/Y reason, I just want to force stop this application and pass to the other one. I've already looked into the WLST doc and I can't find what I am searching for. Here is the function I use: stopApplication(applicationName, gracefulProductionToAdmin="true", gracefulIgnoreSessions="true"). It takes about 5 minutes to stop an application this way. When I stop an application through the Administration Console (force stop, actually) it takes about 5 seconds. So is there any way to force stop an application through a WLST script? Thanks
WLST - force stop application
1.2
0
0
3,907
26,137,036
2014-10-01T08:19:00.000
2
1
0
0
python,dns,dnssec
26,139,060
2
false
0
0
To see if a particular request is protected, look at the DO flag in the request packet. Whatever language and library you use to interface to DNS should have an accessor for it (it may be called something else, like "dnssec"). The first answer is correct but incomplete if you want to know if a certain zone is protected. The described procedure will tell you if the zone's own data is signed. In order to check that the delegation to the zone is protected, you need to ask the parent zone's name servers for a (correctly signed) DS record for the zone you're interested in.
1
8
0
As the title says I want to programmatically check if a DNS response for a domain are protected with DNSSEC. How could I do this? It would be great, if there is a pythonic solution for this. UPDATE: changed request to response, sorry for the confusion
Programmatically check if domains are DNSSEC protected
0.197375
0
0
6,398
26,138,460
2014-10-01T09:43:00.000
0
0
1
0
python,multithreading,pyqt,pyqt4,usrp
26,153,975
2
false
0
1
There are several ways to do this, but basically you can either: (1) break up your waterfall sink into chunks of work which the GUI can execute periodically. For example, instead of continuously updating the waterfall sink in a function the GUI calls, do only a "short" update (one "time step"), have the function return right after, and have it called periodically via a QTimer. Or (2) make the waterfall sink execute in a separate thread by using a QObject instantiated in a QThread instance, and make the sink function emit a signal at a regular interval, say at every "time step" of the waterfall update.
1
0
0
I am trying to make a GUI in Python using PyQt4 which incorporates a waterfall sink connected to a USRP. The problem is that the data is shown in the waterfall sink continuously, which makes the GUI freeze, so I cannot use the other buttons in the meantime. I have looked at using threads, but so far what I understood is that in threads I can only put functions that give a result at the end, not functions that produce results continuously that I want to see in the main GUI. Any idea how to see the continuous results from the waterfall sink without freezing the main GUI?
Using threads with python pyqt?
0
0
0
352
26,142,468
2014-10-01T13:23:00.000
0
0
0
0
python,django,geodjango,django-countries
26,143,098
2
false
1
0
Personally I would set a generic flag as a fallback (the Django flag, perhaps?) if the user does not set their country; to be fair, there must be a reason if a user doesn't. Or you can simply make that country field mandatory. Assuming that a person is in a given country based on their IP just doesn't sound right: imagine you are working remotely via VPN or behind a proxy server... you get the idea. It makes more sense if you just want to redirect people to a sub-domain (different-language) site based on IP.
1
0
0
I am using django-countries to get a flag from the CountryField and it works. However, some users don't have a country set on their profiles, but I still have to display a flag. My idea was to get a country from IP and then get a flag from the country or anything similar to that. Unfortunately, this is not happening in the view, so I don't have the request. I can set the IP in the template and then, in the backend, check if no flag was found (if a country is not provided) and then somehow use the IP that I got from the template to get a flag. Is something like that possible? If needed, GeoDjango is already included in the project. Perhaps there is a totally different way of doing this, but I am new to Django and still learning. Thanks.
Python/Django get country flag from IP
0
0
0
1,719
26,146,512
2014-10-01T16:55:00.000
1
0
1
0
python,algorithm,dictionary
26,146,580
1
true
0
0
First of all, Python dicts are unsorted (they are hash tables), so when you're talking about sorting a dict, you're really talking about a different data structure. If you only need to do this once, then O(n) is as good as it can be since you need to examine each of the n keys. Use a dictionary comprehension, and you're done. If you need to do this repeatedly, there might be some mileage in moving to a different data structure. Whether or not it's worth it depends on how large the dict is, how often it changes, how many times you need to do the search, how many keys each search returns etc. To be more specific we'll need more information. Last but not least, depending on the exact circumstances, moving from O(n) to, say, O(log n) is not always a win. For example, if n is small enough a linear search could well turn out to be the fastest. A good general strategy is to go with the simplest implementation and profile the code to see whether there's anything to be gained from optimizing it.
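A sketch of the O(n) approach 1 as a single comprehension (the sample dict and threshold are made up):

```python
def keys_above(d, k):
    """All keys whose value exceeds k: one pass over the dict, O(n)."""
    return [key for key, value in d.items() if value > k]

scores = {"a": 5, "b": 12, "c": 9, "d": 1}
```

If the same dict is queried many times between mutations, the alternative the answer weighs is to sort the items by value once (O(n log n)) and then bisect per query; for a single scan, the comprehension above is already optimal.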
1
2
0
I have an unsorted dictionary where value of a key is an integer. I want to get all keys where value is greater than some int k. approach 1: I can just easily loop over the dict and compare each value with int k. O(n) approach 2: I can sort but sorting a dictionary takes O(n log n) so approach 1 is better. approach 3: transform dict into something else? if so, to what? and is that worth it? I am curious if this can be done better than O(n).
Get all items in a dictionary where value meets a condition
1.2
0
0
471
26,155,448
2014-10-02T06:26:00.000
0
0
0
1
python,qt,anaconda
26,186,264
2
false
0
1
QT aggressively tries to find other QT installations on the system. It is likely finding the one installed by Linux. You likely can't remove this, as there are probably several programs that come with Linux that use QT, but is it possible to update it to the same version?
1
4
0
I have installed Anaconda 2.0.1 on a KDE desktop. When I run python and try to see all installed modules, I get the message "Cannot mix incompatible Qt library (version 0x40801) with this library (version 0x40805)". Can I fix the problem if I uninstall the Qt library (version 0x40801)? How do I do that? Or if someone has another suggestion, please help me. Thanks very much
Cannot mix incompatible Qt library (version 0x40801) with this library (version 0x40805)
0
0
0
3,076
26,157,625
2014-10-02T09:07:00.000
1
0
0
0
mysql,django,python-2.7
26,158,170
1
false
1
0
It doesn't make sense to create a new database for each organization. Even if the number of customers or organizations grows to the hundreds or thousands, keeping data in a single database is your best option. Edit: Your original concern was that an increase in the number of organizations would impact performance. Well then, imagine if you had to create an entire database for each organization. You would have the same tables, indexes, views, etc replicated n-times. Each database would need system resources and disk space. Not to mention the code needed to make Django aware of the split. You would have to get data from n databases instead of just one table in one database. Also keep in mind that robust databases like PostgreSQL or MySQL are very capable of managing thousands or millions of records. They will do a better job out of the box than any code you may come up with. Just trust them with the data and if you notice a performance decline then you can find plenty of tips online to optimize them.
1
0
0
In my project there are two models, ORGANISATION and CUSTOMER. What I am doing is saving the organisation_id to the CUSTOMER table while adding a new customer to the organisation. But now I am worried about the performance of my project when the database becomes huge. So now I am planning to create a new database for every newly created organisation, and save all the information of the organisation in that organisation's database. But I don't know how to create a new database for every newly created organisation, and I'd like to know which method performs better. Please correct the question if it is not clear.
django multiple database for multiple organisation in a single project
0.197375
1
0
181
26,165,247
2014-10-02T16:29:00.000
1
0
0
0
python,openerp,openerp-7
26,207,355
2
false
1
0
Not sure I completely understand your question. A user may be able to do something in any company depending on their list of allowed companies, but a user can only ever do something in one company at a time. Any user can change their current company to one of the companies they are allowed, but when they do this, the company_id on the user record is changed, so if you browse res.users using the uid you will always get the current company of the user. The only exception I can think of is if you give the user a list of companies they are allowed to see and give them a button or checkbox to do something on that company. In this case, your screen will need to be backed by a model and you can check on it to see which company they chose, either by browsing to see which record has the checkbox set, or, if you put a button or action on a tree view, the method will get the ids of the records selected.
1
1
0
I need to create a function that returns a default value; this value is distinct for each company. The thing is that I can't use the uid, because the user can do this in any company, and I have no object to ask for ids because it's for a default field. Is there any way to get the current company without using ids or the uid? Thanks in advance.
Get the current company without ids or uid in Odoo / OpenERP
0.099668
0
0
5,993
26,166,024
2014-10-02T17:15:00.000
0
0
1
0
python,mysql,django,pillow
26,166,726
1
false
1
0
There's not really enough detail here to help. But one possibility is that you don't have the development package for python installed. If you are using Debian or Ubuntu, you can do sudo apt-get install python-dev to install it.
1
0
0
I've created python virtual environment, installed django using pip and now I would like to install Pillow and MySQL-python using pip but it fails during compile process. (starting with python.h no such file or directory) Has anyone tried intall some of these on 1and1 hosting ? Maybe compile it on different machine or other solution ?
Python - install Pillow and MySQL-python on 1and1 hosting
0
0
0
390
26,166,210
2014-10-02T17:28:00.000
1
0
1
0
python,ftp
32,535,437
2
false
0
0
It should be fairly simple to use LIST, MLSD and NLST to build a local index of the files on the FTP server, then use a regex to filter unwanted files from the index, and then download the remainder in a batch script.
1
1
0
I have an FTP server that hosts data files, where the date that the data is associated with is encoded into the file names. I want to write a process that can find and download all the files associated with a particular date. The complication is that different files use different encodings. (Unfortunately changing/standardising the names isn't an option.) The year can be four digits or two. The month can be two digits or three letters. Sometimes the day is represented, and the substring can be anywhere in the string. At the moment, I'm creating a list of all the files on the server, then using a regular expression to determine which files are relevant, and then downloading those files. Is it possible to condense the first two steps? That is, is there a way to get the server to return the list of files that match the expression? I'm using the Python ftplib if that makes any difference.
Download Files From FTP Server Using Regular Expression
0.099668
0
0
1,772
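A sketch of the two steps the answer above describes, with the hard-coded name list standing in for what ftplib's FTP.nlst() would return; the file names and the regex pattern here are invented for illustration:

```python
import re

def matching_files(names, pattern):
    """Keep only the names whose date encoding matches the regex."""
    rx = re.compile(pattern)
    return [n for n in names if rx.search(n)]

# In real use the list would come from ftplib, e.g.:
#   from ftplib import FTP
#   names = FTP("ftp.example.com").nlst()
names = ["data_2014Oct02.csv", "data_141002.csv", "data_20130901.csv"]

# Match 2 Oct 2014 in several encodings: four-digit year + month name,
# two-digit year + numeric month, or a plain YYYYMMDD string.
pattern = r"(2014Oct02|141002|20141002)"
matching_files(names, pattern)  # the first two entries
```

Standard FTP has no server-side regex filtering, so listing everything once and filtering locally, as the answer suggests, is about as condensed as the first two steps can get.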
26,178,035
2014-10-03T11:19:00.000
0
0
0
0
python,machine-learning,scikit-learn
38,604,808
2
false
0
0
The answer above is too short, outdated, and might be misleading. Using the score method only gives accuracy (it's in BaseEstimator). If you want the loss function, you can either call the private function _get_loss_function (defined in BaseSGDClassifier), or access the BaseSGDClassifier.loss_functions class attribute, which gives you a dict whose entries are the callables for the loss functions (with default settings). Also, using sklearn.metrics might not give the exact loss used for minimization (due to regularization and what is being minimized, but you can hand-compute it anyway). The exact code for the loss functions is defined in Cython (sgd_fast.pyx; you can look up the code in the scikit-learn GitHub repo). I'm looking for a good way to plot the minimization progress; I will probably redirect stdout and parse the output. By the way, I'm using 0.17.1, so consider this an update to the answer.
1
0
1
I'm using an SGDClassifier in combination with the partial fit method to train with lots of data. I'd like to monitor when I've achieved an acceptable level of convergence, which means I'd like to know the loss every n iterations on some data (possibly training, possibly held-out, maybe both). I know this information is available if I pass verbose=1 in the constructor of the classifier, but I'd like to query it programmatically rather than visually. I also know I can use the score method to get accuracy, but I'd like actual loss as measured by my chosen loss function. Does anyone know how to do this?
Way to compute the value of the loss function on data for an SGDClassifier?
0
0
0
1,528
26,178,633
2014-10-03T12:07:00.000
0
0
0
0
python,django,database,sqlite
26,179,396
2
false
1
0
You could also use fixtures and generate fixtures for your app. It depends on what you're planning to do with them. You'll just run loaddata after that.
1
0
0
How can I create a Django sqlite3 dump file (*.sql) using the terminal? There is a fabric fabfile.py with certain dump scripts, but when I try to use the fab command the following message shows up: The program 'fab' is currently not installed. To run fab please ask your administrator to install the package 'fabric'. But there are fabric files in /python2.7/site-packages/fabric/. I'm not good at Django or Python at all. The guy who was responsible for our Django project just left without any explanation. In general, I need to know how to create a Django sqlite3 dump file (*.sql) via the terminal. Help? :)
Django sqlite3 database dump
0
1
0
1,114
26,185,042
2014-10-03T18:55:00.000
2
0
1
0
python
26,185,109
2
false
0
0
I doubt there is going to be a significant (if any) memory savings from using frozenset over set. The reason to use a frozenset is that it is hashable, so you can use it as a dictionary key (or as an element of another set).
1
2
0
When using for x in obj notation, does iterating over a frozenset use less memory than a set? Or when is there a memory advantage to using a frozen set opposed to a set?
Memory difference iterating over frozen set vs set
0.197375
0
0
954
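A small sketch of the point in the answer above: the advantage of frozenset is hashability, not iteration memory (the variable names here are illustrative):

```python
# frozenset is hashable, so unlike set it can key a dict or live in a set.
groups = {frozenset({"a", "b"}): "first", frozenset({"c"}): "second"}

# Element order inside the frozenset doesn't matter for lookup:
groups[frozenset({"b", "a"})]  # "first"

# Iterating costs the same either way; `for x in obj` over a frozenset
# behaves like iterating the equivalent set.
total = sum(1 for _ in frozenset(range(100)))  # 100, same as set(range(100))
```

Trying the same dict with plain set keys raises TypeError: unhashable type: 'set', which is the practical reason to reach for frozenset.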
26,188,548
2014-10-04T00:47:00.000
1
0
0
1
python,google-app-engine
26,194,692
2
false
1
0
You could create a new application, use Datastore Admin to copy your entities to the new application's Datastore, then re-deploy your application. Is there anything else that needs duplicating?
2
0
0
I want to back up my Python app and restore it to a different app on App Engine. On the Application Settings page, under Duplicate Applications, I add a new application identifier. When I click the Duplicate Application button, I get this error: "The developer does not own the app id being forked". Further research indicates that this seems to be a bug, but that a workaround is to send an email invitation to the other email addresses in my Google account to add them. I am able to send those emails from the Permissions screen by clicking a button and inserting the email address. When I click the link in the email that is sent, it opens My Applications, listing all my apps, instead of a confirmation of my response. It appears to open the wrong page. On the Permissions page, the email address still shows Pending after about 10 hours. Is there a simple way to duplicate an application?
How do I duplicate an appengine app
0.099668
0
0
59
26,188,548
2014-10-04T00:47:00.000
1
0
0
1
python,google-app-engine
26,194,002
2
false
1
0
Do you have more than one google account? I have found that app engine does unexpected things when you are logged into more than one google account at a time. I suggest logging into only a single google account and trying the operations again.
2
0
0
I want to back up my Python app and restore it to a different app on App Engine. On the Application Settings page, under Duplicate Applications, I add a new application identifier. When I click the Duplicate Application button, I get this error: "The developer does not own the app id being forked". Further research indicates that this seems to be a bug, but that a workaround is to send an email invitation to the other email addresses in my Google account to add them. I am able to send those emails from the Permissions screen by clicking a button and inserting the email address. When I click the link in the email that is sent, it opens My Applications, listing all my apps, instead of a confirmation of my response. It appears to open the wrong page. On the Permissions page, the email address still shows Pending after about 10 hours. Is there a simple way to duplicate an application?
How do I duplicate an appengine app
0.099668
0
0
59
26,189,557
2014-10-04T04:17:00.000
1
0
0
0
python,mysql,django
26,194,172
1
false
1
0
I think your choice of field type is incorrect. Basically, a CharField with choices has limited functionality: you can only filter among the choices you declared initially, and you can't add new ones on the fly (you will see them, but they will not pass field validation). I think you should go with a ForeignKey, where you do not have such limitations: you can build the choice list dynamically without problems. I can't tell you the queries without seeing some code.
1
0
0
I want to select options from a list in my Django models, but the options in the list are populated in the post_save() method. The situation is like this: I enter some attributes in table1, based on which the values of other attributes of the table are calculated using different algorithms in post_save(). Now I want these values from the different algorithms to be shown as choices for the attributes of table1 (which were set to blank or some default value initially) after saving it for the first time. There are constraints: only distinct values should be shown as choices, and the choice list should contain only those values which were calculated using the attributes of this tuple/row of the table. I tried using the choices option and appending the values from the different algorithms to the list, but that creates two problems: If I save two entries, then the list contains the values calculated from the attributes of both entries (filtering is required). If I save an entry twice, then duplicate entries are appended to the list. A foreign key may also be an option, but how do I filter the foreign key according to the tuple that generated it? I tried to explain the problem, and I feel code is not required. Comment if any details are required regarding the question.
Append choices to choice list in Django models dynamically
0.197375
0
0
785
26,190,859
2014-10-04T07:58:00.000
0
0
1
0
python
26,190,902
4
false
0
0
is returns True only if both operands refer to the same object. The interpreter can use interning for immutable objects, but it is not guaranteed to.
2
0
0
The code "python" is "python" returns True. But why does (1,2,3) is (1,2,3) return False? Even though both are immutable objects, the is operator is evaluating differently. Why?
Why is operator giving different output for immutable objects?
0
0
0
92
26,190,859
2014-10-04T07:58:00.000
0
0
1
0
python
26,190,966
4
false
0
0
The only reason your first example is True is that string literals are interned when loaded from source code (I think this only applies to strings within a single file even). In nearly all cases other than string literals, objects created at different times will have different IDs.
2
0
0
The code "python" is "python" returns True. But why does (1,2,3) is (1,2,3) return False? Even though both are immutable objects, the is operator is evaluating differently. Why?
Why is operator giving different output for immutable objects?
0
0
0
92
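A small illustration of the distinction both answers to this question draw; building the tuples at run time with tuple(range(3)) sidesteps any constant folding or caching the compiler might apply to literal tuples, so the identity result is reliable:

```python
# Two runtime-built tuples: equal values, distinct objects.
a = tuple(range(3))
b = tuple(range(3))
assert a == b        # value equality holds
assert a is not b    # identity does not: two separate objects

# String literals in one source file are typically interned, so `is`
# can be True for them, but that is an implementation detail and only
# `==` should be relied on for value comparison.
s, t = "python", "python"
assert s == t
```

The takeaway: use == for value comparison and reserve is for identity checks (e.g. x is None).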
26,191,698
2014-10-04T10:03:00.000
0
0
0
1
python,celery
26,271,816
1
true
0
0
When you say you want different log files for each worker, do you mean for each worker node or for each pool worker process? If it is for each node, that is already supported; run celery worker --help for more info: -f LOGFILE, --logfile=LOGFILE Path to log file. If no logfile is specified, stderr is used. If you are using supervisord to run your workers, you can use stdout_logfile and stderr_logfile. If you want a separate log file for each pool worker process, can you explain why? Note that pool worker processes will keep changing if maxtasksperchild is set to limit the number of tasks executed by a process, so you will need to figure out how you want to relate each worker process to a log file in that case.
1
0
0
I have custom Python logging working. I want to build custom Celery logging, per worker. I went through the docs but couldn't find a hint. Can anyone suggest a method to do so?
How to create Celery custom logging
1.2
0
0
368
26,196,165
2014-10-04T19:06:00.000
0
1
0
1
python,ubuntu,vps
26,196,763
1
true
0
0
Running a script on the server is the same as running locally. python script.py You may check if you have python installed first using which python (should return the python location) If not, get it using sudo apt-get install python after that, go to www directory and run it.
1
0
0
I've uploaded my .html chat file on my Ubuntu VPS, all that remains is to execute/run the python script pywebsock.py which will run the python server. I've uploaded the pywebsock.py to /bin/www and now I want to run it but I have no idea where to start. When I run the pywebsock.py on my desktop it opens up a terminal saying "waiting connection". This is what I've done so far to try and run it: Downloaded Putty Downloaded WinSCP Installed version of Python according to .py (2.7) Any ideas?
Executing/Running python script on ubuntu server
1.2
0
0
2,380
26,198,847
2014-10-05T01:46:00.000
5
0
1
0
python,debugging,libraries,system-calls,pdb
26,198,855
2
false
0
0
os.mkdir() is implemented in C code and pdb cannot step into that function. You are limited to debugging pure Python code only; it doesn't matter if that code is part of the standard library or not. You can step into the shutil module, or os.path just fine, for example. os.mkdir() has to call into native code because it interacts with the OS; even PyPy has to defer to the underlying (host-Python) os.mkdir() call to handle that part, so you cannot step into it with pdb even in PyPy. In fact, just like in CPython, that part of the standard library is part of the RPython runtime and not seen as 'native Python code' by PyPy either, just like the built-in types are part of the runtime environment. You could run the PyPy interpreter untranslated (so not statically compile the RPython code but have Python run the PyPy interpreter directly), but that'll only give you access to the RPython code paths, not the os.mkdir() C code.
1
6
0
When I run my Python debugger, I can step into functions that I write. But if I try to step into a library function like os.mkdir("folder"), for example, it "steps over" it instead. Is there a way to step into builtin library functions to see what Python is doing under the hood? Ideally there'd be a way to do this in PyPy so that you could keep drilling down into Python code.
Can I step into Python library code?
0.462117
0
0
2,631
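One way to observe from Python itself the boundary the answer above describes: inspect can locate source for pure-Python standard-library code (which pdb can step into), but not for C-implemented functions like os.mkdir. This is a rough illustration of the split, not a pdb feature:

```python
import inspect
import os
import shutil

# shutil.copy is pure Python: it has a source file, so pdb can step into it.
print(inspect.getsourcefile(shutil.copy))  # prints the path to shutil.py

# os.mkdir is implemented in C; there is no Python source to step into,
# and inspect refuses with a TypeError.
try:
    inspect.getsourcefile(os.mkdir)
except TypeError:
    print("os.mkdir has no Python source")
```

Anything for which getsourcefile raises TypeError is native code, and that is exactly the code pdb steps over rather than into.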
26,199,088
2014-10-05T02:30:00.000
0
0
0
0
python,python-2.7
26,199,110
1
false
1
0
Assuming each element of the array is a string like the one shown, you can take the substring from the start of the string up to the index of the second instance of '|', or simply split on '|' and keep the first two fields.
1
0
1
Suppose you have this type of array: (Sonny Rollins|Who Cares?|Sonny Rollins And Friends|Jazz| Various|, Westminster Philharmonic Orchestra conducted by Kenneth Alwyn|Curse of the Werewolf: Prelude|Horror!|Soundtrack|1996). Is there any possible way to take out only "Sonny Rollins" and "Who Cares?" from the array?
Is there a ways to take out the artist and song from the array?
0
0
0
27
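A sketch of the idea in the answer above, using split rather than computing the index of the second '|' by hand (the sample string comes from the question):

```python
# One pipe-delimited record: artist|song|album|genre|...
record = "Sonny Rollins|Who Cares?|Sonny Rollins And Friends|Jazz| Various|"

# Split on the delimiter and keep only the first two fields.
artist, song = record.split("|")[:2]
print(artist, "-", song)  # Sonny Rollins - Who Cares?
```

Applied to a list of such strings, a comprehension like [r.split("|")[:2] for r in records] pulls the artist/song pair out of every element.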
26,199,115
2014-10-05T02:35:00.000
0
0
0
0
python,windows,console,pyqt,exe
26,208,237
1
false
0
1
The best solution is in the post linked to in a comment on your post, but on Windows you can also set a property on the shortcut that starts the app to have the console start minimized. It still shows in the taskbar, there if you need it, which can be handy in some situations (like during development, to run the same script the user will without having to manually minimize the console every time the app starts). In general it is best to use pythonw.exe or to give your script the .pyw extension.
1
0
0
I created an exe executable of my application made in PyQt, and everything went well, but when I run the exe in Windows, a console opens before the application does, and it stays open the whole time the application is open. I need someone to tell me how to keep the console from appearing, or at least make it not visible. I've seen some answers to this problem for C++, but I have not seen anything for Python.
The exe executable of my application, also opens a console
0
0
0
50
26,200,726
2014-10-05T07:55:00.000
0
0
0
1
python,linux,codeskulptor
26,200,815
3
false
0
0
It looks like an ordinary python package, so just: pip install --user SimpleGUICS2Pygame If that gives errors, post them.
1
0
0
Python. I've exhausted my research on this topic. I have it running on Windows just fine, but I can't figure out a way to install it on Linux. How do I install it on Linux?
SimpleGUICS2Pygame Installing on Linux
0
0
0
88
26,201,141
2014-10-05T09:00:00.000
0
1
1
0
python,django,gcc,hosting,virtualenv
27,670,939
3
false
0
0
You need root access to install the necessary packages to run your Python application. PaaS offerings like Heroku are another option, but Heroku's free tier is only good for developing your application; it is not intended for hosting it once you get traffic and users. I strongly suggest you get a VPS at DigitalOcean.com. For $5 per month you get root access and more power, and you control your full stack. I use Nginx+Gunicorn to host about 10 Django projects on DigitalOcean right now.
1
6
0
I'm using HostGator shared hosting as a production environment and I had a problem installing some Python modules. Running pip install MySQL-python or pip install pillow results in: unable to execute gcc: Permission denied error: command 'gcc' failed with exit status 1. Server limitations: no root access; sudo doesn't work (sudo: effective uid is not 0, is sudo installed setuid root?); no gcc. Questions: Is there an alternative package for Pillow? I want it so I can use Django's ImageField (just like pymysql is an equally capable alternative for MySQL-python). I have modules like MySQL-python and PIL installed at the root level, i.e. pip freeze without any virtualenv lists these modules, but I cannot install my other required modules in this root environment, and in my virtualenv I cannot install MySQL-python and PIL. Can something be done? Can packages installed at the root level somehow be imported/used in a virtualenv? Is HostGator shared hosting only good for PHP and not for Python/Django web apps? We have limited traffic, which is why we are using HostGator shared; should we avoid HostGator or shared hosting in general? Aren't they good enough for Python/Django (I have never had problems hosting static/PHP sites)? Are there too many problems and limitations, or performance issues (FCGI)? If yes, what are the alternatives?
installing python modules that require gcc on shared hosting with no gcc or root access
0
0
0
4,250
26,201,740
2014-10-05T10:28:00.000
0
0
0
0
python,audio,kivy,convolution
54,000,026
3
false
0
1
Kivy is not really designed for audio or signal processing, and it doesn't need to be: that's why you have Python and its massive collection of libraries. You can use numpy, scipy, librosa, and so on; with numpy you can get an array of the signal.
1
1
0
I would like to do some basic audio-signalprocessing in kivy. For example, I would like to convolve a .wav file with an impulse response. I use SoundLoader.load('file.wav') to load the audio files. My question is: is it possible to convert an audio object into a list, so I can access each sample? Or does the SoundLoader class offer any possibilities of convolution, or any other audio processing?
Audio processing in kivy
0
0
0
1,453
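As far as I know, Kivy's SoundLoader object doesn't expose raw samples; the stdlib wave module (or numpy) is the usual route to a sample list. Whatever the source of the samples, the convolution with an impulse response can be sketched in pure Python like this (in practice numpy.convolve is the faster choice):

```python
def convolve(signal, impulse):
    """Full discrete convolution of two sample lists."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

convolve([1, 2, 3], [1, 1])  # [1.0, 3.0, 5.0, 3.0]
```

For real audio, wave.open("file.wav").readframes(n) gives the raw bytes, which struct.unpack or numpy.frombuffer can turn into the sample list fed to a convolution like this.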
26,205,629
2014-10-05T18:04:00.000
1
0
1
0
ipython,ipython-notebook,pep8
26,205,934
2
false
0
0
.ipynb files are pure JSON; you can read one, concatenate all the code cells, and run pep8 on the result. On the other hand, getting the correct cell number/line number in order to "fix" them would be slightly more difficult. I'm not aware of any project that does this right now.
1
1
0
Is there a way to verify that an iPython notebook's code is PEP8 compliant, after it has been exported as an .ipynb file?
Verifying PEP8 in exported iPython notebook code
0.099668
0
0
469
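Since .ipynb files are JSON, the concatenation step the answer above describes might look like this; the notebook is built inline here for illustration, and in practice you would json.load the exported file and feed the result to a checker such as pep8/pycodestyle:

```python
import json

def code_from_notebook(nb_json):
    """Concatenate the source of all code cells in a v4-format notebook."""
    nb = json.loads(nb_json)
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            chunks.append("".join(cell["source"]))
    return "\n".join(chunks)

# A tiny stand-in notebook; a real one would come from open("nb.ipynb").
nb_json = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# title"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)"]},
    ]
})
code_from_notebook(nb_json)  # 'x = 1\nprint(x)'
```

Markdown cells are skipped, which is why mapping a checker's line numbers back to cells takes extra bookkeeping, as the answer notes.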
26,211,308
2014-10-06T07:04:00.000
0
0
0
0
python,twitter,machine-learning,classification,nltk
29,718,799
1
false
0
0
To "how do I even generate/create a training data for such a huge data": I would suggest finding a training data set that covers the categories you are interested in. For price-related tweets, say, find a training data set that is all about price-related text, and then perhaps expand it by using synonyms for keywords like "cheap". Also look into sentence structure to find out whether the structure of the sentence helps your classifier algorithm. To "what is the best approach to create a training data for multi-class classification of text/comments?": keywords, plus pulling articles that are all about the related categories, and going from there. Lastly, I suggest getting very familiar with NLTK's corpus library; it might also help you with retrieving training data. As for your last question, I'm somewhat confused about what you mean by "multiple categories to classify the comments into": do you mean a particular comment can belong to more than one class, i.e. a comment can belong to one or more classes?
1
1
1
So I have some 1 million lines of twitter comments data in csv format. I need to classify them in certain categories like if somebody is talking about : "product longevity", "cheap/costly", "on sale/discount" etc. As you can see I have multiple classes to classify these tweets data into. The thing is that how do I even generate/create a training data for such a huge data.Silly question but I was wondering whether/not there are already preclassified/tagged comments data to train our model with? If not then what is the best approach to create a training data for multi-class classification of text/comments ? While I have tried and tested NaiveBayes for sentiment classification for a smaller dataset, could you please suggest which classifier shall I use for this problem (multiple categories to classify the comments into). Thanks!!!
preclassified trained twitter comments for categorization
0
0
0
149
26,214,055
2014-10-06T10:10:00.000
1
1
0
0
python,unit-testing,code-coverage,coverage.py,python-coverage
63,373,136
5
false
0
0
I had a similar case where I had multiple packages, each with its own tests run by its own test runner, and I could combine all the coverage XML by following these steps. 1. Individually generate the coverage report: navigate to each package and generate the report there, which creates a .coverage file. You can also add [run] parallel=True in your .coveragerc to create coverage files suffixed with the machine name and process id. 2. Aggregate all the reports: copy all the .coverage files for these packages to a separate folder (you might want to run a batch or sh script to copy them). 3. Run combine: navigate to the folder where you have all the report files and run coverage combine. This deletes the individual coverage files and combines them into one .coverage file. Now you can run coverage html and coverage xml.
1
28
0
I'm wondering if it's possible to combine coverage.xml files into 1 file to see global report in HTML output. I've got my unit/functional tests running as 1 command and integration tests as the second command. That means my coverage for unit/functional tests are overridden by unit tests. That would be great if I had some solution for that problem, mainly by combining those files into 1 file.
combine python coverage files?
0.039979
0
1
19,392
26,222,066
2014-10-06T18:02:00.000
6
0
1
0
python,numpy
26,222,130
2
true
0
0
That's what __rmul__ is for. In your scenario, Python calls int.__mul__(-1, o). int doesn't know how to do this operation, so the call returns NotImplemented. Python therefore calls type(o).__rmul__(o, -1), giving your class the chance to handle it.
1
0
0
How is the operator overloading parsed? I have a class object o and want to do -1 * o with the overloaded __mul__ operator. Would that be parsed correctly when the left operand is a -1? Multiplication should be commutative (except for matrices and cross-products)... ?
Python operator overloading : commutative or right only?
1.2
0
0
652
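A minimal sketch of the dispatch described in the answer above: -1 * o first tries int.__mul__, which returns NotImplemented, so Python falls back to the class's __rmul__. The Scalable class here is invented for illustration:

```python
class Scalable:
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):        # handles o * -1
        return Scalable(self.value * other)

    __rmul__ = __mul__               # handles -1 * o the same way

o = Scalable(3)
print((-1 * o).value, (o * -1).value)  # -3 -3
```

Aliasing __rmul__ to __mul__ is the idiomatic way to make a commutative multiplication work on both sides; without that line, -1 * o would raise TypeError.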
26,224,243
2014-10-06T20:21:00.000
5
1
0
0
python,twitter,python-twitter
26,224,375
2
true
0
0
Fortunately not! Passwords are highly confidential information that only Twitter itself wants to handle. Think about a third-party app offering to change your Twitter password: would you trust it and let it see your new password? How could you make sure it is really going to set your new password and not another one?
1
4
0
Is there a way to change your twitter password via the python-twitter API or the twitter API in general? I have looked around but can't seem to find this information...
Password Change with the Twitter API
1.2
0
1
1,863
26,225,125
2014-10-06T21:26:00.000
4
0
1
0
python,python-2.7,py2exe
26,225,322
1
true
0
0
Py2exe doesn't actually create one single executable. You have to include the dlls and other files in the folder.
1
2
0
I compiled a simple python module using py2exe. It works fine for me when I run the executable through the cmd window, but when I give it to someone without Python installed, they get the following error message: LoadLibrary(pythondll) failedThe specified module could not be found. C:\PYTHON27.DLL How do I resolve this issue?
py2exe failed to load specified module
1.2
0
0
1,420
26,226,730
2014-10-07T00:16:00.000
2
0
1
0
python,macos,qt,pyqt,anaconda
51,688,988
9
false
0
1
I found it at this location on my Mac: /Users/ramakrishna/Qt/5.11.1/clang_64/bin/Designer.app. The shell command "open -a designer" also works on a Mac. Pressing Command+Space to invoke Spotlight search and typing "designer" also finds the Designer app.
4
13
0
I am trying to find the Qt Designer app on Mac. I installed the Anaconda package and conda reports that qt, sip, and pyqt are installed. Still, I couldn't find the Designer app in any of the folders. My Python app that uses PyQt works perfectly. I'm very new to Macs and probably missing something very simple. I did search the folder tree for anything named designer. I found QtDesigner.so (supposed to be executable?) at /Users/XXXX/anaconda/pkgs/pyqt-4.10.4-py27_0/lib/python2.7/site-packages/PyQt4, but it won't even run, saying "cannot execute binary file". anaconda/bin doesn't have it. There's a folder anaconda/include/QtDesigner but nothing I can run; /anaconda/pkgs/qt-4.8.5-3/bin - no designer. I'm totally confused now.
Where is Qt designer app on Mac + Anaconda?
0.044415
0
0
36,375
26,226,730
2014-10-07T00:16:00.000
5
0
1
0
python,macos,qt,pyqt,anaconda
36,830,816
9
false
0
1
OSX Yosemite 10.10.5 Qt 5.6 QtCreator 3.6.1 QtDesigner is part of my QtCreator. To use QtDesigner: Launch QtCreator, and from the menu bar (outside QtCreator), click on: File>New File or Project You will be presented with a New File or Project dialog window. In the Files And Classes section, select Qt. In the middle pane, select QtDesigner Form. Then click on the Choose button in the lower right corner. You will be presented with a QtDesigner Form dialog window. Then you can select Main Window or Dialog with Buttons Bottom, etc. Then click on the Continue button in the lower right corner. In the Location dialog window, use a name like mainwindow1.ui, and for the path you might want to step aside and create a directory called forms, e.g. $ mkdir /Users/7stud/qt_projects/forms, then enter that as the path. Enter any other details and click on Done. That will land you in QtCreator with the Design button selected (which I guess means you are in QtDesigner), and you will be able to drag and drop widgets onto your window. To convert the .ui file to a .py file that you can import into your python program: $ pyuic5 mainwindow1.ui -o mainwindow1.py -o => output file (default is stdout) That command converts the .ui file mainwindow1.ui to a .py file named mainwindow1.py. To re-open the file: File>Open File or Project. If you select a file with a .ui extension, it will be opened with QtCreator's Design button pre-selected, i.e. you will be inside QtDesigner.
4
13
0
I am trying to find the Qt Designer app on Mac. I installed the Anaconda package and conda reports that qt, sip, and pyqt are installed. Still, I couldn't find the Designer app in any of the folders. My Python app that uses PyQt works perfectly. I'm very new to Macs and probably missing something very simple. I did search the folder tree for anything named designer. I found QtDesigner.so (supposed to be executable?) at /Users/XXXX/anaconda/pkgs/pyqt-4.10.4-py27_0/lib/python2.7/site-packages/PyQt4, but it won't even run, saying "cannot execute binary file". anaconda/bin doesn't have it. There's a folder anaconda/include/QtDesigner but nothing I can run; /anaconda/pkgs/qt-4.8.5-3/bin - no designer. I'm totally confused now.
Where is Qt designer app on Mac + Anaconda?
0.110656
0
0
36,375
26,226,730
2014-10-07T00:16:00.000
0
0
1
0
python,macos,qt,pyqt,anaconda
27,385,758
9
false
0
1
I can't answer you question definitively as I don't have OSX installed anywhere, but perhaps I can help lead you in the right direction. 1) you are going to want to be looking for Designer, not QT Creator, as Designer is what comes bundled with PyQt4 (PyQt4 is what Anaconda comes packaged with) 2) in linux when you install Anaconda 2.1 to the default location, designer is going to be placed in home/user_name/anaconda/bin/ 3) typing 'designer' from a terminal launches designer in linux, so you may not have to bother searching around for it. Hopefully there is some consistency between linux and osx (windows designer is located in \Anaconda\Lib\site-packages\PyQt4). Best of luck.
4
13
0
I am trying to find the Qt Designer app on Mac. I installed the Anaconda package and conda reports that qt, sip, and pyqt are installed. Still I couldn't find the Designer app in any of the folders. My Python app that uses PyQt works perfectly. I'm very new to Macs and probably missing something very simple. I did search the folder tree for anything named designer. I found QtDesigner.so (supposed to be executable?) at /Users/XXXX/anaconda/pkgs/pyqt-4.10.4-py27_0/lib/python2.7/site-packages/PyQt4 but it won't even run, saying "cannot execute binary file". anaconda/bin doesn't have it. There's a folder anaconda/include/QtDesigner but nothing I can run. /anaconda/pkgs/qt-4.8.5-3/bin - no designer. I'm totally confused now.
Where is Qt designer app on Mac + Anaconda?
0
0
0
36,375
26,226,730
2014-10-07T00:16:00.000
18
0
1
0
python,macos,qt,pyqt,anaconda
41,113,866
9
false
0
1
You can try open -a Designer from your terminal to launch Qt Designer that comes with Anaconda (version 4.x). If you have Qt5.x, you may want to launch a newer version of Designer by open -a Designer-qt5.
4
13
0
I am trying to find the Qt Designer app on Mac. I installed the Anaconda package and conda reports that qt, sip, and pyqt are installed. Still I couldn't find the Designer app in any of the folders. My Python app that uses PyQt works perfectly. I'm very new to Macs and probably missing something very simple. I did search the folder tree for anything named designer. I found QtDesigner.so (supposed to be executable?) at /Users/XXXX/anaconda/pkgs/pyqt-4.10.4-py27_0/lib/python2.7/site-packages/PyQt4 but it won't even run, saying "cannot execute binary file". anaconda/bin doesn't have it. There's a folder anaconda/include/QtDesigner but nothing I can run. /anaconda/pkgs/qt-4.8.5-3/bin - no designer. I'm totally confused now.
Where is Qt designer app on Mac + Anaconda?
1
0
0
36,375
26,231,755
2014-10-07T08:30:00.000
3
0
1
0
python,qt,python-2.7
26,231,835
2
false
0
0
Convert to a string and find the position of the decimal point, relative to the length of the string.
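A minimal sketch of that idea, assuming the number prints the way the question describes (e.g. str(7.2) gives '7.2'):

```python
def count_decimal_places(x):
    """Count the digits after the decimal point in the printed form of x."""
    s = str(x)
    if "." not in s:
        return 0  # no decimal point means no decimal places
    return len(s) - s.index(".") - 1

print(count_decimal_places(7.2))      # 1
print(count_decimal_places(5.42422))  # 5
```

One caveat: this counts the digits in the *printed* representation, which on Python 3 is the shortest string that round-trips back to the same float, not the stored binary precision, so it matches what the command prompt shows.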
1
9
0
In Python, I import a file that has numbers with different numbers of decimal places, such as 7.2 or 7.2332. Is it possible to find out the number of decimal places in a number, and if so, how? All the questions I can find on SO are about formatting to a specific decimal place, but that's not what I am doing. The application is to use QDoubleSpinBox and set the decimal places to only the required amount. Edit: In context, I am able to print the number, which will print 7.2, or 5.42422, etc. If the command prompt prints the correct digits (i.e. doesn't print everything in the memory allocation of a float), is there any way to get that information?
Count decimal places in a float
0.291313
0
0
35,498
26,232,798
2014-10-07T09:28:00.000
1
0
0
1
python,linux,sockets,python-2.7,vmware
26,802,000
2
false
0
0
When you use NAT, the host machine has no way to directly contact the client machine. All you can do is use port forwarding to tell VMware that all traffic directed to the designated ports on the host is to be delivered to the client. That mode is intended for installing a server on the client machine that can be accessed from outside the host machine. If you want to test network operation between the host and the client, you should configure a host-only adapter on the client machine. It is a virtual network between the host and the client machine(s) (more than one client can share the same host-only network, of course with different addresses). I generally configure 2 network adapters on my client machines: one NAT, to give the client machine access to the outside world, and one host-only, to have a private network between host and clients and allow them to communicate with any protocol on any port. You can also use a bridged interface on the client. In this mode, the client machine has an address on the same network as the external network of the host: it combines both previous modes.
1
2
0
I am trying to send a string from Windows to the Linux VMware guest on the same machine. I did the following: - opened a socket on 127.0.0.1 port 50000 on the Linux machine and am reading the socket in a while loop. My programming language is Python 2.7 - sent a command using nc (netcat) to 127.0.0.1 port 50000 from the Windows machine (using Cygwin). However, I don't receive any command on the Linux machine, although the command sent through Windows/Cygwin appears successful. I am using NAT (sharing the host's IP address) on the VMware machine. Where could the problem be?
Send a string from windows to vmware-ubuntu over socket using python
0.099668
0
1
2,245
26,235,884
2014-10-07T12:21:00.000
0
0
0
0
java,python,web-services,architecture
28,648,892
2
false
1
0
I would personally recommend wrapping the Python code in an HTTP layer and turning it into a REST web service. I don't doubt that many successful applications have been written by calling scripts from the command line, but I think there are a couple things that are really nice when it comes to the freedoms of a web service. It definitely seems like putting the Python app in a web service would be the more 'Service-oriented' way to do it, so it seems reasonable to expect that doing so would give you the typical benefits of SOA. This may or may not apply to your situation, but if none of these considerations apply that seems to point towards this being a 'neither choice will be that bad' situation. The biggest thing I see going for using the web service is scalability. If the command line application can chew up a lot of server resources, it would be good to have it running on a separate server from the web application. That way, other users who aren't using the part of the application that interacts with this Python app won't have their experience reduced when other users do things that cause the Python app to be called. Putting the Python code behind a web service would also make it easier to cluster. If it's a situation where you could get some benefit from caching, it would also be easy to take advantage of the caching mechanisms in HTTP and your proxy servers. Another thing that might be nice is testability. There's a lot of good tools out there for testing the common cases of a web application talking to a web service, whereas thoroughly testing your applications when they're just calling command line applications might be a little more work.
2
1
0
I have a java web application that needs to make use of a python script. The web application will give the script a file as input and take some text as output. I have two options: Wrap the python script with HTTP and call it from the web application as a REST service Simply execute command line from the web application Which option should I take and why? This script won't be used by any other application.
Execute the script in command line or in Rest client
0
0
1
663
26,235,884
2014-10-07T12:21:00.000
0
0
0
0
java,python,web-services,architecture
26,236,230
2
false
1
0
Try both, but execute from the command line first so you can see the output line by line: you can spot an error at the exact line where it occurs. A REST service, by contrast, will only return you an HTTP response.
2
1
0
I have a java web application that needs to make use of a python script. The web application will give the script a file as input and take some text as output. I have two options: Wrap the python script with HTTP and call it from the web application as a REST service Simply execute command line from the web application Which option should I take and why? This script won't be used by any other application.
Execute the script in command line or in Rest client
0
0
1
663
26,238,097
2014-10-07T14:14:00.000
2
1
0
0
php,python,django,web-frameworks
26,238,910
1
true
1
0
You can use any of these frameworks, since they have authentication natively implemented, but you don't need to; you can build your own authentication system: you will need a database/file to store credentials and an authentication program verifying credentials against that storage, in any language your webserver can use. However, I would strongly recommend you use an established framework for authentication.
1
0
0
On my BeagleBone I have installed hostapd, ISC DHCP and lighttpd so I can build a router and webserver. Let's say my network is not secured, so everyone can connect to it, and will get an IP address from the BeagleBone server. After a person connects to the network and starts a browser, he should be redirected to a page asking for his password if he is not authenticated yet. Do I need a web framework (like Django) for this purpose? If not, what else? I am familiar with programming in Python, PHP, HTML and Java. Thanks
authentication on webserver
1.2
0
0
50
26,240,220
2014-10-07T15:56:00.000
7
0
1
0
python,random
26,240,285
1
true
0
0
For "secure" random numbers, Python doesn't actually generate them: it gets them from the operating system, which has a special driver that gathers entropy from various real-world sources, such as variations in timing between keystrokes and disk seeks.
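A short illustration of the two standard-library entry points to that OS entropy pool:

```python
import os
import random

# os.urandom asks the operating system for cryptographically strong bytes
# (e.g. /dev/urandom on Linux).
raw = os.urandom(16)
print(len(raw))  # 16

# random.SystemRandom wraps os.urandom behind the familiar random API.
sysrand = random.SystemRandom()
n = sysrand.randint(1, 100)
print(1 <= n <= 100)  # True
```

Because both draw from the OS, there is no seed to set or leak, which is exactly what makes them suitable for security-sensitive use.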
1
7
0
I understand that I should use os.urandom() or SystemRandom in Python for 'secure' pseudo-random numbers. But how does Python generate these random numbers logically? Also is there a way to generate a number in Python that is 'more random' than others?
How does Python generate random numbers?
1.2
0
0
3,810
26,241,675
2014-10-07T17:23:00.000
0
0
1
0
python
26,241,848
4
false
0
0
Globals are great if you really want just one set of data. And modifying globals has reasonable use cases also. Take the re module, for example: it caches regular expressions it's already seen in a global dict to save the cost of recompiling. Although re doesn't protect its cache with a lock, it's common to use threading.Lock to make sure you don't insert the same key multiple times.
3
0
0
The general "why global variables are bad" answers don't seem to cover this, so I'm just curious if this would be bad practice or not. Here is an example of what I mean: You have one function to generate a list of items. You have another function to build these items. You would like to output the current progress in each function every x seconds, where x is defined at the start of the code. Would it be bad to set this as a global variable? You'd avoid having to pass an unimportant thing to each function, which is what I did in my first code, and it got so messy in some places I had to categorise everything into lists, then pull them out again inside the function. This made it impossible to test individual functions as they needed too much input to run. Also, if a function only needs one variable to run, I don't want to add anything else to it, so passing a huge list/dictionary containing everything isn't really ideal. Or alternatively, would it be worthwhile setting up a global dictionary to contain any values you may want to use throughout the code? You could set the variable name as the key so it's easy to access wherever it's needed. I'm not so sure about other versions of Python, but in Maya it's all pretty much contained in the same block of code, so there's no real danger of it affecting anything else.
Is it fine to use a global variable if it is only set once to avoid having to pass it to multiple functions?
0
0
0
73
26,241,675
2014-10-07T17:23:00.000
2
0
1
0
python
26,241,745
4
false
0
0
Global variables aren't bad in themselves. What's bad is modifying global variables in your functions. Reading global constants, which is what you're proposing here, is fine - you don't even need the global keyword, since Python will look up the var in surrounding scopes until it finds it. You might want to put your variable in ALL_CAPS to signify that it is indeed a constant. However, you still might consider whether your code would be better structured as a class, with your variable as an instance attribute.
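The progress interval from the question could be exactly such a module-level constant; a minimal sketch (the names are hypothetical):

```python
PROGRESS_INTERVAL = 5  # seconds; ALL_CAPS signals "treat as a constant"

def generate_items():
    # Reading the constant needs no `global` statement: Python looks it
    # up in the enclosing module scope automatically.
    return "reporting progress every %d seconds" % PROGRESS_INTERVAL

print(generate_items())  # reporting progress every 5 seconds
```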
3
0
0
The general "why global variables are bad" answers don't seem to cover this, so I'm just curious if this would be bad practice or not. Here is an example of what I mean: You have one function to generate a list of items. You have another function to build these items. You would like to output the current progress in each function every x seconds, where x is defined at the start of the code. Would it be bad to set this as a global variable? You'd avoid having to pass an unimportant thing to each function, which is what I did in my first code, and it got so messy in some places I had to categorise everything into lists, then pull them out again inside the function. This made it impossible to test individual functions as they needed too much input to run. Also, if a function only needs one variable to run, I don't want to add anything else to it, so passing a huge list/dictionary containing everything isn't really ideal. Or alternatively, would it be worthwhile setting up a global dictionary to contain any values you may want to use throughout the code? You could set the variable name as the key so it's easy to access wherever it's needed. I'm not so sure about other versions of Python, but in Maya it's all pretty much contained in the same block of code, so there's no real danger of it affecting anything else.
Is it fine to use a global variable if it is only set once to avoid having to pass it to multiple functions?
0.099668
0
0
73
26,241,675
2014-10-07T17:23:00.000
0
0
1
0
python
26,241,775
4
false
0
0
Maybe your design with functions is the problem. Consider using a class to do these things. Read about design patterns like Builder, Decorator, Iterator, etc.
3
0
0
The general "why global variables are bad" answers don't seem to cover this, so I'm just curious if this would be bad practice or not. Here is an example of what I mean: You have one function to generate a list of items. You have another function to build these items. You would like to output the current progress in each function every x seconds, where x is defined at the start of the code. Would it be bad to set this as a global variable? You'd avoid having to pass an unimportant thing to each function, which is what I did in my first code, and it got so messy in some places I had to categorise everything into lists, then pull them out again inside the function. This made it impossible to test individual functions as they needed too much input to run. Also, if a function only needs one variable to run, I don't want to add anything else to it, so passing a huge list/dictionary containing everything isn't really ideal. Or alternatively, would it be worthwhile setting up a global dictionary to contain any values you may want to use throughout the code? You could set the variable name as the key so it's easy to access wherever it's needed. I'm not so sure about other versions of Python, but in Maya it's all pretty much contained in the same block of code, so there's no real danger of it affecting anything else.
Is it fine to use a global variable if it is only set once to avoid having to pass it to multiple functions?
0
0
0
73
26,245,476
2014-10-07T21:18:00.000
1
1
0
0
javascript,python,html,email,outlook
26,245,689
1
true
1
0
Outlook Express, unlike real Outlook, does not provide any API to enumerate its messages. The only API exposed by OE is Simple MAPI.
1
0
0
I am trying to save a whole slew of emails as html in Outlook Express, but the program only lets me save them individually. If I click "Save As" on more than one email Outlook only lets me save them all in one .txt file. I was able to drag all of the files into a folder outside of outlook, but their file type is Outlook Item. Thus, I am trying to find a way to save all of these files in html. I have looked at a number of programs, such as SAFE PST Backup, but they didn't seem to have this functionality. If there were a way to do this in either Python or JavaScript that would be awesome, but I'm not sure where to start.
How can I save every Outlook Express Email as an html without clicking "Save As" for every email
1.2
0
0
123
26,246,015
2014-10-07T21:59:00.000
0
0
0
0
python,cluster-analysis,dbscan
68,602,469
4
false
0
0
Why not just flatten the data to 2 dimensions with PCA and use DBSCAN with only 2 dimensions? Seems easier than trying to custom build something else.
1
11
1
I have been searching around for an implementation of DBSCAN for 3-dimensional points without much luck. Does anyone know of a library that handles this, or has any experience with doing this? I am assuming that the DBSCAN algorithm can handle 3 dimensions, by having the eps value be a radius metric and the distance between points measured by Euclidean separation. If anyone has tried implementing this and would like to share, that would also be greatly appreciated, thanks.
Python: DBSCAN in 3 dimensional space
0
0
0
15,341
26,247,293
2014-10-08T00:15:00.000
0
0
0
0
python,django,django-settings
26,723,457
1
true
1
0
So the short answer to this question: Do not store global runtime data in Django's settings. Each new process thread will create its own instance of settings, meaning that your application will behave differently in each thread, or you'll have to run your app in single threaded mode (bad for performance). If you need to store this kind of data either use the Django Caching Framework, a database or the filesystem.
1
0
0
I want to use the Django Settings to indefinitely store a value. This value will change throughout the life span of the server process and should not be forgotten. See, what I am trying to do is generate a static file on my server's file system and store its generation time in django.conf.settings. I am more than aware that all settings are lost when the fcgi server is restarted, this is ok. But, the behavior (and problem) I'm seeing is that the setting is being reset every so often. I cannot determine why it is being reset, it seems fairly random. Does anyone out there know if Django resets its settings every so often? or should I be looking for a bug in my logic? How about another way to achieve what I want? Could this be related to running the fcgi server in threaded mode? I may just end up writing the timestamp to the filesystem, we will see... PS: My code works as expected when I use Django's local lighttp server. PPS: Django's Caching Framework would probably have been a smart way to get what I want, but at this point I won't be able to use it :(
Using Django Settings to cache a timestamp
1.2
0
0
129
26,258,420
2014-10-08T13:41:00.000
1
0
0
0
python,numpy,sympy
26,331,726
2
false
0
0
If you need a lot of precision, you can try using SymPy floats, or mpmath directly (which is part of SymPy), which provides arbitrary precision. For example, sympy.Float('2.0', 100) creates a float of 2.0 with 100 digits of precision. You can use something like sympy.sin(2).evalf(100) to get 100 digits of sin(2) for instance. This will be a lot slower than numpy because it is arbitrary precision, meaning it doesn't use machine floats, and it is implemented in pure Python (whereas numpy is written in Fortran and C).
1
2
1
I am solving a large non-linear system of equations and I need a high degree of numerical precision. I am currently using sympy.lambdify to convert symbolic expressions for the system of equations and its Jacobian into vectorized functions that take ndarrays as inputs and return an ndarray as outputs. By default, lambdify returns an array with dtype of numpy.float64. Is it possible to have it return an array with dtype numpy.float128? Perhaps this requires the inputs to have dtype of numpy.float128?
Can lambdify return an array with dtype np.float128?
0.099668
0
0
667