Dataset columns (name: dtype, value range or string length range):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string, length 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, length 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, length 25 to 6.53k
Title: string, length 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
33,187,572
2015-10-17T14:11:00.000
0
0
0
0
1
python,django
0
33,187,887
0
2
0
false
1
0
Even if you could do this, it wouldn't help solve your ultimate problem. You can't use order_by on concatenated querysets from different models; that can't possibly work, since it is a request for the database to do an ORDER BY on the query.
2
1
0
0
In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset? Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()). How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on')) I'm using Django 1.5 and Python 2.7.
Making an alias for an attribute field, to be used in a django queryset
0
0
1
0
0
452
33,187,572
2015-10-17T14:11:00.000
0
0
0
0
1
python,django
0
33,192,349
0
2
0
true
1
0
It seems qset1 = qset1.annotate(date=Max('submitted_on')) is the closest I have right now. This, or using exclude(). I'll update if I get a better solution. Of course other experts from SO are welcome to chime in with their own answers.
2
1
0
0
In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset? Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()). How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on')) I'm using Django 1.5 and Python 2.7.
Making an alias for an attribute field, to be used in a django queryset
0
1.2
1
0
0
452
33,231,156
2015-10-20T08:01:00.000
0
0
0
0
0
python,django,selenium,selenium-webdriver
0
33,231,191
0
1
0
false
1
0
I suggest you use a continuous integration solution like Jenkins to run your tests periodically.
1
0
0
0
I'm quite new to the whole Selenium thing and I have a simple question. When I run tests (Django application) on my local machine, everything works great. But how should this be done on a server? There is no X, so how can I start up the webdriver there? What's the common way? Thanks
Selenium on server
0
0
1
0
1
63
33,234,363
2015-10-20T10:35:00.000
3
0
0
0
0
python,image,opencv,image-processing,opencv-contour
0
42,767,296
0
3
0
false
0
0
The answer from @rayryeng is excellent! One small thing from my implementation: np.where() returns a tuple, which contains an array of row indices and an array of column indices. So pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which correspond to the width of the image. img.shape returns (rows, cols, channels). So I think it should be img[pts[0], pts[1]] to slice the ndarray behind img.
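A minimal sketch of the slicing described in this answer (the input image, the threshold value and the choice of the first contour are placeholder assumptions, not from the original):

```python
import cv2
import numpy as np

# Hypothetical grayscale frame and a binary version of it
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# findContours returns a different number of values across OpenCV versions;
# taking [-2] picks the contour list in both 3.x and 4.x
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Fill the first contour into a mask, then collect the pixel intensities inside it
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.drawContours(mask, contours, 0, 255, thickness=-1)

pts = np.where(mask == 255)        # (row indices, column indices)
intensities = img[pts[0], pts[1]]  # rows first, then columns, as the answer notes
```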
1
15
1
0
I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video. One way I thought I could do that was by converting the image to binary, getting the contours of the blobs (tracked object, in this case) and get the coordinates of the object boundary. Then I can go to these boundary coordinates in the grayscale image, get the pixel intensities surrounded by that boundary, and track this color gradient/pixel intensities in the other frames. This way, I could keep two objects separate from each other, so they won't be considered as new objects in the next frame. I have the contour boundary coordinates, but I don't know how to retrieve the pixel intensities within that boundary. Could someone please help me with that? Thanks!
Access pixel values within a contour boundary using OpenCV in Python
0
0.197375
1
0
0
32,084
33,236,054
2015-10-20T11:58:00.000
0
0
0
0
0
python,python-requests
0
33,236,211
0
3
0
true
0
0
HTTP status codes are usually meant for the browsers, or in case of APIs for the client talking to the server. For normal web sites, using status codes for semantical error information is not really useful. Overusing the status codes there could even cause the browser to not render responses correctly. So for normal HTML responses, you would usually expect a code 200 for almost everything. In order to check for errors, you will then have to check the—application specific—error output from the HTML response. A good way to find out about these signs is to just try logging in from the browser with invalid credentials and then check what output is rendered. Or as many sites also show some kind of user menu once you’re logged in, check for its existence to figure out if you’re logged in. And when it’s not there, the login probably failed.
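A small hedged example of that check with the requests library (the URL, form field names and the "Log out" marker are placeholders; the real markers have to be found by inspecting the site as described above):

```python
import requests

LOGIN_URL = "https://example.com/login"  # hypothetical login endpoint

session = requests.Session()
response = session.post(LOGIN_URL, data={"username": "me", "password": "secret"})

# The status code is usually 200 either way, so inspect the page content instead
if "Log out" in response.text:
    print("login appears to have succeeded")
else:
    print("login appears to have failed")
```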
2
1
0
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
How to verify that we have logged in correctly to a website using requests in python?
0
1.2
1
0
1
98
33,236,054
2015-10-20T11:58:00.000
0
0
0
0
0
python,python-requests
0
33,236,191
0
3
0
false
0
0
What status code the site responds with depends entirely on their implementation; you're more likely to get a non-200 response if you're attempting to log in to a web service. If a login attempt yielded a non-200 response on a normal website, it'd require a special handler on their end, as opposed to a 200 response with a normal page prompting you (presumably a human user, not a script) with a visual cue indicating login failure. If the site you're logging into returns a 200 regardless of success or failure, you may need to use something like lxml or BeautifulSoup to look for indications of success or failure (which presumably you'll be using already to process whatever it is you're logging in to access).
2
1
0
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
How to verify that we have logged in correctly to a website using requests in python?
0
0
1
0
1
98
33,241,211
2015-10-20T15:52:00.000
6
0
0
1
1
python,linux,file,space
0
33,241,435
0
1
0
true
0
0
You don't need to (and shouldn't) escape the space in the file name. When you are working with a command line shell, you need to escape the space because that's how the shell tokenizes the command and its arguments. Python, however, is expecting a file name, so if the file name has a space, you just include the space.
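A tiny illustration of the point, using the path from the question:

```python
import os

# The space is simply part of the file name; no shell-style escaping is needed
filename = "/home/abc/LC 1.a"
print(os.access(filename, os.R_OK))
```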
1
3
0
0
I have a problem with os.access(filename, os.R_OK) when the file is an absolute path on a Linux system with a space in the filename. I have tried many ways of quoting the space, from "'" + filename + "'" to filename.replace(' ', '\\ '), but it doesn't work. How can I escape the filename so my shell knows how to access it? In the terminal I would address it as '/home/abc/LC\ 1.a'
Handling a literal space in a filename
0
1.2
1
0
0
4,377
33,273,885
2015-10-22T05:22:00.000
0
1
1
0
0
python,api,twitter,tweepy,twitter-streaming-api
0
33,290,201
0
1
0
true
0
0
Once your data has been loaded into JSON format, you can access the username by calling tweet['user']['screen_name'], where tweet is whatever variable you have assigned to hold the JSON object for that specific tweet.
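A short sketch of counting tweets per user from the stored stream output (it assumes the filter results were written one JSON tweet per line to a file called tweets.json, which is an assumption about the asker's setup):

```python
import json
from collections import Counter

counts = Counter()

with open("tweets.json") as f:          # hypothetical dump of the streaming filter
    for line in f:
        line = line.strip()
        if not line:
            continue
        tweet = json.loads(line)
        counts[tweet["user"]["screen_name"]] += 1

for screen_name, n in counts.most_common():
    print(screen_name, n)
```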
1
0
0
0
How do I list the names of users who tweeted a given keyword, along with the count of tweets from them? I am using Python and tweepy. I used tweepy to write the JSON results to a file via filter(track=["keyword"]), but I don't know how to list the users who tweeted the given keyword.
how to list all users who tweeted a given keyword using twitter api and tweepy
0
1.2
1
0
1
578
33,294,758
2015-10-23T04:12:00.000
0
0
0
0
0
javascript,python,django,cordova,authentication
0
33,294,839
0
2
0
false
1
0
You would need to either expose the Django token in the settings file so that it can be accessed via jQuery, or that decorator won't be accessible via mobile. Alternatively, you can start using something like OAuth.
1
1
0
0
I have developed a Python/Django application for a company. In this app all the employees of the company have a username and a password to login in. Now there is a need for a phone application that can do some functionality. In some functions I have the decorator @login_required For security reasons I would like to work with this decorator than against it, so how do I? I'm using PhoneGap (JavaScript/JQuery) to make the phone app if that helps. I can do my own research but I just need a starting point. Do I get some sort of token and keep it in all my HTTP request headers? First Attempt: I was thinking that maybe I POST to the server and get some kind of Authentication Token or something. Maybe there is some Javascript code that hashes my password using the same algorithm so that I can compare it to the database. Thanks
Login in from Phone App
0
0
1
0
0
57
33,321,076
2015-10-24T17:19:00.000
0
0
0
0
1
python,session,cookies,python-requests
0
33,321,212
0
2
0
false
1
0
You need to get the response from the page, then regex-match for the token.
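A hedged sketch of that idea with requests and re (the URL, form field names and the hidden-input markup are assumptions; the exact pattern depends on the site's HTML):

```python
import re
import requests

session = requests.Session()
login_page = session.get("https://example.com/login")   # hypothetical URL

# authenticity_token is typically embedded in a hidden input field
match = re.search(r'name="authenticity_token" value="([^"]+)"', login_page.text)
token = match.group(1) if match else None

payload = {
    "username": "user",            # hypothetical form field names
    "password": "secret",
    "authenticity_token": token,
}
response = session.post("https://example.com/login", data=payload)
```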
2
0
0
0
I am trying to extract data from a webpage after logging in. To log in to the website, I can see the token (authenticity_token) in the Form Data section. It seems the token is generated automatically. I am trying to get the token value but with no luck. Please, can anyone help me with how to get the token value while sending the POST request?
How to get token value while sending post requests
0
0
1
0
1
70
33,321,076
2015-10-24T17:19:00.000
0
0
0
0
1
python,session,cookies,python-requests
0
37,413,165
0
2
0
false
1
0
The token value is stored in the cookie file. Check the cookie file and extract the value from it. For example, a cookie file after login contains jsession ID=A01~xxxxxxx, where 'xxxxxxx' is the token value. Extract this value and post it.
2
0
0
0
I am trying to extract data from a webpage after logging in. To log in to the website, I can see the token (authenticity_token) in the Form Data section. It seems the token is generated automatically. I am trying to get the token value but with no luck. Please, can anyone help me with how to get the token value while sending the POST request?
How to get token value while sending post requests
0
0
1
0
1
70
33,328,730
2015-10-25T10:41:00.000
0
0
0
0
1
python,django
0
33,329,001
0
1
0
false
1
0
Groups in Django (django.contrib.auth) are used to grant certain users rights to view content, mainly in the admin. I think your group functionality might be more custom than this, and that you're better off creating your own group models and making your own user and group management structure that suits the way your website is used.
1
0
0
0
I am currently learning how to use Django. I want to make a web app where you as a user can join groups. These groups have content that only members of the group should be able to see. I learned about users, groups and a bit of authentication. My first impression is that this is more about the administration of the website itself, and I cannot really believe that I can solve my idea with it. I just want to know if that's the way to go in Django. I probably have to create groups in Django that have the right to see the content of the group on the website. But that means that every time a group is created, I have to create a Django group. Is that overkill or the right way?
How to organize groups in Django?
0
0
1
0
0
77
33,353,398
2015-10-26T18:50:00.000
0
0
0
1
0
python,design-patterns,command-line-interface,restful-architecture,n-tier-architecture
0
33,353,621
0
1
0
true
1
0
Since your app is not very complex, I see 2 layers here: ServerClient: it provides an API for remote calls and hides the details. It knows how to access the HTTP server, provide auth, deal with errors etc. It has methods like do_something_good() which anyone may call without caring whether it is a remote method or not. CommandLine: it uses optparse (or argparse) to implement the CLI; it may support history etc. This layer uses ServerClient to access the remote service. Neither layer knows anything about the other's internals (only the protocol, i.e. the list of known methods). This will allow you to use something instead of HTTP REST and the CLI will still work, or to replace the CLI with batch files and the HTTP part should still work.
1
0
0
0
I am going to write an HTTP (REST) client in Python. This will be a command line interface tool with no GUI. I won't use any business logic objects, no database, just an API to communicate with the server (using curl). Would you recommend some architectural patterns for doing that, other than Model View Controller? Note: I am not asking for design patterns like Command or Strategy. I just want to know how to segregate and decouple abstraction layers. I think using MVC is pointless given there is no business logic - please correct me if I'm wrong. Please give me your suggestions! Do you know any examples of CLI projects (in any language, not necessarily in Python) that are well maintained and with clean code? Cheers
Architectural pattern for CLI tool
0
1.2
1
0
0
355
33,353,968
2015-10-26T19:25:00.000
0
0
0
1
1
python,azure
1
42,435,442
0
4
0
false
0
0
BlobService is the function you are trying to call, but it is not defined anywhere. It should be defined when you call from azure.storage import *; it is probably not being imported due to a difference in package versions. Calling from azure.storage.blob import * should work, as the name is then imported from the correct module.
3
0
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0
0
1
0
0
3,863
33,353,968
2015-10-26T19:25:00.000
0
0
0
1
1
python,azure
1
33,354,366
0
4
0
false
0
0
It's been a long time since I did any Python, but BlobStorage is in the azure.storage.blob namespace I believe. So I don't think your from azure.storage import * is pulling it in. If you've got a code sample in a book which shows otherwise it may just be out of date.
3
0
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0
0
1
0
0
3,863
33,353,968
2015-10-26T19:25:00.000
1
0
0
1
1
python,azure
1
33,355,053
0
4
0
false
0
0
James, I figured it out. I just changed from azure.storage import * to from azure.storage.blob import * and it seems to be working.
3
0
0
0
I've installed the azure SDK for Python (pip install azure). I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook. I've replaced all the place holders in the script with actual values as noted in the scripts comments. When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)". Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct?
How do I fix the 'BlobService' is not defined' error
0
0.049958
1
0
0
3,863
33,405,411
2015-10-29T03:21:00.000
3
0
1
1
0
python,eclipse,pydev
0
33,405,546
0
1
0
true
0
0
First check whether Python 3.5 is auto-configured in Eclipse. Go to Window > Preferences. In the Preferences window you will find the PyDev configuration in the left pane: PyDev > Interpreters > Python Interpreter. If Python 3.5 is not listed you can either add it using "Quick Auto-Config" or, if you want to add it manually, click "New", give the interpreter a name (e.g. Py3.5) and then browse to the path of the Python executable (in your case inside /Library/Frameworks/Python.framework/). Once you have configured your interpreter in PyDev you can change the interpreter of your project: right click on your project > Properties, then in the left pane click PyDev - Interpreter. There select the name of the Python interpreter (Py3.5) which you previously configured; you can also select the grammar version.
1
1
0
0
In Eclipse, I'm used to configuring the build path for versions of Java installed on my computer. I recently added Python 3.5 to my computer and want to use it in place of the default 2.7 that Macs automatically include. How can I configure my build path in PyDev, if there is such a concept for the plugin? I've found that Python 3.5 is located at /Library/Frameworks/Python.framework/; how can I now change PyDev to use it?
How to change build path on PyDev
0
1.2
1
0
0
1,264
33,418,316
2015-10-29T15:26:00.000
0
0
1
0
0
python,list
0
33,418,551
0
5
0
false
0
0
You don't need to "delete" the node, just "skip" it. That is, change Node1's next member to the second Node2. Edit your question if you would like specific code examples (which are the norm for this site).
2
2
0
0
I was wondering if any of you could give me a walk through on how to remove an element from a linked list in python, I'm not asking for code but just kinda a pseudo algorithm in english. for example I have the linked list of 1 -> 2 -> 2 -> 3 -> 4 and I want to remove one of the 2's how would i do that? I thought of traversing through the linked list, checking to see if the data of one of the nodes is equal to the data of the node after it, if it is remove it. But I'm having trouble on the removing part. Thanks!
removing an element from a linked list in python
0
0
1
0
0
3,800
33,418,316
2015-10-29T15:26:00.000
0
0
1
0
0
python,list
0
33,418,852
0
5
0
false
0
0
You can do something like: if element.next.value == element.value: element.next = element.next.next. Just be careful to free the memory if you are programming this in C/C++ or another language that does not have GC.
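A slightly fuller sketch of the same idea as a traversal (the Node class is a placeholder for whatever the asker's list uses):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove_adjacent_duplicates(head):
    """Unlink any node whose value equals the value of the node before it."""
    current = head
    while current is not None and current.next is not None:
        if current.next.value == current.value:
            current.next = current.next.next   # skip the duplicate node
        else:
            current = current.next
    return head

# 1 -> 2 -> 2 -> 3 -> 4 becomes 1 -> 2 -> 3 -> 4
head = Node(1, Node(2, Node(2, Node(3, Node(4)))))
remove_adjacent_duplicates(head)
```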
2
2
0
0
I was wondering if any of you could give me a walk through on how to remove an element from a linked list in python, I'm not asking for code but just kinda a pseudo algorithm in english. for example I have the linked list of 1 -> 2 -> 2 -> 3 -> 4 and I want to remove one of the 2's how would i do that? I thought of traversing through the linked list, checking to see if the data of one of the nodes is equal to the data of the node after it, if it is remove it. But I'm having trouble on the removing part. Thanks!
removing an element from a linked list in python
0
0
1
0
0
3,800
33,418,678
2015-10-29T15:41:00.000
0
0
0
0
0
python,windows,opencv,numpy,tkinter
1
33,434,056
0
2
0
false
0
1
Finally did it with .whl files. Download them, copy to C:\python27\Scripts, then open "cmd" and navigate to that folder with "cd" etc. Once there run, for example: pip install numpy-1.10.1+mkl-cp27-none-win_amd64.whl. In IDLE, import numpy followed by numpy.__version__ then gives '1.10.1'.
2
0
1
0
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
tkinter opencv and numpy in windows with python2.7
0
0
1
0
0
170
33,418,678
2015-10-29T15:41:00.000
0
0
0
0
0
python,windows,opencv,numpy,tkinter
1
33,441,221
0
2
0
false
0
1
Small remark: WinPython has tkinter, as it's included with the Python interpreter itself.
2
0
1
0
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
tkinter opencv and numpy in windows with python2.7
0
0
1
0
0
170
33,420,633
2015-10-29T17:14:00.000
0
0
0
0
0
python,python-2.7,pandas,dataframe,pivot-table
0
33,421,040
0
1
0
false
0
0
I'm going to be general here, since there was no sample code or data provided. Let's say your original dataframe is called df and has columns Date and Sales. I would try creating a list that has all dates from 01-01-2014 to 12-31-2015. Let's call this list dates. I would also create an empty list called sales (i.e. sales = []). At the end of this workflow, sales should include data from df['Sales'] AND placeholders for dates that are not within the data frame. In your case, these placeholders will be 0. In my answer, the names of the columns in the dataframe are capitalized; names of lists start with a lower case. Next, I would iterate through dates and check to see if each date is in df['Date']. Each iteration through the list dates will be called date (i.e. date = dates[i]). If date is in df['Date'], I would append the Sales data for that date into sales. You can find the date in the dataframe through this command: df['Date']==date. So, to append the corresponding Sales data into the list, I would use this command: sales.append(df[df['Date']==date]['Sales']). If date is NOT in df['Date'], I would append a placeholder into sales (i.e. sales.append(0)). Once you iterate through all the dates in the list, I would create the final dataframe with dates and sales. The final dataframe should have both your original data and placeholders for dates that were not in the original data.
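A compact sketch of the loop described above, on made-up data (the date range and the sample frame are placeholders):

```python
import pandas as pd

# Hypothetical input frame with 'Date' and 'Sales' columns
df = pd.DataFrame({
    "Date": pd.to_datetime(["2014-01-02", "2014-01-05"]),
    "Sales": [10, 20],
})

dates = pd.date_range("2014-01-01", "2014-01-07")   # the full range to pad to
sales = []
for date in dates:
    if (df["Date"] == date).any():
        sales.append(df.loc[df["Date"] == date, "Sales"].iloc[0])
    else:
        sales.append(0)                              # placeholder for missing dates

padded = pd.DataFrame({"Date": dates, "Sales": sales})
```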
1
1
1
0
I have a pivot table which has an index of dates ranging from 01-01-2014 to 12-31-2015. I would like the index to range from 01-01-2013 to 12-31-2016 and do not know how, without modifying the underlying dataset by inserting a row in my pandas dataframe with those dates in the column I want to use as my index for the pivot table. Is there a way to accomplish this without modifying the underlying dataset?
Padding python pivot tables with 0
0
0
1
0
0
214
33,449,969
2015-10-31T09:20:00.000
1
1
1
0
0
python,sublimetext2,sublimetext3,sublimetext,sublime-text-plugin
0
55,436,658
0
2
0
false
0
0
For the second question, about running without saving to the hard disk: 1) press Ctrl+Shift+P; 2) type "install package" and install it; 3) type "auto save" and install it; 4) go to Preferences > Package Settings > Auto-save > Settings Default and copy all the code; 5) go to Preferences > Package Settings > Auto-save > Settings User, paste it, and change the first setting from "auto_save_on_modified": false, to "auto_save_on_modified": true. Good luck
1
5
0
0
I am using Sublime Text Editor for Python development. If I create a new file by Ctrl + N, the default language setting for this file is Plain Text, so how to change the default language setting for the new file to be Python ? Another question :If I write some code in the new file and have not save it to disk, it is impossible to run it and get the running result, is there a solution to remove this restriction so that we can run code in the new file without saving it to disk first?
Sublime Text: run code in a new file without saving to disk and the default language setting for a new file
0
0.099668
1
0
0
2,066
33,450,285
2015-10-31T09:59:00.000
0
0
0
0
1
python,mapreduce,scikit-learn,svm
0
35,586,970
0
1
0
false
0
0
Make sure that all of the required libraries (scikit-learn, NumPy, pandas) are installed on every node in your cluster. Your mapper will process each line of input, i.e., your training row and emit a key that basically represents the fold for which you will be training your classifier. Your reducer will collect the lines for each fold and then run the sklearn classifier on all lines for that fold. You can then average the results from each fold.
1
1
1
0
I've been tasked with solving a sentiment classification problem using scikit-learn, python, and mapreduce. I need to use mapreduce to parallelize the project, thus creating multiple SVM classifiers. I am then supposed to "average" the classifiers together, but I am not sure how that works or if it is even possible. The result of the classification should be one classifier, the trained, averaged classifier. I have written the code using scikit-learn SVM Linear kernel, and it works, but now I need to bring it into a map-reduce, parallelized context, and I don't even know how to begin. Any advice?
Combining SVM Classifiers in MapReduce
1
0
1
0
0
453
33,458,865
2015-11-01T03:15:00.000
8
0
0
0
1
python,pandas,dataframe,series
0
33,458,868
0
1
0
true
0
0
You can do df.ix[[n]] to get a one-row dataframe of row n.
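The same idea on a toy frame (.ix was the pandas API at the time; the sketch below uses .iloc with a list, which behaves the same way in current pandas):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

row_as_series = df.iloc[0]     # plain indexing returns a Series
row_as_frame = df.iloc[[0]]    # wrapping the index in a list keeps a one-row DataFrame

# Converting an already-extracted Series back while keeping the original columns:
back_to_frame = row_as_series.to_frame().T
```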
1
5
1
0
I have a huge dataframe, and I index it like so: df.ix[<integer>] Depending on the index, sometimes this will have only one row of values. Pandas automatically converts this to a Series, which, quite frankly, is annoying because I can't operate on it the same way I can a df. How do I either: 1) Stop pandas from converting and keep it as a dataframe ? OR 2) easily convert the resulting series back to a dataframe ? pd.DataFrame(df.ix[<integer>]) does not work because it doesn't keep the original columns. It treats the <integer> as the column, and the columns as indices. Much appreciated.
how to make 1 by n dataframe from series in pandas?
0
1.2
1
0
0
1,239
33,464,294
2015-11-01T16:16:00.000
2
0
0
0
0
python,statsmodels
0
33,479,441
0
1
0
true
0
0
When we request automatic lag selection in adfuller, the function needs to compare all models up to the given maxlag lags. For this comparison we need to use the same observations for all models. Because lagged observations enter the regressor matrix, we lose observations as initial conditions corresponding to the largest lag included. As a consequence, autolag uses nobs - maxlag observations for all models. For calculating the test statistic of adfuller itself, we don't need model comparison anymore and we can use all observations available for the chosen lag, i.e. nobs - best_lag. More generally, how to treat initial conditions and different numbers of initial conditions is not always clear cut; autocorrelation and partial autocorrelation are largely based on using all available observations, full MLE for AR and ARMA models uses the stationary model to include the initial conditions, while conditional MLE or least squares drops them as necessary.
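A small sketch showing how the chosen lag and information criterion can be inspected for two different maxlag settings (the random-walk series is purely illustrative data):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

np.random.seed(0)
x = np.cumsum(np.random.randn(500))   # a random walk, just for illustration

# With autolag, all models up to maxlag are compared on nobs - maxlag observations
result_20 = adfuller(x, maxlag=20, autolag='BIC')
result_40 = adfuller(x, maxlag=40, autolag='BIC')

# Return value: (adf stat, p-value, usedlag, nobs, critical values, icbest)
print(result_20[2], result_20[5])   # chosen lag and its BIC value
print(result_40[2], result_40[5])   # the lag may match, but icbest can differ
```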
1
3
1
0
This question is on Augmented Dickey–Fuller test implementation in statsmodels.tsa.stattools python library - adfuller(). In principle, AIC and BIC are supposed to compute information criterion for a set of available models and pick up the best (the one with the lowest information loss). But how do they operate in the context of Augmented Dickey–Fuller? The thing which I don't get: I've set maxlag=30, BIC chose lags=5 with some informational criterion. I've set maxlag=40 - BIC still chooses lags=5 but the information criterion have changed! Why in the world would information criterion for the same number of lags differ with maxlag changed? Sometimes this leads to change of the choice of the model, when BIC switches from lags=5 to lags=4 when maxlag is changed from 20 to 30, which makes no sense as lag=4 was previously available.
How exactly BIC in Augmented Dickey–Fuller test work in Python?
0
1.2
1
0
0
1,081
33,465,153
2015-11-01T17:43:00.000
0
0
0
0
1
python,django,django-migrations
0
33,465,379
0
1
0
true
1
0
I was using a models directory. Adding an import of the model to __init__.py allowed me to control whether it's visible to makemigrations or not. I found that out using strace.
1
0
0
0
I just made a mess in my local Django project and realized that somehow I'm out of sync with my migrations. I tried to apply initial and realized that some of the tables already exist, so I tried --fake. This made the migration pass, but now I'm missing the one table I just wanted to add... how can I prepare migration just for one model or make Django re-discover what my database is missing and create that?
Django Migrations - how to insert just one model?
0
1.2
1
0
0
941
33,465,685
2015-11-01T18:32:00.000
0
0
1
0
0
python,oop
0
33,465,756
0
2
0
false
0
0
More information needs to be given to fully understand the context. But, in a general sense, I'd do a mix of all of them. Use helper functions for "shared" parts, and use conditional statements too. Honestly, a lot of it comes down to just what is easier for you to do?
1
1
1
0
I need several very similar plotting functions in python that share many arguments, but differ in some and of course also differ slightly in what they do. This is what I came up with so far: Obviously just defining them one after the other and copying the code they share is a possibility, though not a very good one, I reckon. One could also transfer the "shared" part of the code to helper functions and call these from inside the different plotting functions. This would make it tedious though, to later add features that all functions should have. And finally I've also thought of implementing one "big" function, making possibly not needed arguments optional and then deciding on what to do in the function body based on additional arguments. This, I believe, would make it difficult though, to find out what really happens in a specific case as one would face a forest of arguments. I can rule out the first option, but I'm hard pressed to decide between the second and third. So I started wondering: is there another, maybe object-oriented, way? And if not, how does one decide between option two and three? I hope this question is not too general and I guess it is not really python-specific, but since I am rather new to programming (I've never done OOP) and first thought about this now, I guess I will add the python tag. EDIT: As pointed out by many, this question is quite general and it was intended to be so, but I understand that this makes answering it rather difficult. So here's some info on the problem that caused me to ask: I need to plot simulation data, so all the plotting problems have simulation parameters in common (location of files, physical parameters,...). I also want the figure design to be the same. But depending on the quantity, some plots will be 1D, some 2D, some should contain more than one figure, sometimes I need to normalize the data or take a logarithm before plotting it. The output format might also vary. I hope this helps a bit.
What is a good way to implement several very similar functions?
1
0
1
0
0
444
33,469,625
2015-11-02T02:00:00.000
0
0
1
0
0
python,c++,text,extract
0
33,469,697
0
3
0
false
0
0
It sounds like what you want to do is first read File B, collecting the IDs. You can store the IDs in a set or a dict. Then read File A. For each line in File A, extract the ID, then see if it was in File B by checking for membership in your set or dict. If not, then skip that line and continue with the next line. If it is, then process that line as desired.
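A minimal sketch of that two-pass approach (the file names, the tab delimiter and the ID sitting in the third column are assumptions based on the question):

```python
# Pass 1: build the set of wanted IDs from File B (one ID per line assumed)
with open("file_b.txt") as fb:
    wanted = {line.strip() for line in fb if line.strip()}

# Pass 2: stream File A once, keeping only rows whose third field is in the set
with open("file_a.txt") as fa, open("matched.txt", "w") as out:
    for line in fa:
        fields = line.rstrip("\n").split("\t")   # assuming tab-delimited fields
        if len(fields) > 2 and fields[2] in wanted:
            out.write(line)
```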
1
1
0
0
So first, I know there are some answers out there for similar questions, but...my problem has to do with speed and memory efficiency. I have a 60 GB text file that has 17 fields and 460,368,082 records. Column 3 has the ID of the individual and the same individual can have several records in this file. Lets call this file, File A. I have a second file, File B, that has the ID of 1,000,000 individuals and I want to extract the rows of File A that have an ID that is in File B. I have a windows PC and I'm open to doing this in C or Python, or whatever is faster... but not sure how to do it fast and efficiently. So far every solution I have come up with takes over 1.5 years according to my calculations.
Extracting certain rows from a file that match a condition from another file
0
0
1
0
0
358
33,503,134
2015-11-03T15:37:00.000
1
0
0
1
0
python,python-3.x,asynchronous,scalability,popen
0
33,503,827
0
1
0
false
0
0
You could use os.listdir or os.walk instead of ls, and the re module instead of grep. Wrap everything up in a function, and use e.g. the map method from a multiprocessing.Pool object to run several of those in parallel. This is a pattern that works very well. In Python3 you can also use Executors from concurrent.futures in a similar way.
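A rough sketch of that pattern, assuming Python 3 (the directories and the pattern are placeholders for whatever the ls | grep pipeline was matching):

```python
import os
import re
from multiprocessing import Pool

PATTERN = re.compile(r"error")   # hypothetical stand-in for the grep expression

def matching_files(directory):
    """Return file paths under directory whose names match PATTERN (ls | grep)."""
    hits = []
    for root, _dirs, files in os.walk(directory):
        hits.extend(os.path.join(root, name) for name in files if PATTERN.search(name))
    return hits

if __name__ == "__main__":
    directories = ["/var/log", "/tmp"]   # hypothetical directories to scan
    with Pool(processes=4) as pool:
        for result in pool.map(matching_files, directories):
            print(result)
```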
1
0
0
0
Requirement - I want to execute a command that uses ls, grep, head etc. with pipes (|). I am searching for some pattern and extracting some info which is part of a query my HTTP server supports. The final output should not be too big, so I'm assuming stdout should be good to use (I read about deadlock issues somewhere). Currently, I use popen from the subprocess module but I have my doubts over it: how many simultaneous popen calls can be fired? does the result immediately come in stdout? (for now it looks like the case, but how do I ensure it if the commands take a long time) how do I ensure that everything is async - keeping close to a single thread model? I am new to Python and links to videos/articles are also appreciated. Any other way than popen is also fine.
Async execution of commands in Python
0
0.197375
1
0
0
302
33,506,328
2015-11-03T18:19:00.000
0
0
0
0
0
python,tkinter,widget
0
33,507,438
0
2
0
false
0
1
The very definition of a row is that it is the same height all the way across; that's what makes it a row. The same can be said for columns. Therefore, the tallest item in a row (height plus padding) is what controls the overall height of the row. The only control you have over smaller widgets is which sides of their too-large cell they stick to. For example, if you want all widgets to be aligned along their tops, use sticky="n", which causes the top of the widgets to "stick" to the top (north) side of the space they have been allocated. If you want them aligned along their bottoms, use sticky="s". Providing neither "n" nor "s" means they will be aligned along their midpoints.
1
0
0
0
I have an large frame of a wide array of elements. Within this frame, there are basically two different sides to the frame. Consider a widget x on the left side, which is placed by .grid(row=4, column=0). Padding is added to this object x, so it is actually x.grid(row=4, column=0, pady=10) Well, the opposite object, object y, is placed on the same row by y.grid(row=4, column=4), or something along those lines. I have this setup, but the pady on x is adding padding to y as well. I want there to be padding on one widget in the row-- not the entire row. Therefore, my paraphrased question is, how does one add padding to only one widget in a row, without adding padding to every object in that respective row?
How to add padding to a widget, but not the entire row's widgets, in tkinter?
0
0
1
0
0
767
33,510,814
2015-11-03T23:09:00.000
1
0
0
0
0
python,selenium,scrapy
0
33,521,521
0
3
0
false
1
0
Scrapy by itself does not control browsers. However, you could start a Selenium instance from a Scrapy crawler. Some people design their Scrapy crawler like this. They might process most pages only using Scrapy but fire Selenium to handle some of the pages they want to process.
1
1
0
0
When I use Selenium I can see the Browser GUI, is it somehow possible to do with scrapy or is scrapy strictly command line based?
Can scrapy control and show a browser like Selenium does?
1
0.066568
1
0
1
1,636
33,514,313
2015-11-04T05:26:00.000
4
0
0
0
0
python,selenium,cookies,phantomjs
0
33,516,899
0
1
0
true
0
0
The documentation suggests driver.cookies_enabled = False; you can use that.
1
0
0
1
I have searched for a long time but I could not find how to disable cookies for PhantomJS using Selenium with Python. I couldn't understand the PhantomJS documentation. Please, can someone help me?
disabling Cookies on phantomjs using selenium with python
0
1.2
1
0
1
730
33,541,692
2015-11-05T10:06:00.000
68
0
0
0
0
python,excel,openpyxl
0
33,543,305
0
3
0
true
0
0
ws.max_row will give you the number of rows in a worksheet. Since openpyxl version 2.4 you can also access individual rows and columns and use their length to answer the question: len(ws['A']). Though it's worth noting that for data validation of a single column Excel uses 1:1048576.
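A tiny example of both calls (the workbook name is a placeholder):

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")   # hypothetical workbook
ws = wb.active

print(ws.max_row)     # highest row number containing data
print(len(ws['A']))   # number of cells in column A (openpyxl >= 2.4)
```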
1
38
0
0
I'm using openpyxl to put data validation to all rows that have "Default" in them. But to do that, I need to know how many rows there are. I know there is a way to do that if I were using Iterable workbook mode, but I also add a new sheet to the workbook and in the iterable mode that is not possible.
How to find the last row in a column using openpyxl normal workbook?
1
1.2
1
1
0
117,258
33,550,976
2015-11-05T17:25:00.000
0
1
0
1
1
python,linux
1
36,415,642
0
1
0
false
0
0
It could be related to many things. Things that I had to fix in similar setups: check the external power supply of the router, which needs to be stable; the USB drives could draw more current than the port can handle, and a simple fix is to add an externally powered USB hub, or keep the same port but add capacitors (maybe 1000uF) in parallel with the power line right before the USB port where the drive is.
1
0
0
0
I've got several MR-3020's that I have flashed with OpenWRT and mounted a 16GB ext4 USB drive on it. Upon boot, a daemon shell script is started which does two things: 1) It constantly looks to see if my main program is running and if not starts up the python script 2) It compares the lasts heartbeat timestamp generated by my main program and if it is older than 10 minutes in the past kills the python process. #1 is then supposed to restart it. Once running, my main script goes into monitor mode and collects packet information. It periodically stops sniffing, connects to the internet and uploads the data to my server, saves the heartbeat timestamp and then goes back into monitor mode. This will run for a couple hours, days, or even a few weeks but always seems to die at some point. I've been having this issue for nearly 6 months (not exclusively) I've run out of ideas. I've got files for error, info and debug level logging on pretty much every line in the python script. The amount of memory used by the python process seems to hold steady. All network calls are encapsulated in try/catch statements. The daemon writes to logread. Even with all that logging, I can't seem to track down what the issue might be. There doesn't seem to be any endless loops entered into, none of the errors (usually HTTP request when not connected to internet yet) are ever the final log record - the device just seems to freeze up randomly. Any advice on how to further track this down?
Crashing MR-3020
0
0
1
0
0
61
33,551,143
2015-11-05T17:34:00.000
1
1
0
0
0
python,api,amazon-web-services,amazon-s3,boto
0
33,590,521
0
1
0
true
1
0
You are correct that AWS Lambda can be triggered when objects are added to, or deleted from, an Amazon S3 bucket. It is also possible to send a message to Amazon SNS and Amazon SQS. These settings needs to be configured by somebody who has the necessary permissions on the bucket. If you have no such permissions, but you have the ability to call GetBucket(), then you can retrieve a list of objects in the bucket. This returns up to 1000 objects per API call. There is no API call available to "get the newest files". There is no raw code to "monitor" uploads to a bucket. You would need to write code that lists the content of a bucket and then identifies new objects. How would I approach this problem? I'd ask the owner of the bucket to add some functionality to trigger Lambda/SNS/SQS, or to provide a feed of files. If this wasn't possible, I'd write my own code that scans the entire bucket and have it execute on some regular schedule.
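A small sketch of the scan-the-whole-bucket fallback using boto 2 (the bucket name is a placeholder, credentials are assumed to come from the environment, and as the answer says this still lists every object):

```python
import boto

conn = boto.connect_s3()                 # credentials from env vars or ~/.boto
bucket = conn.get_bucket("my-bucket")    # hypothetical bucket name

# bucket.list() pages through every key, so this is O(number of objects)
newest = max(bucket.list(), key=lambda key: key.last_modified)
print(newest.name, newest.last_modified)
```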
1
2
0
0
I have access to a S3 bucket. I do not own the bucket. I need to check if new files were added to the bucket, to monitor it. I saw that buckets can fire events and that it is possible to make use of Amazon's Lambda to monitor and respond to these events. However, I cannot modify the bucket's settings to allow this. My first idea was to sift through all the files and get the latest one. However, there are a lot of files in that bucket and this approach proved highly inefficient. Concrete questions: Is there a way to efficiently get the newest file in a bucket? Is there a way to monitor uploads to a bucket using boto? Less concrete question: How would you approach this problem? Say you had to get the newest file in a bucket and print it's name, how would you do it? Thanks!
How to monitor a AWS S3 bucket with python using boto?
1
1.2
1
0
1
4,530
33,577,252
2015-11-06T23:37:00.000
0
0
0
1
0
python,django,twisted,wamp-protocol,crossbar
0
34,815,287
0
1
0
false
1
0
With a Web app using WAMP, you have two separate mechanisms: Serving the Web assets and the Web app then communicating with the backend (or other WAMP components). You can use Django, Flask or any other web framework for serving the assets - or the static Web server integrated into Crossbar.io. The JavaScript you deliver as part of the assets then connects to Crossbar.io (or another WAMP router), as do the backend or other components. This is then used to e.g. send data to display to the Web frontend or to transmit user input.
1
0
0
0
As I understand it (please do correct misunderstandings, obviously), the mentioned projects/technologies are as follows:- Crossover.io - A router for WAMP. Cross-language. WAMP - An async message passing protocol, supporting (among other things) Pub/Sub and RPC. Cross-language. twisted - An asynchronous loop, primarily used for networking (low-level). Python specific. As far as I can tell, current crossover.io implementation in python is built on top of twisted. klein - Built on top of twisted, emulating flask but asynchronously (and without the plugins which make flask easier to use). Python specific. django/flask/bottle - Various stacks/solutions for serving web content. All are synchronous because they implement the WSGI. Python specific. How do they interact? I can see, for example, how twisted could be used for network connections between various python apps, and WAMP between apps of any language (crossover.io being an option for routing). For networking though, some form of HTTP/browser based connection is normally needed, and that's where in Python django and alternatives have historically been used. Yet I can't seem to find much in terms of interaction between them and crossover/twisted. To be clear, there's things like crochet (and klein), but none of these seem to solve what I would assume to be a basic problem, that of saying 'I'd like to have a reactive user interface to some underlying python code'. Or another basic problem of 'I'd like to have my python code update a webpage as it's currently being viewed'. Traditionally I guess its handled with AJAX and similar on the webpage served by django et. al., but that seems much less scalable on limited hardware than an asynchronous approach (which is totally doable in python because of twisted and tornado et. al.). Summary Is there a 'natural' interaction between underlying components like WAMP/twisted and django/flask/bottle? If so, how does it work.
How do crossover.io, WAMP, twisted (+ klein), and django/flask/bottle interact?
1
0
1
0
0
263
33,587,761
2015-11-07T21:04:00.000
0
0
0
0
0
python,opencv,image-processing,computer-vision
0
33,596,513
0
1
0
false
0
0
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3d, then it is also in any 2d slice, therefore: Dilate each image in the array with a 3x3 kernel, keep as candidate maxima the points whose intensity is unchanged. Brute-force test the candidates against their upper and lower slices.
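A rough sketch of that two-step scheme on a made-up response stack (the shapes and the random data are placeholders):

```python
import cv2
import numpy as np

# Hypothetical 3-D array of LoG responses, shape (n_levels, height, width)
stack = np.random.rand(5, 128, 128).astype(np.float32)

kernel = np.ones((3, 3), np.uint8)
candidates = []
for i, level in enumerate(stack):
    dilated = cv2.dilate(level, kernel)      # each pixel becomes the max of its 3x3 patch
    ys, xs = np.where(level == dilated)      # unchanged pixels are 2-D local maxima
    candidates.extend((i, y, x) for y, x in zip(ys, xs))

# Brute-force check each candidate against the slices above and below
maxima = []
for i, y, x in candidates:
    lo, hi = max(i - 1, 0), min(i + 1, stack.shape[0] - 1)
    patch = stack[lo:hi + 1, max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    if stack[i, y, x] >= patch.max():
        maxima.append((i, y, x))
```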
1
0
1
0
I'm trying to implement a blob detector based on LOG, the steps are: creating an array of n levels of LOG filters use each of the filters on the input image to create a 3d array of h*w*n where h = height, w = width and n = number of levels. find a local maxima and circle the blob in the original image. I already created the filters and the 3d array (which is an array of 2d images). I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and create 2 extra empty images). Now I'm trying to figure out how to find the local maxima in the array. I need to compare each pixel to its 26 neighbours (8 in the same picture and the 9 pixels in each of the two adjacent scales) The brute force way of checking the pixel value directly seems ugly and not very efficient. Whats the best way to find a local maxima point in python using openCV?
finding a local maximum in a 3d array (array of images) in python
0
0
1
0
0
1,030
33,603,304
2015-11-09T06:03:00.000
1
0
0
0
0
python,image-processing
0
33,729,058
0
2
0
true
0
0
I finally did it with ImageMagick, using Python to calculate the various coordinates, etc. This command will create the desired circle (radius 400, centered at (600, 600)): convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 -draw "translate 600,600 circle 0,0 400,0" drawn.png This command will then convert it to B/W to get a rudimentary mask: convert drawn.png -alpha extract mask.png This command will blur the mask (radius 180, sigma 16): convert -channel RGBA -blur 100x16 mask.png mask2.png The above three commands give me the mask I need. This command will darken the whole image (without the mask): convert image.jpg -level 0%,130%,0.7 dark.jpg And this command will put all 3 images together (original image, darkened image, and mask): composite image.jpg dark.jpg mask2.png out.jpg
1
1
1
0
Here's what I'm trying to do: I have an image. I want to take a circular region in the image, and have it appear as normal. The rest of the image should appear darker. This way, it will be as if the circular region is "highlighted". I would much appreciate feedback on how to do it in Python. Manually, in Gimp, I would create a new layer with a color of gray (less than middle gray). I would then create a circualr region on that layer, and make it middle gray. Then I would change the blending mode to soft light. Essentially, anything that is middle gray on the top layer will show up without modification, and anything darker than middle gray would show up darker. (Ideally, I'd also blur out the top layer so that the transition isn't abrupt). How can I do this algorithmically in Python? I've considered using the Pillow library, but it doesn't have these kinds of blend modes. I also considered using the Blit library, but I couldn't import (not sure it's maintained any more). Am open to scikit-image as well. I just need pointers on the library and some relevant functions. If there's no suitable library, I'm open to calling command line tools (e.g. imagemagick) from within the Python code. Thanks!
Create a "spotlight" in an image using Python
0
1.2
1
0
0
687
33,614,947
2015-11-09T17:42:00.000
1
0
0
0
0
python,seaborn,marker
0
46,858,249
0
1
0
false
0
0
As per the comment from @Sören, you can add the markeredges with the keyword scatter_kws. For example scatter_kws={'linewidths':1,'edgecolor':'k'}
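The full call, for reference (the tips example dataset ships with seaborn):

```python
import seaborn as sns

tips = sns.load_dataset("tips")

# matplotlib scatter keywords are passed through scatter_kws
sns.lmplot(x="size", y="tip", data=tips,
           scatter_kws={"linewidths": 1, "edgecolor": "k"})
```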
1
1
1
0
sns.lmplot(x="size", y="tip", data=tips) gives a scatter plot. By default the markers have no edges. How can I add markeredges? Sometimes I prefer to use edges transparent facecolor. Especially with dense data. However, Neither markeredgewidth nor mew nor linewidths are accepted as keywords. Does anyone know how to add edges to the markers?
Add markeredges in seaborn lmplot?
0
0.197375
1
0
0
1,371
33,627,789
2015-11-10T10:23:00.000
1
0
0
0
0
python,xml,python-2.7,odoo-8,odoo
0
33,636,388
0
2
0
false
1
0
You cannot make only some of the fields 'readonly' in Odoo based on groups. If you need to do that, you can use the custom module 'smile_model_access_extension'. For loading the appropriate view on a menu click you can create records for the view_ids field of 'ir.actions.act_window', where you can specify the sequence and type of view to be loaded when the menu action is performed. In your case you can specify the specific form view for your action.
1
1
0
0
I've created a new group named accountant. If an user of this group opens the res.partner form for example, he must be able to read all, but only modify some specific fields (the ones inside the tab Accountancy, for example). So I set the permissions create, write, unlink, read to 0, 1, 0, 1 in the res.partner model for the accountant group. The problem: if I'm an user of the accountant group and I go to the form of res.partner, I will see the Edit button, if I click on it, I will be able to modify any field I want (and I should not, only the ones inside the tab). So I thought to duplicate the menuitem (put the attribute groups="accountant" to the copy) and the form (put all fields readonly except for the content of the tab). The problem: if I'm an user of a group over accountant group (with accountant in its implied_ids list), I will see both menuitems (the one which takes to the normal form and the one which takes to the duplicated form with the readonly fields). Is it possible to create a menuitem which opens a specific set of views depending on the group of the user who is clicking on the mentioned menuitem? Any ideas of how can I succesfully implement this?
How to allow an user of a group to modify specific parts of a form in Odoo?
1
0.099668
1
0
0
2,097
33,628,679
2015-11-10T11:13:00.000
15
0
0
0
1
python,opencv,mathematical-morphology
0
44,216,923
0
2
0
false
0
0
Make sure volume_start is dtype=uint8. You can convert it with volume_start = np.array(volume_start, dtype=np.uint8). Or nicer: volume_start = volume_start.astype(np.uint8)
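A minimal 2-D illustration of the dtype fix (the random image and the elliptical kernel are stand-ins; OpenCV's morphology functions operate on 2-D images):

```python
import cv2
import numpy as np

# Boolean arrays have numpy dtype number 0, which triggers
# "src data type = 0 is not supported" in OpenCV
volume_start = np.random.rand(128, 128) > 0.5

SE = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))   # 2-D stand-in for ball(8)

volume_start = volume_start.astype(np.uint8)    # convert before calling OpenCV
closing = cv2.morphologyEx(volume_start, cv2.MORPH_CLOSE, SE)
```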
1
9
1
0
I'm trying to morphologically close a volume with a ball structuring element created by the function SE3 = skimage.morphology.ball(8). When using closing = cv2.morphologyEx(volume_start, cv2.MORPH_CLOSE, SE) it returns TypeError: src data type = 0 is not supported Do you know how to solve this issue? Thank you
Python Opencv morphological closing gives src data type = 0 is not supported
0
1
1
0
0
16,369
33,630,137
2015-11-10T12:36:00.000
1
1
0
0
0
python,django,pootle,translate-toolkit
0
38,104,213
0
1
0
true
1
0
Template updates now happen outside of Pootle. The old update_against_templates had performance problems and could get Pootle into a bad state. To achieve the same functionality as update_against_templates, do the following, assuming your project is myproject and you are updating language af: sync_store --project=myproject --language=af; pot2po -t af template af; update_store --project=myproject --language=af. You can automate that in a script to iterate through all languages. Use list_languages --project=myproject to get a list of all the active languages for that project.
1
1
0
0
I added a new template file from my project. Now I don't know how to make the languages update or get the new template file. I've read that 2.5 has update_against_templates but it's not in 2.7. How will update my languages?
what is update_against_templates in pootle 2.7?
0
1.2
1
0
0
75
33,638,010
2015-11-10T19:29:00.000
1
0
1
0
0
python,visual-studio-2015,ironpython
0
38,080,053
0
4
0
false
0
0
Go to Project -> Properties -> General -> Interpreter. Set the Interpreter to IronPython 2.7 (you may need to install it).
2
2
0
0
So recently I've thought about trying IronPython. I've got my GUI configured, I've got my .py file. I click Start in Visual Studio, and this pops up: The environment "Unknown Python 2.7 [...]". I have the environment in Solution Explorer set to the Unknown Python 2.7 and I have no idea how to change it. I installed 2.7, 3.5 and IronPython 2.7 and refreshed them in the Python Environments tab.
Visual Studio 2015 IronPython
0
0.049958
1
0
0
7,496
33,638,010
2015-11-10T19:29:00.000
3
0
1
0
0
python,visual-studio-2015,ironpython
0
34,078,923
0
4
0
false
0
0
Generally the best approach to handle this is to right click "Python Environments" in Solution Explorer, then select "Add/remove environments" and change what you have added in there.
2
2
0
0
So recently I've thought about trying IronPython. I've got my GUI configured, I've got my .py file. I click Start in Visual Studio, and this pops up: The environment "Unknown Python 2.7 [...]". I have the environment in Solution Explorer set to the Unknown Python 2.7 and I have no idea how to change it. I installed 2.7, 3.5 and IronPython 2.7 and refreshed them in the Python Environments tab.
Visual Studio 2015 IronPython
0
0.148885
1
0
0
7,496
33,645,899
2015-11-11T07:23:00.000
2
0
0
0
0
python,django,middleware
0
33,646,559
0
2
0
false
1
0
You can implement your own RequestMiddleware (which plugs in before the URL resolution) or ViewMiddleware (which plugs in after the view has been resolved for the URL). In that middleware, it's standard python. You have access to the filesystem, database, cache server, ... the same you have anywhere else in your code. Showing the last N requests in a separate web page means you create a view which pulls the data from the place where your middleware is storing them.
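A minimal sketch of that idea (module placement, the deque size and the template name are placeholders; this uses the newer middleware style, while older Django versions would put the same logic in a process_request method of a MIDDLEWARE_CLASSES entry):

```python
# middleware.py
from collections import deque

LAST_REQUESTS = deque(maxlen=10)   # keeps only the most recent N requests

class RequestHistoryMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        LAST_REQUESTS.append("%s %s" % (request.method, request.get_full_path()))
        return self.get_response(request)

# views.py
from django.shortcuts import render

def request_history(request):
    # history.html would loop over "requests" and print each entry
    return render(request, "history.html", {"requests": list(LAST_REQUESTS)})
```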
1
1
0
0
It might be that this question sounds pretty silly, but I can not figure out how to do what I believe is the simplest of tasks (because I'm just starting to learn Django). What I know is that I should create a middleware file and connect it to the settings, then create a view and a *.html page that will show these requests, and wire it into the urls. How can one store the last (5/10/20 or any number of) HTTP requests in the middleware and show them on a *.html page? The problem is I don't even know what exactly I should write into middleware.py and views.py so that the requests can be displayed in the *.html file. Ideally, this page should also be updated as new requests occur. I read the Django documentation and some other topics with middleware examples, but it seems pretty sophisticated to me. I would be really thankful for any insights and explanations. P.S. One more time, sorry for a dummy question.
Python, Django - how to store http requests in the middleware?
0
0.197375
1
0
0
1,698
33,646,840
2015-11-11T08:39:00.000
0
0
0
0
0
python,callback,gtk3
0
33,658,899
0
1
0
true
0
1
Thank you, PM 2Ring. For posterity, the answer is that you can bind multiple callbacks to a signal in GTK3. They will be called in the order they were bound in.
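A tiny PyGObject sketch of this, assuming GTK 3 is available; both handlers are connected to the same "clicked" signal and fire in connection order:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    def first_handler(button):
        print("first handler")

    def second_handler(button):
        print("second handler")

    win = Gtk.Window(title="demo")
    button = Gtk.Button(label="Click me")
    button.connect("clicked", first_handler)    # bound first, runs first
    button.connect("clicked", second_handler)   # bound second, runs second
    win.add(button)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()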
1
0
0
0
If so, how would this be done? If not, is there a conventional way to achieve the same effect?
Is it possible to set multiple callback functions for a given event with GTK (Python)?
1
1.2
1
0
0
463
33,672,506
2015-11-12T13:26:00.000
0
0
1
0
1
python,regex,python-3.x,repeat
0
33,673,417
0
3
0
false
0
0
OK. I presume that this is a homework assignment, so I'm not going to give you a complete solution. But you really need to do a number of things. The first is to read the input file into memory. Then split it into its component words (tokenize it), probably contained in a list, suitably cleaned up to remove stray punctuation. You seem to be well on your way to doing that, but I would recommend you look at the split() and strip() methods available for strings. You need to consider whether you want the count to be case sensitive or not, and so you might want to convert each word in the list to (say) lowercase to keep this consistent. You could do this with a for loop and the string lower() method, but a list comprehension is probably better. You then need to go through the list of words and count how many times each one appears. If you check out collections.Counter you will find that it does the heavy lifting for you; alternatively, you will need to build a dictionary which has the words as keys and their counts as values. (You might also want to check out the collections.defaultdict class here as well.) Finally, you need to go through the text you've read from the file and, for each word it contains which has more than one match (i.e. the count in the dictionary or counter is > 1), mark it appropriately. Regular expressions are designed to do exactly this sort of thing, so I recommend you look at the re library. Having done that, you simply write the result to a file, which is simple enough. Finally, with respect to your file operations (reading and writing), I would recommend you consider replacing the try ... except construct with a with ... as one.
1
1
0
0
I'm a beginner to both Python and to this forum, so please excuse any vague descriptions or mistakes. I have a problem regarding reading/writing to a file. What I'm trying to do is to read text from a file and then find the words that occur more than one time, mark them as repeated_word, and then write the original text to another file but with the repeated words marked with star signs around them. I find it difficult to understand how I'm going to compare just the words (without punctuation etc.) but still be able to write the words in their original context to the file. I have been recommended to use regex by some, but I don't know how to use it. Another approach is to iterate through the text string and tokenize and normalize, sort of by going through each character, and then make some kind of object or element out of each word. I am thankful to anyone who might have ideas on how to solve this. The main problem is not how to find which words are repeated but how to mark them and then write them to the file in their context. Some help with the coding would be much appreciated, thanks. EDIT I have updated the code with what I've come up with so far. If there is anything you would consider "bad coding", please comment on it. To explain the Whitelist class, the assignment has two parts: one where I am supposed to mark the words, and one regarding a whitelist containing words that are "allowed repetitions" and shall therefore not be marked. I have read heaps of stuff about regex but I still can't get my head around how to use it.
Reading text from a file, then writing to another file with repetitions in text marked
0
0
1
0
0
115
33,677,932
2015-11-12T17:46:00.000
0
0
0
0
0
python,scikit-learn,cluster-analysis,feature-selection
0
33,683,680
0
1
0
false
0
0
Are you sure it was done automatically? It sounds to me as if you should be treating this as a classification problem: construct a classifier that does the same as the human did.
1
0
1
0
I have a clustering of data performed by a human based solely on their knowledge of the system. I also have a feature vector for each element. I have no knowledge about the meaning of the features, nor do I know what the reasoning behind the human clustering was. I have complete information about which elements belong to which cluster. I can assume that the human was not stupid and there is a way to derive the clustering from the features. Is there an intelligent way to reverse-engineer the clustering? That is, how can I select the features and the clustering algorithm that will yield the same clustering most of the time (on this data set)? So far I have tried the naive approach - going through the clustering algorithms provided by the sklearn library in python and comparing the obtained clusters to the source one. This approach does not yield good results. My next approach would be to use some linear combinations of the features, or subsets of features. Here, again, my question is if there is a more intelligent way to do this than to go through as many combinations as possible. I can't shake the feeling that this is a standard problem and I'm just missing the right term to find the solution on Google.
Reverse-engineering a clustering algorithm from the clusters
1
0
1
0
0
497
33,691,392
2015-11-13T11:10:00.000
-5
0
1
0
0
python,queue,python-multiprocessing,lifo
0
33,691,733
0
3
0
false
0
0
The multiprocessing.Queue is not a data type; it is a means to communicate between two processes, so it is not comparable to a stack. That's why there is no API to pop the last item off the queue. I think what you have in mind is to make some messages have a higher priority than others: when they are sent to the listening process, you want to dequeue them as soon as possible, bypassing existing messages in the queue. You can actually achieve this effect by creating two multiprocessing.Queue objects: one for the normal data payload and another for priority messages. Then you do not need to worry about getting the last item; simply segregate the two different types of messages into two queues.
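A minimal sketch of that two-queue idea (the worker and message names are made up; Queue.empty() is only approximate, which is fine for a sketch):

    import multiprocessing as mp

    def worker(normal_q, priority_q):
        while True:
            while not priority_q.empty():        # drain priority messages first
                print("priority:", priority_q.get())
            msg = normal_q.get()
            if msg is None:                      # sentinel to stop the worker
                break
            print("normal:", msg)

    if __name__ == "__main__":
        normal_q, priority_q = mp.Queue(), mp.Queue()
        p = mp.Process(target=worker, args=(normal_q, priority_q))
        p.start()
        priority_q.put("urgent")
        normal_q.put("routine")
        normal_q.put(None)
        p.join()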
1
11
0
0
I understand the difference between a queue and stack. But if I spawn multiple processes and send messages between them put in multiprocessing.Queue how do I access the latest element put in the queue first?
How to implement LIFO for multiprocessing.Queue in python?
0
-1
1
0
0
3,099
33,696,300
2015-11-13T15:34:00.000
2
0
1
0
0
python,python-2.7,pdb
0
33,710,178
0
1
0
false
0
0
The direct way, of course, is to pass the line as an argument to l. But without having to go through the trouble of finding the current line and typing it, the non-optimal way I usually do it is to return to the same line by navigating up+down the call stack, then listing again. The sequence of commands for that is: u (up), d (down), l.
1
1
0
0
when using pdb to debug a python script, repeating l command will continue listing the source code right after the previous listing. l(ist) [first[, last]] List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count. How can I repeatedly show the current line (i.e. the line where the program running is paused), instead of continuing after the previous listing? Thanks.
pdb: how to show the current line, instead of continuing after the previous listing?
0
0.379949
1
0
0
849
33,697,893
2015-11-13T16:54:00.000
1
0
1
0
0
python-3.x,autocomplete,sublimetext3
0
36,128,676
0
1
0
true
0
0
In case you are still looking for the answer: I had a similar problem. I had both SublimeJEDI autocompletion as well as Anaconda. The flashing behavior is a result of having two separate autocompletes fighting for the same space. Turning off SublimeJEDI solved this for me - I couldn't find a way to turn off Anaconda's.
1
0
0
0
I'm new to SublimeText and Python 3, so I don't really know how to turn Sublime autocomplete on. I installed Anaconda from Package Control, but I don't know how to use it. Some autocomplete shows up, but I don't think it's Anaconda's. That autocomplete keeps popping up, then disappearing. I can't read what it says and it hurts my eyes. How can I properly set up the autocomplete?
SublimeText 3 Anaconda autocomplete bug
0
1.2
1
0
0
374
33,756,512
2015-11-17T12:06:00.000
1
1
0
0
0
python,automation,jira,confluence,asciidoctor
0
33,761,564
0
1
0
false
1
0
I did something similar - getting info from Jira and updating confluence info. I did it in a bash script that ran on Jenkins. The script: Got Jira info using the Jira REST API Parsed the JSON from Jira using jq (wonderful tool) Created/updated the confluence page using the Confluence REST API I have not used python but the combination of bash/REST/jq was very simple. Running the script from Jenkins allowed me to run this periodically, so confluence is updated automatically every 2 weeks with the new info from Jira.
1
1
0
0
I'm curious what a good automated workflow could look like for the process of automating issue/touched-file lists into a Confluence page. I describe my current idea here: (1) get all issues matching my request from JIRA using REST (DONE); (2) get all touched files related to the matching issues using the Fisheye REST API; (3) create a .adoc file with the content; (4) render it to a Confluence page using asciidoctor-confluence. I'm implementing this in Python (using requests etc.) and I wonder how I could provide proper .adoc for the Ruby-based Asciidoctor. I'm planning to use Asciidoctor because it has an option to render directly to Confluence using asciidoctor-confluence. So, is there anybody who can kindly elaborate on my idea?
Programmatically create confluence content from jira and fisheye
1
0.197375
1
0
0
649
33,756,970
2015-11-17T12:29:00.000
0
0
1
0
0
python,image-processing
0
33,757,187
0
2
0
false
0
0
To make it blurry, filter it using any low-pass filter (mean filter, Gaussian filter, etc.).
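A short sketch of a mean (box) filter and a crude pixelation on a matrix, assuming a 2D grayscale NumPy array and SciPy being available:

    import numpy as np
    from scipy.ndimage import uniform_filter

    img = np.random.rand(100, 100)          # stand-in for the image matrix
    blurred = uniform_filter(img, size=5)   # each pixel becomes the mean of a 5x5 neighbourhood

    block = 10                              # pixelation: block means, then repeat each block
    small = img.reshape(100 // block, block, 100 // block, block).mean(axis=(1, 3))
    pixelated = np.kron(small, np.ones((block, block)))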
1
0
1
0
I already have a function that converts an image to a matrix, and back. But I was wondering how to manipulate the matrix so that the picture becomes blurry, or pixified?
How can I blur or pixify images in python by using matrixes?
0
0
1
0
0
161
33,761,192
2015-11-17T15:47:00.000
6
1
1
0
0
python
0
33,761,211
0
1
0
true
0
0
By doing import LargeSizedModule everywhere you need it. Python will only load it once.
1
4
0
0
I want to create a Python package that has multiple subpackages. Each of those subpackages contain files that import the same specific module that is quite large in size. So as an example, file A.py from subpackage A will import a module that is supposedly named LargeSizedModule and file B.py from subpackage B will also import LargeSizedModule. Similarly with C.py from subpackage C. Does anyone know how I can efficiently import the same exact module across multiple subpackages? I would like to reduce the 'loading' time that comes from those duplicate imports.
How to efficiently import the same module into multiple sub-packages in python
0
1.2
1
0
0
1,283
33,778,802
2015-11-18T11:24:00.000
2
0
0
0
0
python,scikit-learn,outliers
0
42,991,702
0
2
0
true
0
0
Right way to do this is: Divide data into normal and outliers. Take large sample from normal data as normal_train for fitting the novelty detection model. Create a test set with a sample from normal that is not used in training (say normal_test) and a sample from outlier (say outlier_test) in a way such that the distribution of the test data (normal_test + outlier_test) retains population distribution. Predict on this test data to get usual metrics (accuracy, sensitivity, positive-predictive-value, etc.) Wow. I have come a long way!
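A rough scikit-learn sketch of that workflow (the synthetic arrays stand in for the real normal/outlier splits):

    import numpy as np
    from sklearn.covariance import EllipticEnvelope

    rng = np.random.RandomState(0)
    normal = rng.normal(size=(500, 2))            # stand-in for the normal rows
    outlier = rng.uniform(-6, 6, size=(25, 2))    # stand-in for the outlier rows

    model = EllipticEnvelope(contamination=0.05)
    model.fit(normal[:400])                        # fit on normal training data only

    test = np.vstack([normal[400:], outlier])      # held-out normal + outliers
    labels = model.predict(test)                   # +1 = inlier, -1 = outlier
    scores = model.decision_function(test)         # lower = more outlying
    print((labels == -1).sum(), "points flagged as outliers")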
1
2
1
0
I am using sklearn's EllipticEnvelope to find outliers in dataset. But I am not sure about how to model my problem? Should I just use all the data (without dividing into training and test sets) and apply fit? Also how would I obtain the outlyingness of each datapoint? Should I use predict on the same dataset?
How to apply sklearn's EllipticEnvelope to find out top outliers in the given dataset?
0
1.2
1
0
0
2,901
33,800,742
2015-11-19T10:07:00.000
0
0
1
0
0
python,asynchronous,grequests
0
33,801,105
0
1
0
false
0
0
Just use the regular requests library for this. A call to res = requests.get(...) is asynchronous anyway, it will not block until you call something like "res.content". Is this what you are looking for?
1
0
0
0
So I know that you could use grequests to create multiple requests and use map to process them at the same time. But how do you create some requests on the fly while some already-sent requests have not returned a response yet? I don't want to use multiprocessing or multithreading; is there a way to use grequests to achieve this?
create asynchronous requests on fly using greqeusts
1
0
1
0
0
61
33,801,334
2015-11-19T10:30:00.000
0
0
1
0
0
python
0
33,801,420
0
1
0
false
0
0
When displaying a question, store the current time in a variable. Then, after the user has provided an answer, calculate the difference between the current time and the time stored in the previous step, and check whether it exceeds the 60-second limit.
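A tiny sketch of that check with the standard library (the question and scoring rule are placeholders):

    import time

    score = 1
    start = time.time()
    answer = input("What is 6 * 7? ")        # raw_input(...) on Python 2
    elapsed = time.time() - start

    if elapsed > 60:
        print("Too slow, no points!")
    elif answer.strip() == "42":
        score *= 2                           # a right answer doubles the score
        print("Correct! Score is now", score)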
1
0
0
0
I am new to programming, in fact I take a class at school and I am not very good. My assignment is to write a quiz and with every question, the person has 60 seconds to answer the question and with every right answer their score doubles. Please help.
Python: How to calculate scores and how to exercise a time limit?
1
0
1
0
0
67
33,830,715
2015-11-20T15:46:00.000
0
0
0
1
1
python,file,google-app-engine
0
33,830,880
0
1
0
false
1
0
Your best bet could be to upload to the Blobstore or Cloud Storage, then use the Task Queue, which has no time limits, to process the file.
1
0
0
0
I am trying to create a process that will upload a file to GAE to interpret it's contents (most are PDFs, so we would use something like PDF Miner), and then store it in Google Cloud Storage. To my understanding, the problem is that file uploads are limited to both 60 seconds for it to execute, as well as a size limit of I think 10MB. Does anyone have any ideas of how to address this issue?
Google App Engine File Processing
0
0
1
0
0
62
33,840,926
2015-11-21T07:37:00.000
0
0
1
0
0
python,python-idle
0
33,841,418
0
2
0
false
0
0
Use Edit -> Go to Line, or Alt + G. :D
1
2
0
0
I use IDLE when I'm coding in Python and really enjoy it's simplicity. One thing I don't like though is when you need to navigate to a certain line and have to scroll around the place, haphazardly guessing how far you have to go to reach it. So, my question is is there a way to jump to a certain line number in IDLE for Windows?
Jump to certain line in IDLE?
1
0
1
0
0
1,329
33,842,944
2015-11-21T11:46:00.000
1
0
0
0
1
python,amazon-s3,boto3
0
59,685,923
0
24
0
false
0
0
Just following the thread: can someone conclude which one is the most efficient way to check if an object exists in S3? I think head_object might win, as it just checks the metadata, which is lighter than fetching the actual object itself.
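A common sketch of the head_object approach mentioned above (bucket and key names are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def key_exists(bucket, key):
        try:
            s3.head_object(Bucket=bucket, Key=key)   # fetches only the object metadata
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] == "404":
                return False
            raise                                    # some other problem (permissions, etc.)

    print(key_exists("my-bucket", "path/to/object.txt"))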
1
283
0
0
I would like to know if a key exists in boto3. I can loop over the bucket contents and check whether the key matches, but that seems longer and overkill. The official Boto3 docs explicitly state how to do this. Maybe I am missing the obvious. Can anybody point out how I can achieve this?
check if a key exists in a bucket in s3 using boto3
1
0.008333
1
0
1
266,735
33,852,035
2015-11-22T05:45:00.000
3
0
0
0
0
python,django
0
33,852,782
0
1
0
true
1
0
If I understand you correctly, you're looking to have an external program communicate with your server. To do this, the server needs to expose an API (Application Interface) that communicates with the external program. That interface will receive a message and return a response. The request will need to have two things: identifying information for the user - usually a secret key - so that other people can't access the user's data. a query of some sort indicating what kind of information to return. The server will get the request, validate the user's secret key, process the query, and return the result. It's pretty easy to do in Django. Set up a url like /api/cards and a view. Have the view process the request and return the response. Often, these days, these back and forth messages are encoded in JSON - an easy way to encapsulate and send data. Google around with the terms django, api, and json and you'll find a lot of what you need.
1
2
0
0
I am currently learning how to use django. I have a standalone python script that I want to communicate with my django app. However, I have no clue how to go about doing this. My django app has a login function and a database with usernames and passwords. I want my python script to talk to my app and verify the persons user name and password and also get some account info like the person's name. How do I go about doing this? I am very new to web apps and I am not really sure where to begin. Some Clarifications: My standalone python program is so that the user can access some information about their account. I am not trying to use the script for login functionality. My django app already handles this. I am just trying to find a way to verify that they have said account. For example: If you have a flashcards web app and you want the user to have a program locally on their computer to access their flashcards, they need to login and download the cards from the web app. So wouldn't the standalone program need to communicate with the app to get login information and access to the cards on that account somehow? That's what I am trying to accomplish.
How to get a standalone python script to get data from my django app?
0
1.2
1
0
0
832
33,862,420
2015-11-23T01:25:00.000
-6
0
1
0
0
python,ipython,ipython-notebook
0
33,862,460
0
3
0
false
0
0
You should start your workflow, after restarting and opening a notebook again, by running all cells. In the top menu, before you do anything else, first select "Cell -> Run All".
1
29
0
0
I define many modules in a file, and add from myFile import * to the first line of my ipython notebook so that I can use it as dependency for other parts in this notebook. Currently my workflow is: modify myFile restart the Ipython kernel rerun all code in Ipython. Does anyone know if there is a way to reload all modules in myFile without need to restart the Ipython kernel? Thanks!
IPython notebook: how to reload all modules in a specific Python file?
1
-1
1
0
0
19,517
33,867,992
2015-11-23T09:52:00.000
0
0
1
1
0
python,azure,pip,python-wheel
1
33,873,874
0
3
0
false
0
0
Have you tried uninstalling and reinstalling? I tried pip wheel azure-mgmt and that installed 0.20.1 for me. The directory for mine is /Users/me/wheelhouse, so you could look there. I found that in the initial log of the build.
1
0
0
0
I try to run pip wheel azure-mgmt=0.20.1, but whenever I run it I get following pip wheel error, which is very clear: error: [Error 183] Cannot create a file when that file already exists: 'build\\bdist.win32\\wheel\\azure_mgmt-0.20.0.data\\..' So my question is where or how I can find that path? I want to delete that existing file. I have been searching my local computer, searched for default path in Google, but still didn't find any solution. Also is it possible to tell pip wheel to output full log? As you can see that full error path is not displayed. I'm using virtualenv.
Pip Wheel Package Installation Fail
1
0
1
0
0
1,089
33,909,039
2015-11-25T05:26:00.000
0
0
1
0
1
python,mysql,python-2.7,mysql-python
0
34,680,890
0
1
0
false
0
0
Are you using MyISAM or InnoDB? I suggest using InnoDB since it has better table/record-locking flexibility for multiple simultaneous updates.
1
0
0
0
I have 2 different python processes (running from 2 separate terminals) running separately at the same time accessing and updating mysql. It crashes when they are using same table at the same time. Any suggestions on how to fix it?
python accessing and updating mysql from simultaneously running processes
0
0
1
1
0
41
33,926,704
2015-11-25T21:28:00.000
0
0
0
0
0
python,numpy,matrix,matplotlib,lidar
0
33,926,859
0
2
0
false
0
0
I am aware that I am not answering half of your questions, but this is how I would do it: create a 2D array of the desired resolution (the "leftmost" values correspond to the smallest values of x, and so forth), fill the array with the elevation value of the closest match in terms of x and y values, then smooth the result.
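A rough NumPy sketch of one way to grid the points (this fills each cell with the mean Z of the points that land in it, as the question proposes; the random points and grid size are placeholders):

    import numpy as np

    pts = np.random.rand(1000, 3) * [100.0, 100.0, 50.0]   # stand-in for [X, Y, Z] points
    nx = ny = 64

    ix = np.clip(((pts[:, 0] - pts[:, 0].min()) / np.ptp(pts[:, 0]) * (nx - 1)).astype(int), 0, nx - 1)
    iy = np.clip(((pts[:, 1] - pts[:, 1].min()) / np.ptp(pts[:, 1]) * (ny - 1)).astype(int), 0, ny - 1)

    sums = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    np.add.at(sums, (iy, ix), pts[:, 2])                    # accumulate Z per bucket
    np.add.at(counts, (iy, ix), 1)
    grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)  # mean Z, NaN for empty cells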
1
3
1
0
I have a set of 3D coordinate points: [lat, long, elevation] ([X, Y, Z]), derived from LIDAR data. The points are not sorted and the step size between the points is more or less random. My goal is to build a function that converts this set of points to a 2D numpy matrix with a constant number of pixels, where each (X, Y) cell holds the Z value, and then plot it as an elevation heatmap. Scales must remain realistic; X and Y should have the same step size. The matrix doesn't have to capture the exact elevation picture; it will obviously need some kind of resolution reduction in order to have a constant number of pixels. The solution I was thinking of is to build a bucket for each pixel, iterate over the points and put each in a bucket according to its (X, Y) values, and finally create a matrix where each cell holds the mean of the Z values in the corresponding bucket. Since I don't have lots of experience in this field I would love to hear some tips, especially if there are better ways to address this task. Is there a numpy function for converting my set of points to the desired matrix (maybe meshgrid with steps of a constant value)? If I build a very sparse matrix, where the step size is min[min{Xi,Xj}, min{Yk,Yl}] for all i, j, k, l, is there a way to "reduce" the resolution and convert it to a matrix of the required size? Thanks!
Converting coordinates vector to numpy 2D matrix
0
0
1
0
0
2,868
33,940,518
2015-11-26T13:56:00.000
0
1
1
0
0
python,unit-testing,testing,asynchronous,tornado
0
33,975,984
0
1
0
true
0
0
In general, using yield gen.moment to trigger specific events is dicey; there are no guarantees about how many "moments" you must wait, or in what order the triggered events occur. It's better to make sure that the function being tested has some effect that can be asynchronously waited for (if it doesn't have such an effect naturally, you can use a tornado.locks.Condition). There are also subtleties to patching IOLoop.time. I think it will work with the default Tornado IOLoops (where it is possible without the use of mock: pass a time_func argument when constructing the loop), but it won't have the desired effect with e.g. AsyncIOLoop. I don't think you want to use AsyncTestCase.stop and .wait, but it's not clear how your test is set up.
1
0
0
0
I'm using Tornado as a coroutine engine for a periodic process, where the repeating coroutine calls ioloop.call_later() on itself at the end of each execution. I'm now trying to drive this with unit tests (using Tornado's gen.test) where I'm mocking the ioloop's time with a local variable t: DUT.ioloop.time = mock.Mock(side_effect= lambda: t) (DUT <==> Device Under Test) Then in the test, I manually increment t, and yield gen.moment to kick the ioloop. The idea is to trigger the repeating coroutine after various intervals so I can verify its behaviour. But the coroutine doesn't always trigger - or perhaps it yields back to the testing code before completing execution, causing failures. I think should be using stop() and wait() to synchronise the test code, but I can't see concretely how to use them in this situation. And how does this whole testing strategy work if the DUT runs in its own ioloop?
Unit-testing a periodic coroutine with mock time
0
1.2
1
0
0
196
33,964,851
2015-11-27T21:13:00.000
0
0
1
0
0
python,python-2.7,menu
0
33,965,008
0
3
0
false
0
0
In order to pass a variable from one function to another it needs to be "global". One easy way to do that is to initialize the variable outside of all functions and just let all functions refer to it. This way it will be defined in all functions.
1
0
0
0
I am having trouble with a menu that allows the user to choose which function to call. Part of the problem is that when I run the program it starts from the beginning (instead of calling the menu function), and the other part is that I don't know how to pass the table and the number of rows and columns from the first function to the rest of them (when I tried it said they were not defined). The program is supposed to encrypt and decrypt text using a table.
Python: function menu not working
0
0
1
0
0
158
33,974,018
2015-11-28T17:06:00.000
0
0
1
0
0
python,tkinter,textfield
0
34,096,603
0
1
0
false
0
1
tkinter doesn't have anything built-in to support this. Tkinter likely has all of the fundamental building blocks in order to build it yourself, but it will require a lot of work on your part.
1
0
0
0
So far I am using Tkinter to make textfields in Python. My question is how do I make it so there are placeholders, preferably in the style of mathematica or something similar so that when a user starts a new line, a right and left place holder appear on that line and the user can only enter text in these placeholders? Eventually I would like to be able to make it so all the right placeholders are aligned as well, but that may be too complicated. I can't seem to find a way to do this in Tkinter. Is there possibly a better package for this? I'm not sure how to generate and format "text placeholders" Edit: I think this question is coming down to: how do I dynamically add text placeholders within already existing text fields based on certain key commands?
Adding dotted text placeholders within a textfield in Python?
0
0
1
0
0
62
33,975,835
2015-11-28T20:01:00.000
0
0
0
0
1
python,fft,dft,period
0
33,980,259
0
1
0
false
0
0
Before doing the FFT, you will need to resample or interpolate the data until you get a set of amplitude values equally spaced in time.
1
0
1
0
I spent a couple of days trying to solve this problem, but no luck, so I turn to you. I have a file with photometry data for a star, containing time and amplitude. I'm supposed to use this data to find period changes. I used Lomb-Scargle from the pysca library, but I have to use Fourier analysis. I tried fft (dft) from scipy and numpy but I couldn't get anything that would resemble a frequency spectrum or Fourier coefficients. I even tried to use nfft from the pynfft library because my data are not evenly sampled, but I did not get anywhere with this. So if any of you know how to get the main frequency of periodic data from Fourier analysis, please let me know.
Fourier series of time domain data
0
0
1
0
0
243
33,977,130
2015-11-28T22:26:00.000
2
0
0
1
0
python,google-app-engine,google-cloud-sql
1
33,978,178
0
1
0
false
1
0
Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value. If you've done that, then presumably you're also using a hefty instance_class in that module (perhaps the default module), considering also that Django is not the lightest-weight or fastest of web frameworks; an F4 class, I imagine. Even so, pushing max concurrent requests above 12 may result in latency spikes, especially if serving each and every request also requires other slow, heavy-weight operations such as MySQL ones. So, consider instead using many more instances, each of a lower (cheaper) class, serving no more than 12 requests each (again, assuming that every request you serve will require its own private connection to Cloud SQL -- pooling those up might also be worth considering). For example, an F2 instance costs, per hour, half as much as an F4 one -- it's also about half the power, but, if serving half as many user requests, that should be OK. I presume, here, that all you're using those connections for is to serve user requests (if not, you could dispatch other, "batch-like" uses to separate modules, perhaps ones with manual or basic scheduling -- but, that's another architectural issue).
1
0
0
0
I want to connect my App Engine project with Google Cloud SQL, but I get an error that I exceeded the maximum of 12 connections in Python. I have a Cloud SQL D8 instance with 1000 simultaneous connections; how can I change this connection limit? I'm using Django and Python. Thanks.
As codified the limit of 12 connections appengine to cloudsql
1
0.379949
1
1
0
223
33,980,603
2015-11-29T07:52:00.000
0
0
0
0
1
python,tkinter
0
33,993,563
0
1
0
true
0
1
I figured it out with Furas's help - with PyHook I can wait for events globally, and then tie the events in with tkinter events.
1
0
0
0
I'm trying to make an application with a pop-up menu that appears when I type SPACE-R_ALT on my keyboard, globally across the OS (Windows in my case). When that happens, I want to pop up a window (I know how to do that), and it is crucial that I can be using Chrome or Word, tap Space-Right Alt, and still be able to open up this little menu. Tkinter event bindings have two problems: First, when I use an event binding for <Key> and then, in the function, use evt.keysym, I can see that the program can't register both keys at the same time. I could use a timer and then check, but I would prefer one line that fixes it all. Second, I find that tkinter event bindings only work when the bound widget's window (or the window itself) is FOCUSED. I will hide my root and TopLevel at all times, so it is not focused. I would appreciate any help on this. If your suggestion uses another module, I don't really care, as long as it works on Windows 10 (not Mac OS X, not Linux, but Windows). I'm also using Python 3, but any version (aka 2) would also be okay, as I could either try to port YOUR suggestion to Py3, or port MY code to Py2. Thanks!
Check for tkinter events globally (across OS)
0
1.2
1
0
0
65
34,004,281
2015-11-30T17:19:00.000
0
0
0
0
0
python,oauth-2.0,token,google-api-python-client
0
34,023,824
0
1
1
true
0
0
Okay, found it myself. You have to refresh your token every time it expires, using httplib2. Quick hint: import httplib2; http = httplib2.Http(); http = credentials.authorize(http), where credentials contains what you got from your first authorization flow. Cheers
1
0
0
0
I'm a super noob in Python and OAuth2, but still I've wasted days on this one, so if you guys could give me a hand, I would be eternally grateful :') Goal: writing a script that downloads a file every 5 minutes from Google Drive. Achieved: getting the credentials with tokens and downloading the file once. Problem: how do I refresh the token? I managed to get my tokens once, but I don't understand what to do so that I don't need to rebuild a refresh token every time... I don't really know if I'm getting OAuth2 wrong, but I've read that it should be stored (there is a store method, right?) Thanks :)
Google API oauth2 - how to store credentials in order to refresh token later
0
1.2
1
0
1
336
34,018,450
2015-12-01T10:46:00.000
0
0
0
0
1
python,facebook,selenium,xpath
0
34,019,477
0
1
0
false
1
0
This error usually occurs if the element is not present in the DOM, or the element may be inside an iframe.
1
1
0
0
I'm trying to get the XPATH for the Code Generator form field (Facebook) in order to fill it (of course beforehand I need to put a code with "numbers"). In the Chrome console when I get the XPATH I get: //*[@id="approvals_code"] And then in my test I put: elem = driver.find_element_by_xpath("//*[@id='approvals_code']") if elem: elem.send_keys("numbers") elem.send_keys(Keys.ENTER) With those I get: StaleElementReferenceException: Message: stale element reference: element is not attached to the page document which suggests a wrong field name. Does anyone know how to properly get the XPATH?
Python selenium test - Facebook code generator XPATH
0
0
1
0
1
231
34,032,055
2015-12-01T23:17:00.000
3
1
1
0
0
python,profiler,cprofile
0
59,113,102
0
2
0
false
0
0
I run the Python program with -m cProfile, for example: python -m cProfile <myprogram.py>. This requires zero changes to myprogram.py.
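If the profiling has to live inside the script itself, as the question asks, one hedged sketch is to drive cProfile.Profile directly so command-line arguments can be passed normally (load_bmp_image here is a stand-in for the real loadBMPImage):

    import cProfile
    import pstats
    import sys

    def load_bmp_image(path):              # placeholder for the real loadBMPImage
        with open(path, "rb") as fh:
            return fh.read()

    profiler = cProfile.Profile()
    profiler.enable()
    load_bmp_image(sys.argv[1])            # call the functions with their real arguments
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)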
1
1
0
0
I have a program in Python that takes in several command line arguments and uses them in several functions. How can I use cProfile (within my code)to obtain the running time of each function? (I still want the program to run normally when it's done). Yet I can't figure out how, for example I cannot use cProfile.run('loadBMPImage(sys.argv[1])') to test the run time of the function loadBMPImage. I cannot use sys.argv[1] as an argument. Any idea how I can use cProfile to test the running time of each function and print to stdout, if each function depends on command line arguments? Also the cProfile must be integrated into the code itself. Thanks
How can I use the cProfile module to time each funtion in python?
0
0.291313
1
0
0
2,680
34,033,149
2015-12-02T01:02:00.000
1
0
0
0
1
python,pygame,pygame-surface
0
34,033,205
0
1
0
true
0
1
I do not believe the existing API provides a way to do this. I think the intended use is to convert all your surfaces (why wouldn't you?) so you never have to worry about it. Perhaps it is possible to subclass pygame.Surface and override the convert methods to set a flag in the way you wish.
1
1
0
0
The title says it all, really. I am writing functions that deal with pygame.Surface objects from multiple sources. Among other operations, these functions will ensure that the Surface objects they return have been convert()ed at least once (or, according to user preference, convert_alpha()ed), as is required to optimize them for blitting in the current display mode. But I don't want to run the the convert() or convert_alpha() methods needlessly since they create copies of the surface and therefore take up time and memory. How do I tell whether I need to do it? I have looked at the output of S.get_flags() before and after S = S.convert_alpha() but it doesn't seem to change. The scalar value of S.get_alpha() does change (from 255 to 0) but I'm not convinced that's meaningful or reliable (and it doesn't solve the problem of knowing whether you have to .convert() in the case where alpha blending is not desired).
how can you tell whether convert()/convert_alpha() has already been run on a pygame.Surface?
1
1.2
1
0
0
86
34,048,431
2015-12-02T16:53:00.000
-1
0
1
1
1
python,windows,opencv,dll,64-bit
0
49,609,215
0
4
0
false
0
0
In this case, I just copied the file 'python3.dll' from my Python 3 installation folder to my virtualenv lib folder, and then it worked.
1
1
0
0
ImportError: DLL load failed: %1 is not a valid Win32 application. Does anyone know how to fix this? This problem occurs when I am trying to import cv2. My laptop is 64-bit and has 64-bit Python installed; I also put the cv2.pyd file in the site-packages folder of Python. My PYTHONPATH value = C:\Python35;C:\Python35\DLLs;C:\Python35\Lib;C:\Python35\libs;C:\Users\CV\OpenCV\opencv\build\python\2.7\x64;%OPENCV_DIR%\bin; My OPENCV_DIR value = C:\Users\CV\OpenCV\opencv\build\x64\vc12 I also put references to my PYTHONPATH and my OPENCV_DIR into the PATH by adding **%PYTHONPATH%;%PYTHONPATH%\Scripts\;%OPENCV_DIR%;** I also installed opencv_python-3.0.0+contrib-cp35-none-win_amd64 through pip install and the command line. None of this solved my problem.
Import CV2: DLL load failed (Python in Windows 64bit)
0
-0.049958
1
0
0
10,949
34,061,651
2015-12-03T09:01:00.000
0
0
0
0
0
python,django,gis,geodjango
0
34,397,621
0
1
1
false
1
0
Yes, it is relatively easy to manage geometry data in the Django Admin, and it's all included. You can do any of the CRUD tasks relatively simply using the Geo Model manager in much the same way as any Django model or you can use the map interface you get in the admin. From time to time I find I want to investigate my data in more detail, and then I simply connect to my PostGIS database using QGIS and have a panoply of GIS tools at my disposal. I would strongly recommend using PostGIS from the start. If there is any 'mission creep' towards more geo-functionality in the future then it will save you oodles of time. It sounds like the sort of project where spatial queries might be very useful at some point.
1
0
0
0
I'm going to write a web system to add and manage the positions of drivers and shops. GEO searching is not required, so it would be easier to use SQLite instead of PostgreSQL. The core question here is whether there is an easy way to manage GIS points using the Django admin. I know Django has GeoModelAdmin to manage maps based on MapBox, but I could not find out how to use it just to save, delete, and update these points.
How to use Django to manage GIS points easily?
1
0
1
0
0
119
34,075,288
2015-12-03T20:12:00.000
0
0
1
0
0
python
0
34,075,641
0
1
1
false
0
0
It's tough to tell, without a bit more information about the problem that you are trying to solve, the scope of your code, and your code's architecture. Generally speaking: If you're writing a script that's reasonably small in scope, there really is nothing wrong with declaring variables in the global namespace of the script. If you're writing a larger system - something that goes beyond one module or file, but is not running in a multi-process/multi-thread environment, then you can create a module (as a separate file) that handles storage of your re-used data. Anytime you need to read or write that data you can import that module. The module could just expose simple variables, or it can wrap the data in classes and expose methods for creation/reading/updating/deletion. If you are running in a multi-process/multi-thread environment, then any sort of global in-memory variable storage is going to be problematic. You'll need to use an external store of some sort - Redis or the like for temporary storage, or a database of some sort for permanent storage.
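A minimal sketch of the "storage module" idea from the second case above (the module and function names are made up):

    # records.py -- shared in-memory store, importable from anywhere in the project
    _records = {}

    def add(record_id, data):
        _records[record_id] = data

    def get(record_id):
        return _records.get(record_id)

    def count():
        return len(_records)

    # elsewhere in the code base:
    #   import records
    #   records.add(42, {"name": "example"})
    #   print(records.count())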
1
0
0
0
Old habits being what they are, I would declare global variables, and probably use lists to store records. I appreciate this is not the best way of doing this these days, and that Python actively discourages you from doing this by having to constantly declare 'global' throughout. So what should I be doing? I'm thinking I should maybe use instances, but I know of no way to create a unique instance name based on an identifier (all the records will have a unique ID) and then find out how many instances I have. I could use dictionaries maybe? The most important thing is that the values are accessible anywhere in my code, and that I can list the number of records and easily refer to / change the values.
Best way of storing records and then iterating through them?
0
0
1
0
0
45
34,098,567
2015-12-04T22:30:00.000
-3
0
0
0
0
python-sphinx
0
39,006,565
0
2
0
false
1
0
So, we were able to make it work by adjusting the HTML template and the globaltoc setting.
1
1
0
0
When I build HTML output using sphinx, it is possible to display h1 and h2 on separate pages, however, h3 is always displayed on the same page as h2. Does anyone know how to make sphinx display the content of h3 on a separate page? The same way traditional online help systems do this. For example: Section Sub-section Sub-section Sub-sub-section Sub-sub-section Sub-section So, I want when I click on sub-sub-section see the content only under that sub-sub-section and not from Sub-section above or sub-sub-section below. Thanks in advance!
Display each section (h1, h2, h3) in a new page in Sphinx
1
-0.291313
1
0
0
1,308
34,119,746
2015-12-06T16:25:00.000
1
0
1
0
0
python-2.7,sorting
0
34,119,821
0
2
0
false
0
0
Try this: b = sorted(b, key = lambda i: (i[0], i[1]))
1
1
1
0
My code b=[((1,1)),((1,2)),((2,1)),((2,2)),((1,3))] for i in range(len(b)): print b[i] Obtained output: (1, 1) (1, 2) (2, 1) (2, 2) (1, 3) how do i sort this list by the first element or/and second element in each index value to get the output as: (1, 1) (1, 2) (1, 3) (2, 1) (2, 2) OR (1, 1) (2, 1) (1, 2) (2, 2) (1, 3) It would be nice if both columns are sorted as shown in the desired output, how ever if either of the output columns is sorted it will suffice.
how to sort list in python which has two numbers per index value?
0
0.099668
1
0
0
48
34,135,672
2015-12-07T14:14:00.000
0
0
1
0
0
python,debugging,pycharm
0
34,135,775
0
2
0
false
0
0
Set a breakpoint at the next line of code after the comprehension and then hit play again.
2
5
0
0
I am new to Python and Pycharm, I am trying to step over a line with list comprehension, but instead of moving me to the next line, pycharm is incrementing the loop in 1 iteration. any ideas how to move to the next line without pushing F8 3000 times? thanks!
how to step over list comprehension in pycharm?
0
0
1
0
0
941
34,135,672
2015-12-07T14:14:00.000
2
0
1
0
0
python,debugging,pycharm
0
34,135,795
0
2
0
false
0
0
PyCharm has 'Run to Cursor' option - just move your cursor one line down and hit it.
2
5
0
0
I am new to Python and Pycharm, I am trying to step over a line with list comprehension, but instead of moving me to the next line, pycharm is incrementing the loop in 1 iteration. any ideas how to move to the next line without pushing F8 3000 times? thanks!
how to step over list comprehension in pycharm?
0
0.197375
1
0
0
941
34,135,973
2015-12-07T14:30:00.000
0
0
0
0
1
python,xml,openerp,xml-rpc
0
34,148,043
0
1
0
false
1
0
Chandu, well, you can call the on_change method through XML-RPC, which will give you the desired data, and you can then pass that data back to the server to store the correct values. Bests
1
0
0
0
I have used XML-RPC in my Odoo ERP, so whenever a user inputs data on an external website it comes into my ERP. Everything is working fine, i.e. I am getting the data the user inputs from the website, such as personal details. But the problem is that I have some onchange selection fields in a custom model, and for those the data is not getting updated on my side. Got my point? I would like to know how to resolve this issue; at least I need to know someone's approach. Thanks in advance.
Can't retrieve data from webpage for onchange fields in Odoo?
0
0
1
0
0
257
34,146,996
2015-12-08T02:33:00.000
0
0
0
0
0
python,machine-learning,nlp,sentiment-analysis
0
34,163,258
0
2
0
false
0
0
When you annotate the sentiment, don't annotate 'Positive', 'Negative', and 'Neutral'. Instead, annotate them as either "has negative" or "doesn't have negative". Then your sentiment classification will only be concerned with how strongly the features indicate negative sentiment, which appears to be what you want.
1
0
1
0
I am trying to do sentiment analysis on a review dataset. Since I care more about identifying (extracting) negative sentiments in reviews (unlabeled for now, but I'll try to manually label a few hundred or use the Alchemy API), if a review is overall neutral or positive but a part of it has negative sentiment, I'd like my model to consider it more like a negative review. Could someone give me advice on how to do this? I'm thinking about using bag of words/word2vec with supervised (random forest, SVM) or unsupervised (K-means) learning models.
Review data sentiment analysis, focusing on extracting negative sentiment?
0
0
1
0
0
268
34,153,844
2015-12-08T10:46:00.000
0
0
0
0
0
python,loops,selenium
0
34,154,600
0
1
1
false
0
0
Your attempts variable is always less than 5 because there is no increment, so your loop is infinite.
1
0
1
0
I'm trying to study customers behavior. Basically, I have information on customer's loyalty points activities data (e.g. how many points they have earned, how many points they have used, how recent they have used/earn points etc). I'm using R to conduct this analysis I'm just wondering how should I go about segmenting customers based on the above information? I'm trying to apply the RFM concept then use K-means to segment my customers(although I have a few more variables than just R,F,M , as i have recency,frequency and monetary on both points earn and use, as well as other ratios and metrics) . Is this a good way to do this? Essentially I have two objectives: 1. To segment customers 2. Via segmenting customers, identify customers behavior(e.g.customers who spend all of their points before churning), provided that segmentation is the right method for such task? Clustering <- kmeans(RFM_Values4,centers = 10) Please enlighten me, need some guidance on the best methods to tackle such problems.
Python Selenium infinite loop
1
0
1
0
0
436
34,160,995
2015-12-08T16:26:00.000
2
0
1
0
0
python,colors,turtle-graphics
0
34,161,243
0
1
0
false
0
1
You are ending fill after every new coordinate. You need to call t.begin_fill() before your for loop and call t.end_fill() after the last coordinate, otherwise you are just filling in your single line with each iteration.
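A tiny sketch of that pattern (the coordinates are placeholders for the ones read from the text file):

    import turtle

    t = turtle.Turtle()
    coords = [(0, 0), (100, 0), (50, 80)]   # stand-in for the coordinates from the file

    t.fillcolor("red")
    t.begin_fill()                          # start the fill once, before the loop
    t.penup(); t.goto(coords[0]); t.pendown()
    for x, y in coords[1:]:
        t.goto(x, y)
    t.goto(coords[0])                       # close the triangle
    t.end_fill()                            # finish the fill once, after the last coordinate
    turtle.done()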
1
2
0
0
I am currently using turtle.goto with coords from a text file. I have the triangle drawn and everything, but I don't know how to fill the triangle.
Python Turtle fill the triangle with color?
0
0.379949
1
0
0
812
34,166,369
2015-12-08T21:28:00.000
3
0
1
0
0
python,gensim,word2vec
1
34,166,580
0
4
0
false
0
0
It seems gensim throws a misleading error message. Gensim wants to iterate over your data multiple times. Most libraries just build a list from the input, so the user doesn't have to care about supplying a multiply-iterable sequence. Of course, generating an in-memory list can be very resource-consuming, while iterating over a file, for example, can be done without storing the whole file in memory. In your case, just changing the generator to a list comprehension should solve the problem.
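A brief sketch of the difference, with the corpus file name and the wrapper class made up for illustration - either materialise the generator as a list, or wrap it in a class whose __iter__ can be called repeatedly:

    from gensim.models import Word2Vec

    # option 1: build a list so gensim can iterate over it several times
    with open("corpus.txt") as fh:
        sentences = [line.split() for line in fh]
    model = Word2Vec(sentences, min_count=1)

    # option 2: a memory-friendly, restartable iterable
    class MySentences(object):
        def __init__(self, path):
            self.path = path
        def __iter__(self):                 # a fresh generator on every pass
            with open(self.path) as fh:
                for line in fh:
                    yield line.split()

    model = Word2Vec(MySentences("corpus.txt"), min_count=1)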
1
25
0
0
I have an generator (a function that yields stuff), but when trying to pass it to gensim.Word2Vec I get the following error: TypeError: You can't pass a generator as the sentences argument. Try an iterator. Isn't a generator a kind of iterator? If not, how do I make an iterator from it? Looking at the library code, it seems to simply iterate over sentences like for x in enumerate(sentences), which works just fine with my generator. What is causing the error then?
Generator is not an iterator?
0
0.148885
1
0
0
6,813
34,168,019
2015-12-08T23:23:00.000
0
0
1
0
1
python,python-3.x
0
52,257,045
0
4
0
false
0
0
On macOS, I have the Anaconda package manager installed, so after pip install 3to2 I found the executable at /Users/<username>/anaconda3/bin/3to2. Run ./3to2 to convert stdin (-), files or directories given as arguments. By default, the tool outputs a unified diff-formatted patch on standard output and a "what was changed" summary on standard error, but the -w option can be given to write back converted files, creating .bak-named backup files. On Windows it's in C:\Python27\Scripts\ as a file named 3to2. Run it by invoking python 3to2 <filetoconvert> to display the diff on the console, or with the -w option to write the converted code back to the same file.
2
9
0
0
I have to convert some of my Python 3 files to 2 for class, but I can't figure out how to use 3to2. I did pip install 3to2 and it said it was successful. It installed 2 folders: 3to2-1.1.1.dist-info and lib3to2. I have tried doing python 3to2 file_name and python lib3to2 file_name. I also tried renaming the folder to 3to2.py like I saw in an answer to someone else's question, but it still didn't work. What is the correct way to use this?
How to use 3to2
0
0
1
0
0
16,706
34,168,019
2015-12-08T23:23:00.000
11
0
1
0
1
python,python-3.x
0
38,457,022
0
4
0
false
0
0
Had the same question and here's how I solved it: pip install 3to2 rename 3to2 to 3to2.py (found in the Scripts folder of the Python directory) Open a terminal window and run 3to2.py -w [file] NB: You will either have to be in the same folder as 3to2.py or provide the full path to it when you try to run it. Same goes for the path to the file you want to convert. The easy way around this is to copy 3to2.py into the folder your py file is in and just run the command inside that folder. Use 3to2.py --help for info on how the script works.
2
9
0
0
I have to convert some of my Python 3 files to 2 for class, but I can't figure out how to use 3to2. I did pip install 3to2 and it said it was successful. It installed 2 folders: 3to2-1.1.1.dist-info and lib3to2. I have tried doing python 3to2 file_name and python lib3to2 file_name. I also tried renaming the folder to 3to2.py like I saw in an answer to someone else's question, but it still didn't work. What is the correct way to use this?
How to use 3to2
0
1
1
0
0
16,706
34,169,111
2015-12-09T01:16:00.000
1
0
0
0
0
python,tcp,wireshark,pcap
0
34,496,093
0
1
0
false
0
0
I did it in C, but the general idea is: you need to keep track of TCP sequence numbers (there are two streams for each TCP session, one from client to server and the other from server to client). This is a little complex. For each stream, you have a pointer to keep track of the consecutive sequence numbers sent so far, and use a linked list to keep track of the pairs (sequence number + data length) that have a gap to the pointer. Each time you see a new data packet in the stream, you either update the pointer or add to the linked list. Note that after you update the pointer, you should check the linked list to see if some of the gaps are closed. You can tell retransmitted data packets this way. Hope it helps. Good luck.
1
0
0
0
The data was captured from an LTE network. I don't know how to recognize and count TCP retransmissions of a single TCP flow using Python. Could I also recognize the type of retransmission, i.e. whether it is because of congestion or packet loss? Thanks.
How could I find TCP retransmission and packet loss from pcap file?
0
0.197375
1
0
1
1,227
34,198,892
2015-12-10T10:04:00.000
-3
0
1
1
0
python,ubuntu
0
55,326,327
0
6
0
false
0
0
It's simple, just try: sudo apt-get remove python3.7, or whichever versions you want to remove.
4
13
0
0
I have recently gotten hold of a RackSpace Ubuntu server and it has Pythons all over the place: iPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is, is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
1
-0.099668
1
0
0
127,783
34,198,892
2015-12-10T10:04:00.000
2
0
1
1
0
python,ubuntu
0
55,406,526
0
6
0
false
0
0
Do not try any of the above ways, nor sudo apt autoremove python3, because it will remove all GNOME-based applications from your system, including gnome-terminal. In case you have made that mistake and are left with the kernel only, then try sudo apt install gnome. Try to change your default Python version instead of removing it; you can do this through the bashrc file or an export PATH command.
4
13
0
0
I have recently gotten hold of a RackSpace Ubuntu server and it has Pythons all over the place: iPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is, is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
1
0.066568
1
0
0
127,783
34,198,892
2015-12-10T10:04:00.000
8
0
1
1
0
python,ubuntu
0
34,220,703
0
6
0
true
0
0
So I worked out in the end that you cannot uninstall 3.4, as it is the default on Ubuntu. All I did was simply remove Jupyter, then alias python=python2.7 and install all packages on Python 2.7 again. Arguably, I could install virtualenv, but my colleagues and I are only using 2.7. I am just going to be lazy in this case :)
4
13
0
0
I have recently gotten hold of a RackSpace Ubuntu server and it has Pythons all over the place: iPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is, is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
1
1.2
1
0
0
127,783
34,198,892
2015-12-10T10:04:00.000
9
0
1
1
0
python,ubuntu
0
34,198,961
0
6
0
false
0
0
EDIT: As pointed out in recent comments, this solution may BREAK your system. You most likely don't want to remove python3. Please refer to the other answers for possible solutions. Outdated answer (not recommended) sudo apt-get remove 'python3.*'
4
13
0
0
I have recently gotten hold of a RackSpace Ubuntu server and it has Pythons all over the place: iPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is, is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
1
1
1
0
0
127,783
34,213,706
2015-12-10T23:32:00.000
0
0
0
1
0
python,mysql,twisted
0
35,131,551
0
1
0
false
0
0
I think the best way to accomplish this is to first make a SELECT for the id (or ids) of the row/rows you want to update, then UPDATE the row with a WHERE condition matching the id of the item to update. That way you are certain that you only updated the specific item. An UPDATE statement can update multiple rows that match your criteria; that is why you cannot request the last updated id by using a built-in function.
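A rough sketch of that two-step pattern with twisted.enterprise.adbapi (as mentioned in the question); the table and column names follow the question's schema, the connection details are guesses, and in a real system a transaction or SELECT ... FOR UPDATE would be needed to guard against races:

    from twisted.enterprise import adbapi
    from twisted.internet import defer

    dbpool = adbapi.ConnectionPool("MySQLdb", db="seating", user="user", passwd="secret")

    @defer.inlineCallbacks
    def assign_seat(table_id, name):
        # step 1: find the first free seat and remember its id
        rows = yield dbpool.runQuery(
            "SELECT id FROM Seat WHERE table_id = %s AND name IS NULL LIMIT 1", (table_id,))
        if not rows:
            defer.returnValue(None)
        seat_id = rows[0][0]
        # step 2: update that exact row, then reuse the known id for the Card table
        yield dbpool.runOperation("UPDATE Seat SET name = %s WHERE id = %s", (name, seat_id))
        yield dbpool.runOperation("INSERT INTO Card (seat_id) VALUES (%s)", (seat_id,))
        defer.returnValue(seat_id)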
1
3
0
0
Python, Twistd and SO newbie. I am writing a program that organises seating across multiple rooms. I have only included related columns from the tables below. Basic Mysql tables Table id Seat id table_id name Card seat_id The Seat and Table tables are pre-populated with the 'name' columns initially NULL. Stage One I want to update a seat's name by finding the first available seat given a group of table ids. Stage Two I want to be able to get the updated row id from Stage One (because I don't already know this) to add to the Card table. Names can be assigned to more than one seat so I can't just find a seat that matches a name. I can do Stage One but have no idea how to do Stage Two because lastrowid only works for inserts not updates. Any help would be appreciated. Using twisted.enterprise.adbapi if that helps. Cheers
Python Twistd MySQL - Get Updated Row id (not inserting)
0
0
1
1
0
311
34,242,017
2015-12-12T16:11:00.000
-1
0
0
0
0
python,mysql,database,lamp,mysql-python
0
34,242,082
0
1
0
false
0
0
"Are never on the same internet network" - let me clarify the question: the problem is that the two sides are never on the same network. First you need to fix the network issue: add a router (or some route) between the two sides you want to communicate. This has no relation to Python or LAMP. Let me assume your DB is MySQL: if you can make that DB accessible from outside servers, you can just talk to it directly from the other machines. But as another solution, I recommend you use an API that covers all requests to the DB; then you can talk to that API to handle the data.
1
0
0
0
I have a python code which needs to retrieve and store data to/from a database on a LAMP server. The LAMP server and the device running the python code are never on the same internet network. The devices running the python code can be either a Linux, Windows or a MAC system. Any idea how could I implement this?
How to fetch or store data into a database on a LAMP server from devices over the internet?
0
-0.197375
1
1
0
155
34,275,096
2015-12-14T19:30:00.000
0
0
0
0
0
python,math,matplotlib
0
34,277,060
0
2
0
false
0
0
The function is evaluated at every grid node, and compared to the iso-level. When there is a change of sign along a cell edge, a point is computed by linear interpolation between the two nodes. Points are joined in pairs by line segments. This is an acceptable approximation when the grid is dense enough.
1
0
1
0
I want to know how the contour levels are chosen in pyplot.contour. What I mean by this is: given a function f(x, y), the level curves are usually chosen by evaluating the points where f(x, y) = c, for c = 0, 1, 2, ... etc. However, if f(x, y) is an array A of n x n points, how do the level points get chosen? I don't mean how the points get connected, just simply the points that correspond to A = c.
How are the points in a level curve chosen in pyplot?
0
0
1
0
0
976
34,284,335
2015-12-15T08:34:00.000
0
0
1
0
0
python,multithreading,kill-process
0
34,284,612
0
2
0
false
0
1
First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them. Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.
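A minimal sketch of the Popen/terminate approach (the worker script name, its arguments, and the port names are placeholders; the workers themselves would close their file and serial port when terminated):

import subprocess
import sys

procs = []

def start():
    # One child process per serial port; read_port.py is a hypothetical worker script
    for port in ("COM3", "COM4"):
        procs.append(subprocess.Popen([sys.executable, "read_port.py", port]))

def stop():
    # Ask each worker to exit and wait for it to finish
    for p in procs:
        p.terminate()
        p.wait()
    del procs[:]

The start and stop functions can be wired directly to the Tkinter "Start" and "Stop" buttons, so the GUI itself keeps running while the children are killed.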
1
4
0
0
Kind all, I'm really new to Python and I'm facing a task which I can't completely grasp. I've created an interface with Tkinter which should accomplish a couple of apparently easy feats. By clicking a "Start" button, two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write the data to file. The I/O actions are looped within a while loop with a very high counter to allow them to go on almost indefinitely. The "Stop" button should stop the acquisition and essentially it should: 1) kill the read/write thread, 2) close the file, 3) close the serial port. Unfortunately I still do not understand how to accomplish point 1, i.e. how to create killable threads without killing the whole GUI. Is there any way of doing this? Thank you all!
How to integrate killable processes/thread in Python GUI?
0
0
1
0
0
1,596
34,284,421
2015-12-15T08:39:00.000
2
1
0
1
0
python,c++,c,language-binding
0
34,284,538
0
1
0
true
0
1
You can call between C, C++, Python, and a bunch of other languages without spawning a separate process or copying much of anything. In Python basically everything is reference-counted, so if you want to use a Python object in C++ you can simply use the same reference count to manage its lifetime (e.g. to avoid copying it even if Python decides it doesn't need the object anymore). If you want the reverse, you may need to use a C++ std::shared_ptr or similar to hold your objects in C++, so that Python can also reference them. In some cases things are even simpler than this, such as if you have a pure function in C or C++ which takes some values from Python and returns a result with no side effects and no storing of the inputs. In such a case, you certainly do not need to copy anything, because you can read the Python values directly and the Python interpreter will not be running while your C or C++ code is running (because they are all in a single thread). There is an extensive Python (and NumPy, by the way) C API for this, plus the excellent Boost.Python for C++ integration including smart pointers.
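As one illustration of the "no copy needed" point from the Python side (the shared library name and function are hypothetical, compiled separately from C), ctypes can hand C a pointer into an existing Python buffer, so the large data only lives in memory once:

import ctypes

lib = ctypes.CDLL("./libsum.so")  # hypothetical C library with: long long sum_bytes(const char *buf, size_t n)
lib.sum_bytes.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
lib.sum_bytes.restype = ctypes.c_longlong

data = b"\x01\x02\x03" * 1000       # the buffer is allocated once, by Python
total = lib.sum_bytes(data, len(data))  # only a pointer and a length cross into C

Everything here runs in the same process and the same thread; the C code reads the Python-owned bytes in place, and no sockets, files, or extra 2GB copy are involved unless the data types actually have to be converted.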
1
2
0
0
There are multiple questions about how to call C/C++ code from Python. But I would like to understand what exactly happens when this is done and what the performance concerns are. What is the theory underneath? Some questions I hope to get answered by understanding the principle: When considering data (especially large data, e.g. 2GB) being processed, which needs to be passed from Python to C/C++ and then back: How is the data transferred from Python to C when the function is called? How is the result transferred back after the function ends? Is everything done in memory, or are UNIX/TCP sockets or files used to transfer the data? Is there some translation and copying done (e.g. to convert data types)? Do I need 2GB of memory for holding the data in Python and an additional roughly 2GB of memory to have a C version of the data that is passed to the C function? Do the C code and Python code run in different processes?
How does calling C or C++ from python work?
0
1.2
1
0
0
397