Dataset schema (column: dtype, range):
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
46,776,659
2017-10-16T18:17:00.000
1
0
0
0
python,selenium,automation,selenium-chromedriver
46,777,247
2
false
1
0
I have encountered a similar kind of situation. My recommendation is to speak to your front-end devs or the company which provides the feedback survey and ask them for a script to disable the pop-ups for a browser session that you run. Then execute that script using the Selenium library (e.g. a JavascriptExecutor) as soon as you open the web page, so that you do not see the randomly occurring pop-ups.
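A minimal sketch of that idea with Python Selenium (JavascriptExecutor is the Java-side name; in Python the equivalent is driver.execute_script). The URL and the injected snippet are assumptions, stand-ins for whatever script the survey vendor provides:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/upload-form")  # hypothetical page

# Hypothetical vendor-provided snippet that turns the survey off for this session.
disable_survey_js = "window.__disableFeedbackSurvey = true;"
driver.execute_script(disable_survey_js)
```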
2
5
0
For the past few months, I have been using Selenium to automate a very boring but essential part of my job which involves uploading forms. Recently, the site has implemented a feedback survey which randomly pops up and switches the driver to a new frame. I have written code which can handle this pop-up and switch back to my default frame; however, the issue is with the randomness. Is there a way to run my pop-up handling code as soon as the exception generated by the wrong frame is thrown? I could encase my whole script in a try/except block, but how do I direct the script to pick back up from where it left off? Any advice is much appreciated!
Selenium Python - How to handle a random pop up?
0.099668
0
1
2,555
46,776,659
2017-10-16T18:17:00.000
2
0
0
0
python,selenium,automation,selenium-chromedriver
46,777,980
2
true
1
0
I've encountered this on sites that I work on also. What I've found on our site is that it's not really random, it's on a timer. So basically the way it works is a customer comes to the site. If they haven't been prompted to sign up for X, then they get a popup. If they signed up or dismissed the popup, a cookie is written tracking that they already got the message. You want to find that tracking cookie and recreate it to prevent the popups. What I did was find that cookie, recreate it on an error page on the site domain (some made up page like example.com/some404page), and then start the script as normal. It completely prevents the popup from occurring. To find out if this is your case, navigate to the home page (or whatever is appropriate) and just wait for the popup. That will give you a sense of how long the timer is. Close the popup and reload the page. If the popup doesn't occur again, then I'm guessing it's the same mechanic. To find the cookie, clear cookies in your browser and open your site again. (Don't close the popup yet). I use Chrome so I'll describe how to do it. The general approach should work in other browsers but the specific steps and command names may vary. Open up the devtools (F12) Navigate to the Application tab and find Cookies (left panel) Expand Cookies and click on the domain of the site you are working on. There may be a lot of cookies in there from other sites, depending on your site. Click the Clear All button to clear all the cookies. Close the popup and watch for cookies to be created. Hopefully you will see one or just a few cookies created. Note the name of the cookies and delete them one at a time and reload the page and wait the 15s (or whatever) and see if the popup occurs. Once the popup appears, you know that the last cookie you deleted is the one you need to recreate. You can now write a quick function that creates the cookie using Selenium by navigating to the dummy error page, example.com/some404page, and create the cookie. Call that function at the start of your script. Profit.
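A sketch of the cookie trick with Python Selenium; the cookie name, value and the dummy 404 URL are assumptions (use whatever you found in devtools). Note that add_cookie only works for the domain currently loaded, hence the visit to the made-up error page first:

```python
from selenium import webdriver

driver = webdriver.Chrome()

def suppress_survey_popup(driver):
    # Load any page on the target domain first; a made-up 404 page is fine.
    driver.get("https://example.com/some404page")
    # Hypothetical cookie name/value - recreate the one you identified in devtools.
    driver.add_cookie({"name": "survey_dismissed", "value": "1", "path": "/"})

suppress_survey_popup(driver)
driver.get("https://example.com/")  # then start the script as normal
```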
2
5
0
For the past few months, I have been using Selenium to automate a very boring but essential part of my job which involves uploading forms. Recently, the site has implemented a feedback survey which randomly pops up and switches the driver to a new frame. I have written code which can handle this pop-up and switch back to my default frame; however, the issue is with the randomness. Is there a way to run my pop-up handling code as soon as the exception generated by the wrong frame is thrown? I could encase my whole script in a try/except block, but how do I direct the script to pick back up from where it left off? Any advice is much appreciated!
Selenium Python - How to handle a random pop up?
1.2
0
1
2,555
46,776,763
2017-10-16T18:23:00.000
1
0
0
0
javascript,python,node.js,client-side
47,111,903
2
true
1
0
After some research and the suggestions received, I created a Chrome extension using the simple guide on the Chrome Developer site and used a CORS request to get what I needed. If anyone finds this question and would like help, I am happy to provide further details/assistance :)
2
1
0
This is probably not the best title for this question. I have a Node.js application running on my server which currently uses a Python script for web scraping, but I am looking at moving this to the client side because each client sees a different (potentially unique) version of the same site. In an ideal world I would like to use JavaScript to get the HTML response from a page (what I can see in Chrome by right-clicking and choosing view source) and then process it in JavaScript. However, from what I have read online this does not seem to be possible. I am aware of sites that provide the response (such as anyorigin.com) that can be scraped. However, these are not really suitable for me as I need to be able to scrape what the user sees, since each user can potentially see something different on the site I want to scrape. The Python script I am currently using would do this, but it would require the user to have Python installed in order for me to execute it, and this cannot be guaranteed. Apologies for the block of text. Is there any solution to this problem?
Web Scraping on client-side
1.2
0
1
1,650
46,776,763
2017-10-16T18:23:00.000
0
0
0
0
javascript,python,node.js,client-side
46,776,879
2
false
1
0
I was recently trying to do something very similar, and unfortunately, as far as I know there's not a way to do this on the client side. You may be able to do some trickery and "post" the data you need back to the server where you deal with it, but I don't imagine that will be very efficient or straightforward. Though if you do find something, please do share.
2
1
0
This is probably not the best title for this question. I have a Node.js application running on my server which currently uses a Python script for web scraping, but I am looking at moving this to the client side because each client sees a different (potentially unique) version of the same site. In an ideal world I would like to use JavaScript to get the HTML response from a page (what I can see in Chrome by right-clicking and choosing view source) and then process it in JavaScript. However, from what I have read online this does not seem to be possible. I am aware of sites that provide the response (such as anyorigin.com) that can be scraped. However, these are not really suitable for me as I need to be able to scrape what the user sees, since each user can potentially see something different on the site I want to scrape. The Python script I am currently using would do this, but it would require the user to have Python installed in order for me to execute it, and this cannot be guaranteed. Apologies for the block of text. Is there any solution to this problem?
Web Scraping on client-side
0
0
1
1,650
46,780,773
2017-10-17T00:27:00.000
1
0
0
0
python-3.x,user-interface,focus,pyqt5,startup
56,467,458
2
false
0
1
Create a button with dimensions 0 wide by 0 high. Set it as the default button and also place it early in the tab order, before the other controls that accept focus; but note that it will be triggered if the user presses ENTER in some edit controls. Call self.ui.yourbutton.setFocus() if desired, for example after restoring from minimized.
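A minimal PyQt5 sketch of that focus-sink idea (the widget names are made up; this is not a full application):

```python
from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout,
                             QLineEdit, QPushButton)

app = QApplication([])
win = QWidget()
layout = QVBoxLayout(win)

sink = QPushButton(win)            # invisible 0x0 button used only as a focus sink
sink.setFixedSize(0, 0)
edit = QLineEdit(win)
layout.addWidget(sink)
layout.addWidget(edit)
QWidget.setTabOrder(sink, edit)    # the sink comes first in the tab order

win.show()
sink.setFocus()                    # keeps focus off the text box at startup
app.exec_()
```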
2
3
0
When I start up my PyQt GUI, the focus immediately goes to the text box. I want there to be no focus on any of the buttons at the start of the program, especially not the text box. Is there a way to remove the focus entirely or at least move the focus to a button or something? Thanks
PyQt: How do you clear focus on startup?
0.099668
0
0
1,856
46,780,773
2017-10-17T00:27:00.000
1
0
0
0
python-3.x,user-interface,focus,pyqt5,startup
51,915,780
2
false
0
1
clearFocus() seems to work after a certain amount of delay after the window is visible. I also used setFocus() on the QMainWindow and then the textedit field lost focus.
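One way to apply that after the window becomes visible is to defer the call with a single-shot timer; a sketch with made-up widgets:

```python
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication, QMainWindow, QTextEdit

app = QApplication([])
win = QMainWindow()
edit = QTextEdit(win)
win.setCentralWidget(edit)
win.show()

# Defer until the event loop has shown the window, then drop focus.
QTimer.singleShot(0, edit.clearFocus)   # or: QTimer.singleShot(0, win.setFocus)
app.exec_()
```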
2
3
0
When I start up my PyQt GUI, the focus immediately goes to the text box. I want there to be no focus on any of the buttons at the start of the program, especially not the text box. Is there a way to remove the focus entirely or at least move the focus to a button or something? Thanks
PyQt: How do you clear focus on startup?
0.099668
0
0
1,856
46,782,575
2017-10-17T04:38:00.000
0
0
0
0
python,tensorflow,deep-learning
46,807,104
1
true
0
0
pandas already has quite a bit of functionality that tf_utils provides. E.g. read_csv could capture what I was looking for from load_dataset. get_dummies does what convert_to_one_hot does. I could not find any direct functionality in pandas that came close to random_mini_batches but it can be achieved with sampling and random numbers.
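A sketch of those pandas/NumPy replacements; the file name, column names and batch size are assumptions:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("data.csv")                 # ~ load_dataset
one_hot = pd.get_dummies(df["label"])        # ~ convert_to_one_hot

def random_mini_batches(X, Y, batch_size=64, seed=0):
    """Shuffled (X, Y) mini-batches, similar in spirit to tf_utils."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], Y[batch]

X = df.drop(columns=["label"]).to_numpy()
Y = one_hot.to_numpy()
for xb, yb in random_mini_batches(X, Y):
    pass  # feed xb, yb to the model
```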
1
0
1
I am using CPU based tensorflow (on non GPU platform) from python. I want to use functionality like load_dataset, random_mini_batches, convert_to_one_hot etc. from tf_utils package. However the one from neuroailab github has dependency on tensorflow-gpu. Is there any other CPU based (for non GPU platform) equivalent package for this?
Is there non-GPU equivalent of tf_utils package?
1.2
0
0
462
46,782,716
2017-10-17T04:53:00.000
2
0
1
0
python
46,782,764
2
false
0
0
I assume that tuple.__hash__() calls hash(item) for each item in the tuple and then XOR's the results together. If one of the items isn't hashable, then that will raise a TypeError that bubbles up to the original caller.
2
5
0
For example, the tuple (1,[0,1,2]). I understand why from a design perspective; if the tuple were still hashable, then it would be trivial to make any unhashable type hashable by wrapping it in a tuple, which breaks the correct behavior of hashability, since you can change the value of the object without changing the hash value of the tuple. But if the tuple is not hashable, then I don't understand what makes an object hashable -- I thought it simply had to have __hash__(self) implemented, which tuple does. Based on other answers I've looked at, and from testing examples, it seems that such an object is not hashable. It seems like sensible behavior would be for tuple.__hash__() to call __hash__ for its component objects, but I don't understand how that would work from an implementation perspective, e.g. I don't know how a dictionary recognizes it as an unhashable type when it is still type tuple and tuple still defines __hash__.
Why is a tuple containing an unhashable type unhashable?
0.197375
0
0
765
46,782,716
2017-10-17T04:53:00.000
5
0
1
0
python
46,782,773
2
true
0
0
tuple implements its own hash by computing and combining the hashes of the values it contains. When hashing one of those values fails, it lets the resulting exception propagate unimpeded. Being unhashable just means calling hash() on you triggers a TypeError; one way to do that is to not define a __hash__ method, but it works equally well if, in the course of your __hash__ method you raise a TypeError (or any other error really) by some other means. Basically, tuple is a hashable type (isinstance((), collections.abc.Hashable) is true, as is isinstance(([],), collections.abc.Hashable) because it's a type level check for the existence of __hash__), but if it stores unhashable types, any attempt to compute the hash will raise an exception at time of use, so it behaves like an unhashable type in that scenario.
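A short demonstration of that distinction:

```python
from collections.abc import Hashable

t = (1, [0, 1, 2])

print(isinstance(t, Hashable))   # True: the *type* tuple defines __hash__
try:
    hash(t)                      # fails while hashing the list element
except TypeError as e:
    print(e)                     # "unhashable type: 'list'"
```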
2
5
0
For example, the tuple (1,[0,1,2]). I understand why from a design perspective; if the tuple were still hashable, then it would be trivial to make any unhashable type hashable by wrapping it in a tuple, which breaks the correct behavior of hashability, since you can change the value of the object without changing the hash value of the tuple. But if the tuple is not hashable, then I don't understand what makes an object hashable -- I thought it simply had to have __hash__(self) implemented, which tuple does. Based on other answers I've looked at, and from testing examples, it seems that such an object is not hashable. It seems like sensible behavior would be for tuple.__hash__() to call __hash__ for its component objects, but I don't understand how that would work from an implementation perspective, e.g. I don't know how a dictionary recognizes it as an unhashable type when it is still type tuple and tuple still defines __hash__.
Why is a tuple containing an unhashable type unhashable?
1.2
0
0
765
46,783,029
2017-10-17T05:26:00.000
0
1
0
0
python,facebook,chatbot,messenger
46,815,095
1
true
0
0
Nope, there is no way to detect the indicator programmatically.
1
0
0
I'm trying to build a chatbot in FB messenger. I could SEND typing indicator with sender action API. However, I can't find information about receiving it. Is there any way to do that or is it unavailable?? Thank you!
how to receive typing indicator from user in facebook messenger
1.2
0
1
162
46,783,732
2017-10-17T06:23:00.000
1
0
0
0
python,web-applications
46,784,214
1
false
0
1
If I got your idea right, then you just want to create a Python web application. First, you should check out Python web frameworks and choose the right one for you. Then you should check how to work with it on your existing web server. And last but not least (if I didn't forget anything), you should check some info on front-end programming. Sorry if I got your idea wrong or am misguiding you at some point.
1
0
0
I wrote a really simple app in Python for a school project. It is for a skillsharing community that our club is trying to start up. Imagine Venmo, but without any money involved. It's essentially a record of favors done for those in the community. The users' info is stored as a dictionary within a dictionary of all users. The dictionary is pickled and the .pkl is updated automatically whenever user info is changed. I want the users to be able to access the information online, by logging in via username and password. It will be used regularly by people and security isn't really a concern since it doesn't store personal info and the users are a small group that are in our club. Anyhow, my issue is that I only know how to do backend stuff and basic tkinter GUIs (which, AFAIK can't be used for my needs). I'm a self-taught and uncommitted novice programmer, so I don't even know how to search for what I'm trying to do. I imagine what I need to do is put the program and the .pkl on the server that will host my website. (I have a domain name, but never figured out how to actually make it a website...) From there, I imagine I have to write some code that will create a login screen and allow users to login and view the attributes associated with their profile as well as send "payment" (as favors) to other users. How do I do any/all of this? I'm looking for online resources or explanations from the community that will help me get this project off the ground. Also, telling me what it is called that I am trying to do would be greatly appreciated. Thanks!
How do I make an app available online? [Python]
0.197375
0
0
60
46,783,775
2017-10-17T06:26:00.000
0
1
1
0
python,maya
46,792,329
1
true
0
0
As suggested by Andrea, I opened the commandPort of Maya and connected to it using a socket in a Python script. Now I can send commands to Maya using that Python script as long as the Maya commandPort is open.
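A rough sketch of the sending side, assuming Maya has a commandPort open (for example opened inside Maya with cmds.commandPort(name=":7001")); the port number, MEL command and file path are assumptions:

```python
import socket

HOST, PORT = "127.0.0.1", 7001   # must match the commandPort opened inside Maya

with socket.create_connection((HOST, PORT)) as s:
    # Hypothetical MEL command importing the file Blender just exported.
    s.sendall(b'file -import "/tmp/blender_export.obj";')
```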
1
1
0
I want to create a simple python script which will directly transfer objects from blender to Maya. I created a python script which exports the object from blender to a temp folder. now I want to import that object into Maya without actually going to Maya>file>import. I searched for the solution for a while and found out that I can create a standalone instance of Maya with mayapy.exe and work in non-GUI instance of Maya. but what I want to do is import the object into an already running instance of Maya(GUI version) as soon as the exporting script is done running.
how to give commands to the running instance of maya with mayapy.exe?
1.2
0
0
249
46,784,260
2017-10-17T07:01:00.000
0
0
0
0
python,tkinter
46,797,450
2
false
0
1
Can't comment, so I'll add a short tip to Ethan's detailed answer. You can design most GUIs in tkinter with either pack, grid or a combination of both (by placing frames on a window with one of them, and using either grid or pack inside each frame to "place" the widgets). You can tune the configurations for proper location and size when the window resizes. Keep the placer for special cases (like overlaying some widget on top of others).
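A minimal sketch of that frame-based layout (pack for the frames, grid inside one of them):

```python
import tkinter as tk

root = tk.Tk()

top = tk.Frame(root)
bottom = tk.Frame(root)
top.pack(side="top", fill="both", expand=True)     # frames placed with pack
bottom.pack(side="bottom", fill="x")

tk.Label(top, text="Name").grid(row=0, column=0, sticky="w")   # widgets with grid
tk.Entry(top).grid(row=0, column=1, sticky="ew")
top.grid_columnconfigure(1, weight=1)              # this column grows on resize

tk.Button(bottom, text="OK").pack(side="right")
root.mainloop()
```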
1
0
0
I just made an app using Python and tkinter widgets. There are Labels, Frames, Buttons, etc. in the Tk and Toplevel widgets. However, it includes thousands of lines of code and it's really annoying to resize every widget when I support multiple resolutions. Is there any way to expand the resolution ratio for existing Tkinter Tk() and Toplevel() widgets and their child widgets (zooming in)? If not, what would be the best approach to support multiple resolutions of a Python app with the same ratio? Any help would be much appreciated; sorry for the bad English.
Python - is there any way to expand the resolution ratio for existing Tkinter Tk() and Toplevel() widget? (zooming-in)
0
0
0
421
46,784,908
2017-10-17T07:38:00.000
0
0
0
1
python,python-2.7,pants
47,027,841
1
false
0
0
Explicitly pinning the following versions in requirements.txt for Pants builds will resolve the issue: pycparser==2.17 and cryptography==2.0.1.
1
0
0
I try to run a dockerized pants build for a scala project and it fails with an error message "error in cryptography setup command: Invalid environment marker: python_version < '3' ". I haven't manually specified anything to install cryptography. In the documentation of cryptography I could see that it happens either because of pip or setuptools is out of date. I tried to update this as well. But in case of pants I'm not very sure where should I specify this also. I specified this in pants file and in thidparty "requirements.txt" file. But no difference . It was working fine but suddenly it failed one day. I use the following versions of Ubuntu -14.04 python -2.7.4 pants -1.0.0 (tried upgrading to 1.1.0 but no difference)
Fails to install cryptography when a pants build is run
0
0
0
689
46,785,536
2017-10-17T08:15:00.000
0
0
0
0
python,web,beautifulsoup,data-extraction
46,786,396
1
true
0
0
For Google Maps, try the API. Using web scraping tools for Maps data extraction is highly discouraged by Google's TOS. If you are using Python, it has very nice libraries, BeautifulSoup and Scrapy, for this purpose. Other means? You can extract POIs from OSM data; try the open source tools. Property info? Maybe it's available for your county/state from a government office; give it a try.
1
0
0
I am planning to do a data extraction from web sources (web scraping) as part of my work. I would like to extract info around my company's 10km radius. I would like to extract information such as condominiums, its address, number of units and its price per sqft. Other things like number of schools and kindergarten in the area and hotels. I understand I need to extract from few sources/webpages. I will also be using Python. I would like to know which library or libraries should I be using. Is web scraping the only means? Can we extract info from Google Maps? Also, if anyone has any experience I will really appreciate if you can guide me on this. Thanks a lot, guys.
data extraction from web
1.2
0
1
360
46,789,770
2017-10-17T12:08:00.000
0
0
0
0
python,directory,python-requests
46,790,562
2
true
0
0
You won't find the "corresponding functions" (=> to os.walk()) in requests because there's no concept of "directory" or "directory listing" in the HTTP protocol. All you have are urls and resources... Given your specs ("I try to get data from my own server, so it lists directory contents"), the best you can do is parsing the HTML your server generates for directory listing and issue requests for the links you find. BeautifulSoup is possibly the simplest solution for the parsing.
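A sketch of that approach, with the listing URL assumed and only a flat (non-recursive) listing handled:

```python
import os
import requests
from bs4 import BeautifulSoup

BASE = "http://myserver.example/files/"   # hypothetical directory-listing URL

resp = requests.get(BASE)
soup = BeautifulSoup(resp.text, "html.parser")

for a in soup.find_all("a"):
    href = a.get("href")
    if not href or href.endswith("/"):    # skip parent/sub-directory links
        continue
    data = requests.get(BASE + href).content
    with open(os.path.basename(href), "wb") as f:
        f.write(data)
```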
1
0
0
I'm trying to download a directory with python3 using the requests library. I wanted to do it by "walking" (like os.walk), but never found the corresponding functions in requests. I struggle to find another way to do it.
Trying to download a directory with requests
1.2
0
1
501
46,789,946
2017-10-17T12:16:00.000
0
1
0
1
python,kubernetes
69,754,077
3
false
0
0
For me, the reason for the 500 was basically the pod being unable to pull the image from GCR.
2
2
0
I'm using python kubernetes 3.0.0 library and kubernetes 1.6.6 on AWS. I have pods that can disappear quickly. Sometimes when I try to exec to them I get ApiException Handshake status 500 error status. This is happening with in cluster configuration as well as kube config. When pod/container doesn't exist I get 404 error which is reasonable but 500 is Internal Server Error. I don't get any 500 errors in kube-apiserver.log where I do find 404 ones. What does it mean and can someone point me in the right direction.
kubectl exec returning `Handshake status 500`
0
0
1
1,730
46,789,946
2017-10-17T12:16:00.000
0
1
0
1
python,kubernetes
70,119,831
3
false
0
0
For me the reason was that I had two pods with the same label attached; one pod was in the Evicted state and the other was running. I deleted the pod which was Evicted and the issue was fixed.
2
2
0
I'm using python kubernetes 3.0.0 library and kubernetes 1.6.6 on AWS. I have pods that can disappear quickly. Sometimes when I try to exec to them I get ApiException Handshake status 500 error status. This is happening with in cluster configuration as well as kube config. When pod/container doesn't exist I get 404 error which is reasonable but 500 is Internal Server Error. I don't get any 500 errors in kube-apiserver.log where I do find 404 ones. What does it mean and can someone point me in the right direction.
kubectl exec returning `Handshake status 500`
0
0
1
1,730
46,790,782
2017-10-17T12:59:00.000
4
0
1
0
python-3.x,spyder
46,813,155
1
false
0
0
(Spyder developer here) My answers: Cancel the request to view the variable without closing the program? No, that's not possible, sorry. Set a default so Spyder will only display the first 1000 rows of very large objects such as dataframes? That's already in place. The problem is the size in memory of your dataframes because Spyder needs to make a copy of them to graphically display them. To fix this problem, we are planning to use more efficient serialization libraries (e.g. pyarrow) in Spyder 5, to be released in 2021.
1
5
1
I am working with large Pandas dataframes in Spyder. Occasionally I accidentally click the large dataframes in the Variable Explorer window and Spyder will hang for very long periods while it tries to open them. The only way I have found to stop this process is to close Spyder completely and then reopen it. Is it possible to: Cancel the request to view the variable without closing the program? Set a default so Spyder will only display the first 1000 rows of very large objects such as dataframes?
Spyder IDE - Cancel opening large variables without restarting the entire program
0.664037
0
0
890
46,791,798
2017-10-17T13:48:00.000
2
0
0
1
python,amazon-web-services,amazon-ec2,deployment,ffmpeg
46,792,730
1
true
1
0
Beanstalk is certainly an option. You don't necessarily have to use it for web apps and you can configure all of the dependencies needed via .ebextensions. Containerization is usually my go to strategy now. If you get it working within Docker locally then you have several deployment options and the whole thing gets much easier since you don't have to worry about setting up all the dependencies within the AWS instance. Once you have it running in Docker you could use Beanstalk, ECS or CodeDeploy.
1
1
0
I have a Python project and I want to deploy it on an AWS EC2 instance. My project has dependencies on other Python libraries and uses programs installed on my machine. What are the alternatives for deploying my project on an AWS EC2 instance? Further details: my project consists of a Celery periodic task that uses ffmpeg and Blender to create short videos. I have checked Elastic Beanstalk but it seems it is tailored for web apps. I don't know if containerizing my project via Docker is a good idea... The manual and cheapest way to do it would be: 1- launch a spot instance, 2- git clone the project, 3- install the libraries via pip, 4- install all dependent programs, 5- launch the periodic task. I am looking for a more automatic way to do it. Thanks.
What are the ways to deploy python code on aws ec2?
1.2
0
0
467
46,792,424
2017-10-17T14:20:00.000
2
0
1
0
python,dynamic,powerpoint,seaborn,figure
49,083,874
3
false
0
0
We went with this approach: you can save the figures as a .png file and insert them into PowerPoint. There is an option when inserting them so that the picture will be updated every time you open PowerPoint, retrieving a new version of the file from the folder I saved it to. So when I make changes in Seaborn, a new version of the file is automatically saved as a picture, which will then be updated in PowerPoint.
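The Python side of that workflow is just re-saving the figure to the same path on every run; a sketch, with the dataset and file name as stand-ins:

```python
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset("tips")              # stand-in for your own dataframe
ax = sns.barplot(data=df, x="day", y="total_bill")
ax.figure.savefig("figure1.png", dpi=200, bbox_inches="tight")   # path PowerPoint links to
plt.close(ax.figure)
```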
1
2
0
I created some figures with Seaborn in a Jupyter Notebook. I would now like to present those figures in a PowerPoint presentation. I know that it is possible to export the figures as png and include them in the presentation. But then they would be static, and if something changes in the dataframe, the picture would be the same. Is there an option to have a dynamic figure in PowerPoint? Something like a small Jupyter Notebook you could Display in the slides?
How to use Python Seaborn Visualizations in PowerPoint?
0.132549
0
0
4,350
46,792,940
2017-10-17T14:45:00.000
2
0
0
0
python,polynomial-math,coordinate-transformation
46,792,992
2
false
0
0
Create a burner variable, store x-tau into it, and feed that into your function
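A sketch of that with numpy.poly1d; tau is a stand-in value here:

```python
import numpy as np

p = np.poly1d([2, 3, 4])   # 2x^2 + 3x + 4
tau = 1.5                  # comes from elsewhere in your code

y = p(2.0 - tau)           # evaluate the shifted polynomial at a point

# Or build the shifted polynomial itself by composing with (x - tau):
p_shifted = p(np.poly1d([1, -tau]))   # 2(x-tau)^2 + 3(x-tau) + 4, expanded
print(p_shifted)
```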
1
3
1
I'm trying to shift a polynomial. I'm currently using numpy.poly1d() to make a quadratic equation. example: 2x^2 + 3x +4 but I need to shift the function by tau, such that 2(x-tau)^2 + 3(x-tau) + 4 Tau is a value that will change base on some other variables in my code.
polynomial transformation in python
0.197375
0
0
487
46,793,573
2017-10-17T15:15:00.000
0
0
1
0
python,xml,python-3.6
46,877,010
1
true
0
0
It seems the best solution was to replace 4Suite-XML with lxml given that the 4Suite-XML project is dead. In many respects, this reduced the amount of code that was needed to perform similar functions in 4Suite-XML.
1
0
0
I have an application that was written in Python 2.4 that I have taken over development for. I have since converted the application to Python 3, but parts of the application use 4suite-xml. Is it possible to get 4suite-xml under Python 3? I have tried 'pip3 install 4suite-xml' but it fails to install.
4suite-xml for Python 3.6?
1.2
0
0
188
46,800,824
2017-10-17T23:50:00.000
0
1
1
0
python,git
46,801,007
3
false
0
0
Under your constraints, there's no difference. If there ARE staged changes, there is, though: reset reverts staged changes. checkout does not ...
2
7
0
I have a file foo.py. I have made some changes to the working directory, but not staged or committed any changes yet. I know I can use git checkout foo.py to get rid of these changes. I also read about using git reset --hard HEAD, which essentially resets your working directory, staging area and commit history to match the latest commit. Is there any reason to prefer one over the other in my case, where my changes are still in the working directory?
git reset --hard HEAD vs git checkout
0
0
0
2,533
46,800,824
2017-10-17T23:50:00.000
6
1
1
0
python,git
46,800,935
3
false
0
0
Is there any reason to prefer using one over the other in my case, where my changes are still in working directory? No, since they will accomplish the same thing. Is there any reason to prefer using [git checkout -- path/to/file] over [git reset --hard] in [general but not in my specific case]? Yes: this will affect only the one file. If your habit is git reset --hard to undo changes to one file, and you have work-tree and/or staged changes to other files too, and you git reset --hard, you may be out of luck getting those changes back without reconstructing them by hand. Note that there is a third construct: git checkout HEAD path/to/file. The difference between this and the one without HEAD (with -- instead1) is that the one with HEAD means copy the version of the file that is in a permanent, unchangeable commit into the index / staging-area first, and then into the work-tree. The one with -- means copy the version of the file that is in the index / staging-area into the work-tree. 1The reason to use -- is to make sure Git never confuses a file name with anything else, like a branch name. For instance, suppose you name a file master, just to be obstinate. What, then, does git checkout master mean? Is it supposed to check out branch master, or extract file master? The -- in git checkout -- master makes it clear to Git—and to humans—that this means "extract file master". Summary, or, things to keep in mind There are, at all times, three active copies of each file: one in HEAD; one in the index / staging-area; one in the work-tree. The git status command looks at all three, comparing HEAD-vs-index first—this gives Git the list of "changes to be committed"—and then index-vs-work-tree second. The second one gives Git the list of "changes not staged for commit". The git add command copies from the work-tree, into the index. The git checkout command copies either from HEAD to the index and then the work-tree, or just from the index to the work-tree. So it's a bit complicated, with multiple modes of operation: git checkout -- path/to/file: copies from index to work-tree. What's in HEAD does not matter here. The -- is usually optional, unless the file name looks like a branch name (e.g., a file named master) or an option (e.g., a file named -f). git checkout HEAD -- path/to/file: copies from HEAD commit, to index, then to work-tree. What's in HEAD overwrites what's in the index, which then overwrites what's in the work-tree. The -- is usually optional, unless the file name looks like an option (e.g., -f). It's wise to use the -- always, just as a good habit. The git reset command is complicated (it has many modes of operation). (This is not to say that git checkout is simple: it, too, has many modes of operation, probably too many. But I think git reset is at least a little bit worse.)
2
7
0
I have a file foo.py. I have made some changes to the working directory, but not staged or committed any changes yet. I know I can use git checkout foo.py to get rid of these changes. I also read about using git reset --hard HEAD, which essentially resets your working directory, staging area and commit history to match the latest commit. Is there any reason to prefer one over the other in my case, where my changes are still in the working directory?
git reset --hard HEAD vs git checkout
1
0
0
2,533
46,803,803
2017-10-18T06:09:00.000
0
0
0
0
python,excel,file,exe,explorer
46,803,941
1
true
0
0
Yes this is perfectly doable. I suggest you look at PyQT5 or TkInter for the user interface, pyexcel for the excel interface and pyinstaller for packaging up an executable as you asked. There are many great tutorials on all of these modules.
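A minimal sketch of that flow with tkinter's file dialog and pandas (the alteration step and output name are placeholders; reading .xlsx needs openpyxl installed):

```python
import os
import pandas as pd
from tkinter import Tk, filedialog

Tk().withdraw()    # hide the empty root window; we only want the dialog
path = filedialog.askopenfilename(filetypes=[("Excel files", "*.xlsx *.xls")])

if path:
    df = pd.read_excel(path)
    df["altered"] = True                      # placeholder for the real changes
    out = os.path.join(os.path.dirname(path), "altered_output.xlsx")
    df.to_excel(out, index=False)
```

Packaging it with pyinstaller (e.g. pyinstaller --onefile script.py) then gives the clickable executable.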
1
0
0
The program would follow the below steps: Click on executable program made through python File explorer pops up for user to choose excel file to alter Choose excel file for executable program to alter Spits out txt file OR excel spreadsheet with newly altered data to same folder location as the original spreadsheet
Python - how to get executable program to get the windows file browser to pop up for user to choose an excel file or any other document?
1.2
1
0
123
46,808,342
2017-10-18T10:44:00.000
0
0
0
1
python,redis,queue,worker
46,808,422
2
false
0
0
If your workers need to do a long task with the data, it's a solution, but each piece of data must be handled by a single worker. This way, you can easily (without threads, etc.) distribute your tasks; it's better if your workers don't run on the same server.
2
0
0
This program listens to a Redis queue. If there is data in Redis, the workers start to do their jobs. All these jobs have to run simultaneously; that's why each worker listens to one particular Redis queue. My question is: is it common to run more than 20 workers listening to Redis? python /usr/src/worker1.py python /usr/src/worker2.py python /usr/src/worker3.py python /usr/src/worker4.py python /usr/src/worker5.py .... .... python /usr/src/worker6.py
Is it common to run 20 python workers which uses Redis as Queue ?
0
0
1
834
46,808,342
2017-10-18T10:44:00.000
0
0
0
1
python,redis,queue,worker
46,808,475
2
false
0
0
Having multiple worker processes (and when I mean "multiple" I'm talking hundreds or more), possibly running on different machines, fetching jobs from a job queue is indeed a common pattern nowadays. There even are whole packages/frameworks devoted to such workflows, like for example Celery. What is less common is trying to write the whole task queues system from scratch in a seemingly ad-hoc way instead of using a dedicated task queues system like Celery, ZeroMQ or something similar.
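For reference, a bare-bones version of one such worker with redis-py (queue name and handler are made up); a framework like Celery replaces all of this plumbing:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def process(payload):
    print("working on", payload)   # hypothetical job handler

while True:
    _queue, payload = r.blpop("jobs:worker1")   # blocks until a job arrives
    process(payload)
```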
2
0
0
This program listens to a Redis queue. If there is data in Redis, the workers start to do their jobs. All these jobs have to run simultaneously; that's why each worker listens to one particular Redis queue. My question is: is it common to run more than 20 workers listening to Redis? python /usr/src/worker1.py python /usr/src/worker2.py python /usr/src/worker3.py python /usr/src/worker4.py python /usr/src/worker5.py .... .... python /usr/src/worker6.py
Is it common to run 20 python workers which uses Redis as Queue ?
0
0
1
834
46,812,351
2017-10-18T14:16:00.000
0
0
0
1
python,amazon-web-services,apache-kafka,multiprocessing,scale
46,812,799
2
false
0
0
You could do this, but shouldn't. The basic unit of parallelism in Kafka is the partition: in a consumer group, each consumer reads from one or more partitions and consumers do not share partitions. In order to share a partition, you would need to use a tool like ZooKeeper to lock access to the partition (and keep track of each consumer's position). The use case that you're describing is better served by SQS and an Auto-scaling group.
2
0
0
I am trying to make an application in python that has 1 topic (demo-topic) and 1 partition. In this topic messages are pushed randomly I have 1 consumer (consumer1) (demo-group) that uses this messages to makes some background calculations (that take some time). Having this application on amazon i want to be able to scale it (when computation takes to long) in the way that the newly created machine will have another consumer (consumer 2) from the same group (demo-group) reading in the same topic (demo-topic) but in the way that they start splitting the load (consumer 1 takes some load and consumer 2 takes the rest but they never get the same messages) After the surge of data comes to an halt, second machine is decommissioned and consumer 1 takes all the load again. Is this even possible to do (without adding before hand more partitions). Is there a workaround?? Thank you
Multiprocessing with python kafka consumers
0
0
0
1,106
46,812,351
2017-10-18T14:16:00.000
1
0
0
1
python,amazon-web-services,apache-kafka,multiprocessing,scale
46,812,746
2
false
0
0
You cannot have multiple consumers within the same group consume at the same time from the same partition. If you subscribe a second consume within the same group to the same partition, it will act as a hot standby and won't be consuming any messages until the first one stops. The best solution is to add partitions to your topic. That way, you can add consumers when you see a surge in traffic and remove them when traffic slows down. Kafka will do all load balancing for you.
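With enough partitions, scaling out is just starting another process with the same group id; a sketch with kafka-python, broker address assumed:

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "demo-topic",
    group_id="demo-group",
    bootstrap_servers=["localhost:9092"],
)

for message in consumer:
    print(message.value)   # replace with the real background calculation
```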
2
0
0
I am trying to make an application in python that has 1 topic (demo-topic) and 1 partition. In this topic messages are pushed randomly I have 1 consumer (consumer1) (demo-group) that uses this messages to makes some background calculations (that take some time). Having this application on amazon i want to be able to scale it (when computation takes to long) in the way that the newly created machine will have another consumer (consumer 2) from the same group (demo-group) reading in the same topic (demo-topic) but in the way that they start splitting the load (consumer 1 takes some load and consumer 2 takes the rest but they never get the same messages) After the surge of data comes to an halt, second machine is decommissioned and consumer 1 takes all the load again. Is this even possible to do (without adding before hand more partitions). Is there a workaround?? Thank you
Multiprocessing with python kafka consumers
0.099668
0
0
1,106
46,814,272
2017-10-18T15:48:00.000
0
0
0
1
python,bash,sockets,networking,scripting
46,846,940
2
false
0
0
It's hard to understand the context and the application's structure, but I would assume that putting the connection in a separate thread would help, so that the connection is always open and the messages can be processed in the preferred fashion.
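A sketch of replacing the nc side with Python, using a reader thread so receiving never blocks sending; the port and username come from the question:

```python
import socket
import threading

USERNAME = "User1"
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 1234))
srv.listen(1)
conn, _addr = srv.accept()

def reader():
    while True:
        data = conn.recv(4096)
        if not data:
            break
        print(data.decode(errors="replace"), end="")

threading.Thread(target=reader, daemon=True).start()

while True:
    line = input()
    conn.sendall(f"<{USERNAME}> {line}\n".encode())   # prepend the username
```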
1
0
0
I am writing a small chat app with python using bash commands. I'm using nc to accomplish this but I want to be able to append a username before the user's message. How do I go about doing this without breaking the connection? The command I'm using to connect is just nc -l -p 1234 -q 0 and the desired outcome is that when the person sends something it would look like: <User1> Hello Thank you!
How to print text while netcat is listening at a port?
0
0
1
232
46,815,043
2017-10-18T16:33:00.000
0
0
1
0
python,queue
48,274,477
1
false
0
0
In your request queue entry include a response queue. When finished place a response on the response queue. The requesting thread waits on the response queue. A callback method could alternately be used.
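A sketch of that pattern: each request carries its own response queue and the requesting thread blocks on it (the priority, payload and "processing" are placeholders):

```python
import itertools
import queue
import threading

request_q = queue.PriorityQueue()
_tiebreak = itertools.count()      # keeps tuples comparable when priorities tie

def worker():
    while True:
        _prio, _seq, payload, reply_q = request_q.get()
        result = f"processed {payload}"        # rate-limited REST call goes here
        reply_q.put(result)                    # hand the result back to the requester
        request_q.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(payload, priority=5):
    reply_q = queue.Queue()
    request_q.put((priority, next(_tiebreak), payload, reply_q))
    return reply_q.get()                       # requesting thread waits on its own queue

print(submit("job-1"))
```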
1
0
0
I'm trying to write a program which starts new tasks in new threads. Data is passed from task threads to a single worker/processing thread via a priority queue (so more important jobs are processes first). The worker/processing thread gets higher priority data from the queue and limits calls to a REST API How can I passed the data back to it's origionating task thread, while tracking that all that particular task threads data has been processed? Thanks
python managing tasks in threads when using priority queue
0
0
0
414
46,817,031
2017-10-18T18:37:00.000
1
0
0
0
python-3.x,bokeh
46,832,946
1
true
0
0
This is a known bug with current versions (around 0.12.10) for now the best workaround is to increase plot.min_border (or p.min_border_left, etc) to be able to accommodate whatever the longest label you expect is. Or to rotate the labels to be parallel to the axis so that they always take up the same space, e.g. p.yaxis.major_label_orientation = "vertical"
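Both workarounds in code; the border width is an assumption sized for your longest label:

```python
from bokeh.plotting import figure, show

p = figure(min_border_left=80)        # leave room for long tick labels
p.line([1, 2, 3], [10, 50, 100])

# Alternative: rotate the labels so their width stays constant.
p.yaxis.major_label_orientation = "vertical"
show(p)
```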
1
1
1
I have just started exploring bokeh and here is a small issue I am stuck with. This is in regards with live graphs. The problem is with the axis values. Initially if I start with say 10, till 90 it shows correct values but while printing 100, it only show 10 and the last zero(0) is hidden. It's not visible. That is when it switches from 2 digit number to a 3 or more digit number only the first two digits are visible. Is there any figure property I am missing or what I am not sure of.
Bokeh Plots Axis Value don't show completely
1.2
0
0
601
46,817,939
2017-10-18T19:37:00.000
0
0
1
0
python,database,python-3.x,mongodb,pymongo
53,745,939
1
false
0
0
After some A/B testing, it seems like there isn't really a way to speed this up, unless you change your Python interpreter. Alternatively, bulk pulling from the DB could speed this up.
1
3
0
After running some tests (casting a PyMongo set to a list vs iterating over the cursor and saving to a list) I've noticed that the step from cursor to data in memory is negligible. For a db cursor of about 160k records, it averages about 2.3s. Is there anyway to make this conversion from document to object faster? Or will I have to choose between casting to a list and iterating over the cursor?
PyMongo Cursor to List Fastest Way Possible
0
1
0
871
46,820,135
2017-10-18T22:20:00.000
1
0
1
0
python,memory
46,820,161
1
true
0
0
The details depend on the OS, but in general, when a process exits, the OS deletes everything it touched - memory, file handles, sockets, etc. At least, it tries to.
1
0
0
TL;DR: What does Python do with a program's allocated memory space after it terminates? This may seem like a basic question, but I want to know what happens to, for example, a Python list that contains 1000 ints after I shut down the program. I ask this because I've recently been working with programs that have quite large dicts and lists. I feel I should manually delete the program's allocated memory space, especially if I'm just trying to see if the program runs correctly. Is there a way to do this in Python, or does Python do this automatically? Should I even be worried about it at all?
What happens to the memory used by your program after it's finished running?
1.2
0
0
36
46,822,710
2017-10-19T04:09:00.000
0
0
1
0
python
47,638,385
1
false
0
0
A python dictionary would definitely be the easiest way to store word : definitions (key-value pairs), and you can import from or export to a json file as well. Of course, there are other structures like lists, but they are far less optimal. A pandas dataframe could work and would be useful if you wanted to have multiple entries with the same key but different definitions: e.g. key + def 1, key + def 2, key + def 3, since much like an excel sheet, each entry could be its own row which are then sorted based on key.
1
0
0
Recently I made an English word definition dictionary as described in a course on Udemy. They used a data.json file which contains a dictionary with all the words and their meanings as key-value pairs, so it was an easy task. Are there more ways to create such a dictionary, and what data could I use if I don't have a dictionary (key-value pairs) already? Any ideas?
Creating Dictionary (English dictionary ) with python
0
0
0
612
46,823,445
2017-10-19T05:44:00.000
0
0
0
0
python,dataframe
46,823,775
1
false
0
0
I think the way you are passing your argument is wrong; try passing it in the same pattern as it is in the CSV, like ["1/10/2011"], and it should work. Good luck :)
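If you do want df["2011-01-10"]-style lookups, another option is to parse the dates and set them as the index when reading the CSV; a sketch with the file name assumed:

```python
import pandas as pd

df = pd.read_csv("prices.csv", parse_dates=["Date"], index_col="Date")

print(df.loc["2011-01-10"])   # works once the index is a DatetimeIndex
print(df.loc["1/10/2011"])    # the original string format works too
```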
1
1
1
I'm reading data off a CSV file. The columns are as below: Date, Buy, Sell, Price; 1/10/2011, 1, 5, 500; 1/15/2011, 4, 2, 500. When I tried to pull data based on the index like df["2011-01-10"], I got an error: KeyError: '2011-01-10'. Anyone know why this might be the case? Thanks.
Error when trying to pull row based on index value in the dataframe
0
0
0
15
46,826,801
2017-10-19T09:30:00.000
0
0
1
0
python,ms-word,activex
46,828,184
1
false
0
0
Seems like I misspelled the namespace; it should be msinkaut.InkPicture.1.
1
0
0
I'm trying to add MSINKAUTLib.InkPicture to an existing word document using python, so far i have failed finding any documentation about that. Is it even possible?
Adding InkPicture programmatically to a word document
0
0
0
59
46,826,969
2017-10-19T09:39:00.000
0
0
0
1
python,linux,windows
46,827,088
1
true
0
0
The easiest way is to use Samba. This way you don't have to install anything on Windows. You can do something like mount //windowspc/share /mnt, then use Python to copy files from /mnt to wherever you want. You can call smbclient from Python as a system command Alternatively, you can install rsync or ssh from Cygwin, or even something like FTP. PS: this is not really a Python question, but basic filesharing.
1
0
0
I have some files on a remote Windows system, that I need to retrieve. It's running Windows Server 2008 R2. I would like to retrieve those files to my Ubuntu System and I would like to do it using Python. I have gone through using wmi-client-wrapper, paramiko and wmic. None of them seem to work. Some guidance would be appreciated.
Python on Linux remotely connect to a Windows PC to retrieve files
1.2
0
0
133
46,827,310
2017-10-19T09:58:00.000
-1
0
1
0
python,pycharm,pydev
46,831,003
1
false
0
0
[SOLVED] Ok, since in my situation the server is apache2/wsgi, when remote debugging is activated the python environment is the one used by apache/wsgi and specified in the WSGIDaemonProcess directive Bye
1
1
0
I can debug my scripts in local interpreter mode in Pycharm, but when a remote debugging environment for python is configured via: import pydevd add pycharm-debug-egg in path etc I get No module named configobj. Module Configobj is installed in all my local python environments, so the question is: which python interpreter/environment is used when runnng under PYDEVD? Where should I install configobj?
PyCharm: "No module named configobj" when under RemoteDebugServer
-0.197375
0
0
704
46,828,118
2017-10-19T10:44:00.000
0
0
0
0
python,machine-learning,svm,naivebayes,document-classification
46,828,277
2
false
0
0
Because of the continuous score, which I assume is your label, it's a regression problem. SVMs are more common for classification problems. There are lots of possible algorithms out there; logistic regression would be pretty common for solving something like this. Edit: now that you edited your post, your problem became a classification problem :-) Classification = some classes you want your data classified as, like boolean (True, False) or multinomial (Big, Middle, Small, Very Small). Regression = continuous values (all real numbers between 0 and 1). Now you can try your SVM and see if it works well enough for your data. See @Maxim's answer; he has some good points (balancing, scaling).
1
0
1
I'm trying to classify pages, in particular search for a page, in documents based on bag of words, page layout, contain tables or not, has bold titles, etc. With this premise I have created a pandas.DataFrame like this, for each document: page totalCharCount matchesOfWordX matchesOfWordY hasFeaturesX hasFeaturesY hasTable score 0 0.0 608.0 0.0 2.0 0.0 0.0 0.0 0.0 1 1.0 3292.0 1.0 24.0 7.0 0.0 0.0 0.0 2 2.0 3302.0 0.0 15.0 1.0 0.0 1.0 0.0 3 3.0 26.0 0.0 0.0 0.0 1.0 1.0 1.0 4 4.0 1851.0 3.0 25.0 20.0 7.0 0.0 0.0 5 5.0 2159.0 0.0 27.0 6.0 0.0 0.0 0.0 6 6.0 1906.0 0.0 9.0 15.0 3.0 0.0 0.0 7 7.0 1825.0 0.0 24.0 9.0 0.0 0.0 0.0 8 8.0 2053.0 0.0 20.0 10.0 2.0 0.0 0.0 9 9.0 2082.0 2.0 16.0 3.0 2.0 0.0 0.0 10 10.0 2206.0 0.0 30.0 1.0 0.0 0.0 0.0 11 11.0 1746.0 3.0 31.0 3.0 0.0 0.0 0.0 12 12.0 1759.0 0.0 38.0 3.0 1.0 0.0 0.0 13 13.0 1790.0 0.0 21.0 0.0 0.0 0.0 0.0 14 14.0 1759.0 0.0 11.0 6.0 0.0 0.0 0.0 15 15.0 1539.0 0.0 20.0 3.0 0.0 0.0 0.0 16 16.0 1891.0 0.0 13.0 6.0 1.0 0.0 0.0 17 17.0 1101.0 0.0 4.0 0.0 1.0 0.0 0.0 18 18.0 2247.0 0.0 16.0 5.0 5.0 0.0 0.0 19 19.0 598.0 2.0 3.0 1.0 1.0 0.0 0.0 20 20.0 1014.0 2.0 1.0 16.0 3.0 0.0 0.0 21 21.0 337.0 1.0 2.0 1.0 1.0 0.0 0.0 22 22.0 258.0 0.0 0.0 0.0 0.0 0.0 0.0 I'm taking a look to Naive Bayes and SVM algorithms but I'm not sure which one fits better with the problem. The variables are independent. Some of them must be present to increase the score, and some of them matches the inverse document frequency, like totalCharCount. Any help? Thanks a lot!
What classification algorithm should I use for document classification with this variables?
0
0
0
258
46,828,251
2017-10-19T10:52:00.000
3
0
0
0
python,django,postgresql,database-migration
46,828,778
1
true
1
0
Make sure you have an __init__.py file under the migrations folder for that particular app. Running manage.py makemigrations always makes a migration file if there are any changes in your models.py. If nothing works and there isn't much data present in your database, the last-resort resolution is to delete all migration files (if any) for that app or for the project; in a terminal type "sudo -su postgres" then type "psql", drop your database and create a new one. Run manage.py makemigrations to check whether a migration file has been created or not, then migrate with manage.py migrate.
1
1
0
When using the Django Admin panel, I am able to add new instances to Users and Groups. The same thing goes for all my apps, except for one. This one app, resultregistration, always gives me a ProgrammingError whenever I try to add a new model. For example, this app has a Competition-model where data about a sport competition should be stored. When I click "+ Add" I am taken to the correct site where I can add a new competition. However, when I press save I get: ProgrammingError at /admin/resultregistration/competition/add/ column "competition_category" of relation "resultregistration_competition" does not exist LINE 1: INSERT INTO "resultregistration_competition" ("competition_c... Of course, I assume something is wrong with the migrations. However, I have run python manage.py makemigrations appname, and python manage.py migrate appname, and that works fine. I get a message that there are "No changes detected", and "No migrations to apply". I have tried solutions posted on SO, but none of them worked. What is the general reason for this error? And does anyone know what could be wrong in this specific case? Could one get this error if something is wrongly defined in the model? Or does it have to be a migration problem? Thank you so much! Any help would be truly appreciated. Also, I am using PostgreSQL, if that helps.
ProgrammingError in Django when adding a new model-instance to the Database, what could be wrong?
1.2
0
0
688
46,833,561
2017-10-19T15:30:00.000
0
0
0
0
python,tcp,send
46,833,696
2
true
0
0
Option 1 Use two threads, one will write to the socket and the second will read from it. This works since sockets are full-duplex (allow bi-directional simultaneous access). Option 2 Use a single thread that manages all keep alives using select.epoll. This way one thread can handle multiple clients. Remember though, that if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own
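A sketch of option 1 on the server side: a sender thread emits the periodic message while the main thread receives keep-alives and refreshes a last-seen timestamp; the port, interval and timeout are assumptions:

```python
import socket
import threading
import time

TIMEOUT = 5      # seconds of client silence before we stop
INTERVAL = 2     # how often the server sends its message

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 9000))
srv.listen(1)
conn, _ = srv.accept()

last_seen = time.monotonic()
stop = threading.Event()

def sender():
    while not stop.is_set():
        if time.monotonic() - last_seen > TIMEOUT:
            stop.set()                   # client went quiet: stop sending
            break
        conn.sendall(b"server tick\n")
        time.sleep(INTERVAL)

threading.Thread(target=sender, daemon=True).start()

while not stop.is_set():
    data = conn.recv(1024)               # keep-alive messages from the client
    if not data:
        break
    last_seen = time.monotonic()
```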
1
1
0
I'm trying to make a TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages on a certain condition: when the client hasn't sent any message for 5 seconds. To be more detailed, the client also sends constant messages, which are all ignored by the server on the same socket as above, and can stop sending them at any unknown time. The messages are, for simplicity, used as alive messages to inform the server that the communication is still relevant. The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" and start receiving messages instead; thus I cannot detect when a new message arrives from the other side and act accordingly. The problem is independent of the programming language, but to be more specific I'm using Python, and cannot access the code of the client. Is there any option for receiving and sending messages on a single socket simultaneously? Thanks!
Detecting when a tcp client is not active for more than 5 seconds
1.2
0
1
53
46,835,221
2017-10-19T17:02:00.000
1
0
1
0
python,windows,anaconda
47,003,648
2
false
0
0
I have the same problem, I rolled back to version 4.4.0. Nothing I tried worked for versions 5.0.0 and 5.0.1
2
2
0
My old anaconda installation behaved strangely (updating didn't work properly but conda list did work fine). So I decided to reinstall Anaconda, that's where the problems started. After uninstalling the old Anaconda version via the Win10 remove programs tool, I downloaded the installer from the website and installed it. After Installation, no navigator was available, conda listsays the command conda is misspelled or missing, and when opening the anaconda prompt is just says anaconda3\scripts\activate.bat is misspelled or missing. The installed Python 3.6.2 version works just fine (confirmed via python --version). I tried reinstalling multiple times, with reboots in between each step, and even on different drives. Nothing worked. If someone has an idea what is causing this strange behavior I'd be very thankful.
Anaconda 5.0.0 Installation on Windows 10 not working properly
0.099668
0
0
989
46,835,221
2017-10-19T17:02:00.000
0
0
1
0
python,windows,anaconda
46,858,975
2
true
0
0
Well after the Windows Fall update a reinstall solved the problem! Seems something was amiss in the windows installation afterall!
2
2
0
My old anaconda installation behaved strangely (updating didn't work properly but conda list did work fine). So I decided to reinstall Anaconda, that's where the problems started. After uninstalling the old Anaconda version via the Win10 remove programs tool, I downloaded the installer from the website and installed it. After Installation, no navigator was available, conda listsays the command conda is misspelled or missing, and when opening the anaconda prompt is just says anaconda3\scripts\activate.bat is misspelled or missing. The installed Python 3.6.2 version works just fine (confirmed via python --version). I tried reinstalling multiple times, with reboots in between each step, and even on different drives. Nothing worked. If someone has an idea what is causing this strange behavior I'd be very thankful.
Anaconda 5.0.0 Installation on Windows 10 not working properly
1.2
0
0
989
46,836,289
2017-10-19T18:09:00.000
0
0
0
0
python,python-2.7,search,zip
46,899,599
1
true
0
0
Use the zipfile module and os.walk(path).
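A sketch of that: walk a root folder for .zip files, look inside each archive for configuration.xml, and print the archive path when it mentions 1.2.0 (the root path is hypothetical; nested zips inside an archive are not handled here):

```python
import os
import zipfile

ROOT = "/path/to/zip/folders"   # hypothetical root directory

for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        if not name.lower().endswith(".zip"):
            continue
        path = os.path.join(dirpath, name)
        with zipfile.ZipFile(path) as zf:
            for member in zf.namelist():
                if os.path.basename(member) == "configuration.xml":
                    if b"1.2.0" in zf.read(member):
                        print(path)
                        break
```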
1
0
0
I have many zip folders, each of which has many subfolders (the subfolders include zip folders, PDF, txt or xml files). Some of them include a file called "configuration.xml", some do not. I want to search each zip folder for the keyword "1.2.0" in configuration.xml, and print the zip folder's name if its subfolders include "1.2.0" in configuration.xml. How can I do this? Thanks, Jennifer.
How to Print zip file name which subfolders includes a specific string?
1.2
0
0
42
46,837,459
2017-10-19T19:27:00.000
1
0
0
0
python,pandas,latex
61,232,784
1
true
0
0
As of version 1.0.0, released on 29 January 2020, to_latex accepts caption and label arguments.
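So on pandas >= 1.0 the script can pass the caption (and a label) directly; a sketch with a stand-in dataframe:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})   # stand-in for the real dataframe

with open("Table1.tex", "w") as tf:
    tf.write(df.to_latex(longtable=True,
                         caption="My table caption",  # rendered above the longtable
                         label="tab:table1"))
```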
1
3
1
I am exporting a table from a pandas script into a .tex and would like to add a caption. with open('Table1.tex','w') as tf: tf.write(df.to_latex(longtable=True)) (I invoke the longtable argument to span the long table over multiple pages.) The Table1.tex file gets imported into a bigger LaTeX document via the \import command. I would like to add a caption to the table, ideally on its top. Do you think there would be any way for doing that? I cannot manually edit the Table1.tex, it gets updated regularly using a Python script.
pandas: create a caption with to_latex?
1.2
0
0
1,703
46,840,384
2017-10-19T23:40:00.000
1
0
0
0
python
46,840,632
1
false
0
0
On Linux you can do this with xinetd. You edit /etc/services to give a name to your port, then add a line to /etc/xinetd.conf to run your server when someone connects to that service. The TCP connection will be provide to the Python script as its standard input and output.
1
0
0
I'm trying to build a Python application that will be run on a specific port, so that when I try to connect to that port the Python application will be run. I'm guessing I have to do that with the socket library, but I'm not very sure about that.
Python server with library socket
0.197375
0
1
46
46,841,117
2017-10-20T01:26:00.000
0
0
0
0
python,pandas,periodicity
46,842,124
1
false
0
0
First, you need to define what output you need, then, deduce how to treat the input to get the desired output. Regarding daily data for the first 10 years, it could be a possible option to keep only one day per week. Sub-sampling does not always mean loosing information, and does not always change the final result. It depends on the nature of the collected data: speed of variations of the data, measurement error, noise. Speed of variations: Refer to Shannon to decide whether no information is lost by sampling once every week instead of every day. Given that for the 2 last year, some people had decided to sample only once every week, it seems to say that they have observed that data does not vary much every day and that a sample every week is enough information. That provides a hint to vote for a final data set that would include one sample every week for the total 12 years. Unless they reduced the sampling for cost reason, making a compromise between accuracy and cost of doing the sampling. Try to find in the literature a what speed your data is expected to vary. Measurement error: If the measurement error contains a small epsilon that is randomly positive or negative, then, taking the average of 7 days to make a "one week" data will be better because it will increase the chances to cancel this variation. Otherwise, it is enough to do a sub-sampling taking only 1 day per week and throwing other days of the week. I would try both methods, averaging, and sub-sampling, and see if the output is significantly different.
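If you go the weekly route, pandas covers both variants in one line each; a sketch assuming the CSV has a date column to use as the index:

```python
import pandas as pd

df = pd.read_csv("weather.csv", parse_dates=["date"], index_col="date")  # hypothetical file

weekly_mean = df.resample("W").mean()   # average the 7 daily samples per week
weekly_last = df.resample("W").last()   # or plain sub-sampling: keep one value per week
```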
1
0
1
I have a data set which contains 12 years of weather data. For first 10 years, the data was recorded per day. For last two years, it is now being recorded per week. I want to use this data in Python Pandas for analysis but I am little lost on how to normalize this for use. My thoughts Convert first 10 years data also into weekly data using averages. Might work but so much data is lost in translation. Weekly data cannot be converted to per day data. Ignore daily data - that is a huge loss Ignore weekly data - I lose more recent data. Any ideas on this?
Data Periodicity - How to normalize?
0
0
0
205
46,841,795
2017-10-20T03:02:00.000
1
0
0
0
python,machine-learning,visualization
46,845,150
1
true
0
0
Feature engineering is more of an art than a technique. It might require domain knowledge, or you could try adding, subtracting, dividing and multiplying different columns to make features out of them and check whether they add value to the model. If you are using Linear Regression then the adjusted R-squared value should increase; in tree models you can look at the feature importances, etc.
1
0
1
I am having a ML language identification project (Python) that requires a multi-class classification model with high dimension feature input. Currently, all I can do to improve accuracy is through trail-and-error. Mindlessly combining available feature extraction algorithms and available ML models and see if I get lucky. I am asking if there is a commonly accepted workflow that find a ML solution systematically. This thought might be naive, but I am thinking if I can somehow visualize those high dimension data and the decision boundaries of my model. Hopefully this visualization can help me to do some tuning. In MATLAB, after training, I can choose any two features among all features and MATLAB will give a decision boundary accordingly. Can I do this in Python? Also, I am looking for some types of graphs that I can use in the presentation to introduce my model and features. What are the most common graphs used in the field? Thank you
Machine Learning, What are the common techniques for feature engineering and presenting the model?
1.2
0
0
211
46,843,096
2017-10-20T05:55:00.000
0
0
0
1
javascript,couchdb,backup,couchdb-futon,couchdb-python
47,079,805
2
false
0
0
Using CouchDB 1.X you can just save the *.couch files corresponding to your databases. Later you'll be able to use these files to recover your data. Using 2.X I have no idea how to achieve that, since each database is split into shards and the _dbs.couch file seems to be needed to restore data. So you can have a complete backup, but not a single-database backup. If someone knows how to do that, I need the answer too :)
1
0
0
I have some couchdb document need to be removed, Is there good way/steps to backup and delete and backout those document from couchdb?
Is there good way to backup and delete and backout the document from couchdb?
0
0
0
225
46,844,090
2017-10-20T07:15:00.000
0
0
0
0
python,button,lambda,tkinter
46,853,395
1
false
0
1
Just an idea, but you could write a function that creates a batch file and then use os.startfile("batch.bat").
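A minimal sketch of that batch-file idea (Windows only, since os.startfile exists only there); the script name and argument values are placeholders, and subprocess.Popen would be the more usual alternative:

```python
import os

def run_with_args(script="myscript.py", args=("foo", "bar")):  # placeholder names
    # Write a one-line batch file that calls the script with the chosen arguments...
    with open("batch.bat", "w") as bat:
        bat.write("python {} {}\n".format(script, " ".join(args)))
    # ...then hand it to Windows to execute.
    os.startfile("batch.bat")
```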
1
0
0
I have a python script that I execute from the command line, and I pass some parameters to it from there as well. Now I'm designing the GUI to execute that script, so I want to execute the script after clicking a button, passing the arguments from a Listbox. I have seen some suggestions that used lambda functions. However, I couldn't find any example of executing a whole different script that basically takes arguments from the command line. I also tried execfile, but it executes the script without taking any parameters. How can I execute the script on a button press and pass the arguments as well?
Tkinter: Passing parameters to a python script executed by button
0
0
0
273
46,844,771
2017-10-20T08:03:00.000
0
0
0
0
python,css,selenium,selenium-webdriver,ui-automation
46,849,694
1
true
1
0
Good question. Try to use another selector, for example a CSS class, or use the XPath contains() method. Example: //div[contains(text(), "checkbox")]. I can help you further if you can provide the source code of the page or of the needed element.
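A hedged sketch of that contains() approach, since the auto-generated gwt-uid-* id changes on every page load; the URL and the visible label "Filters" are taken from the question and may need adjusting, and the find_element_by_xpath call matches the older Selenium bindings used in the question (Selenium 4 would use driver.find_element(By.XPATH, ...)):

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Locate the toggle by its visible text instead of the auto-generated id.
elem = driver.find_element_by_xpath('//span[contains(text(), "Filters")]')
elem.click()
```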
1
1
0
One more problem: I asked a similar question earlier and tried that method, but I am not able to use it in this problem, so please help me. The element's visible text is "Filters". Basically, there is a toggle button, and I want to click on that button to select a device like Desktop, Tablet & Mobile. All checkboxes are selected by default, and now I have to uncheck or deselect a device. To do this, I first have to click on that toggle button, but when I click on it, its id (gwt-uid-598) changes on every refresh. Can you please tell me which method I should follow in this case? I am using the Python code below. # Click on device Filters elem = driver.find_element_by_xpath('//*[@id="gwt-uid-598"]/div/div/span') elem.click() Thanks in advance.
id of xpath is getting changed every time in selenium python 2.7 chrome
1.2
0
1
400
46,848,067
2017-10-20T11:20:00.000
0
0
1
1
python,anaconda
46,848,373
1
false
0
0
After installing Anaconda, the PATH environment variable is usually overridden, so the system now refers to the Anaconda Python interpreter. A quick fix is to correct the PATH environment variable (how to do this depends on the type of OS you're running).
1
0
0
I downloaded Anaconda Python 2.7. I used to work with the regular Python from python.org, but I was asked to work with Anaconda. Anyhow, I have 2 problems: right click -> "Edit with IDLE" does not exist, and I can't run a .py file as a program (e.g. from cmd).
anaconda python running py file
0
0
0
53
46,854,451
2017-10-20T17:36:00.000
22
0
1
0
python,django,git,pip,virtualenv
54,682,545
15
false
1
0
If you are using a virtual environment, just use the following line: pip freeze > requirements.txt. This command creates the requirements file. Or, in a Dockerfile, RUN pip freeze > requirements.txt.
4
33
0
I am setting up the base for a django project, I have cloned a repo and I have just created a virtual environment for the project in the same directory. But when I try to run the command pip install -r requirements.txt in the project directory I get this error: [Errno 2] No such file or directory: 'requirements.txt' I believe I'm just running it in the wrong directory, but I don't really know where I should run it. Do you have any idea where the file could be located?
pip install -r requirements.txt [Errno 2] No such file or directory: 'requirements.txt'
1
0
0
123,318
46,854,451
2017-10-20T17:36:00.000
0
0
1
0
python,django,git,pip,virtualenv
61,945,907
15
false
1
0
Check if you have requirements.txt file in the directory. and then run the following command. pip install -r requirements.txt
4
33
0
I am setting up the base for a django project, I have cloned a repo and I have just created a virtual environment for the project in the same directory. But when I try to run the command pip install -r requirements.txt in the project directory I get this error: [Errno 2] No such file or directory: 'requirements.txt' I believe I'm just running it in the wrong directory, but I don't really know where I should run it. Do you have any idea where the file could be located?
pip install -r requirements.txt [Errno 2] No such file or directory: 'requirements.txt'
0
0
0
123,318
46,854,451
2017-10-20T17:36:00.000
0
0
1
0
python,django,git,pip,virtualenv
63,977,270
15
false
1
0
Make sure the requirements.txt file is in the same folder where you are installing it using pip install -r requirements.txt
4
33
0
I am setting up the base for a django project, I have cloned a repo and I have just created a virtual environment for the project in the same directory. But when I try to run the command pip install -r requirements.txt in the project directory I get this error: [Errno 2] No such file or directory: 'requirements.txt' I believe I'm just running it in the wrong directory, but I don't really know where I should run it. Do you have any idea where the file could be located?
pip install -r requirements.txt [Errno 2] No such file or directory: 'requirements.txt'
0
0
0
123,318
46,854,451
2017-10-20T17:36:00.000
0
0
1
0
python,django,git,pip,virtualenv
65,883,165
15
false
1
0
I had this problem too, 3 years too late however: move the requirements file into the Downloads folder and then try it again.
4
33
0
I am setting up the base for a django project, I have cloned a repo and I have just created a virtual environment for the project in the same directory. But when I try to run the command pip install -r requirements.txt in the project directory I get this error: [Errno 2] No such file or directory: 'requirements.txt' I believe I'm just running it in the wrong directory, but I don't really know where I should run it. Do you have any idea where the file could be located?
pip install -r requirements.txt [Errno 2] No such file or directory: 'requirements.txt'
0
0
0
123,318
46,854,766
2017-10-20T17:58:00.000
0
0
0
1
python,luigi
47,312,020
1
false
0
0
I can tell you that my workflow works with 10K log files every day without any glitches. The main point is that I created one task to work with each file.
1
0
0
My first gut reaction is that Luigi isn't suited for this sort of thing, but I would like the "pipeline" functionality and everything keeps pointing me back to Luigi/Airflow. I can't use Airflow as it is a Windows environment. My use-case: So currently from my “source” folder we have 20 or so machines that produce XML data. Over time, some process puts these files into a folder continually on each machine (its log data). On any given day these machines could have 0 files in the folder or it could have 100k+ files (each machine). Eventually someone will go delete all of the files. One part of this process is to to watch all of these directories on all of these machines, and copy the files down to an archive folder if they are new. My current process makes a listing of all the files on each machine every 5 minutes, grabs a listing of the files and loops over the source checking if the file is available at the destination. Copies if it doesn't exist at destination, skips if it does. It seems that Luigi wants to work with only "a" (singular) file in its output and/or target. The issue is, I could have 1 new file, or several thousand files that shows up. This same exact issue happens through the entire process. As the next step in my pipeline is to add the files and its metadata information (size, filename, directory location) to a db record. At that point another process reads all of the metadata record rows and puts them into a content extracted table of the XML log data. Is Luigi even suited for something like this? Luigi seems to want to deal with one thing, do some work on it, and then emit that information out to another single file.
Is Luigi suited for building a pipeline around lots of small files (100k+)
0
0
0
252
46,855,255
2017-10-20T18:32:00.000
0
0
1
0
python,parsing,dataframe,web
46,855,571
1
false
0
0
Unless you need to do that in a hurry, you could just chip off letters from the beginning or the end of the string, and check if it's a known word; if it is, cut it off and repeat. With e.g. 50k words 20 letters each, at worst you'll do 1M lookups. With a lookup taking e.g. 5ms (hitting an HDD every time), it will take 5000 seconds (about 1.5 hours), shorter than you'd spend coming up with a better algorithm.
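A minimal sketch of that idea with an in-memory word set (far faster than per-lookup disk hits); it generalizes the chip-off approach slightly by checking every substring rather than only prefixes and suffixes, and the tiny word set here is a stand-in for a real word list such as /usr/share/dict/words:

```python
# Assume `english_words` is a set loaded from any word list of your choice.
english_words = {"la", "time", "times", "sun", "post"}

def contains_english_word(domain, words=english_words, min_len=3):
    """Return True if any substring of the domain of length >= min_len is a known word."""
    n = len(domain)
    for start in range(n):
        for end in range(start + min_len, n + 1):
            if domain[start:end] in words:
                return True
    return False

print(contains_english_word("latimes"))  # True, via "time"/"times"
print(contains_english_word("xqzv"))     # False
```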
1
0
0
I am trying to parse some web domains (tens of thousands) to see if they contain any English words. It is easy for me to parse the domains to grab the main part of the domain with tldextract and then I tried to use enchant to see if they exist in the English dictionary. The problem is I do not know how to split the domains in to multiple words to check, i.e. latimes returns as False but times would return as True. Does anyone know a clever way to do look if there is an english word contained at all in the strings? Thanks!
How to find if english words exist in string
0
0
0
87
46,855,278
2017-10-20T18:33:00.000
1
0
1
0
python,shell
46,856,845
1
false
0
0
Autocomplete and history navigation in the Python shell use the readline library (and module). You can check its availability with import readline. Install the readline development library with sudo apt-get install libreadline-dev, then recompile Python. Thanks to @user2357112!
1
0
0
I've installed python 3.6.2 from source on Linux Mint 17. Also I've python 3.4.3 with OS installation. Just noticed that autocompete on TAB in interactive shell works only in 3.4.3. In 3.6.2 it just inserts tab character. Any solutions?
Python3.6.2 shell on TAB pressed inserts tab character instead of autocomplete. How to fix it?
0.197375
0
0
72
46,860,616
2017-10-21T06:18:00.000
0
0
1
0
java,python
46,861,207
3
true
0
0
Use Kivy, but it won't be as good as Android Studio. It is an open-source framework. The app will work fine, but it won't behave like a native Android app.
1
0
0
I am pursuing 7th semester btech computer science. I just want to develop a software for Mobile application and I would like to develop that software in python language instead of java. Can I proceed the software in python language?
Can I use software made up of Python language for mobile application?
1.2
0
0
56
46,860,706
2017-10-21T06:34:00.000
0
0
1
0
python,memory,time,operating-system,cpu
46,860,767
1
true
0
0
The efficient way is using time.sleep. The second method just puts the process itself to sleep (idle) for 1 second; it doesn't use any resources beyond its own. The first method spawns another process, which takes more memory, CPU, etc., and waits for it to end (that is os.system's behavior). It just happens that the other process is timeout, so the result seems the same.
1
0
0
What is the difference between os.system("timeout 1") and time.sleep(1) in Python? I know the first one will call out the command line and let it do the timeout, but not sure how the second one make the system idle. Also, which one can save more CPU power or make less memory occupied? Thanks!!
What is the difference between os.system("timeout 1") and time.sleep(1)? Python
1.2
0
0
372
46,860,891
2017-10-21T07:05:00.000
0
0
0
0
python,django,django-admin,makemessages
46,861,958
1
false
1
0
I believe it should work if you set everything up correctly. Take a look at this part of the docs: In addition, manage.py is automatically created in each Django project. manage.py does the same thing as django-admin but takes care of a few things for you: It puts your project's package on sys.path. It sets the DJANGO_SETTINGS_MODULE environment variable so that it points to your project's settings.py file. Carefully check these two points. Since your manage.py works as expected, you have already added your app to INSTALLED_APPS (after that, Django can find your app and override the default management command).
1
0
0
I'm using Django1.11.5 and I created makemessages.py file in "my-app/management/commands/" directory to customise makemessages command. And I made it to execute this command by running "python ../manage.py makemessages" from my-app directory. But I want to execute by "django-admin makemessages -l ja". (Running "django-admin makemessages -l ja" just executes default makemessages command) Is there any way to execute this customised command by running "django-admin makemessages -l ja"?
Is there any way to execute custom makemessages command by running "django-admin makemessages -l ja"?
0
0
0
401
46,864,270
2017-10-21T14:24:00.000
4
1
0
0
python,python-3.x,scikit-learn,atom-editor,hydrogen
48,893,827
2
false
0
0
I don't have high enough reputation to comment, so my barebones answer will have to be put here. I think your problem has to do with where the kernel is starting. In the Hydrogen settings, look for the option 'Directory to start kernel in'. The default is to always start in the directory in which Hydrogen was first invoked. If you have installed modules in a different working directory, then they will not be found, unless you change this option to 'current directory of the file' (restart required). You can check sys.path to see where the kernel is looking for modules. If all else fails, you can manually move the installed packages to the 'site-packages' folder, whose location is revealed by sys.path. I thought pip would put the packages in the right place by default, but maybe not - especially if you have virtual environments set up. You can use the command pip show <package name> to get the path to which pip has installed the package in question.
1
1
0
I can run the code but trying to use the Hydrogen package in Atom I have problems importing some (not all) modules and I do not why. I do use Hydrogen with Python3.6 and i did install all needed modules with pip3. ImportErrorTraceback (most recent call last) in () ----> 1 import sklearn ImportError: No module named sklearn
Importing Modules with Hydrogen in Atom
0.379949
0
0
2,045
46,864,368
2017-10-21T14:34:00.000
1
1
1
0
python,c++,constructor
46,864,702
3
true
0
0
Yes, Python's __init__ is analogous to C++'s constructor. Both are typically where non-static data members are initialized. In both languages, these functions take the in-creation object as the first argument, explicit and by convention named self in Python and implicit and by language named this in C++. In both languages, these functions can return nothing. One notable difference between the languages is that in Python base-class __init__ must be called explicitly from an inherited class __init__ and in C++ it is implicit and automatic. C++ also has ways to declare data member initializers outside the body of the constructor, both by member initializer lists and non-static data member initializers. C++ will also generate a default constructor for you in some circumstances. Python's __new__ is analogous to C++'s class-level operator new. Both are static class functions which must return a value for the creation to proceed. In C++, that something is a pointer to memory and in Python it is an uninitialized value of the class type being created. Python's __del__ has no direct analogue in C++. It is an object finalizer, which exist also in other garbage collected languages like Java. It is not called at a lexically predetermined time, but the runtime calls it when it is time to deallocate the object. __exit__ plays a role similar to C++'s destructor, in that it can provide for deterministic cleanup and a lexically predetermined point. In C++, this tends to be done through the C++ destructor of an RAII type. In Python, the same object can have __enter__ and __exit__ called multiple times. In C++, that would be accomplished with the constructor and destructor of a separate RAII resource holding type. For example, in Python given an instance lock of a mutual exclusion lock type, one can say with lock: to introduce a critical section. In C++, we create an instance of a different type taking the lock as a parameter std::lock_guard g{lock} to accomplish the same thing. The Python __enter__ and __exit__ calls map to the constructor and destructor of the C++ RAII type.
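A small illustrative sketch of the distinction drawn above, contrasting __init__ with __enter__/__exit__ on the same hypothetical class (the class and attribute names are made up for the example):

```python
class ManagedResource:
    def __init__(self, name):
        # Runs once when the object is created (rough analogue of a C++ constructor body).
        self.name = name
        self.open = False

    def __enter__(self):
        # Runs at the start of each `with` block (like acquiring in an RAII type).
        self.open = True
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs at the end of the `with` block, exception or not (like an RAII destructor).
        self.open = False
        return False  # do not swallow exceptions

with ManagedResource("lock") as r:
    print(r.name, r.open)  # lock True
print(r.open)              # False
```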
2
1
0
I have worked with Python for about 4 years and have recently started learning C++. In C++ you create a constructor method for each class I I was wondering if it is correct to think that this is equivalent to the __init__(self) function in Python? Are there any notable differences? Same question for a C++ destructor method vs. Python _exit__(self)
Python __init__ Compared to C++ Constructor
1.2
0
0
1,097
46,864,368
2017-10-21T14:34:00.000
1
1
1
0
python,c++,constructor
46,864,771
3
false
0
0
The best you can say is that __init__ and a C++ constructor are called at roughly the same point in the lifetime of a new object, and that __del__ and a C++ destructor are also called near the end of the lifetime of an object. The semantics, however, are markedly different, and the execution model of each language makes further comparison more difficult. Suffice it to say that __init__ is used to initialize an object after it has been created. __del__ is like a destructor that may be called at some unspecified point in time after the last reference to an object goes away, and __exit__ is more like a callback invoked at the end of a with statement, whether or not the object's reference count reaches zero.
2
1
0
I have worked with Python for about 4 years and have recently started learning C++. In C++ you create a constructor method for each class I I was wondering if it is correct to think that this is equivalent to the __init__(self) function in Python? Are there any notable differences? Same question for a C++ destructor method vs. Python _exit__(self)
Python __init__ Compared to C++ Constructor
0.066568
0
0
1,097
46,864,499
2017-10-21T14:48:00.000
1
0
1
0
python-3.x,amazon-web-services,aws-lambda
70,804,457
3
false
0
0
Came across the same issue and found the solution. What you want is remove_permission() on the lambda client
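A hedged boto3 sketch of that idea: remove_permission drops the resource policy statement that lets the CloudWatch Events rule invoke the function, and the optional events-client calls detach and delete the rule itself. The function name, rule name, statement id and target id are all placeholders that must match whatever was used when the trigger was created.

```python
import boto3

lambda_client = boto3.client("lambda")
events_client = boto3.client("events")

FUNCTION_NAME = "my-function"            # placeholder
RULE_NAME = "my-schedule-rule"           # placeholder CloudWatch Events rule
STATEMENT_ID = "my-events-permission"    # the StatementId used when the permission was added

# Drop the permission that allows the rule to invoke the function.
lambda_client.remove_permission(FunctionName=FUNCTION_NAME, StatementId=STATEMENT_ID)

# Optionally detach the function from the rule and delete the rule itself.
events_client.remove_targets(Rule=RULE_NAME, Ids=["1"])  # Ids must match the target ids used
events_client.delete_rule(Name=RULE_NAME)
```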
1
3
0
I have a lambda function and for that lambda function my cloudwatch event is a trigger on it... at the end of the lambda function i need to delete the trigger (cloud watch event ) on that lambda function programatically using python . how can i do that ? is there any python library to do that?
Delete trigger on a AWS Lambda function in python
0.066568
0
0
2,408
46,864,680
2017-10-21T15:08:00.000
0
0
1
1
python,windows-10,virtualenvwrapper
47,361,363
2
false
0
0
mkproject is included in v1.2.4 of virtualenvwrapper-win. There is no need to call a setup script like you would do in linux. The linux version implements virtualenvwrapper as shell functions, while virtualenvwrapper-win implements them as .bat files located in your python's Scripts directory.
1
1
0
Why is the virtualenvwrapper mkproject command is not working in Windows 10? And what file do you edit for shell startup in Windows 10 to setup virtualenvwrapper?
virtualenvwrapper mkproject and shell startup in windows issue?
0
0
0
418
46,864,915
2017-10-21T15:35:00.000
1
0
0
0
python-3.x,opencv,ffmpeg,opencv-python
61,983,790
4
false
0
0
You can use pygame for audio. You need to initialize the pygame.mixer module, and in the loop add pygame.mixer.music.play(). For that you will also need to choose an audio file. However, I have found a better idea! You can use the webbrowser module for playing videos (and because the video plays in the browser, you can hear the sound): import webbrowser webbrowser.open("video.mp4")
1
8
1
I use python cv2 module to join jpg frames into video, but I can't add audio to it. Is it possible to add audio to video in python without ffmpeg? P.S. Sorry for my poor English
Python add audio to video opencv
0.049958
0
0
19,102
46,865,570
2017-10-21T16:43:00.000
0
0
1
0
python
46,869,856
1
false
0
0
I agree with Zindarod on multiprocessing, especially during the proof of concept phase. If you hope to hook this up at scale then I'd recommend a different system architecture. Each raspberry pi is a client which communicates with a server. This will help logically abstract tasks and make it easier to integrate with celery or other asynch task management tools.
1
0
0
I am designing a traffic light signal based on image processing. I get the traffic congestion as a numeric value. I am using a Raspberry Pi, and I have to design a working model. Now the problem is that I use time.sleep() to generate delays in the switching of lights, but during this time I can't do any other processing. So I want to write the numeric values somewhere and access them in a separate script.
I want to simultaneously write and read numeric values in two different python scripts
0
0
0
59
46,867,161
2017-10-21T19:25:00.000
1
0
0
1
python,google-app-engine
46,868,601
1
true
1
0
You must redeploy the service. App Engine isn't like a standard hosting site where you FTP single files, rather you upload a service that becomes containerized that can scale out to run on many instances. For a small site, this might feel weird, but consider a site serving huge amounts of traffic that might have hundreds of instances of code running that is automatically load balanced. How would you replace that single file in that situation across all your instances? So you upload a new version of a service and then you can migrate traffic to the new version either immediately or ramped up. What you might consider an annoyance is part of the tradeoff that makes App Engine hugely powerful in not having to worry about how your app scales or is networked.
1
0
0
If I'm just updating, say, my main.py file, is there a better way to update the app then running gcloud app deploy, which takes several minutes? I wouldn't think I need to completely blow up and rebuild the environment if I'm just updating one file.
How can I update one (or a couple) files in a Google App Engine Flask App?
1.2
0
0
50
46,867,272
2017-10-21T19:36:00.000
1
0
1
0
python
46,867,396
4
false
0
0
You can use the len() function by applying it to the inner sublists. Given movies = [[list1], [list2]], print(len(movies[0])) prints the length of the first sublist and print(len(movies[1])) prints the length of the second sublist.
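A runnable sketch of the same idea, with example data made up for illustration, plus a one-liner that counts the items across all sublists (which seems to be what the question is after):

```python
movies = [["The Holy Grail", 1975], ["Life of Brian", 1979, "comedy"]]  # example data

print(len(movies))                       # 2 - number of sublists
print(len(movies[0]))                    # 2 - length of the first sublist
print(len(movies[1]))                    # 3 - length of the second sublist
print(sum(len(sub) for sub in movies))   # 5 - items across all sublists
```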
1
0
0
I have a list called movies with two sublists embedded it. Do the sublists have names too? I want to use len() BIF to measure all items in the list and sublists, how do I do that?
Python len function on list with embedded sublists and strings
0.049958
0
0
65
46,869,761
2017-10-22T01:46:00.000
0
0
0
0
multithreading,psycopg2,python-3.6
46,941,465
1
false
0
0
Each thread should use a distinct connection to avoid problems with inconsistent states and to make debugging easier. On web servers, this is typically achieved by using a pooled connection. Each thread (http request processor) picks up a connection from the pool when it needs it and then returns it back to the pool when done. In your case, you can just create a new connection for each thread and pass it to the thread which can close it when done.
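A minimal sketch of the one-connection-per-thread pattern described above; the DSN, table name and row values are placeholders:

```python
import threading
import psycopg2

DSN = "dbname=mydb user=me password=secret host=localhost"  # placeholder credentials

def worker(rows):
    # Each thread opens (and closes) its own connection and cursor.
    conn = psycopg2.connect(DSN)
    try:
        with conn, conn.cursor() as cur:  # the connection context manager commits on success
            cur.executemany("INSERT INTO my_table (value) VALUES (%s)", rows)
    finally:
        conn.close()

threads = [threading.Thread(target=worker, args=([(i,), (i + 1,)],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```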
1
3
0
I have different threads running which all write to the same database (though not the same table). Currently I have it setup that I create a connection, and pass that to each thread, which then creates it own cursor for writing. I haven't implementing the writing to db part yet, but am wondering if not every thread needs it's own connection? Thanks!
using python psycopg2: multiple cursors (1 per thread) on same connection
0
1
0
2,665
46,871,459
2017-10-22T07:14:00.000
1
0
0
0
javascript,java,python,asp.net,angularjs
46,871,523
2
false
1
0
First of all, if you don't trust one application to not clobber the log file of another one, then you have a really serious problem. `Cos the applications could also do other more harmful things. If the applications are believed to be insecure or untrustworthy, the most secure answer is to not run them at all. If you are concerned that applications might accidentally write to the same log file1, just configure them to use different log file names or different log file directories2. If you are simply trying to secure the logging, then one solution is to do your logging via an independent (secure) logging service such as Linux syslog. Another approach is to create separate accounts for running each of the applications, and make sure that you set the file and directory access so that one application doesn't have permission to modify another's log file. 1 - Actually applications can safely write to the same file if they all open the logfile in "append" mode. Log file rotation could be problematic though. 2 - You've probably already thought of that.
2
0
0
I have a 5 applications(programmed in 5 different languages) deployed in one common web server and these applications are written to work on a particular task and create a log file for any action. How can we achieve a security of log file by not being over written by other applications?
Applications / code security
0.099668
0
0
35
46,871,459
2017-10-22T07:14:00.000
0
0
0
0
javascript,java,python,asp.net,angularjs
46,872,119
2
false
1
0
If your logs are stored in a common log folder, then a simple solution would be to use proper naming for the logs of the different applications, e.g. pythonApp_log_timestamp, aspNetApp_log_timestamp. This way applications will write to their designated logs and not overwrite other applications' logs, and it works irrespective of your hosting environment, Linux or Windows. You could also limit the file size of each of the log files that your applications create; this way you can ensure that applications create a new file once the specified file size has been reached and not too much data is dumped into a single file, which will help in debugging or verifying the logs later. I use this method in my application and it helps me find the log from a specific duration without searching through the logs too much.
2
0
0
I have a 5 applications(programmed in 5 different languages) deployed in one common web server and these applications are written to work on a particular task and create a log file for any action. How can we achieve a security of log file by not being over written by other applications?
Applications / code security
0
0
0
35
46,871,609
2017-10-22T07:34:00.000
2
1
0
0
rabbitmq,messaging,microservices,pika,python-pika
56,149,895
1
false
0
0
Yes, of course. You can achieve that in many ways (via the same connection, different connections, the same channel, different channels, etc.). What I do when I have implemented this in the past is: I create my connection, get the channel and set up my consumer with its delegate (function). When my consume-message function is called I get the channel parameter that comes with it, which I subsequently use to publish the next message to a different queue. If you don't want to use the same channel, you can simply set up another one.
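A hedged sketch of that consume-then-publish pattern, written against the pika 1.x API and using BlockingConnection for brevity (the question uses SelectConnection, but the callback structure is the same); the queue and exchange names are placeholders, and body.upper() stands in for the real processing step:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel_in = connection.channel()    # channel A: consume
channel_out = connection.channel()   # channel B: publish

channel_in.queue_declare(queue="reports")  # assumed bound to exchange E1 elsewhere
channel_out.exchange_declare(exchange="e2", exchange_type="fanout")

def on_message(ch, method, properties, body):
    processed = body.upper()  # stand-in for the real processing
    channel_out.basic_publish(exchange="e2", routing_key="", body=processed)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel_in.basic_consume(queue="reports", on_message_callback=on_message)
channel_in.start_consuming()
```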
1
3
0
Could someone with Pika experience give me a quick yes/no response as to whether the following functionality is possible, or whether my thinking that it is indicates a lack of conceptual understanding of Pika. My desired functionality: Python service (single threaded script) has one connection to my RabbitMQ broker using the SelectConnection adapter. That connection has two channels. Using one channel, A, the service declares a queue and binds to some exchange E1. The other channel, B, is used to declares some other exchange, E2. The service consumes messages from the queue via A. It does some small processing of those messages, [possibly carries out CRUD operates through its connection to a MongoDB instance,] then publishes a message to exchange E2 via B. I have read the Pika docs thoroughly, and have not found enough information to understand whether this is doable. To put it simply - can a single python script both publish and consume via one selectconnection adapter connection?
Can a Pika RabbitMQ client service both consume and publish messages?
0.379949
0
0
1,125
46,871,665
2017-10-22T07:42:00.000
0
0
1
0
matlab,python-3.x,memory,variable-assignment
46,873,435
1
false
0
0
You can use indexing such as x = func()[0] for the first return value, or x = func()[1] for the second. Ranges like x = func()[5:10] also work.
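For completeness, the usual Python convention closest to MATLAB's ~ placeholder is unpacking into the throwaway name _ (the value is still produced, it is just bound to a name you ignore); a tiny example with a made-up function:

```python
def func():
    return 1, 2

_, b = func()   # conventional "don't care" name for the unwanted return value
b = func()[1]   # or index straight into the returned tuple, as suggested above
print(b)        # 2
```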
1
0
0
I want to make an assingment without allocating memory for particular elements. In MATLAB there is the ~ symbol, which acts as a placeholder and allows for the the assignment. For example, (~,b) = deal(1,2) can be executed without allocating any memory in the a variable ~.
python assignment placeholder for no memory allocation
0
0
0
68
46,871,792
2017-10-22T07:59:00.000
0
1
0
0
python,raspberry-pi,sandbox,iot,kaa
48,001,019
1
false
0
0
Mashid. From what I know, to use a KAA server, you should utilize the SDK obtained when you create a new application on the KAA server. This SDK also functions as an API key that will connect the device with a KAA server (on a KAA server named with Application Token). The platforms provided for using this SDK are C, C ++, Java, Android, and Objective C. There is currently no SDK for the Python platform.
1
0
0
I have some sensor nodes. they are connected to a Raspberry Pi 2 and send data on it. the data on Raspberry Pi is sending the data to Thingspeak.com and it shows the data from sensor nodes. now I am developing a Kaa server and wanna see my data (from Raspberry Pi) on Kaa. is there any chance to connect the current programmed Raspberry Pi(in Python) to Kaa? Many thanks, Shid
how to connect a programmed raspberry pi to a Kaa platform?
0
0
0
495
46,878,736
2017-10-22T20:16:00.000
1
1
0
0
python,c++,ide
46,878,851
1
true
0
0
check that in dev-C++ tools > compiler options > directories > c includes and c++ includes have the path to where your Python.h is.
1
1
0
I'm using Dev C++. Include Python.h doesnt work, and IDE states it cant find the file or directory. I can pull up the Python.h C file, so I know I have it. How do I connect two-and-two? I imagine I have to tell my IDE where the file path is, but how would I do that?
How to get Dev C++ to find Python.h
1.2
0
0
1,269
46,879,611
2017-10-22T22:06:00.000
0
0
0
0
python,postgresql,sqlalchemy,flask-sqlalchemy
46,879,648
2
false
0
0
For simple INSERTs, yes, you can safely have four producers adding rows. I'm assuming you don't have long running queries, as consistent reads can require allocating an interesting amount of log space if inserts keep happening during an hour-long JOIN. if I am inserting large amounts of data and one insert causes another to timeout? You suggest that a timeout could arise from multiple competing INSERTs, but I don't understand what might produce that. I don't believe that is a problem you have thus far observed. Readers and writers can contend for locks, but independent INSERTing processes are quite safe. If four processes were doing BEGIN, UPDATE 1, ... UPDATE N, COMMIT, then respecting a global order would matter, but your use case has the advantage of being very simple.
2
2
0
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra column with the producer id to the tables. At the moment I have 4 python processes running - one per producer. A process collects a report and inserts it in the DB. My very simple code has been running without crashing for the past few months. The current design makes it impossible for 2 processes to want to insert data into the DB at the same time. If I made the DB changes (single schema with single table) several processes might want to insert data simultaneously. For the moment, I will exclude combining the processes into a single one, please assume I don't do this. I am unsure if I need to worry about any special code to handle the case of more than one process inserting data into the DB? I am using python3 + SQLAlchemy + Flask. I would imagine the ACID properties of a DB should automatically handle the case of 2 or more processes wanting to insert data simultaneously (data in report is small and insertion will take less than 1s). Can I combine the schemas without worrying about processes insert collisions?
Can I safely combine my schemas
0
1
0
80
46,879,611
2017-10-22T22:06:00.000
1
0
0
0
python,postgresql,sqlalchemy,flask-sqlalchemy
46,879,647
2
true
0
0
This won't be a problem if you are using a proper db such as Postgres or MySQL. They are designed to handle this. If you are using sqlite then it could break.
2
2
0
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra column with the producer id to the tables. At the moment I have 4 python processes running - one per producer. A process collects a report and inserts it in the DB. My very simple code has been running without crashing for the past few months. The current design makes it impossible for 2 processes to want to insert data into the DB at the same time. If I made the DB changes (single schema with single table) several processes might want to insert data simultaneously. For the moment, I will exclude combining the processes into a single one, please assume I don't do this. I am unsure if I need to worry about any special code to handle the case of more than one process inserting data into the DB? I am using python3 + SQLAlchemy + Flask. I would imagine the ACID properties of a DB should automatically handle the case of 2 or more processes wanting to insert data simultaneously (data in report is small and insertion will take less than 1s). Can I combine the schemas without worrying about processes insert collisions?
Can I safely combine my schemas
1.2
1
0
80
46,885,005
2017-10-23T08:33:00.000
2
0
1
0
pip,anaconda,python-3.6
46,904,976
1
true
0
0
I have a solution: 1. First open Anaconda Navigator and install 'setuptools'. 2. Open the Anaconda prompt and run 'easy_install pip'. After that, pip works properly.
1
0
0
I downloaded Anaconda3 and create new enviroment env_python36_for_dl , when i open terminal, the error appear as Traceback (most recent call last): File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\Scripts\pip-script.py", line 3, in <module> import pip File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\__init__.py", line 26, in <module> from pip.utils import get_installed_distributions, get_prog File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\utils\__init__.py", line 27, in <module> from pip._vendor import pkg_resources File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3018, in <module> @_call_aside File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3004, in _call_aside f(*args, **kwargs) File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 3046, in _initialize_master_working_set dist.activate(replace=False) File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2578, in activate declare_namespace(pkg) File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2152, in declare_namespace _handle_ns(packageName, path_item) File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2092, in _handle_ns _rebuild_mod_path(path, packageName, module) File "C:\Users\Yurak\AppData\Local\conda\conda\envs\env_python36_for_dl\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2121, in _rebuild_mod_path orig_path.sort(key=position_in_sys_path) AttributeError: '_NamespacePath' object has no attribute 'sort' I had set up pip and setuptools with Anaconda3. If i open Anaconda3 root environment in terminal , pip work properly. I don't know the new environment differents between root environment.
Anaconda3 new environments pip `AttributeError: '_NamespacePath' object has no attribute 'sort'`
1.2
0
0
477
46,886,704
2017-10-23T10:05:00.000
4
0
1
0
python,python-2.7,python-module,pyc
46,929,991
2
false
0
0
Even as compiled bytecode, names in modules are still strings. As long as the interfaces of the modules are compatible the code will still work with different module versions.
1
2
0
Suppose I compile some python files (.py to .pyc / .pyo) containing code using modules like NumPy, SciPy, MatPlotLib. If I execute them on another configuration (i.e. the client), is it required that versions of modules are the same ? Or have I only to be in the range of compatible versions ?
Compatibility of versions for compiled Python code using modules
0.379949
0
0
233
46,886,770
2017-10-23T10:08:00.000
1
0
0
0
python,tensorflow,neural-network
46,887,660
2
false
0
0
A simple option would be to replicate W. If the original W is, let's say, p x q, you can just do l = tf.matmul(X, tf.tile(W, (k, 1))).
1
0
1
I am trying to build a neural network with shared input weights. Given pk inputs in the form X=[x_1, ..., x_p, v_1,...,v_p,z1,...,z_p,...] and a weight matrix w of shape (p, layer1_size), I want the the first layer to be defined as sum(w, x_.) + sum(w, v_.) + .... In other words the input and the first layer should be fully connected where weights are shared across the different groups of inputs. l = tf.matmul(X, W) where each row of W must have a structure like: (w1, ... ,wp, w1, ..., wp, ...) How can I do that in tensorflow?
Tensorflow share weights across input placeholder
0.099668
0
0
547
46,887,360
2017-10-23T10:38:00.000
0
0
0
1
python,ping,command-window,stream-operators
46,953,793
1
true
0
0
It works with os.system('powershell "ping 127.0.0.1 -t | tee ping.txt"')
1
0
0
Using os.system('ping 127.0.0.1 -t >> new.txt'). I am able to get the ping result in new.txt document. How to get the ping result in command window and text file at the same time in case of stream output like this...?
How to get stream output in command window and text document simultaneously
1.2
0
0
62
46,887,939
2017-10-23T11:10:00.000
0
0
1
0
python,linux,bash,shell,anaconda
47,472,416
1
true
0
0
Having experimented with both approaches over the last couple of weeks I have settled on the first option: adding the Anaconda binary path to the PATH variable in user .bashrc files. I have found the benefits of this method to be: Regardless of whether the Anaconda activate script is sourced in the user's .bashrc, when the user switches to one of their virtual environments and then runs source deactivate to deactivate the current environment, they will always end up outside of the Anaconda environment. In this case, unless the Anaconda binary path has also been explicitly added to their PATH variable, the deactivate script will remove the Anaconda binary path from the PATH variable. The Anaconda activate script performs a bunch of actions which may be an unnecessary overhead when performing non-Python related actions in one's shell, if it is sourced every time a new shell is started. If one sets the Anaconda binaries to be in their PATH variable then it's simple enough to run source activate to enable the "root" Anaconda environment. (With either solution, something similar has to be done anyway if one is commonly making use of virtual environments.)
1
4
0
I would like to move to Anaconda Python as my default Python environment. In order to use Anaconda over the system Python I have been looking at the following two options: Adding the Anaconda bin path to my bash PATH variable (in my .bashrc) so that the Anaconda binaries take precedence over those elsewhere on the system. Sourcing the Anaconda activate script in my bash shell (again, automated by adding it to my .bashrc). As someone who is relatively new to Anaconda Python, I'm not sure which of the two approaches is generally considered as better. Therefor I was wondering if there is any general guidance in this regard? As far as I can work out the main difference between the two approaches is that the activate script sets up a number of additional shell environment variables such as: CONDA_PREFIX, PS1, CONDA_PS1_BACKUP and CONDA_DEFAULT_ENV.
Sourcing Anaconda activate script vs. adding the Anaconda bin directory to PATH
1.2
0
0
553
46,893,461
2017-10-23T15:49:00.000
1
0
1
0
python,import,pycharm
46,893,773
4
false
0
0
File >> Settings >> Project interpreter. You should see a list of currently installed packages/libraries. If the module is not specified in there click the plus sign and look for your module in there. Also make sure you specify it correctly when you import it.
4
2
0
I can't import standard math module in pycharm, I'm getting an error: "No module named math", from python shell I can import it, what can cause that?
Pycharm cannot import math module
0.049958
0
0
21,831
46,893,461
2017-10-23T15:49:00.000
3
0
1
0
python,import,pycharm
46,991,033
4
true
0
0
Check that you are pointing to the correct python file within Settings > Project > Project Interpreter. When I've had this problem in the past my interpreter was pointing at python3.6 and not python within the bin folder of my venv. I simply dropped the interpreter and added it again pointing to the venv-name/bin/python
4
2
0
I can't import standard math module in pycharm, I'm getting an error: "No module named math", from python shell I can import it, what can cause that?
Pycharm cannot import math module
1.2
0
0
21,831
46,893,461
2017-10-23T15:49:00.000
0
0
1
0
python,import,pycharm
47,998,476
4
false
0
0
I had a similar issue and was able to fix it by creating a new Python system interpreter. Project Interpreter -> Add Local -> System Interpreter -> /usr/local/bin/python3
4
2
0
I can't import standard math module in pycharm, I'm getting an error: "No module named math", from python shell I can import it, what can cause that?
Pycharm cannot import math module
0
0
0
21,831
46,893,461
2017-10-23T15:49:00.000
0
0
1
0
python,import,pycharm
63,342,979
4
false
0
0
Use import math; then, instead of print(sqrt(25)), use print(math.sqrt(25)) (sqrt is used here as an example only).
4
2
0
I can't import standard math module in pycharm, I'm getting an error: "No module named math", from python shell I can import it, what can cause that?
Pycharm cannot import math module
0
0
0
21,831
46,894,762
2017-10-23T17:02:00.000
1
0
1
0
python,tkinter,pycharm,tk,python-3.6
51,811,625
6
false
0
1
Install the 'future' package, then use import tkinter.
4
2
0
I've been playing around with tkinter a lot at my school using pycharm, but when i go home and use the pycharm on my pc, the module is not installed for tkinter. I can't seem to find how to install it. I am running python 3.6. I do not have python running on cmd.exe. All help is appreciated, thanks.
How to install tkinter onto pycharm
0.033321
0
0
15,283
46,894,762
2017-10-23T17:02:00.000
0
0
1
0
python,tkinter,pycharm,tk,python-3.6
65,334,470
6
false
0
1
Depending on your setup, you may need to uninstall and reinstall Pycharm. I had this problem on Linux Mint 20, and the issue was I was using the Flathub version of Pycharm. I uninstalled, and downloaded pycharm from the website, installed through terminal and suddenly everything works.
4
2
0
I've been playing around with tkinter a lot at my school using pycharm, but when i go home and use the pycharm on my pc, the module is not installed for tkinter. I can't seem to find how to install it. I am running python 3.6. I do not have python running on cmd.exe. All help is appreciated, thanks.
How to install tkinter onto pycharm
0
0
0
15,283
46,894,762
2017-10-23T17:02:00.000
-3
0
1
0
python,tkinter,pycharm,tk,python-3.6
58,871,000
6
false
0
1
In pycharm project interpreter you need to install future package then restart pycharm, after that use import tkinter. Remember to use lower case.
4
2
0
I've been playing around with tkinter a lot at my school using pycharm, but when i go home and use the pycharm on my pc, the module is not installed for tkinter. I can't seem to find how to install it. I am running python 3.6. I do not have python running on cmd.exe. All help is appreciated, thanks.
How to install tkinter onto pycharm
-0.099668
0
0
15,283
46,894,762
2017-10-23T17:02:00.000
0
0
1
0
python,tkinter,pycharm,tk,python-3.6
50,565,829
6
false
0
1
Just import Tkinter with lowercase t: import tkinter as tk
4
2
0
I've been playing around with tkinter a lot at my school using pycharm, but when i go home and use the pycharm on my pc, the module is not installed for tkinter. I can't seem to find how to install it. I am running python 3.6. I do not have python running on cmd.exe. All help is appreciated, thanks.
How to install tkinter onto pycharm
0
0
0
15,283
46,895,616
2017-10-23T17:54:00.000
2
0
1
0
python,python-3.x,ipython
46,895,617
1
true
0
0
It appears that I was working in ipython and that the builtin sum function was masked by "function sum in module numpy.core.fromnumeric" (according to help(sum)). I suspect an effect of having issued the command %pylab. __builtin__.sum(map(len, ["a", "aa", "aaa"])) gives the expected 6.
1
3
0
sum(map(len, ["a", "aa", "aaa"])) gives me a map instead of a number. I expected this to give the same result as sum(len(thing) for thing in ["a", "aa", "aaa"]) (that is 6). I see that list(sum(map(len, ["a", "aa", "aaa"]))) returns me [1, 2, 3], as if the sum had no effect. I assume there is a reason for such a behaviour. Is there an intended use case for this?
Intended effect of sum over a python map
1.2
0
0
304
46,895,772
2017-10-23T18:05:00.000
1
0
0
0
python,image,numpy,opencv,image-processing
58,123,095
3
false
0
0
I'd take a look at morphological operations. Dilation sounds closest to what you want. You might need to work on a subregion with your line if you don't want to dilate the rest of the image.
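A short OpenCV sketch of that dilation idea applied to the outline from the question; the mask file name is a placeholder, the 3x3 kernel (roughly a 3-pixel-wide line) is an arbitrary choice you can enlarge for a thicker edge, and the final color is the BGR value used in the question:

```python
import cv2
import numpy as np

# `outline` is assumed to be the single-channel Canny edge image (values 0 or 255).
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
outline = cv2.Canny(mask, 100, 200)

kernel = np.ones((3, 3), np.uint8)            # larger kernel -> thicker line
thick = cv2.dilate(outline, kernel, iterations=1)

# Recolor the thickened edge as in the question, ready to overlay with bitwise_or.
colored = cv2.cvtColor(thick, cv2.COLOR_GRAY2BGR)
colored[np.where((colored == [255, 255, 255]).all(axis=2))] = [180, 105, 255]
```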
1
8
1
I'm using OpenCV to do some image processing on Python. I'm trying to overlay an outline on an image where the outline was made from a mask. I'm using cv2.Canny() to get the outline of the mask, then changing that to a color using cv2.cvtColor() then finally converting that edge to cyan using outline[np.where((outline == [255,255,255]).all(axis=2))] = [180,105,255]. My issue now is that this is a one pixel thick line and can barely be seen on large images. This outline is all [0,0,0] except on the points I apply as a mask onto my color image using cv2.bitwise_or(img, outline. I'm currently thickening this outline by brute forcing and checking every single pixel in the bitmap to check if any of its neighbors are [180,105,255] and if so, that pixel will also change. This is very slow. Is there any way using numpy or openCV to do this automatically? I was hoping for some conditional indexing with numpy, but can't find anything.
Thicken a one pixel line
0.066568
0
0
7,385
46,897,003
2017-10-23T19:22:00.000
2
0
0
0
python,python-docx
46,897,992
1
false
0
0
There are a few different possibilities your description would admit, but none of them have direct API support in python-docx. The simplest case is copying a table from one part of a python-docx Document object to another location in the same document. This can probably be accomplished by doing a deep copy of the XML for the table. The details of how to do this are beyond the scope of this question, but there are some examples out there if you search on "python-docx" OR "python-pptx" deepcopy. More complex is copying a table between one Document object and another. A table may contain external references that are available in the source document but not the target document. Consequently, a deepcopy approach will not always work in this case without locating and resolving any dependencies. Finally, there is copying/embedding a table OLE object, such as might be found in a PowerPoint presentation or formed from a range in an Excel document. Embedding OLE objects is not supported and is not likely to be added anytime soon, mostly because of the obscurity of the OLE format embedding format (not well documented).
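A hedged sketch of the deep-copy approach for the simplest case (duplicating a table within the same document). It leans on the private _tbl and _p attributes of python-docx objects, which expose the underlying lxml elements and are not a supported public API; the file names are placeholders.

```python
import copy
from docx import Document

doc = Document("source.docx")        # placeholder file name
table = doc.tables[0]                # table to duplicate

new_tbl = copy.deepcopy(table._tbl)  # deep-copy the underlying XML element for the table
anchor = doc.add_paragraph()         # paragraph after which the copy should appear
anchor._p.addnext(new_tbl)           # splice the copied XML in right after that paragraph

doc.save("output.docx")
```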
1
0
0
I want to add docx.table.Table and docx.text.paragraph.Paragraph objects to documents. Currently table = document.add_table(rows=2, cols=2) Would create a new table inside the document, and table would hold the docx.table.Table object with all its properties. What I want to do instead is add a table OBJECT to the document that I previously read from another document for example. I'm guessing the iterating through every property of the newly added table and the table object I read earlier and setting the values would be enough, but is there an alternative method? Thank you!
python-docx add table object to document
0.379949
1
0
1,713
46,897,967
2017-10-23T20:29:00.000
1
0
1
1
python,linux,centos,yum,system-configuration
46,924,884
1
false
0
0
Sure, but you'll have to do it the hard way. Dig through /usr/local looking for anything Python-related and remove it. The python in /usr/bin should be revealed once the one in /usr/local/bin is removed. Also, next time make altinstall. It will install a versioned executable that won't get in the way of the native executable.
1
2
0
I accidentally downloaded python2.6.6 on my centos Virtual Machine from python.org's official download package and compiled it to source. Now in my /usr/local/bin I have a python2.6 shell available and now if I use which python it will give me the path of /usr/local/bin instead of original python2.7's path which is /usr/bin. Since I installed it from source, yum doesn't recognise python2.6.6 as a package and I want to get rid of it. If I do rpm -q python it gives me a result of python-2.7.5-48.0.1.el7.x86_64 Is it possible to uninstall python2.6.6 and I will just re-point my python system variable to /usr/bin again?
Uninstall python 2.6 without yum
0.197375
0
0
2,370
46,898,131
2017-10-23T20:40:00.000
4
1
0
0
java,python
46,918,732
1
true
0
0
I think you missed the part where it says but no group of (num_required - 1) bunnies can. I can explain my solution further, but that would ruin the fun. (I'm the owner of that repo.) Let's try it with your answer: [[0], [0, 1, 2], [0, 1, 2], [1], [2]]. Your consoles are 3. Bunny 2 can open it on its own, and Bunny 3 can also open it on its own -> it does NOT satisfy the rule.
1
3
0
I'm working my way through Google Foobar and I'm very confused about "Free the Bunny Prisoners". I'm not looking for code, but I could use some insight from anyone that's completed it. First, the problem: Free the Bunny Prisoners You need to free the bunny prisoners before Commander Lambda's space station explodes! Unfortunately, the commander was very careful with her highest-value prisoners - they're all held in separate, maximum-security cells. The cells are opened by putting keys into each console, then pressing the open button on each console simultaneously. When the open button is pressed, each key opens its corresponding lock on the cell. So, the union of the keys in all of the consoles must be all of the keys. The scheme may require multiple copies of one key given to different minions. The consoles are far enough apart that a separate minion is needed for each one. Fortunately, you have already freed some bunnies to aid you - and even better, you were able to steal the keys while you were working as Commander Lambda's assistant. The problem is, you don't know which keys to use at which consoles. The consoles are programmed to know which keys each minion had, to prevent someone from just stealing all of the keys and using them blindly. There are signs by the consoles saying how many minions had some keys for the set of consoles. You suspect that Commander Lambda has a systematic way to decide which keys to give to each minion such that they could use the consoles. You need to figure out the scheme that Commander Lambda used to distribute the keys. You know how many minions had keys, and how many consoles are by each cell. You know that Command Lambda wouldn't issue more keys than necessary (beyond what the key distribution scheme requires), and that you need as many bunnies with keys as there are consoles to open the cell. Given the number of bunnies available and the number of locks required to open a cell, write a function answer(num_buns, num_required) which returns a specification of how to distribute the keys such that any num_required bunnies can open the locks, but no group of (num_required - 1) bunnies can. Each lock is numbered starting from 0. The keys are numbered the same as the lock they open (so for a duplicate key, the number will repeat, since it opens the same lock). For a given bunny, the keys they get is represented as a sorted list of the numbers for the keys. To cover all of the bunnies, the final answer is represented by a sorted list of each individual bunny's list of keys. Find the lexicographically least such key distribution - that is, the first bunny should have keys sequentially starting from 0. num_buns will always be between 1 and 9, and num_required will always be between 0 and 9 (both inclusive). For example, if you had 3 bunnies and required only 1 of them to open the cell, you would give each bunny the same key such that any of the 3 of them would be able to open it, like so: [ [0], [0], [0], ] If you had 2 bunnies and required both of them to open the cell, they would receive different keys (otherwise they wouldn't both actually be required), and your answer would be as follows: [ [0], [1], ] Finally, if you had 3 bunnies and required 2 of them to open the cell, then any 2 of the 3 bunnies should have all of the keys necessary to open the cell, but no single bunny would be able to do it. 
Thus, the answer would be: [ [0, 1], [0, 2], [1, 2], ] Languages To provide a Python solution, edit solution.py To provide a Java solution, edit solution.java Test cases Inputs: (int) num_buns = 2 (int) num_required = 1 Output: (int) [[0], [0]] Inputs: (int) num_buns = 5 (int) num_required = 3 Output: (int) [[0, 1, 2, 3, 4, 5], [0, 1, 2, 6, 7, 8], [0, 3, 4, 6, 7, 9], [1, 3, 5, 6, 8, 9], [2, 4, 5, 7, 8, 9]] Inputs: (int) num_buns = 4 (int) num_required = 4 Output: (int) [[0], [1], [2], [3]] I can't figure out why answer(5, 3) = [[0, 1, 2, 3, 4, 5], [0, 1, 2, 6, 7, 8], [0, 3, 4, 6, 7, 9], [1, 3, 5, 6, 8, 9], [2, 4, 5, 7, 8, 9]]. It seems to me that [[0], [0, 1, 2], [0, 1, 2], [1], [2]] completely satisfies the requirements laid out in the description. I don't know why you'd ever have keys with a value greater than num_required-1. One possibility I thought of was that there are unwritten rules that say all of the minions/bunnies need to have the same number of keys, and you can only have num_required of each key. However, if that's the case, then [[0, 1, 2], [0, 1, 2], [0, 3, 4], [1, 3, 4], [2, 3, 4]] would be okay. Next I thought that maybe the rule about needing to be able to use all num_required keys at once extended beyond 0, 1, and 2, regardless of how many consoles there are. That is, you should be able to make [6, 7, 8] as well as [0, 1, 2]. However, that requirement would be broken because the first bunny doesn't have any of those numbers. I'm stuck. Any hints would be greatly appreciated!
Google Foobar: Free the Bunny Prisoners clarification
1.2
0
0
4,292