Dataset schema (column: type, min to max):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
33,541,692
2015-11-05T10:06:00.000
68
0
0
0
python,excel,openpyxl
33,543,305
3
true
0
0
ws.max_row will give you the number of rows in a worksheet. Since openpyxl version 2.4 you can also access individual rows and columns and use their length to answer the question: len(ws['A']). Though it's worth noting that for data validation for a single column, Excel uses 1:1048576.
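A minimal sketch of the two approaches from the answer above, assuming openpyxl is installed (the workbook here is built in memory purely for illustration):

```python
# Minimal sketch, assuming openpyxl is installed: build a small in-memory
# workbook and read back the row count in the two ways mentioned above.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for i in range(1, 6):                 # write 5 rows into column A
    ws.cell(row=i, column=1, value=i)

print(ws.max_row)     # dimension-based row count
print(len(ws['A']))   # length of column A (openpyxl >= 2.4)
```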
1
38
0
I'm using openpyxl to add data validation to all rows that contain "Default". But to do that, I need to know how many rows there are. I know there is a way to do that if I were using the iterable workbook mode, but I also add a new sheet to the workbook, and that is not possible in the iterable mode.
How to find the last row in a column using openpyxl normal workbook?
1.2
1
0
117,258
33,542,429
2015-11-05T10:39:00.000
0
0
1
0
python
59,877,295
3
false
0
0
Your PC may default to Python 2, which has now reached end of life. If you want to run the file, cd step by step into your PyCharm project directory. Then run: python filename.py, or for Python 3: python3 filename.py
2
0
0
I have created a simple application (Hello.py) in JetBrains PyCharm Community Edition 4.5.4. Now I am trying to run this .py file on Windows CE 5.0, and I get the error: Cannot find 'Hello' (or one of its components). Make sure the path and filename are correct and that all the required libraries are available. What should I do to resolve this error? Thanks & Regards, Rushali Watane
Run .py file in Windows ce 5.0
0
0
0
658
33,542,429
2015-11-05T10:39:00.000
0
0
1
0
python
33,542,814
3
false
0
0
First, make sure that Python is installed. Then go to the command prompt and run the command below: python /file_path/Hello.py
2
0
0
I have created a simple application (Hello.py) in JetBrains PyCharm Community Edition 4.5.4. Now I am trying to run this .py file on Windows CE 5.0, and I get the error: Cannot find 'Hello' (or one of its components). Make sure the path and filename are correct and that all the required libraries are available. What should I do to resolve this error? Thanks & Regards, Rushali Watane
Run .py file in Windows ce 5.0
0
0
0
658
33,545,829
2015-11-05T13:29:00.000
0
0
0
0
python,django,textile
33,550,462
1
true
1
0
Use <strong> tags, like "*<span style="color:#ff0000"><strong>Text</strong></span>*":http://www.example.com/lifestyle/tpage2
1
1
0
I want to make this "*<span style="color:#ff0000">Text</span>*":http://www.example.com/lifestyle/tpage2 in textile become bold and red.
Django textile make a link both red and bold
1.2
0
0
37
33,546,911
2015-11-05T14:19:00.000
1
0
0
0
python,firefox,selenium,firefox-developer-tools
33,547,119
6
false
0
0
This works: ActionChains(driver).key_down(Keys.F12).key_up(Keys.F12).perform() (at least without Firebug installed :)
1
6
0
I'm trying to open the Firefox console through Selenium with Python. How can I open the Firefox console with Python Selenium? Is it possible to send keys to the driver or something like that?
How to open console in firefox python selenium?
0.033321
0
1
9,299
33,549,854
2015-11-05T16:30:00.000
3
0
1
0
python
33,550,046
1
true
0
0
In case anyone else ever reads this: whos (an IPython magic) is as close as I could get. It displays a list of user-generated variables and imported modules, and shows a few attributes too. Not as clean as ls() in R, but some people might find it useful. It works better than dir() for sure.
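For plain Python outside IPython, a rough stand-in can be sketched by filtering a namespace; user_names is a hypothetical helper, not a standard function:

```python
# A rough stand-in for R's ls() in plain Python: filter a namespace,
# dropping underscore-prefixed names and imported modules.
# (user_names is a made-up helper for illustration.)
import types

def user_names(namespace):
    return sorted(
        name for name, val in namespace.items()
        if not name.startswith('_')
        and not isinstance(val, types.ModuleType)
    )

x = 1
greeting = "hi"
print(user_names(globals()))  # includes 'x' and 'greeting', but not 'types'
```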
1
3
0
This has been asked multiple times here but I can only find references to dir(), which lists all names in the current scope. This is overkill and I'm looking for a way to ONLY list the objects in the scope that have been generated by the user in this session, instead of all objects / functions available in the environment.
Equivalent of R ls() but ONLY for user generated objects / functions?
1.2
0
0
327
33,550,976
2015-11-05T17:25:00.000
0
1
0
1
python,linux
36,415,642
1
false
0
0
It could be related to many things. Things I had to fix in a similar setup: check the router's external power supply, which needs to be stable; the USB drives could draw more current than the port can handle. A simple fix is an externally powered USB hub, or keeping the same port but adding capacitors (maybe 1000 uF) in parallel across the power line at the USB port where the drive is connected.
1
0
0
I've got several MR-3020's that I have flashed with OpenWRT and mounted a 16GB ext4 USB drive on it. Upon boot, a daemon shell script is started which does two things: 1) It constantly looks to see if my main program is running and if not starts up the python script 2) It compares the lasts heartbeat timestamp generated by my main program and if it is older than 10 minutes in the past kills the python process. #1 is then supposed to restart it. Once running, my main script goes into monitor mode and collects packet information. It periodically stops sniffing, connects to the internet and uploads the data to my server, saves the heartbeat timestamp and then goes back into monitor mode. This will run for a couple hours, days, or even a few weeks but always seems to die at some point. I've been having this issue for nearly 6 months (not exclusively) I've run out of ideas. I've got files for error, info and debug level logging on pretty much every line in the python script. The amount of memory used by the python process seems to hold steady. All network calls are encapsulated in try/catch statements. The daemon writes to logread. Even with all that logging, I can't seem to track down what the issue might be. There doesn't seem to be any endless loops entered into, none of the errors (usually HTTP request when not connected to internet yet) are ever the final log record - the device just seems to freeze up randomly. Any advice on how to further track this down?
Crashing MR-3020
0
0
0
61
33,551,143
2015-11-05T17:34:00.000
1
1
0
0
python,api,amazon-web-services,amazon-s3,boto
33,590,521
1
true
1
0
You are correct that AWS Lambda can be triggered when objects are added to, or deleted from, an Amazon S3 bucket. It is also possible to send a message to Amazon SNS and Amazon SQS. These settings need to be configured by somebody who has the necessary permissions on the bucket. If you have no such permissions, but you have the ability to call GetBucket(), then you can retrieve a list of objects in the bucket. This returns up to 1000 objects per API call. There is no API call available to "get the newest files". There is no raw code to "monitor" uploads to a bucket. You would need to write code that lists the content of a bucket and then identifies new objects. How would I approach this problem? I'd ask the owner of the bucket to add some functionality to trigger Lambda/SNS/SQS, or to provide a feed of files. If that isn't possible, I'd write my own code that scans the entire bucket and have it run on some regular schedule.
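The "scan and pick the newest" fallback described above might look like this; newest_key is a hypothetical pure helper (shown on canned data), while the boto3 calls in the comment are the standard listing API:

```python
# Sketch of the "scan and pick newest" fallback. newest_key is a made-up
# helper that works on any list of S3-style object dicts.
from datetime import datetime, timezone

def newest_key(objects):
    """Return the key of the most recently modified object, or None."""
    if not objects:
        return None
    return max(objects, key=lambda o: o["LastModified"])["Key"]

# With boto3 installed and credentials configured, the listing would be:
#   import boto3
#   s3 = boto3.client("s3")
#   pages = s3.get_paginator("list_objects_v2").paginate(Bucket="my-bucket")
#   objects = [o for page in pages for o in page.get("Contents", [])]

objects = [
    {"Key": "a.txt", "LastModified": datetime(2015, 11, 1, tzinfo=timezone.utc)},
    {"Key": "b.txt", "LastModified": datetime(2015, 11, 5, tzinfo=timezone.utc)},
]
print(newest_key(objects))  # -> b.txt
```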
1
2
0
I have access to an S3 bucket. I do not own the bucket. I need to check if new files were added to the bucket, to monitor it. I saw that buckets can fire events and that it is possible to make use of Amazon's Lambda to monitor and respond to these events. However, I cannot modify the bucket's settings to allow this. My first idea was to sift through all the files and get the latest one. However, there are a lot of files in that bucket and this approach proved highly inefficient. Concrete questions: Is there a way to efficiently get the newest file in a bucket? Is there a way to monitor uploads to a bucket using boto? Less concrete question: How would you approach this problem? Say you had to get the newest file in a bucket and print its name, how would you do it? Thanks!
How to monitor a AWS S3 bucket with python using boto?
1.2
0
1
4,530
33,552,557
2015-11-05T18:55:00.000
0
0
0
0
python,face-recognition
33,553,902
1
false
0
0
Eigenfaces require supervised learning. You generally supply several images of each subject, classifying them by identifying the subject. The eigenface model then classifies later images (often real-time snapshots) as to identity.
1
0
1
The eigenface method is a powerful method in face recognition. It uses the training images to find the eigenfaces and then uses these eigenfaces to represent a new test image. Do the images in the training dataset need to be labeled, or is it unsupervised training?
Does the Eigenface method use unsupervised training
0
0
0
116
33,553,549
2015-11-05T19:51:00.000
5
0
1
0
python,numpy,memory,32bit-64bit
33,553,660
2
false
0
0
64-bit Python won't load a 32-bit NumPy (at least that has been my experience with Python 2.7.10 and the "official" distribution of NumPy for Windows). So start Python (if you have both the 32-bit and 64-bit versions, do it for each one) and then try to import the NumPy module. If it works with 32-bit Python, then it's a 32-bit version of NumPy. If it works with 64-bit Python, then it's a 64-bit version of NumPy.
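A stdlib-only, script-friendly sketch of the same idea, under the assumption (implied by the answer above) that a working NumPy build always matches the interpreter's pointer size:

```python
# Script-friendly check (stdlib only). Assumption: a NumPy that imports
# successfully was built for this interpreter, so the interpreter's
# pointer size answers the question for NumPy too.
import struct
import sys

def interpreter_bits():
    # size of a C pointer: 4 bytes -> 32-bit build, 8 bytes -> 64-bit build
    return struct.calcsize("P") * 8

print(interpreter_bits())    # 32 or 64
print(sys.maxsize > 2**32)   # True only on a 64-bit interpreter

# With NumPy importable, the build itself can be asked directly:
#   import numpy as np
#   print(np.dtype(np.intp).itemsize * 8)
```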
1
13
0
How do I check if my installed numpy version is 32bit or 64bit? Bonus Points for a solution which works inside a script and is system independent.
Do I have Numpy 32 bit or 64 bit?
0.462117
0
0
10,153
33,555,068
2015-11-05T21:25:00.000
2
0
0
0
dronekit-python,dronekit
33,577,166
1
true
0
0
Just use MAVProxy. You can add the following line to a startup script on your companion computer: mavproxy.py --master=/dev/ttyUSB0 --out=0.0.0.0:14550 Replace the IP address with the IP of the computer running MP.
1
1
0
I need to connect Mission Planner to a companion computer which is then connected to an APM. However, as far as I can tell, DKPY2 does not output like Mavproxy used to. I know that the Pixhawk can take multiple inputs from Mavlink commands, but I'd rather not have to upgrade right now. Is there a workaround besides adding my own implementation to DKPY2? Thanks
Use companion computer as pass through for mission planner
1.2
0
0
512
33,555,957
2015-11-05T22:24:00.000
2
0
1
0
python,ubuntu,psychopy
33,607,682
1
true
0
0
Given that pymedia hasn't been touched for 9 years, I think you'll struggle to build it. Is the aim to build a movie from some frames? I'd suggest outputting them from PsychoPy as png images and then combining them with ffmpeg. I intend in the future to add support back in for outputting movie files directly, using pymovie to combine the frames (it uses ffmpeg in the background), but this isn't at the top of the priority list! ;-)
1
0
0
I have all the dependencies for pymedia, but they are all "not found" by the installation script. I see that pymedia is no longer in active development, but I just want to see if anyone has been able to install it on Ubuntu 14.04, on a 64-bit machine. I require it for a function in psychopy.
Is it possible to install pymedia on recent Ubuntu builds?
1.2
0
0
337
33,560,269
2015-11-06T05:45:00.000
0
0
0
0
python,cosine-similarity,trigonometry
33,560,748
1
true
0
0
That approach will only work for 2-D vectors. For higher dimensions any two vectors will define a hyperplane, and only if the third (reference) vector also lies within this hyperplane will your approach work. Unfortunately instead of only calculating n angles and subtracting, in order to determine the angles between each pair of vectors you would have to calculate all n choose 2 of them.
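A quick numeric check of why the shortcut fails outside 2-D; angle_between here is the usual arccos of cosine similarity, computed with plain stdlib math:

```python
# Numeric check of the identity angle(A,B) == angle(A,X) - angle(B,X).
# In 3-D, pick A and B so that the reference X lies outside their plane:
# the subtraction trick then gives the wrong answer.
import math

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

A, B, X = (1, 0, 0), (0, 1, 0), (0, 0, 1)
direct = angle_between(A, B)                          # pi/2
via_ref = angle_between(A, X) - angle_between(B, X)   # pi/2 - pi/2 = 0
print(math.isclose(direct, via_ref))  # False: the shortcut fails in 3-D
```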
1
0
1
I have been trying to find a fast algorithm for calculating all the angles between n vectors of length x. For example, if x=3 and n=4, my data would look something like this: A: [1,2,3] B: [2,3,4] C: [...] D: [...] I was wondering: is it acceptable to find the angle between each of the vectors (A,B,C,D) and some fixed vector (i.e. X:[100,100,100,100]), and then subtract the angles of (A,B,C,D) found with respect to that fixed vector, to find the angles between them? I want to do this because I would only have to compute each angle once, and then I could subtract the angles of my vectors to find the differences between them. In short, I want to know: is it safe to make this assumption? angle_between(A,B) == angle_between(A,X) - angle_between(B,X), where the angle_between function is the cosine similarity.
Calculating the Angle Between Vectors by using a vector as a reference point
1.2
0
0
390
33,562,499
2015-11-06T08:32:00.000
1
0
0
1
python,websocket,tornado
33,562,632
2
false
0
0
Maybe you should delimit the messages you send so that it is easy to split them up; in this case you could add a \n (obviously the delimiter mustn't occur within a message). Another way is to prefix each message with its length in a clearly-delimited way; the receiver then reads the length, then that number of bytes, and parses it.
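The length-prefix variant can be sketched like this; frame and unframe are hypothetical helper names, and each message is sent as a 4-byte big-endian length followed by the payload:

```python
# Length-prefix framing sketch: a 4-byte big-endian length before each
# payload lets the receiver split a buffer holding several concatenated
# messages. (frame/unframe are made-up names for illustration.)
import struct

def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def unframe(buffer: bytes):
    messages, offset = [], 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from(">I", buffer, offset)
        offset += 4
        messages.append(buffer[offset:offset + length])
        offset += length
    return messages

stream = frame(b'{"a":"v"}') + frame(b'{"a":"c"}')
print(unframe(stream))  # [b'{"a":"v"}', b'{"a":"c"}']
```

A production version would also handle a partially received final message (offset + length past the end of the buffer) by keeping the tail for the next read.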
1
0
0
I use Tornado websockets to send/receive messages. The client sends JSON messages, and the server receives each message and parses the JSON, but sometimes the server gets a message that contains multiple JSON messages, such as {"a":"v"}{"a":"c"}. How should I process this message?
tornado websocket get multi message when on_message called
0.099668
0
1
247
33,562,616
2015-11-06T08:39:00.000
3
0
0
0
python,touch,kivy
33,562,894
1
true
0
1
I personally find kivy better than pygame. The latter is somewhat outdated and doesn't get much support today, while kivy is still growing and gaining new possibilities. It's also not true that kivy is only for touch displays: it's completely multiplatform, so you can create a kivy app for almost any operating system without changing anything in your code! Although you might get slightly better performance with pygame, kivy is a much more intuitive framework, and unless you aim to create a Crysis, I'd go with kivy. Cheers! P.S. You couldn't make anything like Crysis even with pygame :D If you want a more engine-driven game in Python, you could look at KivEnt, a game engine written in Cython with a Python API, dedicated to kivy.
1
2
0
I have to create a game in Python for university, and it should run on Windows. Now I have found kivy, and I am asking myself whether kivy is even better than pygame, or whether it is just for touch displays and I'd better use pygame. We don't have touch displays, by the way! The questions are: is kivy easier to program, easier to learn, or more efficient? P.S.: I'm aware of the fact that pygame is part of kivy!
When I write a game for Windows, should I use kivy or pygame?
1.2
0
0
4,160
33,567,719
2015-11-06T13:18:00.000
0
0
0
0
javascript,jquery,python,html,css
33,571,398
1
false
1
0
I have used jsPDF to download a PDF from HTML. It is easy to use and should help in your case. JSFiddle, HTML to PDF: http://jsfiddle.net/xzz7n/1/ jsPDF site: https://parall.ax/products/jspdf
1
0
0
I have HTML content with dynamic CSS in a Python class, which is later passed to a JS file. I need to download this HTML content in PDF format. I have gone through various HTML-to-PDF converter tools (pdfkit, pdfcrowd, wkhtmltopdf) but none of them is able to render the dynamic CSS content. I have even tried using windows.document.documentElement to obtain the HTML content with the dynamic CSS rendered, but this did not work. My question is: can we generate the dynamic CSS separately in Python, or download the complete PDF using JS? Thanks in advance
Download html with dynamic-css as pdf using python or js
0
0
0
371
33,568,681
2015-11-06T14:09:00.000
0
0
0
0
python,selenium,selenium-webdriver,webdriver,chrome-web-driver
33,569,953
1
false
0
0
Based on the few details provided in your question, and the fact that you mention this happens only between tests, I suspect you have something in your setUp or tearDown methods. Check what gets executed there and see whether that is the source of your problem.
1
0
0
Running selenium tests in python and chrome. Between each test, it logs that its creating a new http connection. Is it possible to create a connection pool so that a new one isn't created for each test?
Connection pool for selenium in python
0
0
1
433
33,570,762
2015-11-06T15:59:00.000
0
0
0
0
python,get,python-requests,wait
33,570,899
1
false
1
0
No, since requests is just an HTTP client. It looks like the page is being modified by JS after another request finishes. You should figure out which request changes the page and use it directly (find it with the network inspector in Chrome, for example).
1
0
0
So I'm trying to get some data from a search engine. This search engine returns some results and then, after for example 2 seconds, changes its HTML and shows some approximate results. I want to get these approximate results, but that's the problem: I use requests.get, which gets the first response and does not wait those 2 seconds. So I'm curious whether this is possible. I don't want to use Selenium since it has to be as lightweight as possible, because it will be part of a web page. So my question is: is it possible to make requests.get wait for the additional data?
How to wait for a response
0
0
1
455
33,571,084
2015-11-06T16:17:00.000
0
0
0
0
python,django,admin,backend
33,571,196
2
true
1
0
You can create a second AdminSite etc., but that will still be mostly a standard django admin. If you have more specialised needs, you'd probably be better off developing a custom one (using forms and ModelForms, and possibly a couple of generic apps for tables etc.); this is all standard django programming. With regard to validation and permissions, they are nothing specific to the admin app.
1
0
0
I'm new to django and I was wondering whether something is possible. I know I can go to /admin/ and get the default django admin, but what if I want to create another admin interface, at a different URL, using the default admin models/widgets? To be clear, the new interface would be, for example, /customer-backend, and people who use it (what about permissions?) would get a different graphical interface, but using things (like validation) that are available in the django admin backend. Thank you!
Different admin backend
1.2
0
0
62
33,572,836
2015-11-06T17:58:00.000
3
0
1
0
python,python-3.x,iterator
33,573,766
1
true
0
0
The __iter__ protocol needs to return an iterator -- that can be either the same object (self, and self needs to have a __next__ method), or another object (and that other object will have __iter__ and __next__). In just about all non-trivial use cases, a different and dedicated object should be returned for the iterator as it makes the whole process much simpler, easier to reason about, more likely to be thread-safe, etc.
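The separate-iterator pattern the answer recommends can be illustrated like this (Results and ResultsIterator are made-up names for the sketch):

```python
# The "separate iterator object" pattern: the iterable's __iter__ returns
# a new, independent iterator each time, so two loops (or threads) never
# share a cursor and no reset is ever needed.
class Results:
    def __init__(self, items):
        self._items = list(items)

    def __iter__(self):
        return ResultsIterator(self._items)

class ResultsIterator:
    def __init__(self, items):
        self._items = items
        self._pos = 0

    def __iter__(self):          # iterators are themselves iterable
        return self

    def __next__(self):
        if self._pos >= len(self._items):
            raise StopIteration
        item = self._items[self._pos]
        self._pos += 1
        return item

r = Results([1, 2, 3])
print(list(r), list(r))  # [1, 2, 3] [1, 2, 3] -- iterable twice, no reset
```

In practice a generator function as __iter__ (yield each item) achieves the same thing with less code, since each call produces a fresh generator object.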
1
2
0
In most code examples of implementing __iter__ in Python, I see them returning self instead of another object. I know this is the required behavior for an iterator, but for an object that is an iterable, shouldn't it return something other than self? My reasoning (GIL aside, not CPU bound): If you have multiple threads iterating the same container, tracking your iterator progress in your iterable class (via the __next__ method) will have values written from both threads, so each loop will only get part of the data. Once your iterator/iterable object has been iterated once, you need to reset it. This can easily be done in __iter__, but it seems to go against the grain of how I've seen other languages implement iterators. I suppose it doesn't matter if it's idiomatic Python. The example I have in mind is a class called Search that uses an HTTP REST API to initiate a search. Once it's done, you can do several things. 1. Make the Search object iterable, with __iter__ returning self. NOT thread safe. 2. Have Search implement __iter__ so that it returns a different object that tracks iteration and queries results from the server as it needs. IS thread safe. 3. Have a results method return an iterator/iterable SearchResults object that is NOT thread safe, but where the method to get it IS thread safe. Overall, what's the most robust way to implement an iterator in idiomatic Python that is thread safe? Maybe I should even abstract the HTTP REST API into a cursor object, like many database libraries do (which would be like option 3).
Should my iterable objects in Python also be the iterator like most examples
1.2
0
0
150
33,577,252
2015-11-06T23:37:00.000
0
0
0
1
python,django,twisted,wamp-protocol,crossbar
34,815,287
1
false
1
0
With a Web app using WAMP, you have two separate mechanisms: Serving the Web assets and the Web app then communicating with the backend (or other WAMP components). You can use Django, Flask or any other web framework for serving the assets - or the static Web server integrated into Crossbar.io. The JavaScript you deliver as part of the assets then connects to Crossbar.io (or another WAMP router), as do the backend or other components. This is then used to e.g. send data to display to the Web frontend or to transmit user input.
1
0
0
As I understand it (please do correct misunderstandings, obviously), the mentioned projects/technologies are as follows:- Crossover.io - A router for WAMP. Cross-language. WAMP - An async message passing protocol, supporting (among other things) Pub/Sub and RPC. Cross-language. twisted - An asynchronous loop, primarily used for networking (low-level). Python specific. As far as I can tell, current crossover.io implementation in python is built on top of twisted. klein - Built on top of twisted, emulating flask but asynchronously (and without the plugins which make flask easier to use). Python specific. django/flask/bottle - Various stacks/solutions for serving web content. All are synchronous because they implement the WSGI. Python specific. How do they interact? I can see, for example, how twisted could be used for network connections between various python apps, and WAMP between apps of any language (crossover.io being an option for routing). For networking though, some form of HTTP/browser based connection is normally needed, and that's where in Python django and alternatives have historically been used. Yet I can't seem to find much in terms of interaction between them and crossover/twisted. To be clear, there's things like crochet (and klein), but none of these seem to solve what I would assume to be a basic problem, that of saying 'I'd like to have a reactive user interface to some underlying python code'. Or another basic problem of 'I'd like to have my python code update a webpage as it's currently being viewed'. Traditionally I guess its handled with AJAX and similar on the webpage served by django et. al., but that seems much less scalable on limited hardware than an asynchronous approach (which is totally doable in python because of twisted and tornado et. al.). Summary Is there a 'natural' interaction between underlying components like WAMP/twisted and django/flask/bottle? If so, how does it work.
How do crossover.io, WAMP, twisted (+ klein), and django/flask/bottle interact?
0
0
0
263
33,587,761
2015-11-07T21:04:00.000
0
0
0
0
python,opencv,image-processing,computer-vision
33,596,513
1
false
0
0
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3d, then it is also in any 2d slice, therefore: Dilate each image in the array with a 3x3 kernel, keep as candidate maxima the points whose intensity is unchanged. Brute-force test the candidates against their upper and lower slices.
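The candidate test above can be sketched in pure Python for one 2-D slice (in OpenCV, cv2.dilate with a 3x3 kernel plays the neighbourhood-max role far more efficiently); local_max_2d is a hypothetical helper that keeps a pixel only if it equals its 3x3 neighbourhood maximum, after which the surviving candidates would be brute-force checked against the adjacent scales:

```python
# Pure-Python sketch of the per-slice candidate test. A pixel survives
# only if it equals the maximum of its 3x3 neighbourhood, which is
# exactly what comparing an image against its 3x3 dilation checks.
def local_max_2d(img):
    h, w = len(img), len(img[0])
    candidates = []
    for y in range(h):
        for x in range(w):
            neigh = [
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            ]
            if img[y][x] == max(neigh):
                candidates.append((y, x))
    return candidates

img = [
    [1, 1, 1],
    [1, 9, 1],
    [1, 1, 1],
]
print(local_max_2d(img))  # only the centre (1, 1) survives
```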
1
0
1
I'm trying to implement a blob detector based on LOG, the steps are: creating an array of n levels of LOG filters use each of the filters on the input image to create a 3d array of h*w*n where h = height, w = width and n = number of levels. find a local maxima and circle the blob in the original image. I already created the filters and the 3d array (which is an array of 2d images). I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and create 2 extra empty images). Now I'm trying to figure out how to find the local maxima in the array. I need to compare each pixel to its 26 neighbours (8 in the same picture and the 9 pixels in each of the two adjacent scales) The brute force way of checking the pixel value directly seems ugly and not very efficient. Whats the best way to find a local maxima point in python using openCV?
finding a local maximum in a 3d array (array of images) in python
0
0
0
1,030
33,597,798
2015-11-08T19:01:00.000
0
0
0
0
python,python-3.x
33,597,842
1
true
0
0
Use it as you normally do. You'd connect to it by its external IP address. You might need to setup your router to allow that. Read up on port forwarding, for example.
1
0
0
How can I use the command python -m http.server and make the server reachable through my public IP? I know this is NOT safe, but I just want to try some things out.
Using command python -m http.server and make server reachable by public ip?
1.2
0
1
74
33,599,791
2015-11-08T22:30:00.000
0
0
0
0
python,qt,debian
33,642,196
2
false
0
1
Ok, after hours and hours of fooling around I figured it out. I actually used the built-in startup manager in GNOME. I tried this before but it didn't work, so this time I created an entry to start gnome-calculator, the built-in calculator. Then I edited the entry in ~/.config/autostart/gnome-calculator.desktop using gedit (gedit ~/.config/autostart/gnome-calculator.desktop). I changed Exec=gnome-calculator to Exec=python /home/me/myapp.py and it works. Yes, it is a lame solution and it doesn't identify what the problem was, but it's a start. Thanks for the help.
1
0
0
I created a Pyside QT GUI app (Python 2.7) and part of the spec is to launch it automatically when the system starts. Normal init.d stuff doesn't seem to work since it's a GUI app. So far I've tried x11 init.d and xdg xyz.desktop solutions and they don't seem to work. How would you solve this? How are Pyside apps auto started on system boot? (Debian Wheezy)
How can I auto start a python/pyside GUI
0
0
0
121
33,603,304
2015-11-09T06:03:00.000
1
0
0
0
python,image-processing
33,729,058
2
true
0
0
I finally did it with ImageMagick, using Python to calculate the various coordinates, etc.
This command creates the desired circle (radius 400, centered at (600, 600)):
convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 -draw "translate 600,600 circle 0,0 400,0" drawn.png
This command converts it to B/W to get a rudimentary mask:
convert drawn.png -alpha extract mask.png
This command blurs the mask (radius 100, sigma 16):
convert -channel RGBA -blur 100x16 mask.png mask2.png
The above three commands give me the mask I need. This command darkens the whole image (without the mask):
convert image.jpg -level 0%,130%,0.7 dark.jpg
And this command puts all three images together (original image, darkened image, and mask):
composite image.jpg dark.jpg mask2.png out.jpg
1
1
1
Here's what I'm trying to do: I have an image. I want to take a circular region in the image, and have it appear as normal. The rest of the image should appear darker. This way, it will be as if the circular region is "highlighted". I would much appreciate feedback on how to do it in Python. Manually, in Gimp, I would create a new layer with a color of gray (less than middle gray). I would then create a circualr region on that layer, and make it middle gray. Then I would change the blending mode to soft light. Essentially, anything that is middle gray on the top layer will show up without modification, and anything darker than middle gray would show up darker. (Ideally, I'd also blur out the top layer so that the transition isn't abrupt). How can I do this algorithmically in Python? I've considered using the Pillow library, but it doesn't have these kinds of blend modes. I also considered using the Blit library, but I couldn't import (not sure it's maintained any more). Am open to scikit-image as well. I just need pointers on the library and some relevant functions. If there's no suitable library, I'm open to calling command line tools (e.g. imagemagick) from within the Python code. Thanks!
Create a "spotlight" in an image using Python
1.2
0
0
687
33,603,442
2015-11-09T06:16:00.000
0
0
0
0
python,unicode,tkinter,tkinter-entry
61,833,731
2
true
0
1
As mentioned by @mohit-bhasi, upgrading my python version to 3.8 which has tkinter 8.6 in it solved the problem. I can now type Korean directly into the widgets. Only caveat is that I need to press right arrow once when I finished typing to have the last letter appear. Otherwise, the last letter is not recognized.
1
4
0
I am making a p2p chat program in Python 3 using Tkinter. I can paste Korean text into the Entry widget and send to the other user and it works. However, I can't 'type' Korean into the widget directly. Why is this happening? I am using Mac OS X Yosemite.
Python Tkinter Entry. I can't type Korean in to the Entry field
1.2
0
0
669
33,605,979
2015-11-09T09:27:00.000
2
0
0
0
python,statistics,statsmodels
33,611,826
1
true
0
0
Programmer's answer: statsmodels Logit and other discrete models don't have weights yet. (*) GLM Binomial has implicitly defined case weights through the number of successful and unsuccessful trials per observation. It would also allow manipulating the weights through the GLM variance function, but that is not officially supported and tested yet.
Update: statsmodels Logit still does not have weights, but GLM gained var_weights and freq_weights several statsmodels releases ago. GLM Binomial can be used to estimate a Logit or a Probit model.
Statistician's/econometrician's answer: Inference (standard errors, confidence intervals, tests and so on) is based on having a random sample. If weights are manipulated, then this should affect the inferential statistics. However, I never looked at the problem of rebalancing the data based on the observed response; in general, this creates a selection bias. A quick internet search shows several answers, from "rebalancing doesn't have a positive effect in Logit" to penalized estimation as an alternative. One possibility is also to try a different link function; cloglog and some other link functions have asymmetric or heavier tails that are more appropriate for data with a small risk in one class or category.
(*) One problem with implementing weights is deciding what their interpretation is for inference. Stata, for example, allows for 3 kinds of weights.
1
3
1
I'd like to run a logistic regression on a dataset with 0.5% positive class by re-balancing the dataset through class or sample weights. I can do this in scikit learn, but it doesn't provide any of the inferential stats for the model (confidence intervals, p-values, residual analysis). Is this possible to do in statsmodels? I don't see a sample_weights or class_weights argument in statsmodels.discrete.discrete_model.Logit.fit Thank you!
Statsmodels Logistic Regression class imbalance
1.2
0
0
3,765
33,608,541
2015-11-09T11:55:00.000
2
0
0
0
python,scikit-learn,cluster-analysis
33,617,441
1
false
0
0
It does not make sense to run mean shift on one-dimensional data. Do regular kernel density estimation instead: locate the minima, and split the data set there. Mean shift is for data that is too complex for proper KDE; one-dimensional data never is.
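The suggested KDE-and-split approach can be sketched in plain Python; kde and split_points are illustrative helpers, not library functions (in practice scipy.stats.gaussian_kde or sklearn's KernelDensity would handle the density step):

```python
# Sketch of KDE-based 1-D "clustering": estimate a Gaussian kernel
# density on a grid, locate interior minima, and use them as split
# points between groups. (kde/split_points are made-up helper names.)
import math

def kde(xs, grid, bandwidth):
    return [
        sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2) for x in xs)
        for g in grid
    ]

def split_points(xs, bandwidth=1.0, steps=200):
    lo, hi = min(xs), max(xs)
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    dens = kde(xs, grid, bandwidth)
    # interior local minima of the density are the cluster boundaries
    return [
        grid[i] for i in range(1, steps - 1)
        if dens[i] < dens[i - 1] and dens[i] <= dens[i + 1]
    ]

data = [25, 26, 27, 31, 32]          # two obvious groups
cuts = split_points(data, bandwidth=1.0)
print(cuts)  # a single cut between 27 and 31
```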
1
1
1
How can I run mean shift clustering on a 1D array? Here is my dataframe:
>>> df
    INFO  FREQ
R2  31    0.2468213
R5  27    0.003670532
UR  25    0.00337465
I need to apply the clustering to the "INFO" column. With kmeans I solved this problem using the reshape(-1,1) command: kmeans.fit(df["INFO"].values.reshape(-1,1)), but with mean shift clustering I get this error: meanshift.fit(df["INFO"].values.reshape(-1,1)) output: ValueError: Invalid shape in axis 1: 0.
scikit learn mean shift clustering in one-dimensional array
0.379949
0
0
1,483
33,608,604
2015-11-09T11:58:00.000
0
0
0
0
python,diff,lxml,difflib
33,608,719
1
false
1
0
You can use the difflib module. It's available as part of the standard Python library.
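For example, a minimal sketch with made-up page sources, where a session token is the only generated difference between the two:

```python
import difflib

page_a = """<html>
<body>
<p>Welcome</p>
<p>session=abc123</p>
</body>
</html>""".splitlines()

page_b = """<html>
<body>
<p>Welcome</p>
<p>session=xyz789</p>
</body>
</html>""".splitlines()

# keep only the lines that actually differ between the two sources
changed = [line for line in difflib.unified_diff(page_a, page_b, lineterm="")
           if line.startswith(("-", "+")) and not line.startswith(("---", "+++"))]
print(changed)   # → ['-<p>session=abc123</p>', '+<p>session=xyz789</p>']
```

The changed lines then tell you which element to exclude from further processing.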
1
0
0
I need to run a diff mechanism on two HTML page sources to kick out all the generated data (like user sessions, etc.). I am wondering if there is a Python module that can do that diff and return the element that contains the difference (so I can exclude it in the rest of my code when processing other sources).
How to diff between two HTML codes?
0
0
1
308
33,613,948
2015-11-09T16:43:00.000
3
1
1
0
python,dronekit-python,dronekit
33,615,308
1
true
0
0
Use mavproxy to make the initial connection, and then fork it to DK and MP. mavproxy.py --master=/dev/ttyUSB0 --out=127.0.0.1:14550 --out=127.0.0.1:14551 Connect mission planner to UDP port 14550. Connect DK to port 14551.
1
3
0
I like Dronekit for controlling my copter, and I like Mission Planner for monitoring my copter during a flight. I'd really like to have both sets of functionality. Is there a way to connect Dronekit and Mission Planner to the Pixhawk at the same time? I'm using the 3DR radio set to connect from a laptop on the ground. If that's not possible, is there a way to relay the connection through Dronekit to Mission Planner?
Connect Dronekit and Mission Planner simultaneously to Pixhawk over 3DR radio
1.2
0
0
1,931
33,614,453
2015-11-09T17:12:00.000
1
0
0
0
python-2.7,apache-spark,hadoop-yarn,pyspark
33,621,420
3
false
0
0
How many CPUs do you have and how many are required per job? YARN will schedule the jobs and assign what it can on your cluster: if you require 8 CPUs for your job and your system has only 8 CPUs, then other jobs will be queued and run serially. If you requested 4 per job then you would see 2 jobs run in parallel at any one time.
1
2
1
I have setup a Spark on YARN cluster on my laptop, and have problem running multiple concurrent jobs in Spark, using python multiprocessing. I am running on yarn-client mode. I tried two ways to achieve this: Setup a single SparkContext and create multiple processes to submit jobs. This method does not work, and the program crashes. I guess a single SparkContext does not support python multiple processes For each process, setup a SparkContext and submit the job. In this case, the job is submitted successfully to YARN, but the jobs are run serially, only one job is run at a time while the rest are in queue. Is it possible to start multiple jobs concurrently? Update on the settings YARN: yarn.nodemanager.resource.cpu-vcores 8 yarn.nodemanager.resource.memory-mb 11264 yarn.scheduler.maximum-allocation-vcores 1 Spark: SPARK_EXECUTOR_CORES=1 SPARK_EXECUTOR_INSTANCES=2 SPARK_DRIVER_MEMORY=1G spark.scheduler.mode = FAIR spark.dynamicAllocation.enabled = true spark.shuffle.service.enabled = true yarn will only run one job at a time, using 3 containers, 3 vcores, 3GB ram. So there are ample vcores and rams available for the other jobs, but they are not running
How to run multiple concurrent jobs in Spark using python multiprocessing
0.066568
0
0
5,740
33,614,947
2015-11-09T17:42:00.000
1
0
0
0
python,seaborn,marker
46,858,249
1
false
0
0
As per the comment from @Sören, you can add the markeredges with the keyword scatter_kws. For example scatter_kws={'linewidths':1,'edgecolor':'k'}
1
1
1
sns.lmplot(x="size", y="tip", data=tips) gives a scatter plot. By default the markers have no edges. How can I add marker edges? Sometimes I prefer to use edges with a transparent facecolor, especially with dense data. However, neither markeredgewidth nor mew nor linewidths is accepted as a keyword. Does anyone know how to add edges to the markers?
Add markeredges in seaborn lmplot?
0.197375
0
0
1,371
33,616,094
2015-11-09T18:51:00.000
4
0
0
1
python,windows,tensorflow
33,623,888
7
false
0
0
Another way to run it on Windows is to install, for example, VMware (there is a free version if you are not using it commercially), install Ubuntu Linux into that, and then install TensorFlow using the Linux instructions. That is what I have been doing, and it works well.
1
61
0
I haven't seen anything about Windows compatibility -- is this on the way or currently available somewhere if I put forth some effort? (I have a Mac and an Ubuntu box but the Windows machine is the one with the discrete graphics card that I currently use with theano).
Is Tensorflow compatible with a Windows workflow?
0.113791
0
0
22,555
33,616,593
2015-11-09T19:23:00.000
1
0
0
0
python,parallel-processing,deep-learning,tensorflow
33,622,414
3
false
0
0
Update As you may have noticed, TensorFlow has already supported distributed DNN training for quite some time. Please refer to its official website for details. ========================================================================= Previous No, it doesn't support distributed training yet, which is a little disappointing. But I don't think it is difficult to extend from a single machine to multiple machines. Compared to other open source libraries, like Caffe, TF's data graph structure is more suitable for cross-machine tasks.
1
14
1
Google released TensorFlow today. I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers. Does it have distributed training functionality yet?
How do I use distributed DNN training in TensorFlow?
0.066568
0
0
4,046
33,623,184
2015-11-10T04:59:00.000
3
0
1
0
python,algorithm,time-complexity
33,623,292
6
false
0
0
EDIT: this assumes that the list is immutable. If the list is an array and can be modified, there are linear methods available (e.g. quickselect). You can get the complexity down to O(n * log k) by using a bounded heap of size k: keep the k smallest elements seen so far in a max-heap; for every subsequent element, if it is smaller than the heap's maximum, replace the maximum and heapify. Each heap operation takes logarithmic time in k, and hence the time complexity is as above.
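A sketch of the bounded-heap idea in Python. The stdlib heapq only provides a min-heap, so the k smallest are tracked by negating values; heapq.nsmallest does the same O(n log k) job in one call:

```python
import heapq

def k_smallest(values, k):
    """Return the k smallest items in O(n log k) using a bounded max-heap."""
    heap = []                          # max-heap simulated via negated values
    for v in values:
        if len(heap) < k:
            heapq.heappush(heap, -v)
        elif -heap[0] > v:             # v beats the current k-th smallest
            heapq.heapreplace(heap, -v)
    return sorted(-x for x in heap)

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
print(k_smallest(data, 3))             # → [1, 2, 3]
print(heapq.nsmallest(3, data))        # → [1, 2, 3], via the stdlib helper
```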
1
6
0
What is the fastest method to get the k smallest numbers in an unsorted list of size N using python? Is it faster to sort the big list of numbers, and then get the k smallest numbers,or to get the k smallest numbers by finding the minimum in the list k times, making sure u remove the found minimum from the search before the next search?
fastest method of getting k smallest numbers in unsorted list of size N in python?
0.099668
0
0
7,062
33,625,683
2015-11-10T08:29:00.000
2
0
1
0
python,python-3.x,anaconda,failed-installation
39,781,337
6
false
0
0
I had the same problem; then I installed "for all users": solved. Much easier than the links provided, for a beginner such as I am.
3
15
0
I have just made up my mind to change from python 2.7 to python 3.5 and therefore tried to reinstall Anaconda (64 bit) with the 3.5 environment. When I try to install the package I get several errors in the form of (translation from German, so maybe not exact): The procedure entry "__telemetry_main_return_trigger" could not be found in the DLL "C:\Anaconda3\pythonw.exe". and The procedure entry "__telemetry_main_invoke_trigger" could not be found in the DLL "C:\Anaconda3\python35.dll". The title of the second error message box still points to pythonw.exe. Both errors appear several times - every time an extraction was completed. The installation progress box reads [...] extraction complete. Execute: "C:\Anaconda3\pythonw.exe" "C:\Anaconda3\Lib_nsis.py" postpkg After torturing myself through the installation I get the warning Failed to create Anaconda menus If I ignore it once gives me my lovely error messages and tells me that Failed to initialize Anaconda directories then Failed to add Anaconda to the system PATH Of course nothing works, if I dare to use this mess it installs. What might go wrong? On other computers with Windows 10 it works well. P.S.: An installation of Anaconda2 2.4 with python 2.7 works without any error message, but still is not able to be used (other errors).
Anaconda3 2.4 with python 3.5 installation error (procedure entry not found; Windows 10)
0.066568
0
0
19,534
33,625,683
2015-11-10T08:29:00.000
0
0
1
0
python,python-3.x,anaconda,failed-installation
42,300,834
6
false
0
0
If you are getting errors like: Failed to create Anaconda menus Failed to initialize Anaconda directories Failed to add Anaconda to the system PATH just ignore them during installation, and when installation is done, look for the directory where "Anaconda3" was installed and correct the path accordingly in the environment variables. In my system the path was set to "C:\Anaconda3" but it was actually installed at "C:\ProgramData\Anaconda3". You have to change all 3 path entries for Anaconda3 and then try to run "jupyter notebook" in CMD.
3
15
0
I have just made up my mind to change from python 2.7 to python 3.5 and therefore tried to reinstall Anaconda (64 bit) with the 3.5 environment. When I try to install the package I get several errors in the form of (translation from German, so maybe not exact): The procedure entry "__telemetry_main_return_trigger" could not be found in the DLL "C:\Anaconda3\pythonw.exe". and The procedure entry "__telemetry_main_invoke_trigger" could not be found in the DLL "C:\Anaconda3\python35.dll". The title of the second error message box still points to pythonw.exe. Both errors appear several times - every time an extraction was completed. The installation progress box reads [...] extraction complete. Execute: "C:\Anaconda3\pythonw.exe" "C:\Anaconda3\Lib_nsis.py" postpkg After torturing myself through the installation I get the warning Failed to create Anaconda menus If I ignore it once gives me my lovely error messages and tells me that Failed to initialize Anaconda directories then Failed to add Anaconda to the system PATH Of course nothing works, if I dare to use this mess it installs. What might go wrong? On other computers with Windows 10 it works well. P.S.: An installation of Anaconda2 2.4 with python 2.7 works without any error message, but still is not able to be used (other errors).
Anaconda3 2.4 with python 3.5 installation error (procedure entry not found; Windows 10)
0
0
0
19,534
33,625,683
2015-11-10T08:29:00.000
0
0
1
0
python,python-3.x,anaconda,failed-installation
42,996,378
6
false
0
0
If you are using Windows, launch the command prompt as administrator and execute the following commands: "C:\ProgramData\Anaconda3\pythonw.exe" -E -s "C:\ProgramData\Anaconda3\Lib_nsis.py" addpath "C:\ProgramData\Anaconda3\pythonw.exe" -E -s "C:\ProgramData\Anaconda3\Lib_nsis.py" mkdirs "C:\ProgramData\Anaconda3\pythonw.exe" -E -s "C:\ProgramData\Anaconda3\Lib_nsis.py" mkmenus Don't forget to change the path to the path on your system. Before running these commands there will not be any Anaconda Navigator app in your start menu. After executing these commands make sure the Anaconda Navigator app is available in the start menu.
3
15
0
I have just made up my mind to change from python 2.7 to python 3.5 and therefore tried to reinstall Anaconda (64 bit) with the 3.5 environment. When I try to install the package I get several errors in the form of (translation from German, so maybe not exact): The procedure entry "__telemetry_main_return_trigger" could not be found in the DLL "C:\Anaconda3\pythonw.exe". and The procedure entry "__telemetry_main_invoke_trigger" could not be found in the DLL "C:\Anaconda3\python35.dll". The title of the second error message box still points to pythonw.exe. Both errors appear several times - every time an extraction was completed. The installation progress box reads [...] extraction complete. Execute: "C:\Anaconda3\pythonw.exe" "C:\Anaconda3\Lib_nsis.py" postpkg After torturing myself through the installation I get the warning Failed to create Anaconda menus If I ignore it once gives me my lovely error messages and tells me that Failed to initialize Anaconda directories then Failed to add Anaconda to the system PATH Of course nothing works, if I dare to use this mess it installs. What might go wrong? On other computers with Windows 10 it works well. P.S.: An installation of Anaconda2 2.4 with python 2.7 works without any error message, but still is not able to be used (other errors).
Anaconda3 2.4 with python 3.5 installation error (procedure entry not found; Windows 10)
0
0
0
19,534
33,627,789
2015-11-10T10:23:00.000
1
0
0
0
python,xml,python-2.7,odoo-8,odoo
33,636,388
2
false
1
0
You cannot make only some of the fields 'readonly' in Odoo based on groups. If you need to do that, you can use the custom module 'smile_model_access_extension'. For loading the appropriate view on a menu click, you can create a record of 'ir.actions.act_window' (the view_ids field of 'ir.action'), where you can specify the sequence and type of view to be loaded when the menu action is performed. In your case you can specify the specific 'form view' for your action.
1
1
0
I've created a new group named accountant. If an user of this group opens the res.partner form for example, he must be able to read all, but only modify some specific fields (the ones inside the tab Accountancy, for example). So I set the permissions create, write, unlink, read to 0, 1, 0, 1 in the res.partner model for the accountant group. The problem: if I'm an user of the accountant group and I go to the form of res.partner, I will see the Edit button, if I click on it, I will be able to modify any field I want (and I should not, only the ones inside the tab). So I thought to duplicate the menuitem (put the attribute groups="accountant" to the copy) and the form (put all fields readonly except for the content of the tab). The problem: if I'm an user of a group over accountant group (with accountant in its implied_ids list), I will see both menuitems (the one which takes to the normal form and the one which takes to the duplicated form with the readonly fields). Is it possible to create a menuitem which opens a specific set of views depending on the group of the user who is clicking on the mentioned menuitem? Any ideas of how can I succesfully implement this?
How to allow an user of a group to modify specific parts of a form in Odoo?
0.099668
0
0
2,097
33,627,996
2015-11-10T10:34:00.000
3
0
1
0
python,macos,pip,virtualenv,macports
33,648,870
1
true
0
0
MacPorts cannot install an executable named virtualenv in /opt/local/bin because MacPorts supports multiple versions of Python and the different virtualenvs for the different Python versions would conflict on these files. You can, however, install the py27-virtualenv port using sudo port install py27-virtualenv, which will give you virtualenv-2.7 in /opt/local/bin. Additionally, installing the py27-virtualenv port will pull in the virtualenv_select port, which allows you to use MacPorts' select mechanism to choose your preferred version of virtualenv: sudo port select --set virtualenv virtualenv27 should then create a symlink /opt/local/bin/virtualenv -> virtualenv-2.7, which sounds like what you want.
1
1
0
I failed to install virtualenv on Mac OS 10.9 using pip and macport. After installation with pip install virtuanenv I found the virtualenv was installed into /opt/local/Library/Fraemworks/Python.framework/Versions/2.7/bin. But it should be in /opt/local/bin. How to fix it?
Wrong virtualenv path installed with pip and macport
1.2
0
0
885
33,628,679
2015-11-10T11:13:00.000
15
0
0
0
python,opencv,mathematical-morphology
44,216,923
2
false
0
0
Make sure volume_start is dtype=uint8. You can convert it with volume_start = np.array(volume_start, dtype=np.uint8). Or nicer: volume_start = volume_start.astype(np.uint8)
1
9
1
I'm trying to morphologically close a volume with a ball structuring element created by the function SE3 = skimage.morphology.ball(8). When using closing = cv2.morphologyEx(volume_start, cv2.MORPH_CLOSE, SE) it returns TypeError: src data type = 0 is not supported Do you know how to solve this issue? Thank you
Python Opencv morphological closing gives src data type = 0 is not supported
1
0
0
16,369
33,630,137
2015-11-10T12:36:00.000
1
1
0
0
python,django,pootle,translate-toolkit
38,104,213
1
true
1
0
Template updates now happen outside of Pootle. The old update_against_template had performance problems and could get Pootle into a bad state. To achieve the same functionality as update_against_templates do the following. Assuming your project is myproject and you are updating language af: sync_store --project=myproject --language=af pot2po -t af template af update_store --project=myproject --language=af You can automate that in a script to iterate through all languages. Use list_languages --project=myproject to get a list of all the active languages for that project.
1
1
0
I added a new template file from my project. Now I don't know how to make the languages update or get the new template file. I've read that 2.5 has update_against_templates but it's not in 2.7. How will update my languages?
what is update_against_templates in pootle 2.7?
1.2
0
0
75
33,631,661
2015-11-10T13:56:00.000
0
0
1
0
python,selenium-webdriver,automated-tests,pycharm,browser-automation
33,639,639
1
false
0
0
Just to be clear, you'd like to use that package of locators and helpers on your current web app project and on future web app projects, right? If so, then it's a case of Python Packaging. Not the simplest topic, but well-documented and very standard (nothing unique to PyCharm.) If not, and you just want your current project to be able to import that folder, then you need to get that directory on your PYTHONPATH. In your PyCharm Run Configuration, add that path to the run configuration's PYTHONPATH.
1
1
0
Dear Test Automation Engineers, I am implementing the Page Object Pattern in Python using the PyCharm tool. My concerns are the following: Structure of Project - 2 tier: Under the project folder I want 2 packages (folders in Python/PyCharm): one folder should contain all tests to execute, while the other package should contain element locators, helpers, etc. [I would appreciate it if you could share a screenshot of the hierarchy of such a project] I am facing a problem calling element locators from the other package (folder). Locators should be per page, not all locators of the project in one file (that creates a mess! - please share best approaches). IMPORTANT: I don't want locator files (.py) and test cases in one folder; they should be in separate folders. I have gone through a few examples on the web but they are not 2-tier and don't follow the Page Object Model structure exactly.
SeleniumWebdriver: Implement Page Object Pattern in Python using Pycharm tool
0
0
1
362
33,632,165
2015-11-10T14:22:00.000
1
0
1
0
python,big-o
33,632,295
1
true
0
0
The definition of the "big O" is the following: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k. That is, for your example above, it should be O(n·m·log(m)), assuming that m > 1. If m == 1, then log(m) == 0, so the notation should be O(n). In case both are possible, you might want to consider using O(n·m·log(m) + n).
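A quick numerical sanity check (not a proof) that the n·m·log(m) term dominates, so the constants 5 and 21 vanish in the Big-O:

```python
import math

def exact(n, m):
    """The full operation count from the question: 5 + n*(m*log(m) + 21)."""
    return 5 + n * (m * math.log(m) + 21)

def bound(n, m):
    """The proposed dominant term: n * m * log(m)."""
    return n * m * math.log(m)

# the ratio tends to a constant (here 1), so exact = O(n * m * log m)
for m in (10, 1_000, 100_000):
    print(m, exact(m, m) / bound(m, m))
```

The printed ratio approaches 1 as m grows, which is exactly what O(n·m·log(m)) expresses.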
1
1
0
I'm writing a python definition that has a for loop of O(n), but inside it uses the list.sort() definition inside python which has O(n log n).. I've ended up with the time complexity of - 5 + n(mlogm + 21) I'm not sure how I would translate this into Big-O, as mlogm would obviously be the worst case scenario in the long run, but being inside the for loop, I'm not sure how to derive the correct answer.
Big-O notation, Python - How do I correctly expand "5 + n(mlogm + 21)"
1.2
0
0
94
33,638,010
2015-11-10T19:29:00.000
1
0
1
0
python,visual-studio-2015,ironpython
38,080,053
4
false
0
0
Go to Project -> Properties -> General -> Interpreter Set the Interpreter to IronPython 2.7 (you may need to install it).
2
2
0
So recently I've thought about trying IronPython. I've got my GUI configured, I got my .py file. I click Start in Visual Studio, and that thing pops up: The environment "Unknown Python 2.7 [...]". I have the environment in Solution Explorer set to the Unknown Python 2.7 and I have no idea how to change it. Installed 2.7, 3.5, IronPyhon 2.7 and refreshed them in Python Environments tab
Visual Studio 2015 IronPython
0.049958
0
0
7,496
33,638,010
2015-11-10T19:29:00.000
3
0
1
0
python,visual-studio-2015,ironpython
34,078,923
4
false
0
0
Generally the best approach to handle this is to right click "Python Environments" in Solution Explorer, then select "Add/remove environments" and change what you have added in there.
2
2
0
So recently I've thought about trying IronPython. I've got my GUI configured, I got my .py file. I click Start in Visual Studio, and that thing pops up: The environment "Unknown Python 2.7 [...]". I have the environment in Solution Explorer set to the Unknown Python 2.7 and I have no idea how to change it. Installed 2.7, 3.5, IronPyhon 2.7 and refreshed them in Python Environments tab
Visual Studio 2015 IronPython
0.148885
0
0
7,496
33,638,747
2015-11-10T20:15:00.000
1
0
0
0
python,usb,driver,libusb,pyusb
33,641,552
1
false
0
1
libusb doesn't interfere with anything. In PyUSB you choose to communicate directly with the device. To do this, any other app holding this USB port has to be stopped; the Windows driver, in your mouse's case. I think it may be possible for you to push the DPI activation code while the driver is still using the mouse, but how, I have no idea right now. Or you may detach the mouse temporarily, pass the code, and then release the mouse back to Windows, hoping that the DPI config wouldn't be reset. If that doesn't work, you can always completely emulate the mouse driver. It is very easy. There are code samples for PyUSB on the internet on how to interpret mouse data. So you do that and pass the recognized command to the OS, either directly through the Windows API or using the PyMouse library. In this case you do not need the driver, because your program is the driver, and it sends to the mouse whatever you want. You have a lot of options for doing what you want. For instance, if this is only for your local use, and there is an interface already doing what you want, automate that interface using pywinauto to perform a macro that activates/disables the higher DPI. There are more possibilities. You can replace the Windows driver with your own version which supports what you wish, thus making your own layer in the mouse stack. But this is extreme. I think you should proceed in the order I wrote: 1. See whether you can use PyUSB to send the bytes needed without detaching the mouse from the OS. 2. If not, see whether detaching the mouse, changing the DPI, and returning the mouse back to the OS using PyUSB keeps the DPI that was set. 3.1 If not, make your own artificial driver using PyUSB and PyMouse, or 3.2 use pywinauto to automate the existing interface to change the DPI for you.
1
1
0
I'm looking to write a program to change the DPI setting on my Logitech G502 mouse (my goal being to use the program with AutoHotkey to help automate a task where I switch my DPI a lot, and to learn a bit about USB). I'm fairly fluent in C++, C# and Python, but I'm not at all knowledgeable about USB or drivers. So far I have used the program USBlyzer to identify a Control transfer sent to my mouse when using Logitech's software, which byte of the data corresponds to my DPI setting, and the product ID and vendor ID of my mouse. After looking around on the net I decided PyUSB would be a good option for communicating with my mouse. After installing libusb for use with PyUSB I realised that this then replaces my current mouse driver and makes it unusable. Am I just going about this all wrong? In my head all I want to do is send to my mouse the data "10 FF 0E CA 01 00 00". Should I instead be somehow communicating with my existing Logitech driver to do this? Or can I set up libusb without interfering with existing drivers? Any help will be appreciated, cheers Bradley.
Communicating to USB devices in windows 7 and still use pre-existing drivers?
0.197375
0
0
800
33,638,868
2015-11-10T20:21:00.000
0
0
0
1
python,dronekit-python,dronekit
37,731,570
2
false
0
0
I was having the same issue yesterday and fixed it by installing the latest build from github. I'm on Windows 10 but in this case it should be irrelevant.
1
1
0
I am trying to set up a connection to a live quadcopter using the DroneKit API from the Python command line. (I am using Python 2.7. I am also using OS X Yosemite 10.10.5.) from dronekit import connect vehicle = connect('/dev/cu.usbserial-DJ00DA30', wait_ready=True) I get a message: Link timeout, no heartbeat in last 5 seconds In another 30 seconds, the command aborts. I know this is the correct device to use (cu.usbserial-DJ00DA30) because I am able to connect with it to the drone using APM Planner 2.0. Any help please
Using Drone-Kit to connect to Live Quad Copter
0
0
0
1,420
33,642,997
2015-11-11T01:53:00.000
0
0
0
0
python-2.7,enthought,mayavi,traitsui
33,675,492
1
true
0
0
You are going to wade into dangerous territory. As you noted, the recorder has idiosyncratic behavior; what that really means is that it uses features to programmatically "disable" the trait notifications while it is doing things. You can probably figure out a way to do it that way, but most likely you'll have to dig deeply into the code that assigns the VTK modules. What would probably make the most sense is for you to write a GUI that does exactly what you want. That is, instead of listening to something like Volume._ctf and then opening up the menu and changing the color, you can make a GUI and add a button that says "Change volume color" that, when clicked, brings the user to a color wheel. Then it's just a matter of listening to the GUI elements that you explicitly code for.
1
1
1
I would like to listen to changes in the transfer function in how the color and opacity (ctf/otf) of my data is represented. Listening to sensible-sounding traits such as mayavi.modules.volume.Volume._ctf does not trigger my callback. I would expect this to be changed by the user either through the "standard" mayavi pipeline display (as part of EngineRichView) or through including the Volume object's view directly. No such luck either way. It is maybe telling that when you press the big red "record" button, the recorder also does not seem to notice user changes to the ctf.
listen for ctf otf changes with traits in mayavi volume rendering
1.2
0
0
96
33,645,899
2015-11-11T07:23:00.000
2
0
0
0
python,django,middleware
33,646,559
2
false
1
0
You can implement your own RequestMiddleware (which plugs in before the URL resolution) or ViewMiddleware (which plugs in after the view has been resolved for the URL). In that middleware, it's standard python. You have access to the filesystem, database, cache server, ... the same you have anywhere else in your code. Showing the last N requests in a separate web page means you create a view which pulls the data from the place where your middleware is storing them.
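The "store the last N requests" part is independent of Django; at its core it is just a bounded buffer. A framework-agnostic sketch of the idea (class and method names here are made up for illustration):

```python
from collections import deque

class RequestLog:
    """Keep only the most recent `size` request descriptions."""

    def __init__(self, size=5):
        self._items = deque(maxlen=size)   # old entries fall off automatically

    def record(self, method, path):
        self._items.append(f"{method} {path}")

    def last(self):
        return list(self._items)

log = RequestLog(size=3)
for i in range(5):
    log.record("GET", f"/page/{i}")
print(log.last())   # → ['GET /page/2', 'GET /page/3', 'GET /page/4']
```

Inside an actual middleware you would call something like log.record(request.method, request.path) for each incoming request, and a view would render log.last() in the *.html template.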
1
1
0
It might be that this question sounds pretty silly, but I cannot figure out how to do what I believe is the simplest thing (because I have just started learning Django). What I know is that I should create a middleware file and connect it in the settings, then create a view and a *.html page that will show these requests, and wire it into the urls. How can one store the last (5/10/20 or any number of) HTTP requests in the middleware and show them in a *.html page? The problem is I don't even know what exactly I should write into middleware.py and views.py so that it could be displayed in the *.html file. Ideally, this page should also be updated after new requests occur. I read the Django documentation and some other topics with middleware examples, but it seems pretty sophisticated to me. I would be really thankful for any insights and elucidations. P.S. One more time, sorry for a dummy question.
Python, Django - how to store http requests in the middleware?
0.197375
0
0
1,698
33,646,840
2015-11-11T08:39:00.000
0
0
0
0
python,callback,gtk3
33,658,899
1
true
0
1
Thank you, PM 2Ring. For posterity, the answer is that you can bind multiple callbacks to a signal in GTK3. They will be called in the order they were bound in.
1
0
0
If so, how would this be done? If not, is there a conventional way to achieve the same effect?
Is it possible to set multiple callback functions for a given event with GTK (Python)?
1.2
0
0
463
33,646,944
2015-11-11T08:46:00.000
1
0
0
0
python,django
33,649,907
2
false
1
0
textile: import textile is now a separate Python package (which is the reason Django removed it), so I've made a template filter that will use it.
1
0
0
Textile has been deprecated in Django, and I haven't found any docs on a replacement for it. What's the alternative now for Django?
Upgrading to Django 1.7 from 1.5 missing textile
0.099668
0
0
40
33,646,953
2015-11-11T08:46:00.000
1
0
0
0
python,user-interface,tkinter,widget,treeview
33,649,692
1
true
0
1
Your #2 is the right approach. You will have to write methods that calculate how to move up and down the tree. You can get the currently selected item, then use the .next() method to get the next child of the same parent. If that returns an empty string you can get the parent (by calling .parent()) and call .next() on it. You can recursively keep doing that until you hit the end of the tree (the parent is the root node, and .next() returns the empty string).
1
0
0
I have a window with a treeview and an entry widget. I would like to be able to write in the entry widget while still being able to use the up/down arrows to navigate the treeview. There are a few ways I've tried doing this: Send all keyboard events to both widgets (I have tried using custom bind_tags, unsuccessfully) Use the entry <Up> and <Down> bindings to navigate the treeview (I have not found a straightforward way to move up and down a tree with multiple parents and children, such as a file directory) Use the root <Key> binding to selectively send raw keycodes to the entry widget so things like backspace and left/right arrow work as expected (I haven't come across a method to send keycodes/events directly to the entry widget)
Split keyboard entry between two Tkinter widgets
1.2
0
0
112
33,649,021
2015-11-11T10:52:00.000
0
0
1
1
python,dll
33,650,903
1
true
0
0
The error has been resolved. The problem was that the third party dll should be present in the folder where the IronPython executable is present. E.g., if ipy.exe is present in the folder C:\Program Files(x86)\Iron Python2.7 then the third party dll has to be present there. The reason why it took me so long to figure this out is because when I was using the previous version of the dll provided by the third party, it was not necessary to copy the dll to the path C:\Program Files(x86)\Iron Python2.7. But for the new version I have to do that. Strange it is!!
1
0
0
For the last week I have been stuck on this error. I have a dll from a third party which needs to be linked for my system to behave properly. I have put the dll into the folder from where I am running the command prompt. E.g., my Python script is in C:/xyz and my dll is in the same folder. When I am running the Python script (using IronPython) from cmd.exe it says that the dll is not found. I am able to run the same Python script using the IronPython environment from Visual Studio and it runs fine. What could possibly be going wrong?
Not able to link dll using cmd.exe
1.2
0
0
49
33,650,406
2015-11-11T12:19:00.000
2
0
1
0
python
33,669,558
3
false
0
0
The error message says everything: your score variable holds an int, but it needs to be a string to be written to a file. (It even tells you the line number.) So convert it to a string, like this: text_file.write(str(score)) Apart from that, you have a bug in your code caused by a typo in a variable name: socre = score + 0. This does not cause the error, but will cause the code not to work as expected. This is a perfect example of why laziness can be terrible when programming: by copy-pasting your code block, you copied the bug as well, and now you have multiple bugs from copying redundant code. Using a function instead and calling it multiple times would have kept the bug in a single place. Also, you are importing the same module random multiple times, which hurts the readability of your code a lot. Put all imports at the top of your module, and do not write the same import multiple times (once it is imported it stays imported).
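A minimal sketch of the fix (the file name and score value are made up for illustration):

```python
import os
import tempfile

score = 42  # an int, as in the question

# write() only accepts strings, so convert the int first
path = os.path.join(tempfile.gettempdir(), "highscore.txt")
with open(path, "w") as text_file:
    text_file.write(str(score))

with open(path) as text_file:
    print(text_file.read())   # → 42
```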
1
1
0
How to create a python file from given results, I need to save it in a text document or in excel. Any suggestions?
Creating file from python results
0.132549
0
0
50
33,652,909
2015-11-11T14:41:00.000
0
1
0
1
python
33,656,984
2
false
0
0
If you install the pywin32 package, the function IsUserAnAdmin() from win32com.shell.shell can be used to check whether the script is running with administrator rights.
1
3
0
I want to check if a python script is running with admin permissions on windows, without using the ctypes module. It is important for me not to use ctypes for some reasons. I have looked with no luck. Thanks.
How to know if a python script is running with admin permissions in windows?
0
0
0
97
33,657,161
2015-11-11T18:16:00.000
0
1
0
0
dronekit-python
33,693,815
1
false
1
0
Found out from another website that there was a bug in the releases prior to DroneKit 2.0.0.rc9: they forgot to include any way to adjust the baud rate! The latest release, DroneKit 2.0.0.rc10, has the fix.
1
0
0
I am unable to connect to my Pixhawk drone with 3DR radios and Dronekit 2 and Python code. I am able to connect with a USB cable attached to Pixhawk. I suspect the problem is the baud rate with the radios are too high. There seems to be no way to change the baud rate with the radios in the connect command. Please advise. Windows 8.1 Thank you!
No way to adjust baud rate in connect() with Dronekit 2 and 3DR Radios and Pixhawk?
0
0
0
204
33,660,279
2015-11-11T21:35:00.000
2
0
0
0
python,youtube,youtube-api,youtube-analytics-api
33,699,139
1
false
0
0
Apologies, download URLs are provided; I'm not sure why it wasn't showing in the first place, maybe I copied and pasted the code incorrectly. It is displayed outside the retrieve_reports function. It looks like the actual problem is that I did not read the docs thoroughly: reports are not available for 48 hours! It would be nice if we had a development mode or something, pointing at a non-prod YouTube environment where we could get reports immediately. Am I missing something that could provide this?
1
0
0
When stepping thru the example python code for creating and retrieving data using the youtube-reporting-api, when I get to the retrieve_report.py code that prompts me to: Please enter the report URL to download: What is supposed to be entered here? A specific example would be most helpful.
youtube-reporting-api retrieve_reports.py "report URL to download"
0.379949
0
1
356
33,661,870
2015-11-11T23:34:00.000
0
0
1
0
ipython,jupyter
33,726,860
3
false
0
0
It may be solved if your user name contains only alphabet (ASCII) characters rather than your own language (kanji). I am also searching for a way to use IPython Notebook without changing my user name...
1
1
0
I'm using a Japanese version of windows 8.1 installed on my computer. The problem is that my windows is in japanese as such I'm not able to use ipython to open up .pynb files...Do anyone have similar issues? I will appreciate all help provided. Thank you. The error message is as shown below. [C 23:46:56.016 NotebookApp] Bad config encountered during initialization: [C 23:46:56.016 NotebookApp] Could not decode 'C:\Users\x83\x86\x81[\x83W\x81 [\x83\x93.jupyter' for unicode trait 'config_dir' of a NotebookApp instance.
Ipython notebook cannot start up (Windows 8.1)
0
0
0
2,300
33,663,145
2015-11-12T01:53:00.000
0
0
1
0
dronekit-python
33,666,610
2
false
0
0
Try pip install dronekit -UI. It ignores installed versions and any requirements that are "apparently" already in your environment. I also recommend pip uninstall dronekit; pip install dronekit as the easiest mechanism to avoid pip woes.
1
0
0
I am trying to upgrade my Dronekit 2.0.0rc.4. As of 10 minutes ago, they are up to 2.0.0rc.10. I need this version because apparently the older versions don't have a way to adjust the baud rate when using 3DR radios. I try pip install dronekit --upgrade but get a message that I am already up-to-date. Please advise. Windows 8.1 Thanks!
pip install dronekit --upgrade does not work
0
0
0
340
33,665,039
2015-11-12T05:32:00.000
1
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
67,752,486
19
false
0
0
I had the same issue under my conda virtual env in windows pc and downgrading the jedi to 0.17.2 version resolved the issue for me. conda install jedi==0.17.2
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0.010526
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
4
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
40,527,071
19
false
0
0
In my case, after running pip install pyreadline, I needed to re-execute all the lines in Jupyter before the completion worked. I kept wondering why it worked for IPython but not Jupyter.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0.04208
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
0
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
61,810,343
19
false
0
0
The best fix I've found for this issue was to create a new Environment. If you are using Anaconda simply create a new environment to fix the issue. Sure you have to reinstall some libraries but its all worth it.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
0
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
63,544,019
19
false
0
0
I had the same issue when I was using miniconda, I switched to anaconda and that seems to have solved the issue. PS. I had tried everything I could find on the net but nothing resolved it except for switching to anaconda.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
11
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
55,069,872
19
false
0
0
you can add %config IPCompleter.greedy=True in the first box of your Jupyter Notebook.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
1
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
0
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
65,898,020
19
false
0
0
As the question was asked five years ago, the answer was likely different back then... but I want to add my two cents if anyone googles this today: the answer by user Sagnik and others above worked for me. One thing to add is that if you are running anaconda, you can do what I did: simply start the anaconda-navigator software, locate the jedi package in your environment, click the little checkbox on the right of jedi under "Mark for specific version installation", and choose 0.17.2. After restarting the kernel everything worked :)
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
0
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
63,314,060
19
false
0
0
Creating a new env variable helped me to solve this problem. Use environments.txt content in .conda as path.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
6
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
65,775,964
19
false
0
0
I had a similar issue and unfortunately cannot comment on a post, so am adding an easy solution that worked for me here. I use conda, and conda list showed I was running jedi-0.18.0. I used the command conda install jedi==0.17.2. This quickly fixed the problem for my conda environment. Additional note: I usually use jupyter-lab, and was not seeing the error messages generated. By switching to jupyter notebook, I saw the following error: [IPKernelApp] ERROR | Exception in message handler: Traceback (most recent call last): File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\kernelbase.py", line 265, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\tornado\gen.py", line 762, in run value = future.result() File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\tornado\gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\kernelbase.py", line 580, in complete_request matches = yield gen.maybe_future(self.do_complete(code, cursor_pos)) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\ipkernel.py", line 356, in do_complete return self._experimental_do_complete(code, cursor_pos) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\ipkernel.py", line 381, in _experimental_do_complete completions = list(_rectify_completions(code, raw_completions)) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py", line 484, in rectify_completions completions = list(completions) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py", line 1815, in completions for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py", line 1858, in _completions matched_text, matches, matches_origin, jedi_matches = self._complete( File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py", line 2026, in _complete completions = self._jedi_matches( File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py", line 1369, in _jedi_matches interpreter = jedi.Interpreter( File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\jedi\api\__init__.py", line 725, in __init__ super().__init__(code, environment=environment, TypeError: __init__() got an unexpected keyword argument 'column' I highlighted a couple of the jedi messages, but this all reinforced it was a problem related to the version of jedi installed.
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
1
0
0
91,871
33,665,039
2015-11-12T05:32:00.000
3
0
1
1
ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10
65,650,287
19
false
0
0
The answer from Sagnik above (Dec 20, 2020) works for me on Windows 10. pip3 install jedi==0.17.2 [Sorry I'm posting this as an answer instead of comment. I have no permission to comment yet. ]
9
69
0
TAB completion works fine in iPython terminal, but not in Firefox browser. So far I had tried but failed, 1). run a command $ sudo easy_install readline, then the .egg file was wrote in /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg, but TAB completion still doesn't work in Jupyter Notebook. 2). also tried to find locate the ipython_notebook_config.py or ipython_config.py, but failed. I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython. Any help would be appreciated!
TAB completion does not work in Jupyter Notebook but fine in iPython terminal
0.031568
0
0
91,871
33,665,544
2015-11-12T06:27:00.000
0
0
0
0
python,postgresql,utf-8,sqlalchemy,unicode-string
33,685,308
2
false
0
0
There's nothing Unicode about it. \x is an escape-sequence prefix for a byte value and requires a hex value to follow. PostgreSQL supports the \x syntax as well, so it may be PostgreSQL that's dropping it. Consider escaping all backslashes, or doing a find-replace on \x, before handing the value to SQLAlchemy.
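A small illustration of the escape behaviour described above (plain Python string handling, no database involved):

```python
# "\x41" is a single-character escape: hex 41 is "A".
assert "\x41" == "A"
assert len("\x41") == 1

# A raw string keeps the literal backslash-x pair, which is what you
# want to hand to the database unchanged.
literal = r"somedata\xtrailingdata"
assert "\\x" in literal
assert len(literal) == len("somedata") + 2 + len("trailingdata")

# Escaping every backslash before passing the value on:
escaped = literal.replace("\\", "\\\\")
assert escaped == r"somedata\\xtrailingdata"
```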
2
0
0
I am using sqlAlchemy to interact with a postgres database. It is all set to work with inserting string data. The data I receive is normally utf-8 and the setup works very well. As a edge case, recently, data came up in the format somedata\xtrailingdata. SQLAlchemy is attempting to make this entry with somedata completely stripping out everything after \x. Can you please tell me if there's a way to instruct SQLAlchemy to just attempt inserting the whole thing instead of removing the unicode part. I have attempted create_engine(dbUri, convert_unicode=True, client_encoding='utf8') create_engine(dbUri, convert_unicode=False, client_encoding='utf8') create_engine(dbUri, convert_unicode=False) None worked out so far. I really would appreciate your input in inserting this data into string column. PS:Can't modify the column type of DB. This is a very edge case, not the norm.
Inserting unicode into string with sqlAlchemy
0
1
0
442
33,665,544
2015-11-12T06:27:00.000
0
0
0
0
python,postgresql,utf-8,sqlalchemy,unicode-string
34,233,985
2
false
0
0
The problem turned out to be \x00. When passed a value containing \x00, SQLAlchemy truncates it to the data preceding the \x00. We traced the problem to the C library underneath SQLAlchemy.
2
0
0
I am using sqlAlchemy to interact with a postgres database. It is all set to work with inserting string data. The data I receive is normally utf-8 and the setup works very well. As a edge case, recently, data came up in the format somedata\xtrailingdata. SQLAlchemy is attempting to make this entry with somedata completely stripping out everything after \x. Can you please tell me if there's a way to instruct SQLAlchemy to just attempt inserting the whole thing instead of removing the unicode part. I have attempted create_engine(dbUri, convert_unicode=True, client_encoding='utf8') create_engine(dbUri, convert_unicode=False, client_encoding='utf8') create_engine(dbUri, convert_unicode=False) None worked out so far. I really would appreciate your input in inserting this data into string column. PS:Can't modify the column type of DB. This is a very edge case, not the norm.
Inserting unicode into string with sqlAlchemy
0
1
0
442
33,666,572
2015-11-12T07:51:00.000
1
1
1
0
python,python-2.7,parameters,arguments,python-unittest
33,666,696
1
false
0
0
If you want to sample the current time on the system running the Python code, you can use datetime.datetime.now() (local time) or datetime.datetime.utcnow() (UTC). Then you can format it as a string using .strftime("%Y%m%d%H%M%S").
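A minimal sketch of building the yyyymmddhhmmss string (the idea of passing it via an environment variable is my own suggestion, since unittest itself takes no custom arguments):

```python
import datetime
import re

# Format the current time as yyyymmddhhmmss, as described above.
timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
assert re.fullmatch(r"\d{14}", timestamp)

# One way to hand it to `python -m unittest discover` is through an
# environment variable, e.g. from a shell:
#   TEST_RUN_TS=$(date +%Y%m%d%H%M%S) python -m unittest discover
# and inside a test read it back with os.environ.get("TEST_RUN_TS").
print(timestamp)
```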
1
1
0
I run my Python scripts with python -m unittest discover command. I'd like to passing current time yyyymmddhhmmssas parameter into execution command. How can do that?
Passing currenttime as parameter into Python command line
0.197375
0
0
1,036
33,672,506
2015-11-12T13:26:00.000
0
0
1
0
python,regex,python-3.x,repeat
33,673,417
3
false
0
0
OK. I presume that this is a homework assignment, so I'm not going to give you a complete solution. But, you really need to do a number of things. The first is to read the input file into memory. Then split it into its component words (tokenize it), probably contained in a list, suitably cleaned up to remove stray punctuation. You seem to be well on your way to doing that, but I would recommend you look at the split() and strip() methods available for strings. You need to consider whether you want the count to be case sensitive or not, and so you might want to convert each word in the list to (say) lowercase to keep this consistent. You could do this with a for loop and the string lower() method, but a list comprehension is probably better. You then need to go through the list of words and count how many times each one appears. If you check out collections.Counter you will find that it does the heavy lifting for you; alternatively, you will need to build a dictionary which has the words as keys and the counts as values. (You might also want to check out the collections.defaultdict class here as well.) Finally, you need to go through the text you've read from the file and, for each word it contains which has more than one match (i.e. the count in the dictionary or counter is > 1), mark it appropriately. Regular expressions are designed to do exactly this sort of thing, so I recommend you look at the re library. Having done that, you simply write the result to a file, which is simple enough. Lastly, with respect to your file operations (reading and writing), I would recommend you consider replacing the try ... except construct with a with ... as one.
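Not a complete solution to the assignment, but a minimal sketch of the counting-and-marking idea outlined above (the sample text and the asterisk-marking format are my own assumptions based on the question):

```python
import re
from collections import Counter

text = "the cat sat on the mat and the dog sat too"

# Tokenize: lowercase words only, punctuation stripped.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

# Wrap every word that occurs more than once in asterisks,
# leaving the rest of the text (spacing, punctuation) intact.
def mark(match):
    word = match.group(0)
    return f"*{word}*" if counts[word.lower()] > 1 else word

marked = re.sub(r"[A-Za-z']+", mark, text)
print(marked)
# → *the* cat *sat* on *the* mat and *the* dog *sat* too
```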
1
1
0
I'm a beginner to both Python and to this forum, so please excuse any vague descriptions or mistakes. I have a problem regarding reading/writing to a file. What I'm trying to do is to read a text from a file and then find the words that occur more than one time, mark them as repeated_word and then write the original text to another file but with the repeated words marked with star signs around them. I find it difficult to understand how I'm going to compare just the words (without punctuation etc) but still be able to write the words in its original context to the file. I have been recommended to use regex by some, but I don't know how to use it. Another approach is to iterate through the textstring and tokenize and normalize, sort of by going through each character, and then make some kind av object or element out of each word. I am thankful to anyone who might have ideas on how to solve this. The main problem is not how to find which words that are repeated but how to mark them and then write them to the file in their context. Some help with the coding would be much appreciated, thanks. EDIT I have updated the code with what I've come up with so far. If there is anything you would consider "bad coding", please comment on it. To explain the Whitelist class, the assignment has two parts, one of where I am supposed to mark the words and one regarding a whitelist, containing words that are "allowed repetitions", and shall therefore not be marked. I have read heaps of stuff about regex but I still can't get my head around how to use it.
Reading text from a file, then writing to another file with repetitions in text marked
0
0
0
115
33,674,736
2015-11-12T15:12:00.000
0
0
0
1
python,scripting,hosting,long-running-processes
33,695,205
1
false
0
0
I tweeted to my hosting corp and they said my long-running python data analysis script is probably okay as long as it doesn't over-use resources. I let it rip - just a single process churning away, generating a sub-megabyte data output file, but alas, they killed the process for me during the night, with a note that the CPU usage was too much. Just an FYI in case you have a bit of 'big data' analysis to do. I suppose I could chop it up and run it sporadically, but that would just be hiding the CPU usage. So I'll find an old machine to churn, albeit much more slowly, for me. :/ I suppose this is a task better suited to a dedicated hosting enviro? Big data apps not suited to low-cost shared/virtual hosting services?
1
0
0
I have a python data analysis script that runs for many hours, and while it was running on my desktop, with fans blazing I realized I could just run it on a hosting account remotely in bkgnd and let it rip. But I'm wondering - is this generally frowned upon by hosting providers? Are they assuming that all my CPU/memory usage is bursty-usage from my Apache2 instance and a flat-out process running for 12hrs will get killed by their sysop? Or do they assume I'm paying for usage, so knock yourself out? My script and its data is self-contained and is using no network or database resources. Any experience with that?
Long-Running processes and hosting providers?
0
0
0
45
33,677,303
2015-11-12T17:12:00.000
1
0
0
0
django,python-3.x
33,677,561
3
false
1
0
We use class based views (CBV's) to reduce the amount of code required to do repetitive tasks such as rendering a form or a template, listing the items from a queryset etc. Using CBV's drastically reduces the amount of code required and should be used where it can be.
2
1
0
I'm learning Django and I've finished 2 tutorials - official and amazing tutorial called Tango With Django. Though, I got everything I need to work, I have one question: In Tango with Django aren't used class-based views - only links to the official tutorial. Why didn't they include this information? When should we use class-based views and is it a good practice?
When do we need class-based views in Django?
0.066568
0
0
88
33,677,303
2015-11-12T17:12:00.000
0
0
0
0
django,python-3.x
33,679,195
3
false
1
0
It is good for use in a CRUD-style system: you have a list view, an item view, a delete view and so on, such as in a blog. CBVs help you write less code. But precisely because they do so much for you, it can sometimes be a little troublesome to make customizations or add extra logic. In that situation they are more suitable for those who are really experienced and familiar with CBVs, so that they can change them easily.
2
1
0
I'm learning Django and I've finished 2 tutorials - official and amazing tutorial called Tango With Django. Though, I got everything I need to work, I have one question: In Tango with Django aren't used class-based views - only links to the official tutorial. Why didn't they include this information? When should we use class-based views and is it a good practice?
When do we need class-based views in Django?
0
0
0
88
33,677,594
2015-11-12T17:28:00.000
0
0
0
0
python,boto,eucalyptus
33,695,098
2
false
1
0
2.38 is the right version. boto3 is something totally different and I don't have experience with it.
1
1
0
I'm writing some code to interact with an HP Helion Eucalyptus 4.2 cloud server. At the moment I'm using boto 2.38.0, but I discovered that also exists the boto3 version. Which version should I use in order to keep the code up with the times? I mean, It seems that boto3's proposal is a ground-up rewrite more focused on the "official" Amazon Web Services (AWS).
Proper version of boto for Eucalyptus cloud
0
0
1
189
33,677,932
2015-11-12T17:46:00.000
0
0
0
0
python,scikit-learn,cluster-analysis,feature-selection
33,683,680
1
false
0
0
Are you sure it was done automatically? It sounds to me as if you should be treating this as a classification problem: construct a classifier that does the same as the human did.
1
0
1
I have a clustering of data performed by a human based solely on their knowledge of the system. I also have a feature vector for each element. I have no knowledge about the meaning of the features, nor do I know what the reasoning behind the human clustering was. I have complete information about which elements belong to which cluster. I can assume that the human was not stupid and there is a way to derive the clustering from the features. Is there an intelligent way to reverse-engineer the clustering? That is, how can I select the features and the clustering algorithm that will yield the same clustering most of the time (on this data set)? So far I have tried the naive approach - going through the clustering algorithms provided by the sklearn library in python and comparing the obtained clusters to the source one. This approach does not yield good results. My next approach would be to use some linear combinations of the features, or subsets of features. Here, again, my question is if there is a more intelligent way to do this than to go through as many combinations as possible. I can't shake the feeling that this is a standard problem and I'm just missing the right term to find the solution on Google.
Reverse-engineering a clustering algorithm from the clusters
0
0
0
497
33,678,351
2015-11-12T18:09:00.000
1
1
0
0
python,unit-testing,url,selenium
36,340,048
2
true
0
0
There are 2 good approaches to doing this: 1) Using a Configuration Manager, a singleton object that stores all your settings. 2) Using a base test, a single base test class that all your tests inherit from. My preference is towards a Configuration Manager. And within that configuration manager, you can put your logic for grabbing the base URL from configuration files, system environment variables, command line params, etc...
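A minimal sketch of the Configuration Manager idea (the class name Config and the environment-variable name TEST_BASE_URL are my own choices, not part of any Selenium API):

```python
import os

class Config:
    """Singleton-style settings holder shared by all test cases."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Resolve the base URL once: environment variable first,
            # then a hard-coded default.
            cls._instance.base_url = os.environ.get(
                "TEST_BASE_URL", "http://localhost:8000"
            )
        return cls._instance

# Every test case now reads the same value instead of hard-coding it:
print(Config().base_url)
assert Config() is Config()  # one shared instance
```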
1
0
0
I have 10+ test cases at the moment and planning on creating several more. That being said is there a way that I can modify the URL variable once and that would change the variable in all my other scripts? I have this in all my of test scripts: class TestCase1(unittest.TestCase): def setUp(self): self.driver = webdriver.Firefox() self.driver.implicitly_wait(30) self.base_url = "http://URL" self.verificationErrors = [] self.accept_next_alert = True I want to be able to be able to modify self.base_url = http://URL. But I don't want to have to do that 10+ times.
Python Selenium Having to Change URL Variable On All of My Test Cases
1.2
0
1
1,381
33,681,281
2015-11-12T21:01:00.000
1
0
1
0
python,matplotlib,pip
42,054,866
2
false
0
0
You can install the package from your distro with: sudo apt-get install python3-matplotlib It will probably throw an error when you import matplotlib, but it is solved by installing the package tkinter with: sudo apt-get install python3-tk
1
1
1
I have two python compilers on my Ubuntu 14.04 VM. I have installed matplotlib as pip install matplotlib But the matplotlib cannot be used from python3.It can be used from python2.7 If I use import matplotlib.pyplot as plt inside my script test.py and run it as python3 test.py I get the error ImportError: No module named 'matplotlib' How can this be fixed.
matplotlib can't be used from python3
0.099668
0
0
1,128
33,685,570
2015-11-13T03:44:00.000
0
0
1
0
python,python-2.7
33,685,613
1
false
0
0
Code comments say this: Equivalent to x**y (with two arguments) or x**y % z (with three arguments) Some types, such as ints, are able to use a more efficient algorithm when invoked using the three argument form.
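A quick illustration of why the three-argument form matters for ints: pow(x, y, z) does modular exponentiation without ever building the huge intermediate x**y.

```python
# Three-argument pow computes (x ** y) % z efficiently.
assert pow(2, 10, 1000) == (2 ** 10) % 1000 == 24

# With a big exponent this finishes instantly, whereas (7 ** 10**6) % 13
# would first build a number with roughly 845,000 digits.
# By Fermat's little theorem, 7**12 ≡ 1 (mod 13), so the exponent can
# be reduced mod 12:
assert pow(7, 10**6, 13) == pow(7, 10**6 % 12, 13)
```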
1
1
0
Is there a reason why Python's built-in pow allows 3 arguments, but math.pow allows only 2? Of course we could lazily say, "because that's how it's designed". But does anyone know the reason behind this design decision? Edit: To clarify, I understand what the difference is, but I'm wondering why we need both. Why not just have the the 3 argument built-in pow.
Number of arguments passed to Python's pow
0
0
0
306
33,686,847
2015-11-13T06:04:00.000
0
0
0
0
python,pcap,packet-capture,sniffing
33,688,615
2
false
0
0
Generally the router will only send you the packets which are addressed to you. This includes packets sent to you directly and broadcast messages. The messages sent to other devices in the network do not reach your machine at all, therefore it's impossible to capture them. If you really have to monitor the whole network traffic you need to get somewhere in between the router and the network. Or you could try a man-in-the-middle attack on your wifi.
2
1
0
It's the first time I use monitor mode in pcap. I think I start the monitor mode successfully since I can see that there is an "eye" symbol on wifi. However, I still cannot capture packets not sent to me :( I use handle but not sure how it works and how can I capture those packets not sent to me.
How to use pcap in python to capture other's traffic in same wifi?
0
0
0
1,544
33,686,847
2015-11-13T06:04:00.000
0
0
0
0
python,pcap,packet-capture,sniffing
33,859,909
2
false
0
0
PCAP will only display traffic sent to you. This is true of any sniffer; the software needs to be able to see the traffic. In order to see WiFi traffic that is not sent to you, you will need a WiFi adapter that supports monitor mode. Approximately 90% of WiFi adapters, or more, do not support monitor mode: the hardware has to be capable of entering RFMON mode and the driver for the adapter needs to support monitor mode. If monitor mode is set up correctly, you will see 802.11 management traffic (beacons, probes, etc). I don't think monitor mode will show you actual data traffic, but I am not sure and maybe it's only my adapter that doesn't show it. If you can see data where the source and dest MAC address are not the MAC address of your adapter and not FF:FF:FF:FF:FF:FF (broadcast) then you are using monitor mode. I don't know which OS you are using and I don't know what handle is (a program name?) so I can't help there. I would suggest that you set the adapter into monitor mode and then verify by running wireshark on the device and looking at the traffic. If that works you can go back to PCAP and debug from there. If you are using Linux and your adapter supports monitor mode, you can enable it by running the following commands as root: ip link set wlan0 down; iwconfig wlan0 mode monitor; ip link set wlan0 promisc on; ip link set wlan0 up. Note that if an adapter is in monitor mode, it cannot be connected to a local WiFi network. The adapter would need to be in managed mode, where the WiFi AP manages what type of data an adapter receives (frequency, channel, etc). If you are connected to a local WLAN via the adapter then you are not in monitor mode. Oh! I think I remember hearing somewhere that WinPcap doesn't support monitor mode. I don't use Windows so you may want to verify that.
2
1
0
This is the first time I have used monitor mode with pcap. I think I started monitor mode successfully, since I can see an "eye" symbol on the wifi icon. However, I still cannot capture packets not sent to me. I use a pcap handle but am not sure how it works, or how I can capture packets that are not addressed to me.
How to use pcap in python to capture other's traffic in same wifi?
0
0
0
1,544
33,688,875
2015-11-13T08:52:00.000
10
0
1
1
python,macos,pip
41,799,420
1
false
0
0
While trying to install Scrapy, I needed to install the cryptography package on Mac OS El Capitan. As explained in the cryptography installation docs: env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography
1
1
0
When executing pip3 install cryptography, pip3 gives an error: fatal error: 'openssl/aes.h' file not found #include <openssl/aes.h> 1 error generated. error: command '/usr/bin/clang' failed with exit status 1 I checked with brew info openssl and got the answer: Generally there are no consequences of this for you. If you build your own software and it requires this formula, you'll need to add to your build variables: LDFLAGS: -L/usr/local/opt/openssl/lib CPPFLAGS: -I/usr/local/opt/openssl/include The problem now is: how can I tell pip to add these paths to the corresponding build variables when it uses clang to compile the C source files?
How to install cryptography for python3 in Mac OS X?
1
0
0
2,064
33,691,392
2015-11-13T11:10:00.000
-5
0
1
0
python,queue,python-multiprocessing,lifo
33,691,733
3
false
0
0
The multiprocessing.Queue is not a data type; it is a means of communication between two processes, and it is not comparable to a stack. That's why there is no API to pop the last item off the queue. I think what you have in mind is to make some messages have a higher priority than others: when they are sent to the listening process, you want to dequeue them as soon as possible, bypassing the existing messages in the queue. You can achieve this effect by creating two multiprocessing.Queue objects: one for the normal data payload and another for priority messages. Then you do not need to worry about getting the last item; simply segregate the two types of messages into the two queues.
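A sketch of the two-queue idea. For a deterministic, single-process illustration it uses queue.Queue; multiprocessing.Queue exposes the same put/get_nowait interface across processes, so the pattern carries over:

```python
from queue import Queue, Empty

# Two channels, as suggested above: urgent messages bypass the backlog.
priority_q = Queue()
normal_q = Queue()

def get_next_message():
    """Drain the priority queue first; fall back to the normal queue."""
    try:
        return priority_q.get_nowait()
    except Empty:
        return normal_q.get_nowait()

normal_q.put("bulk-1")
normal_q.put("bulk-2")
priority_q.put("URGENT")

# The urgent message is delivered before the bulk items that arrived earlier.
print(get_next_message())  # URGENT
```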
1
11
0
I understand the difference between a queue and stack. But if I spawn multiple processes and send messages between them put in multiprocessing.Queue how do I access the latest element put in the queue first?
How to implement LIFO for multiprocessing.Queue in python?
-1
0
0
3,099
33,691,635
2015-11-13T11:23:00.000
0
0
0
0
python,python-3.x,sqlite
48,045,704
3
false
0
0
Any Cygwin users who find themselves here: run the Cygwin installation .exe, choose "Categories", then "Database", and one item says "sqlite3 client to access sqlite3 databases", or words to that effect.
1
1
0
On Ubuntu, both python2 and python3 can import sqlite3, but I cannot type sqlite3 at the command prompt to open it; it says sqlite3 is not installed. If I want to use it outside of Python, should I install sqlite3 separately using apt-get, or can I find it in some directory of the Python installation, add it to PATH, and use it directly from the command line? I also installed Python 3.5 on a Mac that shipped with Python 2, and there I can use sqlite3 on the command line by typing sqlite3; it is version 3.8.10.2, seemingly installed by Python 2. But Python 3.5 installed a different version of sqlite3; where can I find it?
How to open sqlite3 installed by Python 3?
0
1
0
16,159
33,692,949
2015-11-13T12:37:00.000
3
0
0
0
python,c,sockets
33,693,102
2
false
0
0
Try to connect to that port using socket.connect(); if the connection is not successful, show a "Port not opened" message.
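A hedged Python 3 sketch of that check; the host and port below are placeholders for whatever you want to probe:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a TCP connection; connect_ex returns 0 only if the port accepts."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Probe a port on localhost and report the result.
if port_is_open("127.0.0.1", 6053):
    print("Port open")
else:
    print("Port not opened")
```

connect_ex is used instead of connect so a refused connection returns an error code rather than raising an exception.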
2
4
0
If a socket program runs on a port (say 6053) and the corresponding rule is not added to the firewall, the functions recv, read, and recvfrom block. How do we check this in C or Python and report a "Port not opened" error on Linux machines?
How to know whether a port is open/close in socket programming?
0.291313
0
1
1,294
33,692,949
2015-11-13T12:37:00.000
1
0
0
0
python,c,sockets
33,704,573
2
false
0
0
Tools like nmap can help in determining whether a particular port is open or closed. TCP: nmap uses techniques like the TCP SYN scan or TCP connect scan, where the server replies with an ACK-RST packet to a SYN request in case of a closed port. Notice that this is determined during the 3-way handshake (connection establishment) itself. UDP: nmap also offers a UDP scan, where an ICMP 'Port Unreachable' packet is returned if a UDP packet arrives on a closed UDP port (this also depends on the OS network stack). Unlike TCP, UDP is not a connection-based protocol, and ICMP is also connectionless, so you might need to send a significant number of UDP packets over a short interval and evaluate the responses, or use some similar logic. You can implement a comparable technique to determine whether a particular port is open or closed and flash an appropriate message for the user.
2
4
0
If a socket program runs on a port (say 6053) and the corresponding rule is not added to the firewall, the functions recv, read, and recvfrom block. How do we check this in C or Python and report a "Port not opened" error on Linux machines?
How to know whether a port is open/close in socket programming?
0.099668
0
1
1,294
33,696,164
2015-11-13T15:28:00.000
0
0
1
0
ipython-notebook,jupyter
33,870,576
1
false
0
0
It works like any other checkpoint system. The files are always safe to delete, but you never know when you will need them, which is why they are created. An interesting follow-up question is how to disable the whole checkpoint system.
1
1
0
IPython Notebook has been creating lots of temporary folders, including files ending with .ipynb[some random chars] and folders with checkpoints. I guess some of them were created when my computer crashed, or all sorts of other things happened. I would have assumed those files would clean themselves up once everything was saved normally, but they don't; they keep sitting there, trashing my workspace. Is it safe to delete those files and folders once I've saved my original .ipynb file? Thanks
IPython Notebook - Is it safe to delete temporary files and folders?
0
0
0
2,182
33,696,300
2015-11-13T15:34:00.000
2
0
1
0
python,python-2.7,pdb
33,710,178
1
false
0
0
The direct way, of course, is to pass the line as an argument to l. But without having to go through the trouble of finding the current line and typing it, the non-optimal way I usually do it is to return to the same line by navigating up+down the call stack, then listing again. The sequence of commands for that is: u (up), d (down), l.
1
1
0
when using pdb to debug a python script, repeating l command will continue listing the source code right after the previous listing. l(ist) [first[, last]] List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count. How can I repeatedly show the current line (i.e. the line where the program running is paused), instead of continuing after the previous listing? Thanks.
pdb: how to show the current line, instead of continuing after the previous listing?
0.379949
0
0
849
33,697,893
2015-11-13T16:54:00.000
1
0
1
0
python-3.x,autocomplete,sublimetext3
36,128,676
1
true
0
0
In case you are still looking for the answer: I had a similar problem. I had both the SublimeJEDI autocompletion and Anaconda installed. The flashing behavior is the result of two separate autocompletes fighting for the same space. Turning off SublimeJEDI solved this for me; I couldn't find a way to turn off Anaconda's.
1
0
0
I'm new to Sublime Text and Python 3, so I don't really know how to turn on Sublime autocomplete. I installed Anaconda from Package Control, but I don't know how to use it. Some autocomplete shows up, but I don't think it's Anaconda's. That autocomplete keeps popping up, then disappearing; I can't read what it says and it hurts my eyes. How can I properly set up the autocomplete?
SublimeText 3 Anaconda autocomplete bug
1.2
0
0
374
33,701,621
2015-11-13T20:56:00.000
1
0
0
0
python,audio,pygame
33,701,749
1
false
0
1
Check out the pygame.draw methods. You can probably take the audio values and map them to one of the draw options - like draw.arc or draw.line. You will have to map the signal output to values that remain within the X and Y max and min of the viewport. Processing can do the same thing, but is a bit easier to implement if you are interested in learning the scripting language. It has methods specifically for doing the mapping for you and you can do some pretty extreme visuals without a lot of code.
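A sketch of the mapping step mentioned above: converting raw sample values to vertical pixel positions that stay within the viewport. The 16-bit signed range and the viewport height are assumptions for illustration; the result would be the y-coordinate fed to something like pygame.draw.line:

```python
def sample_to_y(sample, height, sample_min=-32768, sample_max=32767):
    """Map a signed 16-bit audio sample to a vertical pixel position.

    Silence (0) lands near the middle of the viewport; the extremes land
    on the top and bottom edges.
    """
    span = sample_max - sample_min
    return int((sample - sample_min) / span * (height - 1))

height = 200
print(sample_to_y(-32768, height))  # 0   (top edge)
print(sample_to_y(0, height))       # 99  (roughly the middle)
print(sample_to_y(32767, height))   # 199 (bottom edge)
```

To draw a scrolling wave you would apply this to each sample in a window of the array from pygame.sndarray and connect consecutive (x, y) points with lines.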
1
2
0
So I am attempting to make a little audio player using Pygame. I wanted to add a little audio visualizer similar to in Windows Media Player. I was thinking of starting with an audio wave that scrolls across the screen. But I'm not sure where to start. Right now I'm just using pygame.mixer to start, stop, and pause the music. I think I would have to use pygame.sndarray and get some samples but I don't know what to do from there. What can I do to turn those samples into a visual audio wave?
Audio visualization
0.197375
0
0
753
33,701,947
2015-11-13T21:19:00.000
2
0
1
0
python,windows
33,702,451
2
false
0
0
Ctrl-C generates a KeyboardInterrupt exception, so unless you blindly catch all exceptions and ignore them, it should do just that. (If you do catch all exceptions, you can add a special case for KeyboardInterrupt.) Interestingly, this does not work for me because I am using Cygwin. You can also force-terminate the program using Task Manager. However, if you have more than one Python process running, this can be tricky. To handle that, have your process print its PID in the first lines of the log file (you do have a log file, right?): print("started process", os.getpid()) To see the process: tasklist /FI "PID eq 1234" To kill the process: taskkill /PID 1234 /F Advanced process interruption: have your program wait for a command on a socket.
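A toy sketch of the PID-logging and Ctrl-C handling suggested above. The steps parameter exists only so the demo terminates; passing None gives the original infinite while loop:

```python
import os

def run_forever(steps=None):
    """Main loop that exits cleanly on Ctrl-C (KeyboardInterrupt)."""
    print("started process", os.getpid())  # lets taskkill /PID target this process
    done = 0
    try:
        while steps is None or done < steps:
            done += 1  # real work would go here
    except KeyboardInterrupt:
        print("interrupted, shutting down cleanly")
    return done

print(run_forever(steps=3))  # 3
```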
1
0
0
I am running Python 3 code in a windows Command line prompt. The program has an infinite loop that I use. (While (1)) Sounds like bad design but it's meant to be like this. Is there a way to force close the program without having to close the command prompt. In the terminal, Ctrl + C often works for this.
How to force terminate a Python 3 program in Windows?
0.197375
0
0
1,668
33,703,866
2015-11-14T00:37:00.000
0
0
0
0
python,django
33,704,110
1
false
1
0
I think I might have managed to solve the problem. The command python manage.py sqlmigrate app_name 0001 produces the SQL statements required for the table creation, so I copied and pasted the output into the PostgreSQL console and got the tables created. It seems to work for now, but I am not sure if there will be repercussions later.
1
1
0
I am trying to reinstall one of my apps on my project site. These are the steps that I have followed to do so: Removing the name of the installed app from settings.py Manually deleting the app folder from the project folder Manually removing the data tables from PostgreSQL Copying the app folder back into the project folder, making sure that all files except __init__.py are removed Run python manage.py sqlmigrate app_name 0001 Run python manage.py makemigrations app_name Run python manage.py migrate app_name Run python manage.py makemigrations Run python manage.py migrate However, after all these steps the message I am getting is that there are "no changes detected", and the data tables have not been recreated in the database, PostgreSQL. Am I missing some additional steps?
Reinstalling Django App - Data tables not re-created
0
1
0
205
33,704,183
2015-11-14T01:29:00.000
1
1
0
0
mysql,python-2.7,phpmyadmin,raspberry-pi2
33,705,227
2
false
0
0
Well, from what I understand, you'd like to save the sensor data arriving in your Raspberry Pi to a database and access it from another machine. What I suggest is, install a mysql db instance and phpmyadmin in your Raspberry Pi and you can access phpmyadmin from another machine in the network by using the RPi's ip address. Hope this is what you wanted to do.
2
0
0
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
Push sensor data from raspberry pi to local host phpmyadmin database
0.099668
1
0
1,407
33,704,183
2015-11-14T01:29:00.000
0
1
0
0
mysql,python-2.7,phpmyadmin,raspberry-pi2
33,716,584
2
false
0
0
Sure, as long as they're on the same network and you have granted proper permission, all you have to do is use the proper hostname or IP address of the MySQL server (what you call the local machine). In whatever utility or custom script you have that writes data, use the networked IP address instead of 127.0.0.1 or localhost for the database host. Depending on how you've installed MySQL, you may not have a user that listens for non-local connections, incoming MySQL connections may be blocked at the firewall, or your MySQL server may not listen for incoming network connections. You've asked about using phpMyAdmin from the Pi, accessing your other computer, which doesn't seem to make much sense to me (I'd think you'd want to run phpMyAdmin on your desktop computer, not a Pi), but if you've got a GUI and compatible web browser running on the Pi then you'd just have phpMyAdmin and the webserver run on the same desktop computer that has MySQL and access that hostname and folder from the Pi (such as http://192.0.2.15/phpmyadmin). If you're planning to make the MySQL server itself public-facing, you should really re-think that decision unless you know why that's a bad idea and how to properly secure it (but that may not be a concern; for instance I have one at home that is available on the local network, but my router blocks any incoming connections from external sources).
2
0
0
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
Push sensor data from raspberry pi to local host phpmyadmin database
0
1
0
1,407
33,706,088
2015-11-14T07:16:00.000
2
0
1
0
python,matplotlib,ipython-notebook,jupyter-notebook,timeit
33,706,246
1
true
0
0
You can: export the notebook as a *.py file; create a new notebook; copy the whole content of the *.py file into one cell of this notebook; then time this cell with %%timeit (note the double %) by adding that command on the first line. You might need to edit the cell content, as magic % commands are commented out in the export. You also likely don't want to measure the time for invoking things like %matplotlib inline, so moving those magic commands into a separate cell seems sensible.
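If you'd rather stay outside the notebook entirely, the standard-library timeit module gives the same kind of measurement as %%timeit from plain Python; the setup and statement strings below are just stand-ins for your notebook code:

```python
import timeit

# Time a small snippet the way %%timeit would, but programmatically.
setup = "data = list(range(1000))"
stmt = "sum(data)"

seconds = timeit.timeit(stmt, setup=setup, number=10000)
print("10000 runs took %.4f s total" % seconds)
```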
1
3
0
I am aware of using the magic command %timeit in an IPython notebook to time individual functions. However, I currently need to report the time required to execute the calculations of an entire IPython notebook. How can I do this? One option would be to save the IPython notebook as a Python file with extension .py and then time the entire script from the command line. However, I am dealing with several calls to matplotlib functions and pylab, and this may take so long that there are runtime errors. How does one do this?
Using the %timeit command to time an entire IPython notebook?
1.2
0
0
4,089
33,706,378
2015-11-14T08:02:00.000
25
0
1
0
python,igraph
33,711,279
3
true
0
0
g.vcount() is a dedicated function in igraph that returns the number of vertices. Similarly, g.ecount() returns the number of edges, and it is way faster than len(g.get_edgelist()) as it does not have to construct the full edge list in advance.
1
12
0
I'm writing a function that receives a graph as input. The very first thing I need to do is determine the order of the graph (that is, the number of vertices). I could use g.summary() (which returns a string that includes the number of vertices), but then I'd have to parse the string to get at the number, and that's just nasty. To get the number of edges I'm using len(g.get_edgelist()), which works. But there is no g.get_vertexlist(), so I can't use the same method. Surely there is an easy way to do this that doesn't involve parsing strings.
How do I find the number of vertices in a graph created by iGraph in python?
1.2
0
1
18,012
33,706,579
2015-11-14T08:31:00.000
0
1
1
1
python,python-3.x,raspberry-pi,raspbian
33,706,723
3
false
0
0
Yes. There are many applications and scripts written for Python 2, and they usually come pre-installed with your Linux distribution. Those applications expect the python binary to be version 2, and they will most likely break if you force them to run on Python 3.
1
1
0
I am on a Raspberry Pi, and by default the following symbolic links were created in /usr/bin: /usr/bin/python -> /usr/bin/python2.7 /usr/bin/python2 -> /usr/bin/python2.7 /usr/bin/python3 -> /usr/bin/python3.2 Most of my work is done in Python 3, so I decided to recreate /usr/bin/python to point to /usr/bin/python3.2 instead. Does this have any negative consequences when I install packages or run pip? Are there utilities that depend on the alias python in the search path and end up doing the wrong things?
Is the symbolic link python important?
0
0
0
392
33,707,614
2015-11-14T10:51:00.000
0
0
1
1
python,ubuntu,pycharm
33,708,804
2
false
0
0
Delete the old PyCharm directory and replace it with the new one. Now run pycharm.sh from the terminal to start PyCharm. Once it is open, go to Tools > Create Desktop Entry. Once this is done, close the current instance, and the new icon should appear in the launcher.
1
2
0
Months ago, I installed PyCharm 4.5 on Ubuntu (by running /bin/pycharm.sh), and it works well. Now I found that version 5.0 has been released. I downloaded the .tar.gz file, unzipped it, and wanted to install it the same way. The problem is that, although it runs well, in the launcher the PyCharm icon has become a big "?". Also, the terminal gives some warnings: log4j:warn no appenders could be found for logger (io.netty.util.internal.logging.internalloggerfactory). log4j:warn please initialize the log4j system properly. What does that mean? And is this the right way to install PyCharm?
How to install pycharm 5.0 in ubuntu
0
0
0
1,483