Dataset schema (one row per column; for string-typed columns, Min/Max refer to string length):

Column                              Type      Min        Max
Q_Id                                int64     337        49.3M
CreationDate                        string    len 23     len 23
Users Score                         int64     -42        1.15k
Other                               int64     0          1
Python Basics and Environment       int64     0          1
System Administration and DevOps    int64     0          1
Tags                                string    len 6      len 105
A_Id                                int64     518        72.5M
AnswerCount                         int64     1          64
is_accepted                         bool      2 classes
Web Development                     int64     0          1
GUI and Desktop Applications        int64     0          1
Answer                              string    len 6      len 11.6k
Available Count                     int64     1          31
Q_Score                             int64     0          6.79k
Data Science and Machine Learning   int64     0          1
Question                            string    len 15     len 29k
Title                               string    len 11     len 150
Score                               float64   -1         1.2
Database and SQL                    int64     0          1
Networking and APIs                 int64     0          1
ViewCount                           int64     8          6.81M
30,814,133
2015-06-13T01:03:00.000
0
0
1
0
python,contour
30,814,337
1
false
0
0
You can save each layer into a PNG file with a transparent background and overlay them in Photoshop, Gimp or ImageMagick.
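The answer above reaches for external image editors; for completeness, here is a minimal matplotlib-only sketch of the overlay itself (the arrays and colormap names are illustrative, not from the original answer):

```python
# Two contour sets with different colormaps drawn on the same axes.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z1 = np.exp(-(x**2 + y**2))                  # illustrative data
z2 = np.exp(-((x - 1)**2 + (y - 1)**2))

fig, ax = plt.subplots()
ax.contourf(x, y, z1, cmap='Reds', alpha=0.5)    # first layer
ax.contourf(x, y, z2, cmap='Blues', alpha=0.5)   # second layer, same x/y axes
plt.show()
```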
1
0
1
Using Python, how do I create two (or more) color contour plots, each with its own color map, but overlaid into a single image with the same x and y axes?
How to make overlaid contour plots with python?
0
0
0
67
30,814,615
2015-06-13T02:53:00.000
2
0
1
0
python,c++,c,gdb
30,814,798
1
false
0
0
parse_and_eval does not always do exactly what you want. Unlike some other operations, it is exposed to the user's current language setting, and sometimes to other things like set print object. And, if you already have a gdb.Value from some other computation, using parse_and_eval means that you must convert it to a string first -- which can be a pain if pretty-printers are involved; for safety you have to convert it to a long and then to a string. It is true that dereference is not needed when accessing a member via a pointer. This is a gdb quirk that got exposed via Value, perhaps for the better. While I still think it is best to use the Value API, it isn't always possible. For example, there is no way to assign this way, and there are some other holes as well. That said, you can write your code however you like.
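For reference, a short sketch of the two styles the answer contrasts; it must run inside gdb's embedded Python, and the variable name ptr is hypothetical:

```python
# Runs inside gdb (e.g. via the `python` command); `ptr` is a made-up name.
import gdb

val = gdb.parse_and_eval('ptr')       # string-based: subject to language/print settings

# Value API equivalents, independent of the user's current settings:
int_ptr = gdb.lookup_type('int').pointer()
as_int_ptr = val.cast(int_ptr)        # like gdb.parse_and_eval('(int*)ptr')
pointee = as_int_ptr.dereference()    # like gdb.parse_and_eval('*(int*)ptr')
```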
1
1
0
I am not able to understand the utility of the two GDB-Python APIs, Value.cast() and Value.dereference(). I feel gdb.parse_and_eval() can do exactly the same thing that these two do (and more). For example, I can achieve Value.cast("int*") with gdb.parse_and_eval('(int*)Value') and I can achieve Value.dereference() with gdb.parse_and_eval('*Value'). Specifically, I have seen people use .dereference() to dereference a struct pointer like some_struct_ptr.dereference()['some_var']. Even here I feel .dereference() is not needed at all; some_struct_ptr['some_var'] produces the exact same output. Am I missing something?
GDB Python APIs: Doesn't .parse_and_eval() make .cast() and .dereference() redundant?
0.379949
0
0
335
30,815,480
2015-06-13T05:36:00.000
4
0
1
0
python,cryptography
30,815,622
1
true
0
0
The answer is yes for a given size of integer. By default, Python integers that grow large become long and can have effectively unbounded length, so the comparison time grows with the size. If you restrict the size of the integer to a ctypes.c_uint64 or ctypes.c_uint32, this will not be the case. Note that comparison with 0 is a special case and is normally much faster, because many CPUs have a dedicated hardware flag for 0; so if you are using or allowing seeds or tokens with a value of 0, you are asking for trouble.
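A hedged sketch of one way to sidestep integer-comparison timing entirely, as the question itself hints: compare fixed-width string encodings with hmac.compare_digest (available since Python 2.7.7/3.3). The 32-digit width is an arbitrary assumption for non-negative tokens:

```python
import hmac

def tokens_equal(user_token, stored_token, width=32):
    # Zero-pad both ints to the same width so length itself leaks nothing;
    # assumes non-negative tokens that fit in `width` digits.
    a = '{:0{w}d}'.format(user_token, w=width)
    b = '{:0{w}d}'.format(stored_token, w=width)
    return hmac.compare_digest(a, b)
```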
1
4
0
Is integer comparison in Python constant time? Can I use it to compare a user-provided int token with a server-stored int for crypto in the way I would compare strings with constant_time_compare from django.utils.crypto, i.e. without suffering timing attacks? Alternatively, is it more secure to convert to a string and then use the above function?
Is integer comparison in Python constant time?
1.2
0
0
730
30,816,730
2015-06-13T08:40:00.000
0
0
0
1
image,google-app-engine,google-app-engine-python,serving
30,824,576
1
false
1
0
Try specifying size=0 in the images.get_serving_url method call, e.g. images.get_serving_url(blob_key, size=0)
1
0
0
I have created a Google App Engine project where it's possible to upload photos. The uploading part is working fine and all the photos are uploaded in proper size. But when I call images.get_serving_url, it returns a serving_url on lh3.googleusercontent.com, while according to the GoogleAppEngine documentation it should return a serving_url something like lh3.gghpt.com. Also, the photos at that serving_url are 4-6 times smaller than the uploaded ones, yet when I view them in the GoogleAppEngine console, all those photos have the same size as the uploaded ones. I don't know why GoogleAppEngine is not returning the actual sized images.
Google App Engine : Wrong Serving Url
0
0
0
50
30,818,792
2015-06-13T12:41:00.000
0
0
0
0
python,django,read-write
30,820,665
1
false
1
0
Django Groups and Permissions apply to the model itself. So if you want to give a user access to a specific Document entry, you need to change the schema of your Document model: add users_who_can_read=ManyToMany(Users) and users_who_can_write=ManyToMany(Users), and in your views.py, when a user tries to load a page, check whether he is in users_who_can_read. That should solve your problem without much trouble.
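A minimal sketch of the schema the answer proposes (field and related_name choices are illustrative):

```python
from django.conf import settings
from django.db import models

class Document(models.Model):
    text = models.TextField()
    users_who_can_read = models.ManyToManyField(
        settings.AUTH_USER_MODEL, related_name='readable_documents')
    users_who_can_write = models.ManyToManyField(
        settings.AUTH_USER_MODEL, related_name='writable_documents')

# In the view, gate access per document:
def can_read(user, document):
    return document.users_who_can_read.filter(pk=user.pk).exists()
```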
1
0
0
I have a 'Document' model which has a many-to-many relationship with the User model. There is a separate web page in my project which displays the Document instance in a text editor. Now suppose the user who created a document wants to invite other users to this document, but wants to give read-only permission to some and read-write permission to others. How do I implement this permission functionality in Django? How do groups and other permission frameworks work in Django?
How to assign permissions to a group in Django 1.8?
0
0
0
323
30,818,814
2015-06-13T12:44:00.000
0
0
1
0
python
30,818,989
2
false
0
0
Not built-in. remove(elem): remove element elem from the set; raises KeyError if elem is not contained in the set. Maybe catch the exception in your own function, and return your element b when the exception is caught?
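A tiny wrapper in the spirit of the answer, giving the asked-for set.discard(a, b) behaviour (the function name is made up):

```python
def discard_or_default(s, elem, default=None):
    """Remove elem from set s; return default if elem was not present."""
    try:
        s.remove(elem)          # raises KeyError when elem is absent
    except KeyError:
        return default
```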
1
0
0
Is there a set method that deletes an element and takes a second parameter to return if there is no element matching the one you asked it to delete? It would be something like set.discard(a, b), where a is the element you want to delete and b is the value that gets returned if a is not found.
Is there a python set deleting method that returns a value if the value you want to delete is not in the set?
0
0
0
456
30,820,896
2015-06-13T16:23:00.000
1
0
0
0
python
30,821,067
1
true
0
0
Step into the function using pdb. Put pdb.set_trace() somewhere before you call the train method, something like: import pdb; pdb.set_trace() classifier = NaiveBayesClassifier.train(training_set) When you debug, stop at the line where you call the train method and press s to step into the function. This will take you inside train. From there you can debug normally.
1
0
1
I am calling a function like this: classifier = NaiveBayesClassifier.train(training_set) and I would like to debug the code inside the train() function. The problem is that if I add print statements or pdb calls nothing changes. I am importing this: from nltk.classify.naivebayes import NaiveBayesClassifier but even if I change something in nltk/classify/naivebayes.py nothing happens. I can also delete all the content of this file and I still have a working output. So I suppose that the function I am calling is somewhere else, but I cannot find it. Is there a way to check where my function call is actually going? I am quite confused.
How to find out what function I am calling
1.2
0
0
50
30,821,218
2015-06-13T16:55:00.000
1
1
1
0
python,git,github
30,821,244
4
false
0
0
Consider storing this kind of data in a config file that isn't tracked by git.
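One common shape of that suggestion, sketched with an illustrative config.json (add the file to .gitignore so it is never committed):

```python
import json

# config.json lives next to the script and is listed in .gitignore.
with open('config.json') as f:
    API_KEY = json.load(f)['api_key']
```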
1
0
0
I'm creating a code to demonstrate how to consume a REST service in Python, but I don't want my API keys to be visible to people when I push my changes to GitHub. How can I hide such information?
How can I hide sensitive data before committing to GitHub (or any other Git repo)?
0.049958
0
1
2,480
30,821,848
2015-06-13T17:59:00.000
0
0
1
0
python,spectral
71,106,530
2
false
0
0
You can run "pip install spectral" in a terminal to install the spectral package. You can refer to https://pypi.org/project/spectral/.
1
0
0
I have already installed all the SPy dependencies and am trying to import the spectral module in ipython version 3.0, and I am getting this error: no module named spectral. What could possibly be wrong?
import error: no module named spectral
0
0
0
3,186
30,825,162
2015-06-14T01:25:00.000
0
0
1
1
python,shell,command-line
30,831,720
1
true
0
0
Put the python files in a folder and then put this folder into an installation directory (for example /usr/local/foldername). In the script, change directory to the folder containing the files (for example via os.path.dirname(os.path.realpath(sys.argv[0]))) so dependencies can be imported from there, or use absolute paths. Then make a symbolic link to the executable file and put it in /usr/local/bin.
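A sketch of the path trick mentioned in the answer: because os.path.realpath resolves the /usr/local/bin symlink, the script can still import its sibling module from the real install directory (module names follow the question):

```python
import os
import sys

# Resolve the symlink to find the real install directory.
install_dir = os.path.dirname(os.path.realpath(sys.argv[0]))
sys.path.insert(0, install_dir)   # make sibling modules importable

import dependency                 # dependency.py sits next to my_program.py
```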
1
0
0
I know that if my program were just one python script file, I could just start it with the shebang and put it in /usr/local/bin so that I could invoke it at any time from the command prompt. However, what if my program were multiple files, but I only want one to be invokable from the command line? For example, if I've got my_program.py and dependency.py, and my_program needs dependency, but I don't want dependency to be invokable? As I understand it, if I dump both in /usr/local/bin, then invoking either of their names will attempt to execute them... I only want my_program to be visible, but it also needs to be in the same dir as the dependency module. I know I could just copy/paste them into one single file but that feels wrong...
How can I put a python program with multiple files in /usr/local/bin?
1.2
0
0
494
30,826,123
2015-06-14T05:07:00.000
6
0
0
0
python,scapy,dpkt
37,963,958
2
false
0
0
Scapy is more capable than dpkt. You can create, sniff, modify and send a packet using scapy, while dpkt can only analyse packets and create them; to send them, you need raw sockets. As you mentioned, Scapy can sniff live. It can sniff from a network as well as read a .pcap file, using the rdpcap method or the offline parameter of the sniff method. Scapy is generally used to create packet analysers and injectors; its modules can be used to build an application for a specific purpose. There might be many other differences as well.
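A brief illustration of the scapy features the answer mentions (the file name is illustrative):

```python
from scapy.all import sniff, rdpcap

live = sniff(count=10)                    # live capture of 10 packets
packets = rdpcap('capture.pcap')          # read an existing .pcap file
replayed = sniff(offline='capture.pcap')  # or sniff in offline mode
```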
1
6
0
I am trying to analyse packets using Python's Scapy from the beginning. Upon recent searching, I found there is another module in python named dpkt. With this module I can parse the layers of a packet, create packets, read a .pcap file and write into a .pcap file. The differences I found between them are: there is no live packet sniffer in dpkt, and some of the fields need to be unpacked using struct.unpack in dpkt. Are there any other differences I am missing?
Python Scapy vs dpkt
1
0
0
9,574
30,827,316
2015-06-14T08:25:00.000
12
0
1
0
python
30,827,361
2
true
0
0
os.path.relpath(path1, path2) # that's it
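Usage sketch, including the case the asker worries about, where pathB is not contained in pathA (relpath then inserts '..' components):

```python
import os

print(os.path.relpath('/a/dir1/dir2', '/a'))    # 'dir1/dir2'
print(os.path.relpath('/a/other', '/a/dir1'))   # '../other'
```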
1
8
0
I would like to find the relative path between two directories on my system. Example: If I have pathA == <pathA> and pathB == <pathA>/dir1/dir2, the relative path between them will be dir1/dir2. How could I find it in python? Is there a tool I could use? If pathB is contained in pathA, I could just do pathB.replace(pathA, '') to get this relative path, but what if pathB isn't contained in pathA?
How to find the relative path between two directories?
1.2
0
0
3,409
30,833,264
2015-06-14T18:54:00.000
-1
0
0
0
python,windows,command-line,pyqt4,qstring
30,833,435
2
false
0
1
I think this problem happens because the command line parameters are actually byte arrays, not strings; strings are encoded in Unicode, but byte arrays are not. Calling str(cmd) returns the content of cmd as a string.
1
1
0
Using PyQt 4, Python 2.7, Windows 7 on an x64 machine. I have been developing a bit of code using a Python console with PyQt4, passing strings from QLineEdit() widgets to OS commands with no issues using os.system(cmd). But when I tried running from the command line in Windows I got the following error: TypeError: sequence item 0: expected string, QString found. I got around this by converting the offending string via str(cmd), but it has left me curious: why does this happen only when the code is called from the command line and not when called within a Python console?
PyQt QString not accepted when script run from Windows command line, why?
-0.099668
0
0
288
30,833,432
2015-06-14T19:11:00.000
0
0
1
1
python,shell,installation,package
30,833,498
3
false
0
0
Click Windows(button)+R then type cmd. In there type pip -V, what version does it show? If you get pip 6.1.1 from C:\Python32\lib\site-packages <python 3.4> (depending on your directory) then you're good, simply make your installations from there.
2
1
0
I am trying to install some python packages from the python shell but I get a SyntaxError. I am using python 3.4.3, which is supposed to come with pip installed, and I can see pip3, pip3.4, pip, easy_install, and easy_install-3.4 under Scripts, but whenever I run the command in the shell I get a syntax error. Am I not supposed to use the python shell for package installation? I am using windows 8.1 if that would explain something. I tried these commands: pip install packageName --- got a SyntaxError with this message: File '' line 1, pip install (with a mark at the last l in install). easy_install packageName generated the same error message but with the mark at the last letter in the package name. I double checked all spellings but I can't seem to see what the problem is. How can I install packages?
Can't install any python package from the python command prompt
0
0
0
4,366
30,833,432
2015-06-14T19:11:00.000
3
0
1
1
python,shell,installation,package
30,833,440
3
false
0
0
Am I not supposed to use the python shell for package installation? No. Commands like pip are to be run on the operating system command line (i.e., the "DOS prompt" on Windows).
2
1
0
I am trying to install some python packages from the python shell but I get a SyntaxError. I am using python 3.4.3, which is supposed to come with pip installed, and I can see pip3, pip3.4, pip, easy_install, and easy_install-3.4 under Scripts, but whenever I run the command in the shell I get a syntax error. Am I not supposed to use the python shell for package installation? I am using windows 8.1 if that would explain something. I tried these commands: pip install packageName --- got a SyntaxError with this message: File '' line 1, pip install (with a mark at the last l in install). easy_install packageName generated the same error message but with the mark at the last letter in the package name. I double checked all spellings but I can't seem to see what the problem is. How can I install packages?
Can't install any python package from the python command prompt
0.197375
0
0
4,366
30,833,978
2015-06-14T20:06:00.000
0
0
1
0
python,module
30,834,175
2
true
0
0
Since the data is meant to be shared among the modules, you should create a file called constants inside your package and initialize a variable with the file contents there. As an afterthought, the package's __init__ file is also a good candidate for this kind of data.
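A minimal sketch of that pattern; the module and data file names are illustrative. Python caches modules in sys.modules, so the file is read only on the first import:

```python
# mypackage/constants.py
import os

_here = os.path.dirname(__file__)
with open(os.path.join(_here, 'data.txt')) as f:
    DATA = f.read()          # loaded once, at first import

# any other module in the package:
#   from mypackage import constants
#   use constants.DATA
```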
1
1
0
I have written some Python libraries and structured them as a module. All files within the module require some data from a text file to work. The easiest solution would be to let each library read the file whenever they need it. However, reading the same file several times seems inefficient. I would prefer to read the file just once and share these data among the different libraries. I could write an additional library to read the data upon initialization and store it in a global variable, so it can be imported by other libraries afterwards. Although that may work, I still think that this solution is not the most elegant. Is there any best practice for this kind of problem?
Sharing data within a Python module
1.2
0
0
76
30,835,522
2015-06-14T23:11:00.000
-2
0
1
1
python,environment-variables
60,003,845
2
false
0
0
You can get all the variables with: import os print(os.environ) This returns a dictionary-like object with environment variable names as keys and their values as values. To get the current username: print(os.environ['USERNAME']) Look in the dictionary for whatever you want.
1
6
0
Is there a way to determine which environment variables returned by os.environ belong to the current user and which to all users? I do not want to change them, only get them. UPD: I am using Microsoft Windows 7.
Get environment variable for current user and for all users in Python
-0.197375
0
0
4,434
30,835,547
2015-06-14T23:16:00.000
32
0
0
1
python,automation
30,835,954
3
true
0
0
You can use cron for this if you are on a Linux machine. Cron is a system daemon used to execute specific tasks at specific times. cron works on the principle of crontab, a text file with a list of commands to be run at specified times. It follows a specific format, which is explained in detail in man 5 crontab. Format for crontab: each of the sections is separated by a space, with the final section having one or more spaces in it. No spaces are allowed within sections 1-5, only between them. Sections 1-5 are used to indicate when and how often you want the task to be executed. This is how a cron job is laid out: minute (0-59), hour (0-23, 0 = midnight), day (1-31), month (1-12), weekday (0-6, 0 = Sunday), command 01 04 1 1 1 /usr/bin/somedirectory/somecommand The above example will run /usr/bin/somedirectory/somecommand at 4:01am on January 1st plus every Monday in January. An asterisk (*) can be used so that every instance (every hour, every weekday, every month, etc.) of a time period is used. Code: 01 04 * * * /usr/bin/somedirectory/somecommand The above example will run /usr/bin/somedirectory/somecommand at 4:01am on every day of every month. Comma-separated values can be used to run more than one instance of a particular command within a time period. Dash-separated values can be used to run a command continuously. Code: 01,31 04,05 1-15 1,6 * /usr/bin/somedirectory/somecommand The above example will run /usr/bin/somedirectory/somecommand at 01 and 31 past the hours of 4:00am and 5:00am on the 1st through the 15th of every January and June. The "/usr/bin/somedirectory/somecommand" text in the above examples indicates the task which will be run at the specified times. It is recommended that you use the full path to the desired commands, as shown in the above examples. Run which somecommand in the terminal to find the full path to somecommand. The crontab will begin running as soon as it is properly edited and saved. You may want to run a script some number of times per time unit. For example, if you want to run it every 10 minutes, use the following crontab entry (runs on minutes divisible by 10: 0, 10, 20, 30, etc.) */10 * * * * /usr/bin/somedirectory/somecommand which is also equivalent to the more cumbersome 0,10,20,30,40,50 * * * * /usr/bin/somedirectory/somecommand
1
25
0
I have two Python scripts on my machine that I want to execute twice a day during a specific time period. How do I automate this task? Since I will be away from home and thus my computer for a while, I want to upload them to a site and have them executed from there automatically without me doing anything. How can I do this?
How to execute script on schedule?
1.2
0
0
56,111
30,838,479
2015-06-15T06:20:00.000
1
0
0
0
python,compression,tar,tarfile
31,104,811
2
false
0
0
You can pipe the result of the tar command directly to the lz4 utility. This will avoid usage of any intermediate file. Here is an example (assuming you have both tar and lz4 installed on your system) : tar cvf - * | lz4 > mypack.tar.lz4 The - here tells to output the result from tar to stdout. Of course, you can change the * with whichever target you want to tar. The reverse operation is also possible : lz4 -d mypack.tar.lz4 | tar xv
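A hedged Python equivalent of that shell pipe, matching the question's tarfile 'w|' idea: stream the uncompressed tar into an external lz4 process so no temporary tar file is created (file names are illustrative, and the lz4 CLI must be installed):

```python
import subprocess
import tarfile

with open('mypack.tar.lz4', 'wb') as out:
    # lz4 compresses stdin to stdout, as in the shell example above.
    lz4 = subprocess.Popen(['lz4'], stdin=subprocess.PIPE, stdout=out)
    tar = tarfile.open(mode='w|', fileobj=lz4.stdin)  # uncompressed tar stream
    tar.add('bigfile1')
    tar.add('bigfile2')
    tar.close()
    lz4.stdin.close()
    lz4.wait()
```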
1
3
0
I'm trying to set up a code to pack a few big files (from tens to hundreds of gigabytes) into one archive. The compression methods supported in the tarfile module are a bit slow for such a big amount of data, so I would like to use some external compression module like lz4 to achieve better compression speed. Unfortunately I can't find a way to create a tar file and compress it with lz4 on the fly, avoiding a temporary tar file. The documentation of the tarfile module says that there's a way to open an uncompressed stream for writing using the 'w|' mode. Is that the way to stream a tar file directly to the lz4 module? If so, what's the proper way to use it? Thank you very much.
Python: how to create tar file and compress it on the fly with external module, using different compression methods not available in tarfile module?
0.099668
0
0
2,015
30,838,875
2015-06-15T06:48:00.000
1
0
0
0
python,python-3.x,requirements
30,839,105
2
false
0
0
The pattern makes sense in some cases, but for me it's when you want to be able to run each module as a self-sustained executable. E.g., should you want to use the script from within FORTRAN or a similar language, the easiest way is to build the python module into an executable and then call it from FORTRAN. That would not mean that one module is by definition one python file, just that it only has one entry point and is in fact executable. The one-module-per-script rule could be there to make it easier to locate the code, or to mail it to someone for code inspection or peer review (done often in scientific communities). So the requirements may be a mix of technical and social requirements. Anyway, back to the problem: I would use the subprocess module to call the next module (with close_fds set to true). If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed (Unix only). Or, on Windows, if close_fds is true then no handles will be inherited by the child process. Note that on Windows, you cannot set close_fds to true and also redirect the standard handles by setting stdin, stdout or stderr.
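A minimal sketch of the chaining the answer suggests, with file names taken from the question: a.py writes its output file and then invokes b.py as a separate process, passing the file name as an argument:

```python
# a.py
import subprocess
import sys

with open('format.xml', 'w') as f:
    f.write('<data/>')                    # a.py's output (placeholder content)

# b.py reads sys.argv[1]; close_fds=True as the answer recommends.
subprocess.check_call([sys.executable, 'b.py', 'format.xml'], close_fds=True)
```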
1
0
0
How would the output of one script be passed as the input to another? For example, if a.py outputs format.xml, then how would a.py call b.py and pass it the argument format.xml? I think it's supposed to work like piping done on the command line. I've been hired by a bunch of scientists with domain specific knowledge, but sometimes their computer programming requirements don't make sense. There's a long chain of "modules" and my boss is really adamant about 1 module being 1 python script, with the output of one module being the input of the next. I'm very new to Python, but if this design pattern rings a bell to anyone, let me know. Worse yet, the project is to be converted to executable format (using py2exe) and there still has to be the same number of executable files as .py files.
output of one file input to next
0.099668
0
0
360
30,847,948
2015-06-15T14:36:00.000
0
0
0
0
python,search,search-engine,pyramid
30,852,953
2
false
1
0
I could see this being posted to the UX SE (ux.stackexchange.com I think?) site, or they might already have a question there touching on something like this. But personally I would probably lean toward either a dropdown selector with different types, or keeping the separate searches as-is. And I think I'd lean more toward the dropdown box - that doesn't seem unreasonable to me for a search interface. I guess one question would be - could there be any expectation to want results from more than one of the tables in the same search? If that were the case I could see implementing your unified search idea even though it would mean querying multiple tables. Or some sort of additive interface where you select the tables you'd want to query.
2
0
0
Not sure if this question is better suited for a different StackExchange site but, here goes: I have a search page that searches a number of different types of things, all (at the moment) requiring a different input field for each type of search. For example, one might search for a school or district name, a class name, a staff member email address, etc. I have 9 different 'types' of searches, each with their own input field on the search page. I've already concatenated one of these (a username and UID search) but I'm wondering if it makes sense, both design- (user friendly) and performance-wise, to bring these all into one input field (and therefore one single search). These different types are of course a number of different tables, so it would have to query a number of different times for each type, just for one search. Any ideas? Or should I just keep it how it is? I could add a drop-down menu to choose a different 'type' of search but that seems just as messy. I'm already doing this for my navbar when not on the main search page (which also happens to be the home page). My project is written in Python with the Pyramid framework.
Search box/field design with multiple search locations
0
0
0
171
30,847,948
2015-06-15T14:36:00.000
0
0
0
0
python,search,search-engine,pyramid
30,860,645
2
true
1
0
What happens to your search interface if the application changes? This is a very important aspect. Do you add another search type? A search returns matches on the query string. A result set can contain different entities from different types. Then your search/resultset interface could apply filtering based on entity type or entity facets (examples: Ebay, Amazon) Learn from Solr/elasticsearch and add-on projects. Faceted search is what users are used to these days. The concept of separating data storage (RDBMS) and full-text search engine is much more powerful and extendable compared to writing complex queries spanning multiple tables. Any larger internet company (Ebay, Facebook, Amazon, Instagram) can tell stories about using the patterns. Separating storage from search offers scalability, flexibility. Instead of writing query/search code, better learn how to feed these search engines from any data store. This is much more powerful & fun, I promise.
2
0
0
Not sure if this question is better suited for a different StackExchange site but, here goes: I have a search page that searches a number of different types of things, all (at the moment) requiring a different input field for each type of search. For example, one might search for a school or district name, a class name, a staff member email address, etc. I have 9 different 'types' of searches, each with their own input field on the search page. I've already concatenated one of these (a username and UID search) but I'm wondering if it makes sense, both design- (user friendly) and performance-wise, to bring these all into one input field (and therefore one single search). These different types are of course a number of different tables, so it would have to query a number of different times for each type, just for one search. Any ideas? Or should I just keep it how it is? I could add a drop-down menu to choose a different 'type' of search but that seems just as messy. I'm already doing this for my navbar when not on the main search page (which also happens to be the home page). My project is written in Python with the Pyramid framework.
Search box/field design with multiple search locations
1.2
0
0
171
30,854,395
2015-06-15T20:27:00.000
0
0
0
0
python,django,django-models
30,856,157
1
false
1
0
Wouldn't there be security issues in letting an external site dictate what your db's structure looks like? A problem of any kind (or a man-in-the-middle attack) could completely modify (or destroy) your database without you doing anything. I'd say it's better to have API versions than what you're asking for. I'd prefer to have 2 concurrent versions running at the same time, to give the API consumers time to update their schemas, than to force automatic updates of their schemas and data. And in general, if possible, I'd use only a single API version, taking care to always keep compatibility with older implementations. But I'm just saying that for a general public API; I don't know how open your API should be and how close your developments (API and consumer) are, so I may be completely out of your scope!
1
1
0
I am setting up a public API for my app. I want to segregate my API code from my application code, so I am putting it in a new django project and am using "Django REST Framework" to build the scaffolding for the public API services. I'm struggling with how to keep models in sync between my main application project, and this new Django project for the API... product development may continue in the application project that necessitates models changes, and I'd like those models changes to be propagated to the API project. Is there a way to point to, or import, models from a different Django project?
Importing models from a different Django project
0
0
0
92
30,856,274
2015-06-15T22:53:00.000
0
0
1
0
python,windows,directory,project,virtualenv
30,856,299
2
false
0
0
Use testenv/bin/pip and testenv/bin/python. For an existing project, I'd check it into a local repository and check it out inside the virtualenv. And no, you have not understood it wrong.
1
1
0
I am new to python development using virtualenv. I have installed python 2.7, pip, virtualenv and virtualenvwrapper on windows and I am using windows PS. I have referred to lots of tutorials for setting this up. Most of them contained the same steps, and almost all of them stopped short of explaining what to do after the virtualenv was created. How do I actually work in a virtualenv? Suppose I want to create a new flask application after installing that package in my new env (e.g. testenv). If I already have an existing project and I want to put it inside a newly created virtual env, how do I do that? How should the folder structure look? My understanding of virtualenv is that it provides a sandbox for your application by isolating it and keeping all its dependencies to itself in that particular env (and not sharing them with others). Have I understood it wrong? Please help me clear this up.
Run an existing python web application inside a virtualenv
0
0
0
192
30,857,579
2015-06-16T01:46:00.000
0
0
1
0
python,windows,background
30,857,715
2
false
0
0
Sure you can. You just hide the window without destroying it. It will run forever until you kill the mainloop itself. Your question is too broad.
1
1
0
To be more specific: Is there a way for a python program to continue running even after it's closed (like automatic open at a certain time)? Or like a gmail notification? This is for an alarm project, and I want it to ring/open itself even if the user closes the window. Is there a way for this to happen/get scripted? If so, how? Any help would be appreciated!
Is there a way for a python program to continue running even after it's closed (like automatic open at a certain time)?
0
0
0
111
30,861,956
2015-06-16T08:03:00.000
1
0
0
0
python,python-2.7,ubuntu,pandas
30,865,729
2
true
0
0
So, the solution was essentially to create a virtual environment and install the needed packages independently. Some issues with dependencies on my system, I believe.
2
0
1
I'm doing a data science course on udemy using python 2.7, running Anaconda. My OS is Ubuntu 14.04. I'm getting the following error running with the pandas module: Traceback (most recent call last): File "/home/flyveren/PycharmProjects/Udemy/15_DataFrames.py", line 13, in <module> nfl_frame = pd.read_clipboard() File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/clipboard.py", line 51, in read_clipboard return read_table(StringIO(text), **kwargs) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 474, in parser_f return _read(filepath_or_buffer, kwds) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 260, in _read return parser.read() File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 721, in read ret = self._engine.read(nrows) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 1170, in read data = self._reader.read(nrows) File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:7544) File "pandas/parser.pyx", line 791, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7784) File "pandas/parser.pyx", line 844, in pandas.parser.TextReader._read_rows (pandas/parser.c:8401) File "pandas/parser.pyx", line 831, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8275) File "pandas/parser.pyx", line 1742, in pandas.parser.raise_parser_error (pandas/parser.c:20691) pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 12 I've tried conda uninstall pandas and subsequently conda install pandas again to see, however with the same result. The package is there, it tells me an error if I uninstall and try to run the code again with missing package, but it gives this error when it's properly installed. Anyone knows what's up?
Python 2.7 Anaconda Pandas error(Ubuntu 14.04)
1.2
0
0
332
30,861,956
2015-06-16T08:03:00.000
1
0
0
0
python,python-2.7,ubuntu,pandas
40,701,927
2
false
0
0
I watched the same lecture on udemy and faced the same problem. I changed my browser from Internet Explorer to Chrome (I'm using windows7 & VS2013 with PTVS). Then the error does not occur. However, the delimiter has some problems: space should not be used as a delimiter according to the lecture, yet it is. So the result is not perfect.
2
0
1
I'm doing a data science course on udemy using python 2.7, running Anaconda. My OS is Ubuntu 14.04. I'm getting the following error running with the pandas module: Traceback (most recent call last): File "/home/flyveren/PycharmProjects/Udemy/15_DataFrames.py", line 13, in <module> nfl_frame = pd.read_clipboard() File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/clipboard.py", line 51, in read_clipboard return read_table(StringIO(text), **kwargs) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 474, in parser_f return _read(filepath_or_buffer, kwds) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 260, in _read return parser.read() File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 721, in read ret = self._engine.read(nrows) File "/home/flyveren/anaconda/lib/python2.7/site-packages/pandas/io/parsers.py", line 1170, in read data = self._reader.read(nrows) File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:7544) File "pandas/parser.pyx", line 791, in pandas.parser.TextReader._read_low_memory (pandas/parser.c:7784) File "pandas/parser.pyx", line 844, in pandas.parser.TextReader._read_rows (pandas/parser.c:8401) File "pandas/parser.pyx", line 831, in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:8275) File "pandas/parser.pyx", line 1742, in pandas.parser.raise_parser_error (pandas/parser.c:20691) pandas.parser.CParserError: Error tokenizing data. C error: Expected 11 fields in line 5, saw 12 I've tried conda uninstall pandas and subsequently conda install pandas again to see, however with the same result. The package is there, it tells me an error if I uninstall and try to run the code again with missing package, but it gives this error when it's properly installed. Anyone knows what's up?
Python 2.7 Anaconda Pandas error(Ubuntu 14.04)
0.099668
0
0
332
30,867,061
2015-06-16T12:04:00.000
1
0
0
0
python
30,867,329
2
false
0
0
You should use git. Commit your code to a git repo and update it on your VPS. You could add the update logic to your crontab script.
2
0
0
I'm pretty new at coding, and Python in general. I've written a python script that scrapes data from several sites and saves it to a sqlite3 db. I put it on a digital ocean VPS and it runs several times a day using cron. I currently use dropbox to sync files from the computer I'm doing the coding on to the server and it seems to be working okay but it just feels like I'm doing things incorrectly. What is the proper way to take code after I've made an update and sync it over to my server? What's the proper term for this? Thanks for the help!
What's the best way to push code updates to a server while using Python?
0.099668
0
0
73
30,867,061
2015-06-16T12:04:00.000
0
0
0
0
python
30,867,403
2
false
0
0
I know these options: Git (Already mentioned) (S)FTP (Better if you work on the code alone) Samba Share (Also better if you're alone)
2
0
0
I'm pretty new at coding, and Python in general. I've written a python script that scrapes data from several sites and saves it to a sqlite3 db. I put it on a digital ocean VPS and it runs several times a day using cron. I currently use dropbox to sync files from the computer I'm doing the coding on to the server and it seems to be working okay but it just feels like I'm doing things incorrectly. What is the proper way to take code after I've made an update and sync it over to my server? What's the proper term for this? Thanks for the help!
What's the best way to push code updates to a server while using Python?
0
0
0
73
30,868,980
2015-06-16T13:27:00.000
2
0
0
0
python,nltk,n-gram,sentence
30,869,829
2
true
0
0
If I'm getting it right and if the purpose is to test yourself on the vocabulary you already have learned, then another approach could be taken: Instead of going through the difficult labor of NLG (Natural Language Generation), you could create a search program that goes online, reads news feeds or even simply Wikipedia, and finds sentences with only the words you have defined. In any case, for what you want, you will have to create lists of words that you have learned. You could then create search algorithms for sentences that contain only / nearly only these words. That would have the major advantage of testing yourself on real sentences, as opposed to artificially-constructed ones (which are likely to sound not quite right in a number of cases). An app like this would actually be a great help for learning a foreign language. If you did it nicely I'm sure a lot of people would benefit from using it.
1
3
1
I am writing a program that should spit out a random sentence of a complexity of my choosing. As a concrete example, I would like to aid my language learning by spitting out valid sentences of a grammar structure and using words that I have already learned. I would like to use python and nltk to do this, although I am open to other ideas. It seems like there are a couple of approaches: Define a grammar file that uses the grammar and lexicon I know about, and then generate all valid sentences from this list, then selecting a random answer. Load in corpora to train ngrams, which then can be used to construct a sentence. Am I thinking about this correctly? Is one approach preferred over the other? Any tips are appreciated. Thanks!
Generate Random Sentence From Grammar or Ngrams?
1.2
0
0
1,204
30,870,391
2015-06-16T14:21:00.000
0
1
0
0
python,python-2.7,raspberry-pi
30,873,178
1
false
0
0
Maybe an easier way would be to use the shell to kill the process in question? Each process in linux has a number assigned to it, which you can see by typing pstree -p in your terminal. You can then kill the process by typing sudo kill <process number>. Does that help, or were you thinking of something a bit more complicated?
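A hedged Python variant of the same idea, using a pidfile so the shutdown program can find and kill the main one (the path is illustrative):

```python
import os
import signal

# In the main program, at startup:
with open('/tmp/main_program.pid', 'w') as f:
    f.write(str(os.getpid()))

# In the shutdown program, before powering off:
with open('/tmp/main_program.pid') as f:
    os.kill(int(f.read()), signal.SIGTERM)   # ask the main program to exit
```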
1
0
0
I am using 2 python programs started from rc.local on my raspberry; the first program is my main program and the other is the second program. The second program shuts down the raspberry, but when I run the second program my first program is still running and is only stopped when the raspberry truly shuts down. I want the second program to kill the first program before the raspberry truly shuts down. How can I do it?
How to kill a python program using another python program on a raspberry?
0
0
0
526
30,871,384
2015-06-16T15:03:00.000
0
0
1
0
python
30,871,586
3
false
0
0
Raw strings are helpful as they save you from adding escape characters just to escape 'escape characters'. For example r'url\1' is equivalent to 'url\\1'
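The classic place this matters is regex replacement templates; the two calls below are equivalent, the raw-string one just being easier to read:

```python
import re

print(re.sub(r'(\d+)', r'url\1', 'page 42'))    # raw strings -> 'page url42'
print(re.sub('(\\d+)', 'url\\1', 'page 42'))    # same, doubled backslashes
```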
1
3
0
I know the raw string operator r or R suppresses the meaning of escape characters but in what situation would this really be helpful?
How is a raw string useful in Python?
0
0
0
108
30,872,599
2015-06-16T15:57:00.000
3
0
0
0
python,django,database,naming-conventions
30,872,816
1
false
1
0
The default convention is better and cleaner to use: it avoids table naming conflicts (as it's a combination of app name and model name), and it creates a well organized database (tables are grouped by app name). So unless you have a special case that needs a special naming convention, use the default.
1
0
0
What is the convention/best practices for naming database tables in Django... using the default database naming scheme (appname_classname) or creating your own table name (using your own naming conventions) with the meta class?
Django database table naming convention
0.53705
1
0
1,656
30,875,143
2015-06-16T18:15:00.000
1
0
0
0
python,sockets,networking,tcp,udp
30,875,280
3
true
0
0
Answering your last question: no. Because: if a client is behind NAT and the gateway (with NAT) has more than one IP, every connection can be seen by you as a connection from a different IP. Another problem is when several different clients behind the same NAT connect to your server: you will have more than one pair of TCP-UDP clients, and it will be impossible to match up the correct pairs. Your method seems to be a good solution for the problem.
3
1
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
UDP and TCP always use same IP for one client?
1.2
0
1
1,578
30,875,143
2015-06-16T18:15:00.000
1
0
0
0
python,sockets,networking,tcp,udp
30,876,002
3
false
0
0
1- Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? NO in the general case, but... 2- Is it necessary for the client to connect twice, once for TCP and once for UDP? NO, definitely. 3- Will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? YES, except in special cases. You really need some basic knowledge of the TCP, UDP and IP protocols to go further, and ideally of the OSI model. Basics (but you should read articles on wikipedia to get a deeper understanding): TCP and UDP are 2 protocols over IP. IP is a routable protocol: it can pass through routers. TCP is a connected protocol: it can pass through gateways or proxies (firewalls and NATs). UDP is a non-connected protocol: it cannot pass through gateways. A single machine may have more than one network interface (hardware slot): each will have a different IP address. A single interface may have more than one IP address. In the general case, client machines have only one network interface and one IP address - anyway you can require that a client presents the same address to TCP and UDP when connecting to your server. Network Address Translation is when there is a gateway between a local network and the wild internet that always presents its own IP address and keeps track of TCP connections to send packets back to the correct client. In fact the most serious problem is if there is a gateway between the client and your server. While the client and the server are two (virtual) machines for which you have direct keyboard access, no problem, but corporate networks are generally protected by a firewall acting as a NAT, and many domestic ADSL routers also include a firewall and a NAT. In that case just forget UDP. It is possible to instruct a domestic router to pass all UDP traffic to a single local IP, but it is not necessarily an easy job. In addition, that means that if a user of yours has more than one machine at home, he will be allowed to use only one at a time and will have to reconfigure his router to switch to another one!
3
1
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
UDP and TCP always use same IP for one client?
0.066568
0
1
1,578
30,875,143
2015-06-16T18:15:00.000
1
0
0
0
python,sockets,networking,tcp,udp
32,045,545
3
false
0
0
First of all, when you send data with TCP or UDP you have to give the port. If your client connects with TCP and the server then sends a response with UDP, the packet will be rejected by the client. Why? Because you have to register a port for a connection, and you cannot be sure the port is correctly open on the client. So when you begin a connection in TCP, the client opens a port to send data and receive the response. You have to do the same with UDP. When the client begins all communication with the server, you can be sure all the necessary ports are open. Don't forget to send data on the port on which the connection was opened. Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Why don't you want to create 2 connections? You should use UDP for movement, for example: if you create an FPS, you may send the player's position every 50ms, so it's really important to use UDP. It's not just a question of a better connection. If you want a really good connection between client and server you need to use async connections and use STREAM. But if you use stream, your TCP socket does not signal the end of a packet, although you get better transmission; so you have to write something to mark the packet end (for example <EOF>). But you have a problem with this: for every socket you receive, you have to analyze the data and split on the <EOF>. It can take a lot of processor time. With UDP the packet always has an end signal, but you need to implement a security check.
3
1
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
UDP and TCP always use same IP for one client?
0.066568
0
1
1,578
30,875,263
2015-06-16T18:22:00.000
1
0
0
0
python,gevent,epoll
30,926,961
1
true
1
0
Gevent does not provide their own epoll implementation yet. If you don't monkeypatch select it will block the entire process instead of just one greenlet.
1
0
0
I'm working on GPIO stuff in Python and need to register the fd on epoll. Since gevent monkey patches the python select library, there will be no select.epoll if monkey.patch_all(select=True), so here come two questions: What will be the consequence of monkey.patch_all(select=False)? Or does Gevent provide its own epoll register stuff? Thank you in advance.
Python select epoll in Gevent
1.2
0
0
1,187
30,878,666
2015-06-16T21:35:00.000
13
0
1
0
matplotlib,ipython,ipython-notebook
46,445,873
4
false
0
0
plt.ioff() and plt.ion() work like a charm in my Jupyter notebook with notebook as the backend (assuming the usual import matplotlib.pyplot as plt).
2
68
0
If I start an ipython notebook with matplotlib inlined, is there a way to subsequently plot a figure so that it shows in the "standard", non-inlined, way, without having to reload the notebook without the inline command? I'd like to be able to have some figures inlined in the notebook, but others in the traditional interactive mode, where I can zoom and pan.
matplotlib python inline on/off
1
0
0
82,957
30,878,666
2015-06-16T21:35:00.000
10
0
1
0
matplotlib,ipython,ipython-notebook
30,884,922
4
false
0
0
It depends on the exact configuration of your matplotlib, but you can switch between inline and one of 'osx', 'qt4', 'qt5', 'gtk3', 'wx', 'qt', 'gtk', 'tk' (some are aliases of others). Just use %matplotlib <the one you want> to switch. Depending on conditions you might have access to only one of these.
2
68
0
If I start an ipython notebook with matplotlib inlined, is there a way to subsequently plot a figure so that it shows in the "standard", non-inlined, way, without having to reload the notebook without the inline command? I'd like to be able to have some figures inlined in the notebook, but others in the traditional interactive mode, where I can zoom and pan.
matplotlib python inline on/off
1
0
0
82,957
30,879,556
2015-06-16T22:47:00.000
1
0
1
0
python,python-2.7
30,879,636
1
false
0
0
Typically on a Mac you would expect to find: /System/Library/Frameworks/Python.framework /Library/Python You should never touch the first, so the second exists to add local libraries. What problem are you trying to solve by moving libraries?
1
0
0
My mac has two versions of Python, one installed in /Library/Frameworks/Python.framework/Versions/2.7/bin/pip and the other /Library/Python/2.7/site-packages/ I have plenty of packages installed in the second directory. How do I move all those packages under the first python ? Also How do I force my mac to use the first Python ?
Shifting packages between different versions of python
0.197375
0
0
27
30,880,102
2015-06-16T23:47:00.000
1
0
0
0
python,import
30,880,320
1
true
1
0
You can't simply execute main.py from dir2; you'll need a script of sorts that lives outside your whole app package, and then if you import app.dir2.main, you'll get app.dir1.utils as well through the relative import. Create a script that does from app.dir2 import main, then run that outside the app package, and use the from ..dir1 import utils structure in main.py, with 2 leading dots. I can't give you the exact reason why this is, but essentially a script/module executed directly inside a directory is not going to look up the directory chain to see if it's part of a package. That is, the main.py module will not look into the app directory and think "hey, I'm part of a package, and I can (relatively) import dir1 as well".
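A sketch of the layout the answer describes; run.py and its run() entry point are hypothetical names:

```python
# run.py  (lives OUTSIDE the app package and is the only file executed directly)
from app.dir2 import main   # app/, app/dir1/, app/dir2/ each contain __init__.py
main.run()                  # hypothetical entry point defined in main.py

# app/dir2/main.py
# from ..dir1 import utils  # two leading dots: up to app/, then into dir1
```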
1
0
0
Hi I use Django frequently and am constantly using relative imports along the lines of from .models import XXX for XXX in a models folder or from . import views for a file named views.py in same directory and it works fine there. But when I create my own Python application with a directory structure such as: app/ containing __init__.py, dir1, and dir2 dir1/ containing __init__.py, utils.py dir2/ containing __init__.py, main.py then say inside of main.py in dir2 I do from .dir1 import utils or even from ..dir1 import utils I get an error like: ValueError: Attempted relative import of non-package Which I don't understand as there is an __init__.py in all the directories. Why does it always work fine in django projects but not when I start my own python project from scratch? What should I be doing to import something like this? Obviously absolute imports are not preferred, but I can't get relatives to work. All the answers on SO and other sites that I have found never seems to provide a solution, or at least one that worked for me. Can someone please just tell me what the correct way to do an import like this is? Import a python file from a directory that is a sibling of the directory which contains the file I'm calling import from. Help would be much appreciated. Perhaps for once we can get a nice short answer that is actually to the point. All I really need is for someone to show me what I should be using to do this import, and secondarily explain why I don't get this error in django but get it here. I just want to get the imports working, every time I start my own python application outside Django (because it isn't web based) I have this issue and every answer I find is no help. EDIT: The problem I'm having is importing files from anything but a child directory or files within the same directory. The places I have the issue are when the file I need is in a sibling or parent directory. I need help making the import work for that.
Python Relative Imports Only Work With Django
1.2
0
0
112
30,880,392
2015-06-17T00:25:00.000
0
0
0
0
python,sockets
30,880,515
1
true
0
0
Could you make your server log heartbeats, and also post heartbeats to the clients on the socket? If so, have a monitor check for the server heartbeats and restart the server application when the heartbeat interval exceeds the threshold value. Also check for heartbeats on the client and reestablish the connection when you do not hear a heartbeat.
1
0
0
I was trying to implement a multiuser chat (group chat) with sockets in python. It basically works like this: each message that a user sends is received by the server, and the server sends it back to the rest of the users. The problem is that if the server closes the program, it crashes for everyone else. So, how can you handle the departure of the server? Should you change the server somehow, or is there another way around it? Thank you
Creating Multi-user chat with sockets on python, how to handle the departure of the server?
1.2
0
1
560
30,881,489
2015-06-17T02:46:00.000
1
0
0
0
python,pandas
30,881,760
2
true
0
0
Variable = ? The variable set would be equal to a pandas.core.frame.DataFrame object. Format? The pandas.core.frame.DataFrame format is a collection of numpy ndarrays, dicts, series, arrays or list-like structures that make up a 2 dimensional (typically) tabular data structure. Pandas Object Type? A pandas.core.frame.DataFrame object is an organized collection of list like structures containing multiple data types.
2
0
0
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame?
Data type using Pandas
1.2
1
0
160
30,881,489
2015-06-17T02:46:00.000
0
0
0
0
python,pandas
30,881,700
2
false
0
0
The function pandas.read_sql returns a DataFrame. The type of a DataFrame in pandas is pandas.core.frame.DataFrame.
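A quick check, sketched with an illustrative sqlite database and table name:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect('example.db')               # illustrative database
df = pd.read_sql('SELECT * FROM some_table', conn)
print(type(df))   # <class 'pandas.core.frame.DataFrame'>
```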
2
0
0
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame?
Data type using Pandas
0
1
0
160
30,886,340
2015-06-17T08:35:00.000
0
0
0
0
python,apache-spark
30,887,058
2
false
0
0
When you only use a map transformation on FIRST_RDD (the logs), you will get SECOND_RDD, and the count of this new SECOND_RDD will be equal to the count of FIRST_RDD. But if you use distinct on SECOND_RDD, the count will decrease to the number of distinct tuples present in SECOND_RDD.
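A tiny PySpark illustration of that claim; the log lines are made up.

```python
from pyspark import SparkContext

sc = SparkContext("local", "distinct-demo")

# Toy stand-in for the (day, host) tuples parsed from log lines.
logs = sc.parallelize(["d1 h1", "d1 h1", "d1 h2", "d2 h1"])
pairs = logs.map(lambda line: tuple(line.split()))

print(pairs.count())             # 4 -- map() preserves the element count
print(pairs.distinct().count())  # 3 -- duplicates collapsed
```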
1
0
1
I was working with an Apache log file, and I created an RDD with a tuple (day, host) from each log line. The next step was to group by host and then display the result. I used distinct() with the mapping of the first RDD into (day, host) tuples. When I don't use distinct, I get a different result than when I do. So how does the result change when using distinct() in Spark?
How does the result change when using .distinct() in Spark?
0
0
0
172
30,896,343
2015-06-17T15:44:00.000
0
0
1
0
python,installation,ldap,python-2.6,gssapi
52,655,111
3
false
0
0
For me, the issue got resolved after installing the package "krb5-libs" in CentOS. Basically, we need to have the libgssapi_krb5.so file in order to install gssapi.
2
16
0
I am trying to install the GSSAPI module through pip but I receive this error that I don't know how to resolve. Could not find main GSSAPI shared library. Please try setting GSSAPI_MAIN_LIB yourself or setting ENABLE_SUPPORT_DETECTION to 'false' I need this to work on Python 2.6 for LDAP3 authentication.
How to install GSSAPI Python module?
0
0
0
20,242
30,896,343
2015-06-17T15:44:00.000
14
0
1
0
python,installation,ldap,python-2.6,gssapi
50,410,443
3
false
0
0
sudo apt install libkrb5-dev actually installs /usr/bin/krb5-config and /usr/lib/libgssapi_krb5.so so none of the symlinking was needed, just install libkrb5-dev and you should be good.
2
16
0
I am trying to install the GSSAPI module through pip but I receive this error that I don't know how to resolve. Could not find main GSSAPI shared library. Please try setting GSSAPI_MAIN_LIB yourself or setting ENABLE_SUPPORT_DETECTION to 'false' I need this to work on Python 2.6 for LDAP3 authentication.
How to install GSSAPI Python module?
1
0
0
20,242
30,896,713
2015-06-17T16:01:00.000
1
0
0
0
python,python-imaging-library,pycuda
31,172,404
1
false
0
0
The shapely interface to GEOS may be the library for which you are looking, but I have never used it for this purpose. It may be easier to roll your own in this case. The algorithm is a straightforward sweep-line algorithm, with an average complexity per rectangle proportional to log(N). A) Each rectangle is characterized by four coordinates: Left, Right, Top, Bottom. B) The rectangles will be processed in order of their Left coordinate. C) An Interval Tree is used to maintain the top-bottom ranges for each rectangle whose Left edge has been encountered and whose Right edge has not yet been encountered. D) A Priority Queue is maintained, ordered by Right edge, of all rectangles that are currently in the Interval Tree. 1) Get the first or next rectangle to be processed. If no more are available, exit. 2) While any element on the Priority Queue has a priority less than or equal to the Left value of this rectangle, delete that element from the Priority Queue and the associated element from the Interval Tree. 3) Search the Interval Tree for overlap with the Top-Bottom range of this rectangle; process each overlap found. 4) Insert the Top-Bottom range of this rectangle into the Interval Tree, and add an element to the Priority Queue, with a priority set to the Right value from the rectangle, referring to the interval added to the Interval Tree. 5) Return to step 1) to get the next rectangle.
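A minimal Python sketch of the sweep described above. For brevity, the interval tree is replaced by a plain list of active intervals (a linear scan), so this version is O(n^2) in the worst case; swap in a real interval tree (e.g. the intervaltree package) to get the log(N) behaviour claimed above.

```python
def find_overlaps(rects):
    # each rect: (left, right, top, bottom), with top < bottom
    overlaps = []
    active = []  # rectangles whose Left edge passed and Right edge hasn't
    for r in sorted(rects, key=lambda r: r[0]):
        left = r[0]
        active = [a for a in active if a[1] > left]   # step 2: expire by Right
        for a in active:                              # step 3: y-range search
            if r[2] < a[3] and a[2] < r[3]:
                overlaps.append((a, r))
        active.append(r)                              # step 4: insert
    return overlaps

print(find_overlaps([(0, 4, 0, 4), (2, 6, 2, 6), (10, 12, 0, 2)]))
# [((0, 4, 0, 4), (2, 6, 2, 6))]
```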
1
2
0
I have a set of rectangles overlapping each other. I need to detect that overlaps exist in a set of rectangles. If overlaps exist, then I need to update the coordinates so the set of rectangles do not overlap anymore. I wonder if there are existing python libraries suited for this task. This operation will be applied to million+ set of rectangles, so algorithm efficiency and leveraging GPU would be important as well.
Efficient library to detect and clip overlapping rectangles with python
0.197375
0
0
628
30,902,045
2015-06-17T20:44:00.000
1
0
1
0
python
30,902,169
4
false
0
0
You can loop through csv.reader(). It will return rows; each row is a list. Compare the first element of the list, i.e. row[0]. If it is one of the names you want, add the row to an output list.
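For example, a hedged sketch; the file names and the name set are placeholders, not values from the question.

```python
import csv

wanted = {"alice", "bob"}  # the ~20 names you want to keep

with open("inventory.csv") as src, open("trimmed.csv", "w") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        if row and row[0].strip() in wanted:  # column 1 holds the name
            writer.writerow(row)
```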
1
3
1
I have a CSV file with over 4,000 lines formatted like... name, price, cost, quantity. How do I trim my CSV file so only the 20 names I want remain? I am able to parse/trim the CSV file, but I am coming up blank on how to search column 1.
How to parse CSV file and search by item in first column
0.049958
0
0
145
30,906,633
2015-06-18T04:47:00.000
4
0
1
0
python
30,906,651
1
true
0
0
The bytes are the encoding. You need to decode them in order to get the text they encode. How Python encodes the text as bytes internally is... not your problem.
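A two-line illustration of the round trip:

```python
raw = b'\xc3\x96\xc4\x9fretmen'   # bytes: the UTF-8 encoding of some text
text = raw.decode('utf-8')        # bytes -> str ("Öğretmen")
back = text.encode('utf-8')       # str -> bytes
assert back == raw
```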
1
1
0
In Python 3 there are str and bytes types. To convert a bytes type into a str type, one would call the decode() method on an instance and vice versa. I am confused as to why this is, why is it not encode()? As I understand it, internally the actual bytes in memory are being encoded into an encoding (UTF-8 in Python's case).
Python 3's encode() and decode() methods
1.2
0
0
80
30,909,195
2015-06-18T07:34:00.000
6
0
1
0
python,cassandra
30,916,615
1
false
0
0
execute() will run one statement as a blocking call that will not return until the statement is finished executing. execute_async() will submit one statement as an asynchronous call that will return immediately with a response_future object you can use to retrieve the results with at a later time. By calling execute_async, your program can continue without waiting for the statement to finish. Since it is non-blocking, you can submit many statements by calling this repeatedly and have them be "in flight" at the same time. execute_concurrent() is a blocking call that will run a list of statements in parallel and return a list of the results. Like a thread pool, you can specify how many statements you want to allow it to run at a time. And you can set a flag if you want it to return immediately if any of the statements results in an error.
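A hedged sketch of the three styles with the DataStax Python driver; the contact point, keyspace, and table are placeholders, while the driver calls themselves are the real API.

```python
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

session = Cluster(["127.0.0.1"]).connect("demo")          # placeholders
insert = session.prepare("INSERT INTO users (id, name) VALUES (?, ?)")

# 1) blocking, one statement at a time
session.execute(insert, (1, "a"))

# 2) non-blocking: returns a response future immediately
future = session.execute_async(insert, (2, "b"))
future.result()  # block only when you finally need the outcome

# 3) blocking, but runs many statements in parallel
params = [(i, str(i)) for i in range(100)]
results = execute_concurrent_with_args(session, insert, params, concurrency=50)
```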
1
4
0
I want to know the difference between execute_async() and execute_concurrent() in python for Cassandra queries.
Difference between execute_async() and execute_concurrent()
1
0
0
1,325
30,911,872
2015-06-18T09:42:00.000
1
0
1
0
python,pandas,sas
30,916,191
7
false
0
0
When you have the option to download a SAS dataset, you will often also have the option to download a Stata dataset (this is indeed the case for PSID, btw). In that case, the easiest way will likely be to import with read_stata (this might change in the future, but I believe that is a very accurate statement as of today). Less convenient, but almost always an option, is to download a text file (usually referred to as text, ascii, or csv). Those tend to come in two flavors: delimited (with comma or tab), or space separated (columnar or tabulated). If the file is comma or tab delimited, use read_csv and set the delimiter as appropriate. If it's space delimited or tabular, you might have good luck with read_csv, or you might be better off with read_fwf or read_table; it depends a bit on the variable types and formatting. From what I have read, sas7bdat mentioned by @hd1 seems to work well but is not part of pandas yet. For that reason, I tend to default to read_stata or read_csv, but hopefully sas7bdat also works well and perhaps will be brought into pandas in the future. Also, I'm wondering about the speed of sas7bdat: read_csv has been pretty fast for a long time and read_stata is very fast in the latest versions (since 15.0, I believe), but I'm not sure about the speed of sas7bdat.
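For example (the file names are hypothetical):

```python
import pandas as pd

# If a Stata download is available:
df = pd.read_stata("psid.dta")

# If only the whitespace-separated ASCII dump is available:
df = pd.read_csv("psid.txt", delim_whitespace=True, header=None)
```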
1
6
0
I'm working on a data set (PSID) that gives data in a SAS format (a .txt and another file containing instructions to interpret the data). I cannot find anything in Python to read this type of data. Does anyone know of a pre-existing module/script to read SAS data? Edit (added from a comment to an answer): The data is in ascii/text and the start of a row of data looks like this: 3 10 1015000 150013200 00 002500 00 00
Import SAS data file into python data frame
0.028564
0
0
19,271
30,920,656
2015-06-18T16:16:00.000
0
0
0
0
python,database,python-3.x,io
30,922,498
1
false
0
0
Answering my own question: bytes(file)
1
0
0
I'm using Python 3.4. I have a binary column in my postgresql database with some files, and I need to retrieve it from the database and read it... the problem is that for this to work, I first have to (1) open a new file in the filesystem with 'wb', (2) write the contents of the binary column, and then (3) read() the filesystem file with 'rb'. I would like to skip this whole process... I just want to get the file from the database, into a variable, and use it AS IF IT WAS OPENED from the filesystem... How can I do that? I already tried BytesIO and it does not work... Thank you
Reading a file from database binary column (postgresql) in memory without having to save and open the file in the filesystem
0
1
0
176
30,921,986
2015-06-18T17:31:00.000
2
0
1
0
python
30,922,275
1
true
0
0
Creating a class with the counter as one of its attributes would be a good solution. Since the current functions will be explicitly called in the future, making the counter an attribute will save the caller from passing the variable around and will probably save you additional error checking. Also, the counter variable can be abstracted away from the calling function, reducing errors. This point depends on your implementation, though.
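A minimal sketch of that idea; the class and method names are placeholders.

```python
class Processor:
    """Groups the module's functions with the counter they all share."""

    def __init__(self):
        self.counter = 0

    def step_a(self):
        self.counter += 1   # no passing or returning of the counter

    def step_b(self, n):
        self.counter += n

p = Processor()
p.step_a()
p.step_b(3)
print(p.counter)  # 4
```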
1
2
0
I have a Python module with a few functions. At the moment, these functions are only called from main(), but I expect to import the module from other files in the future. I have a counter variable that is used and modified by most of these functions. At the moment I am passing it around as a parameter, and returning it with every function call. This works but seems silly. I considered including a constant variable COUNTER that is used directly by all functions. However, I assume that constants are not supposed to be modified. Is there a cleaner approach than passing the counter variable back and forth?
Passing a variable back and forth vs. using a python constant
1.2
0
0
139
30,926,097
2015-06-18T21:27:00.000
1
0
0
0
python,pyqt4,stdin,raw-input,qwebpage
30,926,485
1
true
0
1
raw_input uses synchronous/blocking IO without giving Qt a chance to continue processing events in the background. Qt isn't really prepared for its processing to be halted in this way. In theory it should just resume when raw_input is finished, but maybe in the meantime a timeout occurred or something like that. You really should use signal/event based input when using Qt. If GUI interaction is ok, you should try QInputDialog::getText, because it looks like a blocking call from the outside but internally lets Qt continue processing background jobs.
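A minimal PyQt4 sketch of that suggestion; the window and label texts are placeholders.

```python
from PyQt4 import QtGui

app = QtGui.QApplication([])
# Blocks the caller, but Qt's event loop keeps running underneath.
text, ok = QtGui.QInputDialog.getText(None, "Login", "Username:")
if ok:
    print(text)
```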
1
1
0
I'm using PyQt4 to enter credentials into a domain login page and pull data from several additional pages in the domain. Everything works exactly as expected when supplying login or search credentials from within the code. When I open up raw_input to allow the user to enter information, it causes hang-ups trying to download one of the web pages. I can't provide information on the page itself because it is on a corporate network, but it doesn't make sense that simply using raw_input would cause problems with QWebPage loads. The QNetworkManager throws 1 of the expected 3 or 4 .finished signals and the QWebPage frame never throws the .loadFinished signal, so it just hangs. (I've tried flushing stdin as well as seek(0), which gives me a bad file descriptor error.) Has anyone run into such a problem before?
Using raw_input causes problems with PyQt page loading
1.2
0
1
285
30,926,669
2015-06-18T22:09:00.000
0
0
1
1
python,windows,pdf,file-io
30,985,046
1
true
0
0
So, yes, Windows passes the file name into the script as one of the sys.argv entries. It is (so far as I can tell from printing the value) a file name without the path, which I used to open the file, so that tells me that Windows starts my program with the working directory set to the directory of the file it's called on. One word of caution, a gotcha of sorts: registering my .py as the default handler did not work; clicking on the file resulted in Windows complaining that the file was not a valid Windows executable. I didn't do any research, but turned my .py into an .exe (py2exe) and registered that as the default file handler, which did work. UPDATE: I did not test it out, but was told that specifying the Python interpreter with my script as the default file handler would solve the "not valid" issue. Like this: "C:\Python2.7\python.exe yourscript.py %*"; the %* is there so that the file name is made available to the script. (Adding this without testing because, on the one hand, my problem has been solved and, on the other, what I was told makes sense to me.)
1
2
0
I want to set up my python program to process all pdfs that are opened on my system, and then hand the processed pdf off to a standard reader. So I register my program with windows as the default handler for .pdf files and windows presumably will run my program on the pdf file. How within the script do I access this file. Is the file name one the sys.argvs? I didn't get google to work for me here.
How to handle the File hand-off from windows in a python program
1.2
0
0
68
30,927,270
2015-06-18T23:04:00.000
0
0
1
0
python,subprocess,pexpect
30,994,197
1
false
0
0
"I tried myprogram.expect('\r\n'), but it seems to contain the input along with the output." Does this mean the "\n" in the input is considered? If yes, there is a way to suppress the echo by setting echo to false (pexpect's setecho(False)). You can try that. Also, can you paste the code that you are trying?
1
1
0
With pexpect.spawn, I have a program running in the background that sends back a line for each input line sent. What would be the appropriate expect expression for getting the whole lines as output (after waiting until the line is present in the output)? I do not want to use any specific string (like a prompt), other than the newline, to synchronize against. I tried myprogram.expect('\r\n'), but it seems to contain the input along with the output.
How to properly use pexpect for this case?
0
0
0
65
30,928,017
2015-06-19T00:21:00.000
2
0
1
0
python,encryption,aes
30,928,115
1
true
0
0
AES is only defined for key sizes of 128, 192 and 256 bits. There is no way to use some other key size and still call it AES. If you want to be compatible with other implementations, you will have to stick to the defined key sizes. Two common ways to get a key of the correct size are: Simply slice off a part of your key to match one of the valid sizes. This should only be done if the big key was created with much entropy; if not, then you might make brute forcing much easier. Run the big key through some hash function such as SHA-256 to get a 256 bit key. Again, if the big key has low entropy, then you should regard it as a long password and run it, for example, through PBKDF2 with many iterations.
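A short illustration of both options with the standard library; the key material and salt below are placeholders.

```python
import hashlib

big_key = b"..." * 43   # stand-in for your 1024-bit key material

# High-entropy key material: hash it down to a valid AES-256 key.
aes_key = hashlib.sha256(big_key).digest()  # 32 bytes

# Low-entropy (password-like) material: stretch it instead.
aes_key = hashlib.pbkdf2_hmac("sha256", big_key, b"some-salt", 100000)
```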
1
0
0
Anyone have a way to encrypt/decrypt using Python to handle AES in CBC mode with a key of 1024 bits (128 bytes)? All the AES pkgs found so far seem to be limited to 256 bit keys. My crypto background is limited....
How to use AES CBC using a key longer than 256 bits in Python
1.2
0
0
693
30,928,370
2015-06-19T01:12:00.000
0
0
0
0
php,python,rest,authentication
30,932,400
2
false
1
0
Actually, it doesn't make great sense. From my experience I know that mobile apps and web pages, even if they use the same backend, very often require completely different sets of data, and (I know premature optimization is the root of all evil) the number of calls should be minimized for mobile applications to make them run smoothly. I'd separate the mobile API from the classic REST API, even if only with prefixes, e.g. /api/m/ and /api/. There are really many frameworks in a number of technologies, e.g. Spring, django-rest-framework, express.js. Whatever you like. Token authentication will be the best choice, for both web and mobile, and for REST in general. Scalability shouldn't be a concern for you now.
1
0
0
My next project requires me to develop both a mobile and a website application. To avoid duplicating code, I'm thinking about creating an API that both of these applications would use. My questions regarding this are: Is this approach sensible? Are there any frameworks to help me with this? How would I handle authentication? Does this have an affect on scalability?
Creating a centric REST API for a mobile and website application
0
0
1
288
30,928,713
2015-06-19T01:55:00.000
1
0
0
0
python,mysql,insert,sql-insert,large-data
53,005,996
3
false
0
0
Having tried to do this recently, I found a fast method, though this may be because I'm running Python from an AWS Windows server that has a fast connection to the database. Also, instead of 1 million rows in one file, it was multiple files that added up to 1 million rows. It's faster than the other direct DB methods I tested, anyway. With this approach, I was able to read files sequentially and then run the MySQL Infile command. I also used threading with this process. Timing the process, it took 20 seconds to import 1 million rows into MySQL. Disclaimer: I'm new to Python, so I was trying to see how far I could push this process, but it caused my DEV AWS-RDS DB to become unresponsive (I had to restart it), so taking an approach that doesn't overwhelm the process is probably best!
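A hedged sketch of the LOAD DATA LOCAL INFILE approach mentioned above, using MySQLdb; the connection details, table, and file names are placeholders, and local_infile must be enabled on both client and server.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="u", passwd="p",
                       db="mydb", local_infile=1)   # placeholders
cur = conn.cursor()
# Bulk-load the whole text file server-side instead of row-by-row inserts.
cur.execute("""
    LOAD DATA LOCAL INFILE 'numbers.txt'
    INTO TABLE numbers
    LINES TERMINATED BY '\\n'
""")
conn.commit()
```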
1
4
0
I have a txt file with about 100 million records (numbers). I am reading this file in Python and inserting it into a MySQL database using a simple insert statement from Python, but it's taking very long and it looks like the script won't ever finish. What would be the optimal way to carry out this process? The script is using less than 1% of memory and 10 to 15% of CPU. Any suggestions on handling such large data and inserting it efficiently into the database would be greatly appreciated. Thanks.
Inserting millions of records into MySQL database using Python
0.066568
1
0
13,099
30,931,897
2015-06-19T06:59:00.000
0
0
0
0
python-2.7,openerp,odoo,openerp-8,odoo-8
31,235,475
2
false
1
0
Inherit the write method of the model and raise an exception when your condition (in this case, editing non-owned products) is met. This way the user will receive a warning message and they cannot save the changed values.
1
2
0
How to restrict the users to edit only the product attributes for non owned products in odoo/openerp? Can this be achieved through record rules or coding?
Access control in odoo/openerp
0
0
0
176
30,935,500
2015-06-19T10:09:00.000
0
0
0
0
python,windows,selenium,phantomjs,chocolatey
30,966,232
2
false
1
0
What version of choco did you use to install PhantomJS (and what version of PhantomJS)? I believe we corrected this issue in most cases, but it is on newer versions of choco - and you need the shim to be generated in that newer version (which means install or upgrade, but we are adding a shim regen command).
1
1
0
I have some Python Selenium nose tests running on Windows 8 with PhantomJS. I installed Chutzpah (PhantomJS) via Chocolatey. When I run the nose tests, a "ShimGen" process appears and lots of "PhantomJS is a headless WebKit with JavaScript API (32 bit)" processes appear and use 50+mb of memory and never close. This causes a lot of stuck PhantomJS processes in memory. This eventually brings down the server.
Why are nose tests leaving orphaned PhantomJS processes on Windows 8?
0
0
0
131
30,936,625
2015-06-19T11:07:00.000
0
1
1
0
python,python-2.7,installation
30,939,196
1
false
0
0
It turned out I had run setup.py with the wrong Python version. Solved.
1
0
0
I installed python module unidecode: downloaded it, run setup.py. Exactly as I did for any other python module. Then tried import unidecode: It works only in the downloaded directory. What's wrong?
Installing unidecode python module
0
0
0
3,372
30,937,667
2015-06-19T12:01:00.000
3
0
0
0
python,scikit-learn,gaussian,naivebayes
30,938,653
2
true
0
0
Yes, you will need to convert the strings to numerical values. The naive Bayes classifier cannot handle strings, as there is no way a string can enter into a mathematical equation. If your strings have some "scalar value", for example "large, medium, small", you might want to encode them as "3, 2, 1". However, if your strings are things without order, such as colours or names, you can instead assign binary variables, with every variable referring to a colour or name, if there are not many. For example, if you are classifying cars and they can be red, blue and green, you can define the variables 'Red', 'Blue', 'Green' that take the values 0/1, depending on the colour of your car.
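A small sketch of the binary-variable (one-hot) idea with pandas and scikit-learn; the toy data is made up.

```python
import pandas as pd
from sklearn.naive_bayes import GaussianNB

# Toy frame mixing types; one-hot encode the unordered string column so
# GaussianNB only ever sees numbers.
df = pd.DataFrame({"colour": ["red", "blue", "red"],
                   "size":   [3, 1, 2],        # ordinal, already numeric
                   "label":  [0, 1, 0]})
X = pd.get_dummies(df[["colour", "size"]])     # colour -> 0/1 columns
GaussianNB().fit(X, df["label"])
```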
1
1
1
I am trying to implement Naive Bayes classifier in Python. My attributes are of different data types : Strings, Int, float, Boolean, Ordinal I could use Gaussian Naive Bayes classifier (Sklearn.naivebayes : Python package) , But I do not know how the different data types are to be handled. The classifier throws an error, stating cannot handle data types other than Int or float One way I could possibly think of is encoding the strings to numerical values. But I also doubt , how good the classifier would perform if I do this.
NaiveBayes classifier handling different data types in python
1.2
0
0
3,764
30,939,165
2015-06-19T13:16:00.000
6
1
0
1
python,text
30,939,390
1
true
0
0
Yes, you are right, you can only read where you cannot write.
1
0
0
I need to change text documents. The way I've been doing it is making a new file, copying everything line by line from the old file and making changes on the way, then saving the new file as the old file's name. This becomes a problem when I only have read permission on the file. First I get OSErrno 30, not letting me delete the old file at the end of the writing. If I change my open command to 'r+', it simply says the file is not found. I don't have root access. Does anyone know of a workaround to this problem? EDIT: Thanks for the responses. I guess that IS the intended behavior of a read-only file...
Changing a read-only file
1.2
0
0
253
30,943,690
2015-06-19T17:12:00.000
1
0
1
1
python,filenames
30,944,299
1
false
0
0
It's your shell, not Python, that's doing the expansion. Python always sees a single flat list of arguments. In case you want to parse arguments, which basically turns this flat list into a more complex data structure, you can use the argparse module, or use more extensive third party projects like click.
1
0
0
I want to run something like: python command.py -i -c "f*.xxx" "search". This is fine, since the file-set is not expanded. But with: python command.py -i -c f*.xxx "search" the pattern is expanded, so sys.argv = ['command.py','-i', '-c', 'f1.xxx','f2.xxx','search']. Why couldn't it be ['command.py','-i', '-c', ['f1.xxx','f2.xxx'],'search']? Since the "f*.xxx" is not accessible, I have no way to know if 'search' is a file name and the real 'search' is missing. I want to print an error message that says "Please use quotes". And I must keep the positions this way; turning off globbing is not an option. getopt and argparse do not solve this. Thanks
For the life of me I cannot make a file-set argument the next to last one
0.197375
0
0
52
30,947,172
2015-06-19T20:58:00.000
4
0
1
0
python,eclipse
30,955,933
1
true
0
0
I'm assuming you're using PyDev. I don't know if there are other alternatives, but that's what I use for Python in Eclipse. 1. Right-click on your project folder in the Package Explorer view and select "Properties". 2. Select "PyDev - Interpreter/Grammar". 3. Select the appropriate Grammar Version and Interpreter, if those options contain the Python version you want. 4. If not, click on "Click here to configure an interpreter not listed." 5. Click "New" and provide an interpreter name (e.g. python3.4) and the path to the executable (C:\Python34). Once you've done that, you should see the option to select your Python 3.4 interpreter under Run Configurations > Interpreter. It'll be displayed using the interpreter name you provided in step 5.
1
4
0
I've installed Python 3.4 and am currently using Python 2.7. I want to create a Project in Python 3.4, but, when I go to Run-->Run Configurations and then look to make a new entry under Python Run , I see that C:\Python34 doesn't show up. Also, when I try to create a new Project, the "Grammar Version" goes only up to 3.0. I don't know how to resolve this. Edit: Could this be because I haven't installed Python 3.4 correctly? Thanks
Trouble trying to run Python 2.7 and 3.4 in Eclipse
1.2
0
0
1,700
30,948,736
2015-06-19T23:34:00.000
1
0
0
0
python,json,unicode,utf-8,scrapy
30,964,853
2
false
1
0
As I don't have your code to test, can you try to use the codecs module? Try: import codecs; f = codecs.open('yourfilename', 'your_mode', 'utf-8'); f.write('whatever you want to write'); f.close()
1
1
0
I'm having problem on json output of scrapy. Crawler works good, cli output works without a problem. XML item exporter works without a problem and output is saved with correct encoding, text is not escaped. Tried using pipelines and saving the items directly from there. Using Feed Exporters and jsonencoder from json library These won't work as my data includes sub branches. Unicode text in json output file is escaped like this: "\u00d6\u011fretmen S\u00fcleyman Yurtta\u015f Cad." But for xml output file it is correctly written: "Öğretmen Süleyman Yurttaş Cad." Even changed the scrapy source code to include ensure_ascii=False for ScrapyJSONEncoder, but no use. So, is there any way to enforce scrapyjsonencoder to not escape while writing to file. Edit1: Btw, using Python 2.7.6 as scrapy does not support Python3.x This is as standart scrapy crawler. A spider file, settings file and an items file. First the page list is crawled starting from base url then the content is scraped from those pages. Data pulled from the page is assigned to variables defined in items.py of the scrapy project, encoded in utf-8. There's no problem with that, as everything works good on XML output. scrapy crawl --nolog --output=output.json -t json spidername Xml output works without a problem with this command: scrapy crawl --nolog --output=output.xml -t xml spidername I have tried editing scrapy/contrib/exporter/init.py and scrapy/utils/serialize.py to insert ensure_ascii=False parameter to json.JSONencoder. Edit2: Tried debugging again.There's no problem up to Python2.7/json/encoder.py code. Data is intact and not escaped. After that, it gets hard to debug as the scrapy works async and there are lots of callbacks. Edit3: A bit of dirty hack, but after editing Python2.7.6/lib/json/encoder.py and changing ensure_ascii parameter to False, the problem seems to be solved.
Unicode on Scrapy Json output
0.099668
0
1
2,012
30,948,885
2015-06-19T23:56:00.000
1
0
0
0
python,mysql,pyramid,mysql-python
30,969,950
1
true
0
0
What @AlexIvanov is trying to say is that when you're starting your Pyramid app in console it is served using Pyramid's built-in development server. This server is single-threaded and serves requests one after another, so if you have a long request which takes, say, 15 seconds - you won't be able to use your app in another tab until that long request finishes. This sequential nature of the built-in webserver is actually an awesome feature which greatly simplifies debugging. In production, your Pyramid app is normally served by a "real" webserver, such as Apache or Nginx. Such webservers normally spawn multiple "workers", or use multiple threads which allow them to serve multiple concurrent requests. So I suspect there's nothing wrong with your setup (provided you didn't do anything particularly strange with Pyramid's initial scaffold and it's still using SQLAlchemy's session configured with ZopeTransactionExtension etc.). A "single shared MySQL account" in no way prevents multiple connected clients from running queries concurrently in MySQL - the thing is, with the development server you only have one single-threaded client.
1
0
0
I am writing a web tool using Python and Pyramid. It access a MySQL database using MySQLdb and does queries based on user input. I created a user account for the tool and granted it read access on the tables it uses. It works fine when I open the page in a single tab, but if I try loading it in second tab the page won't load until the first search is finished. Is there a way to get around this or am I just trying to use MySQL incorrectly?
Accessing MySQL from multiple views of a web site
1.2
1
0
69
30,950,198
2015-06-20T04:16:00.000
6
0
1
0
python,datetime,pandas
54,027,432
3
false
0
0
This workaround gets you closer. round((df["Accident Date"] - df["Iw Date Of Birth"]).dt.days / 365, 1)
1
14
1
In Pandas, why does a TimedeltaProperties object have no attribute 'years'? After all, the datetime object has this property. It seems like a very natural thing for an object that is concerned with time to have. Especially if it already has an hours, seconds, etc attribute. Is there a workaround so that my column, which is full of values like 10060 days, can be converted to years? Or better yet, just converted to an integer representation for years?
AttributeError: 'TimedeltaProperties' object has no attribute 'years' in Pandas
1
0
0
19,665
30,950,925
2015-06-20T06:13:00.000
8
0
0
0
python,tkinter
30,954,325
3
true
0
1
If you want to make a GUI that is "platform independent and screen size independent", you definitely do not want to be measuring sizes yourself. Unless, by saying you want something platform independent, you're saying you want a button to be X pixels regardless of pixel density or screen resolution (which seems like a very bad idea). The whole reason tkinter supports measuring in character units, along with options for widgets to stretch and shrink, is to support platform independence. When you start working at the pixel level, you will have many, many problems when you run the code on other platforms, or on other displays, or with other fonts. That being said, the measure method of a font can tell you exactly how many pixels a given string will require in a given font. If you want to know how wide "one character" is, you can use the measure method on the string "0", which is what tkinter uses as a base when computing widths based on characters. If you want buttons to be exactly the same size, using character widths will give you that, because it isn't the width of 10 actual characters in that widget, but ten average character widths. In that case, "10 characters" will be the same for every widget, no matter what the contents of that widget.
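For example, measuring the width of the base character "0" that tkinter uses for character units; module names differ between Python 2 and 3, as noted in the comments.

```python
import tkinter as tk            # 'Tkinter' on Python 2
import tkinter.font as tkfont   # 'tkFont' on Python 2

root = tk.Tk()
f = tkfont.nametofont("TkDefaultFont")
char_px = f.measure("0")        # pixel width of one "average" character
print(char_px, f.measure("Hello world"))
```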
1
2
0
I'm using Tkinter to design a UI for an application. I'm using grid geometry, and while specifying button width (or any widget width), I realized that width should be specified in text units and not pixels. Since I want to make it platform independent and screen size independent, is there any method to get the maximum text unit width, so that I can do math on the basis of that? For example: I have 10 buttons in a row, which should be of equal width. If I hard-code a width value specific to the current screen, it would not work on a different screen size. Thanks.
Tkinter : Getting screen text unit width. (Not pixels)
1.2
0
0
10,473
30,954,484
2015-06-20T13:23:00.000
0
0
1
0
python,pygame,pip
30,955,474
2
false
0
0
If you feel comfortable with Linux, you could use a live USB drive to boot into Linux and work there (without installing anything to the actual hard drive), or you could set up a new OS inside of a virtual machine. Good luck.
1
0
0
Hi, I am going interstate and will only have my laptop with me; I do not have admin rights on it and the use of Cmd is banned. I want to be able to use Pygame on my laptop. How can I install the module without the command line?
Installing modules in Python without command line
0
0
0
481
30,954,589
2015-06-20T13:36:00.000
2
0
1
1
python,c++
30,954,837
2
false
0
1
Look for Python.NET, which is capable of making calls to interfaces written in .NET-supported languages. Here is all you need to do. Steps: 1. Download and put the two files Python.Runtime.dll and clr.pyd in your DLLs folder. 2. From your Python prompt (the >>> prompt), try import clr; if it doesn't give any error, you are good to go. 3. Next, put your C++ DLL inside the Lib/site-packages folder (this is not mandatory, but good for beginners). 4. After importing clr, try importing your DLL as a module: import YourDllName. 5. If that doesn't give you an error, voila, you are done. That's all, folks :)
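Note that the two commands named in the question can also be run directly with the standard library's subprocess module; a minimal sketch, where the executable name is the question's own placeholder.

```python
import subprocess

# Build the C++ program, then run the resulting binary.
subprocess.check_call(["make"])
output = subprocess.check_output(["./name_of_the_executable"])
print(output)
```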
1
0
0
I need to execute C++ code to acquire images to process in Python. I need to use these commands from Python: make and ./name_of_the_executable. Could anybody please help me figure out how to do it?
Excuting cpp file from Python
0.197375
0
0
70
30,957,671
2015-06-20T18:49:00.000
1
0
1
0
python,windows,pip,anaconda
30,958,087
2
true
0
0
Just change your PATH environment variable to put C:\PythonXX\Scripts (where XX is the version of Python, usually 27 or 34) at the beginning. Click on My Computer -> Properties -> System Properties -> Advanced -> Environment Variables, then select Path in either the System Variables section (if you have Administrator access), or User Variables if you don't. Click Edit and put the correct path at the beginning, followed by a semi-colon ;. Save everything, close your command line session(s), then start a new one. Running pip -V should now print the CPython version and location instead of the Anaconda one.
1
1
0
I have been using Anaconda python on Windows 7, but a package I need isn't supported by Anaconda python, so I installed CPython from python.org. I'd like to install the package to CPython, but pip still installs everything to C:\Anaconda\ ...\site-packages. How can I change this?
Installing packages with pip with multiple python installs
1.2
0
0
544
30,960,617
2015-06-21T02:07:00.000
2
1
0
0
python,c,matlab,lzw,lz77
30,961,062
1
false
0
0
Reconsider your choice of the LZ77/78 algorithms. ECG waves look similar, but they are not binary-identical, so dictionary-based compression algorithms don't provide ideal results. Complicated algorithms can hardly be expressed in a few lines of code.
1
1
1
I am interested in implementing LZ algorithms for the compression of ECG signals and want to optimize the code for a microcontroller, so that it would be entropy-efficient and take less time to compress and decompress the ECG signal. I am totally stuck on how to go about achieving this. I am open to any programming language. I have searched the internet for source code and found a very long implementation which is difficult to understand in a short period of time. Any suggestions...?
LZ 77, 78 algorithm for ECG Compression
0.379949
0
0
339
30,969,031
2015-06-21T20:30:00.000
0
0
1
0
python
30,969,299
2
false
0
0
Printing that many elements will take a non-trivial amount of time, even if it's in fact a small fraction of the total. If your numbers are extremely large (e.g., hundreds of digits), the overhead of converting them to decimal is also going to be a factor. Certainly, if you want to optimize a loop it won't hurt to take out the output (you can still print a short message every million iterations, or whatever). But all this can only shave a percentage of your runtime. To really speed up your code, look for an approach that does not require you to churn through all elements in the power set.
2
1
0
I have a list of around 40 items. I am finding and printing its power set, so the complexity of my code is n*2^n. Undoubtedly, it is taking a long time. But if I remove the print statement, will it bring any significant improvement to the runtime of the code? In other words, does print add a significant overhead?
Does adding a print statement in a loop add a significant overhead for large numbers?
0
0
0
222
30,969,031
2015-06-21T20:30:00.000
0
0
1
0
python
30,969,246
2
false
0
0
Printing isn't very CPU consuming itself, but a lot of OSes has got an artificial limit on the speed of printing characters to the command line, so I think it will be very significant (maybe even a hundred times slower).
2
1
0
I have a list of around 40 items. I am finding and printing its power set, so the complexity of my code is n*2^n. Undoubtedly, it is taking a long time. But if I remove the print statement, will it bring any significant improvement to the runtime of the code? In other words, does print add a significant overhead?
Does adding a print statement in a loop add a significant overhead for large numbers?
0
0
0
222
30,974,575
2015-06-22T07:48:00.000
1
0
0
0
python,xlrd,openpyxl
30,974,768
1
false
0
0
xlrd can read both xlsx and xls files, so it's probably simplest to use that. Support for xlsx isn't as extensive as openpyxl but should be sufficient. There's a risk of losing information in converting xlsx to xls because xlsx files can be much larger.
1
0
0
I have a web application (based on Django 1.5) wherein a user uploads a spreadsheet file. I've been using xlrd for manipulating xls files and looked into openpyxl, which claims to support xlsx/xlsm files. So is there a common way to read/write both xls and xlsx files? Another option could be to convert the uploaded file to xls and use xlrd. For this I looked into gnumeric and ssconvert; this would be favorable since all my existing code is written using xlrd and I would not have to change the existing codebase. So should I change the library I use or go with the conversion solution? Thanks in advance.
How do I read/write both xlsx and xls files in Python?
0.197375
1
0
1,651
30,975,327
2015-06-22T08:34:00.000
0
0
1
0
python,linux,pip,python-venv
31,356,654
2
false
0
0
This is perhaps not the best solution. Having the same issue on Fedora 22, I managed to install python3 packages using pip this way: sudo pip3 install --install-option="--prefix=/usr/lib/python3.4/site-packages" package_name
1
1
0
Ask: I can't install or upgrade any lib for python 3.4 because pip, pip3, pip3.4 not working or connected to python2.7. I tried to set alias python=python3 and use just pip: sudo pip install selenium Requirement already satisfied (use --upgrade to upgrade): selenium in /usr/local/lib/python2.7/dist-packages I tried pip3: sudo pip3 install selenium Requirement already satisfied (use --upgrade to upgrade): selenium in /usr/local/lib/python2.7/dist-packages I tried pip3.4: sudo pip3.4 install selenium Requirement already satisfied (use --upgrade to upgrade): selenium in /usr/local/lib/python2.7/dist-packages I tried to create venv for python3.4: volodka@interceptor:/usr/bin$ sudo virtualenv -p /usr/bin/python3.4 python3env Running virtualenv with interpreter /usr/bin/python3.4 Using base prefix '/usr' New python executable in python3env/bin/python3.4 Also creating executable in python3env/bin/python Installing setuptools, pip, wheel...done. volodka@interceptor:/usr/bin/python3env/bin$ . activate (python3env)volodka@interceptor:/usr/bin/python3env/bin$ sudo pip install selenium Requirement already satisfied (use --upgrade to upgrade): selenium in /usr/local/lib/python2.7/dist-packages Pip3, pip3.4 in virtualenv also try to install lib for python2.7. What I'm doing wrong?
python&linux pip always try to use python2.7 instead of 3.4
0
0
0
1,996
30,976,120
2015-06-22T09:13:00.000
0
0
0
0
python,scikit-learn,tf-idf
56,899,773
3
false
0
0
@kinkajou, no, TF and IDF are not the same, but they belong to the same algorithm, TF-IDF, i.e. Term Frequency-Inverse Document Frequency.
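To answer the practical part of the question: the vectorizer's vocabulary_ attribute maps each term to its column in the D x F matrix, so you can look up the tf-idf score of a specific word in a specific document. A small sketch with toy documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat on the cat"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)      # D x F sparse matrix

col = vec.vocabulary_["cat"]     # term -> column index
print(X[1, col])                 # tf-idf of "cat" in document 1
```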
1
7
1
I have code that runs basic TF-IDF vectorizer on a collection of documents, returning a sparse matrix of D X F where D is the number of documents and F is the number of terms. No problem. But how do I find the TF-IDF score of a specific term in the document? i.e. is there some sort of dictionary between terms (in their textual representation) and their position in the resulting sparse matrix?
Find the tf-idf score of specific words in documents using sklearn
0
0
0
11,861
30,981,249
2015-06-22T13:29:00.000
1
0
1
0
python,packages
30,981,746
6
false
0
0
You can try to install pytabix to a different folder: pip install --target="/path/to/your_new_path" pytabix, then add this new path to sys.path: import sys; sys.path.insert(0, "/path/to/your_new_path"), and then import it like: import your_new_path.tabix
3
12
0
There is, apparently, a package loaded in our Python/2.7.2 environment named CrossMap which has, as a subpackage, tabix. When I start this version of Python and import tabix, tabix shows: /hpcf/apps/python/install/2.7.2/lib/python2.7/site-packages/CrossMap-0.1.6-py2.7-linux-x86_64.egg/tabix/__init__.pyc indicating that it is being loaded from CrossMap. Now, even if I pip install pytabix (which creates a tabix.so file in the site-packages directory), it still hits the CrossMap version. I even tried installing pytabix locally with pip install --user pytabix, but it still loads the CrossMap version. How can I point import tabix to the tabix.so file instead of the subpackage of CrossMap? UPDATE: Even after moving CrossMap to an 'old_versions' directory, when I try to load tabix, it still hits a different package which has tabix as a subpackage. When I import tabix and then run tabix, I get a pysam package from RSeQC-2.6.1 even though I have pytabix as its own package in the main site-packages directory. This same thing happens with the pysam package. Any ideas here?
How can I keep python from loading the 'wrong' package?
0.033321
0
0
10,184
30,981,249
2015-06-22T13:29:00.000
4
0
1
0
python,packages
31,118,453
6
false
0
0
You may be able to use a .pth file in the Python version's site-packages folder to manually sort the sys.path for a user. easy_install uses this to add an egg's contents to your path as well.
3
12
0
There is, apparently, a package loaded in our Python/2.7.2 environment named CrossMap which has, as a subpackage, tabix. When I start this version of Python and import tabix, tabix shows: /hpcf/apps/python/install/2.7.2/lib/python2.7/site-packages/CrossMap-0.1.6-py2.7-linux-x86_64.egg/tabix/__init__.pyc indicating that it is being loaded from CrossMap. Now, even if I pip install pytabix (which creates a tabix.so file in the site-packages directory), it still hits the CrossMap version. I even tried installing pytabix locally with pip install --user pytabix, but it still loads the CrossMap version. How can I point import tabix to the tabix.so file instead of the subpackage of CrossMap? UPDATE: Even after moving CrossMap to an 'old_versions' directory, when I try to load tabix, it still hits a different package which has tabix as a subpackage. When I import tabix and then run tabix, I get a pysam package from RSeQC-2.6.1 even though I have pytabix as its own package in the main site-packages directory. This same thing happens with the pysam package. Any ideas here?
How can I keep python from loading the 'wrong' package?
0.132549
0
0
10,184
30,981,249
2015-06-22T13:29:00.000
1
0
1
0
python,packages
31,208,210
6
false
0
0
I would suggest you use a virtualenv for your project. Virtualenv is an excellent way to avoid namespace pollution and contentions like you are having. To debug a situation where you do not know where a specific module is hiding, you can try to import the specific module in an interactive Python shell and print the __file__ property for the module. Doesn't work in all cases though, e.g. zipped modules, but can get you started.
3
12
0
There is, apparently, a package loaded in our Python/2.7.2 environment named CrossMap which has, as a subpackage, tabix. When I start this version of Python and import tabix, tabix shows: /hpcf/apps/python/install/2.7.2/lib/python2.7/site-packages/CrossMap-0.1.6-py2.7-linux-x86_64.egg/tabix/__init__.pyc indicating that it is being loaded from CrossMap. Now, even if I pip install pytabix (which creates a tabix.so file in the site-packages directory), it still hits the CrossMap version. I even tried installing pytabix locally with pip install --user pytabix, but it still loads the CrossMap version. How can I point import tabix to the tabix.so file instead of the subpackage of CrossMap? UPDATE: Even after moving CrossMap to an 'old_versions' directory, when I try to load tabix, it still hits a different package which has tabix as a subpackage. When I import tabix and then run tabix, I get a pysam package from RSeQC-2.6.1 even though I have pytabix as its own package in the main site-packages directory. This same thing happens with the pysam package. Any ideas here?
How can I keep python from loading the 'wrong' package?
0.033321
0
0
10,184
30,985,379
2015-06-22T16:40:00.000
1
0
0
1
python,linux,elf,readelf
31,053,673
1
true
0
0
Using pyelftools: tag = section.get_tag(n) gives the nth tag of a specific section.
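A hedged sketch of dumping the whole .dynamic section with pyelftools, roughly what readelf -d shows; the file name comes from the question, and depending on the pyelftools version the section name may need to be passed as bytes.

```python
from elftools.elf.elffile import ELFFile

with open("elfFile", "rb") as f:
    elf = ELFFile(f)
    dynamic = elf.get_section_by_name(".dynamic")
    for tag in dynamic.iter_tags():       # walk every dynamic tag
        print(tag.entry.d_tag, tag.entry)
```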
1
1
0
I want to get information about the dynamic section of an ELF file. Basically the same information I get using the command line: readelf -d elfFile
How can I read the dynamic section of an ELF file in python
1.2
0
0
1,452
30,985,798
2015-06-22T17:04:00.000
0
0
0
0
python,django
30,986,554
1
false
1
0
Definitely go for "way #1". Keeping an independent layer for your service(s) API will help a lot later when you have to enhance that layer for reliability or extending the API. Stay away from singletons, they're just global variables with a new name. Use an appropriate life cycle for your interface objects. The obvious "instantiate for each request" is not the worst idea, and it's easier to optimize that if needed (by caching and/or memoization) than it's to unroll global vars everywhere. Keep in mind that web applications are supposed to be several processes on a "shared nothing" design; the only shared resources must be external to the app: database, queue managers, cache storage. Finally, try to avoid using this API directly from the view functions/CBV. Either use them from your models, or write a layer conceptually similar to the way models are used from the views. No need of an ORM-like api, but keep any 'business process' away from the views, which should be concerned only with the presentation and high level operations. Think "thick model, thin views" with your APIs as a new kind of models.
1
1
0
I am building my first Django web application and I need a bit of advice on code layout. I need to integrate it with several other applications that are exposed through RESTful APIs and additionally Django's internal data. I need to develop the component that will pull data from various sources, django's DB, format it consistently for output and return it for rendering by the template. I am thinking of the best way to write this component. There are a couple of ways to proceed and I wanted to solicit some feedback from more experienced web developers on things I may have missed. Way #1 Develop a completely standalone objects for interacting with other applications via their APIs, etc... This would not have anything related with django and test independently. Then in django, import this module in the views that need it, run object methods to get required data, etc... If I need to access any of this functionality via a GET request (like through JavaScript), I can have a dedicated view that imports the module and returns json. Way #2 Develop this completely as django view(s) expose as a series of GET/POST calls that would all call each other to get the work done. This would be directly tied in the application. Way #3 Start with Way #1, but instead of creating a view, package it as a django app. Develop unit tests on the app as well as the individual objects. I think that way 1 or 3 would be very much encapsulated and controlled. Way 2 would be more complicated, but facilitate higher component re-use. What is better from a performance standpoint? If I roll with Way #1 or 3, would an instance of the object be instantiated for each request? If so this approach may be a bit too heavy for this. If I proceed with this, can they be singletons? Anyway, I hope this makes sense. thanks in advance.
Python Module in Django
0
0
0
765
30,987,425
2015-06-22T18:40:00.000
0
0
0
0
python,networkx
31,006,818
1
false
0
0
I ended up forking the spring layout with a hold_dim parameter, so that while updating the positions only x or y is changed.
1
0
1
I wonder whether networkx.drawing implements holding one dimension fixed during layout optimization with a predefined position array. Let's say you want to optimize the layout of a graph and have the x dimension of the positions already given, so you only want to optimize the y directions of the vertices. So far I've only noticed that one can hold the positions of certain vertices, but then of course none of those are moved. In the Python package grandalf, they have DigcoLayout, so I'd expect something similar in networkx.
Drawing layout with constraints in networkx
0
0
0
380
30,988,075
2015-06-22T19:17:00.000
2
0
1
0
python,ipython,ipython-notebook
63,175,447
8
false
0
0
If someone stumbles here as of 2020: it's now possible to move .ipynb or other kinds of files by simply checking the file and clicking move. Nevertheless, for .ipynb files you must be sure that the notebook isn't running (gray icon). If it's running, it will be green and you must shut it down before moving.
4
14
0
After creating a .ipynb file in the root directory /, how can you move that .ipynb file into a deeper directory, i.e. /subdirectory, using the web UI?
Move .ipynb using the IPython Notebook Web Interface
0.049958
0
0
23,019
30,988,075
2015-06-22T19:17:00.000
0
0
1
0
python,ipython,ipython-notebook
40,560,790
8
false
0
0
Ran into this issue and solved it by: Create a new folder in Jupyter notebooks. Go to the folder/directory and click the "Upload" button, which is next to the "New" button. Once you click "Upload", your PC's file explorer window will pop up; now simply find where you have your Jupyter notebooks saved on your local machine and upload them to that desired file/directory. Although this doesn't technically move your Python files to your desired directory, it does make a copy in that directory. So next time you can be more organized and just click on a certain directory that you want and create/edit/view the files you chose to keep in there, instead of looking for them through your home directory.
4
14
0
After creating a .ipynb file in the root directory /, how can you move that .ipynb file into a deeper directory, i.e. /subdirectory, using the web UI?
Move .ipynb using the IPython Notebook Web Interface
0
0
0
23,019
30,988,075
2015-06-22T19:17:00.000
0
0
1
0
python,ipython,ipython-notebook
43,175,463
8
false
0
0
Duplicate the notebook and delete the original, was my workaround.
4
14
0
After creating a .ipynb file in the root directory /, how can you move that .ipynb file into a deeper directory, i.e. /subdirectory, using the web UI?
Move .ipynb using the IPython Notebook Web Interface
0
0
0
23,019
30,988,075
2015-06-22T19:17:00.000
1
0
1
0
python,ipython,ipython-notebook
46,796,512
8
false
0
0
IPython 5.1: 1. Make a new folder: with IPython running, New, Folder, select the 'Untitled folder' just created, rename it (and remember the name!). 2. Go to the file you want to move, Move, write the new directory name at the prompt. Note: if the folder exists, skip 1. Note: if you want to leave a copy in the original directory, Duplicate and then move.
4
14
0
After creating a .ipynb file in the root directory /, how can you move that .ipynb file into a deeper directory, i.e. /subdirectory, using the web UI?
Move .ipynb using the IPython Notebook Web Interface
0.024995
0
0
23,019
30,990,501
2015-06-22T21:54:00.000
1
0
0
0
python,matplotlib,3d,data-visualization
31,486,223
2
true
0
0
I actually was able to do this using the matplotlib.patches library, creating a patch for every data point, and then making it whatever shape I wanted with the help of mpl_toolkits.mplot3d.art3d.
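A minimal sketch of that approach; the points and disc radius are made up.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from mpl_toolkits.mplot3d import Axes3D, art3d

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# One flat disc per data point, lying in the x-y plane at its z value.
for x, y, z in [(1, 1, 0.2), (2, 3, 0.5), (3, 1, 0.8)]:
    disc = Circle((x, y), radius=0.4, alpha=0.6)
    ax.add_patch(disc)
    art3d.pathpatch_2d_to_3d(disc, z=z, zdir="z")

ax.set_xlim(0, 4); ax.set_ylim(0, 4); ax.set_zlim(0, 1)
plt.show()
```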
1
0
1
In a standard 3D python plot, each data point is, by default, represented as a sphere in 3D. For the data I'm plotting, the z-axis is very sensitive, while the x and y axes are very general, so is there a way to make each point on the scatter plot spread out over the x and y direction as it normally would with, for example, s=500, but not spread at all along the z-axis? Ideally this would look like a set of stacked discs, rather than overlapping spheres. Any ideas? I'm relatively new to python and I don't know if there's a way to make custom data points like this with a scatter plot.
How to make data points in a 3D python scatter plot look like "discs" instead of "spheres"
1.2
0
0
612
30,991,181
2015-06-22T22:54:00.000
1
0
1
0
ipython,ipython-notebook,postscript
30,995,920
1
true
0
0
PostScript isn't a 'file format', it's a programming language. In order to render PostScript you will need a complete PostScript interpreter. Presumably you could write one in Python; the last time I saw an estimate for the amount of time required to write a full PostScript interpreter, it was 5 man-years, and it's probably a bit more now. Or you could render the program externally using Ghostscript to produce something you can already read. Since you say PDF is already supported, it would seem sensible to convert to that instead; since it's not a bitmap format, you won't lose scalability.
1
2
0
How can we render postscript documents in IPython notebook? I saw there is support for other file formats such as jpg, png, pdf and svg but couldn't find any mention about postscript.
IPython notebook embed postscript
1.2
0
0
711
30,991,451
2015-06-22T23:18:00.000
1
0
0
0
python,user-interface,automation
34,939,080
1
false
0
1
Please try to run your script with administrative privileges. If you are using PowerShell, then run PowerShell as administrator.
1
0
0
I am currently trying to write a script to automate the ATTO disk benchmark GUI. I can use the script to successfully locate images in the GUI, but I can not get any clicks generated by the script to register in the application. I have modified the script to test if I could use the PyAutoGUI package to click things in other applications, and have been able to successfully click things in other applications. Has anyone else had this issue with other applications using the PyAutoGUI package, and if so did you ever find any solution to the issue?
PyAutoGUI click() function will not register in certain applications
0.197375
0
0
1,086
30,992,964
2015-06-23T02:38:00.000
0
0
0
0
python,file,python-2.7
30,993,345
2
false
0
0
1) Open 10 files. 2) Read 1 record from each file (or 10; the question is not clear). 3) Use these records. 4) Wait until the current 5-second interval elapses. 5) Go to step 2.
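A hedged sketch of that loop; the file names and the consumer are placeholders.

```python
import csv
import time

readers = [csv.reader(open("file%d.csv" % i)) for i in range(10)]

while readers:
    start = time.time()
    batch = []
    for r in list(readers):
        try:
            batch.append(next(r))   # one record per file per interval
        except StopIteration:
            readers.remove(r)       # this file is exhausted
    print(batch)                    # stand-in for the real consumer
    time.sleep(max(0.0, 5.0 - (time.time() - start)))  # hold the 5 s rate
```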
1
0
1
I have 10 CSV files with a million records. I want to read the 10 files in parallel, but at a specific rate (10 records per 5 seconds). What is an efficient way to do so? I am using Windows, in case someone suggests using the OS scheduler.
Read many files in parallel with a specific sample rate
0
0
0
90
30,993,221
2015-06-23T03:13:00.000
1
1
1
0
java,python,c,calculator,ti-basic
30,993,515
4
false
0
1
You would need a compiler that will translate whatever language you're writing in (in the case of Java, an implementation of the JVM as well) to the assembly used by the calculator's CPU, it's probably not likely you will find an easy to use solution as calculators like the TI-84 are pretty archaic.
3
6
0
I am interested in programming with different languages besides TI-Basic (like Java, C, and Python) on my TI-84 Plus calculator. Does my calculator support this, and if not, are there any calculators on the market that would be able to do this? Thanks in advance! (The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out some algorithms on the go that come to mind.) It doesn't have to be a calculator, just something cheap and programmable and something I can carry around in my hand.
Multiple Language Programming on Ti-Calculator
0.049958
0
0
3,809
30,993,221
2015-06-23T03:13:00.000
1
1
1
0
java,python,c,calculator,ti-basic
69,366,466
4
false
0
1
The TI-84 Plus CE Python allows you to code in Python but it is a barebones implementation. But it has been pretty useful for me.
3
6
0
I am interested in programming with different languages besides TI-Basic (like Java, C, and Python) on my TI-84 Plus calculator. Does my calculator support this, and if not, are there any calculators on the market that would be able to do this? Thanks in advance! (The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out some algorithms on the go that come to mind.) It doesn't have to be a calculator, just something cheap and programmable and something I can carry around in my hand.
Multiple Language Programming on Ti-Calculator
0.049958
0
0
3,809
30,993,221
2015-06-23T03:13:00.000
3
1
1
0
java,python,c,calculator,ti-basic
31,085,450
4
true
0
1
After a little research, I found some hand-held "pocket" devices. The Palm m500 has a JVM to program Java on. There apparently was a website that had an SDK for C, but the website was removed. In regards to calculators: the TI-82, 83, 84, 85, 86, and related models all support TI-BASIC and Z80 ASM. The TI-92, Voyage 200, TI-89, and related models all support TI-BASIC, C, and 68000 ASM. The TI-Nspire supports TI-BASIC and Lua. The HP 50g supports RPL (User and System), ARM ASM, Saturn ASM, and C. The HP 49, 48G, and 48S support Saturn ASM and RPL.
3
6
0
I am interested in programming in different languages besides TI-BASIC (like Java, C, and Python) on my TI-84 Plus calculator. Does my calculator support this, and if not, are there any calculators on the market that can? Thanks in advance! (The idea is that when I don't have access to my computer at home, I could just pull out my pocket calculator and start programming and testing out algorithms that come to mind, on the go.) It doesn't have to be a calculator; just something cheap and programmable that I can carry around in my hand.
Multiple Language Programming on Ti-Calculator
1.2
0
0
3,809
30,994,391
2015-06-23T05:18:00.000
0
0
1
0
python,tuples
30,994,460
2
false
0
0
Accessing an element of a collection like this is wrong: K[0[0]]. Here the expression 0[0] tries to subscript the integer 0 itself, which raises a TypeError. Try K[0][0] instead.
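A quick illustration of the difference:

```python
K = [(7, 8), 8]
print(K[0])     # (7, 8) -- the tuple is the first element of the list
print(K[0][0])  # 7      -- index the list first, then the tuple
# K[0[0]] fails: 0[0] subscripts the integer 0 itself,
# raising "TypeError: 'int' object is not subscriptable"
```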
1
0
0
I have a list that is a combination of a tuple and an integer, e.g. K = [(7,8), 8]. How do I access the first element of the tuple, 7? I tried K[0[0]] and was not successful.
Python: List containing tuple and integer
0
0
0
55
30,997,900
2015-06-23T08:42:00.000
0
0
0
1
python,automation,chef-infra,orchestration
31,379,600
2
true
0
0
I applied the following steps to solve my problem: 1. Found the directory that gets created by the installation of the independent software. 2. Waited for that directory to be created (using a ruby_block that sleeps and polls). 3. Initiated the installation of the dependent software thereafter, ensuring the dependency is satisfied. This solved my issue.
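The same wait-then-install idea, sketched in Python rather than in the actual Chef ruby_block the answer used; the marker directory path, timeout, and polling interval below are made-up placeholders.

```python
import os
import time

MARKER_DIR = r"C:\Program Files\SoftwareA"  # hypothetical directory created by A's installer

def wait_for_dir(path, timeout=600, poll=5):
    """Block until `path` exists, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while not os.path.isdir(path):
        if time.monotonic() > deadline:
            raise TimeoutError(f"{path} never appeared")
        time.sleep(poll)

wait_for_dir(MARKER_DIR)
# ... only now trigger installation of the dependent software D ...
```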
1
0
0
As part of a platform setup orchestration, we use our Python package to install various software packages on a cluster of machines in the cloud. We have the following scenario: our Python package initiates installation of certain software packages (say A, B, C), then simultaneously initiates installation of certain others (say D, E, F). (N.B.: D, E, F are installed through our Chef cookbooks, and A, B, C through our Python programs.) Our problem is that software D (installed through a Chef cookbook) depends on software A. Since D depends on A, the cookbook for D does not find A on the system and fails. What I was thinking was: can we have a dependency in the Chef cookbook saying to proceed only if A is found on the system, and otherwise wait? Is this possible? Are there any alternatives for the above problem? Thanks
chef cookbook dependency on system software not on another cookbook
1.2
0
0
60
31,003,551
2015-06-23T13:02:00.000
0
0
0
0
python,matplotlib
31,004,307
2
false
0
1
One solution is to make Matplotlib react to key events immediately. Another is to draw a "cursor" or marker line on the plot and change its coordinates in response to those key events: e.g., draw a vertical line and update its X coordinate with the left and right arrow keys. You can then add a label with the X coordinate along the line, and other nice tricks.
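A minimal sketch of the vertical-line approach, assuming a step of one data unit per key press; the random image is a placeholder for the asker's 2D data. Note that the arrow keys are bound to the toolbar's back/forward shortcuts by default, so the sketch frees them first.

```python
import matplotlib.pyplot as plt
import numpy as np

# Free the arrow keys from the toolbar's back/forward shortcuts.
plt.rcParams["keymap.back"] = [k for k in plt.rcParams["keymap.back"] if k != "left"]
plt.rcParams["keymap.forward"] = [k for k in plt.rcParams["keymap.forward"] if k != "right"]

fig, ax = plt.subplots()
ax.imshow(np.random.rand(20, 20))       # placeholder 2D image
cursor = ax.axvline(x=10, color="red")  # the movable marker line

def on_key(event):
    x = cursor.get_xdata()[0]
    if event.key == "left":
        cursor.set_xdata([x - 1, x - 1])
    elif event.key == "right":
        cursor.set_xdata([x + 1, x + 1])
    ax.set_title(f"x = {cursor.get_xdata()[0]:.0f}")  # label showing the X coordinate
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("key_press_event", on_key)
plt.show()
```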
1
1
0
I would like to move the cursor in a Python Matplotlib window one (data) pixel at a time (I'm displaying a 2D image), using the keyboard arrow keys. I can trap the key-press events, but how do I reposition the cursor based on them?
move cursor in matplotlib window with keyboard stroke
0
0
0
533