Dataset schema (field: dtype, observed range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
28,422,520
2015-02-10T01:19:00.000
0
0
0
1
python,django,macos,pip
70,606,374
2
false
1
0
Try creating a virtual environment. This can be achieved using Python modules like venv or virtualenv. There you can change your Python path without affecting any other programs on your machine. If the error is still that you do not have permission to read files, try sudo pip install, but only as a last resort, since pip recommends not running it as root.
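A minimal sketch of the programmatic route (assuming Python 3.4+, where venv ships in the standard library; the directory name is made up for illustration):

```python
import os
import tempfile
import venv

# Create a virtual environment in a throwaway directory.  Packages
# installed inside it never touch the system site-packages, so the
# preinstalled Python 2.7 stays untouched.
env_dir = os.path.join(tempfile.mkdtemp(), "django-env")
venv.create(env_dir, with_pip=False)  # with_pip=True also bootstraps pip

# The environment gets its own interpreter and a pyvenv.cfg marker file.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # True
```

From the shell, `python3 -m venv django-env` followed by `source django-env/bin/activate` achieves the same thing, after which `pip install Django==1.7.4` targets the environment's Python 3.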
2
0
0
I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install **kwargs File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install self.move_wheel_files(self.source_dir, root=root) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files isolated=self.isolated, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files clobber(source, lib_dir, True) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber os.makedirs(destdir) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django' Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4?
Mac OSX Trouble Running pip commands
0
0
0
797
28,423,604
2015-02-10T03:32:00.000
0
0
0
0
python,xlsx,xlsxwriter
28,946,313
1
false
0
0
Try the win32com package. It offers a VBA-like interface for Python; you can find it on SourceForge. I've done some projects with this package, and we can discuss your problem further if this package helps.
1
0
0
I've been looking for ways to do this and haven't found a good solution to this. I'm trying to copy a sheet in an .xlsx file that has macros to another workbook. I know I could do this if the sheet contained data in each cell but that's not the case. The sheet contains checkboxes and SOME text. Is there a way to do this in python (or any other language for that matter?) I just need it done programmatically as it will be part of a larger script.
Copy sheet with Macros to workbook in Python
0
1
0
194
28,432,231
2015-02-10T12:55:00.000
0
0
0
1
python,python-2.7
28,432,344
3
false
0
0
Having thought about the problem itself: no, it's not possible to do this in any general way, because there's absolutely no reason why C:\path\to\file should correspond to \\PCNAME\C:\path\to\file; that requires that the folder C:\ has the share name C:, which might be canonical on Windows machines, but is not fixed. In fact, normal users don't even get to read the full list of shares of a Windows computer, including their local bindings. EDIT: as the others pointed out, there are ways to get your machine's network name, but you'll have to build the network path with that manually, and I think your question was about automatically matching a file to its Windows share "URL".
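To illustrate the manual construction the answer describes, here is a hedged sketch that maps a drive-letter path onto the administrative C$ share. The share name is an assumption: C$ exists by default on Windows, but it is reachable only by administrators and may be disabled, so this is illustration rather than a general solution.

```python
import socket

def to_unc(local_path):
    """Turn a local Windows path like C:\\Users\\... into an administrative
    UNC path like \\\\HOST\\C$\\Users\\...  The C$ share is a Windows
    default but admin-only; adjust the share name for a real deployment."""
    drive, sep, rest = local_path.partition(":\\")
    if not sep:
        raise ValueError("expected a drive-letter path like C:\\...")
    host = socket.gethostname()  # the machine's network name
    return "\\\\{}\\{}$\\{}".format(host, drive.upper(), rest)

print(to_unc("C:\\Users\\some_user\\Desktop\\file.txt"))
```

The result still has to be reachable from the other PC by name, which depends on your network's name resolution (DNS, NetBIOS, or hosts file).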
1
1
0
I have a local path like C:\Users\some_user\Desktop\some_folder\attachments\20150210115. I need to generate a network path with Python, like \\PC_NAME\C:\Users\some_user\Desktop\some_folder\attachments\20150210115 or something like this, to reach this folder from another Windows PC. Is it possible to do this in Python automatically, or do I just need to hardcode the local path, replacing the PC name and other stuff? UPDATE Sorry, I'm not so familiar with Windows paths, as I live in Linux. I just need to generate a network path for a local path and send it to another device.
How to get network path from local path?
0
0
1
1,113
28,433,859
2015-02-10T14:15:00.000
1
1
1
0
python,python-3.x,setuptools,setup.py
28,435,407
1
true
0
0
setup.py is intended for people who are writing their own Python code and need to deploy it. If you either haven't written any Python code, or for some reason do not need to deploy any of the code you have written (you're not doing development in production, are you?), setup.py is not going to be terribly useful.
1
0
0
I'm in the midst of writing our build/deployment scripts for a small Python application that will be run multiple times per day using a scheduled task. I figure now is as good a time as ever to investigate Python's setup.py feature. My question is, is there any sort of benefit to investing the time in creating a setup.py file for internal business applications as opposed to say, writing a simple script that will activate my virtualenv, then download my pip packages based on my requirements file?
Is Python's setup.py beneficial for internal applications?
1.2
0
0
76
28,435,372
2015-02-10T15:23:00.000
7
1
0
0
python,raspberry-pi,mqtt
28,435,487
1
true
0
0
You can run as many clients as you like on the same machine as the broker (you could even run multiple brokers, as long as they listen on different ports). The only thing you need to do is ensure that each client has a different client ID.
1
4
0
So I'm building a system where I scan an RFID tag with a reader connected to a Raspberry Pi. The RFID tag ID should then be sent to another "central" RPi, where a database is checked for some info, and if it matches, the central Pi sends a message to a lamp (also connected to a Pi) which will then turn on. This is just the start of a larger home-automation system. I read about MQTT making it very easy to make more RPis communicate and act on events like this. The only thing I am wondering about, but can't find documented on the internet, is whether the central Pi in my case can act as the broker, but also be subscribed to the topic for the RFID tag ID, check the database and then publish to another topic for the light. Purely based on logical thinking I'd say yes, since the broker is running in the background. Thus I would still be able to run a Python script that subscribes/publishes to, I'm guessing, localhost instead of the central Pi's IP address and port. Can anyone confirm this? I can't test it myself yet because I have just ordered the equipment and am doing lots of preparatory research.
MQTT broker and client on the same RPI
1.2
0
0
2,202
28,437,813
2015-02-10T17:18:00.000
0
1
0
1
python,linux,ps
31,284,981
2
false
0
0
The easiest way to keep it simple would be to create a screen session for each: screen -S prod and screen -S test, then run the Python script in the background of each and detach the screen (using Ctrl+a, d). Then when you need to stop one, reattach with screen -r prod, kill/restart it, then detach again.
2
0
0
I am running two different Python scripts on an Ubuntu VPS. One of them is the production version, and the other one is for testing purposes. From time to time, I need to kill or restart one of them. However, ps does not show which script each Python process is running. What is the conventional way to do this?
How do I stop a process running a certain Python script?
0
0
0
194
28,437,813
2015-02-10T17:18:00.000
2
1
0
1
python,linux,ps
28,437,903
2
false
0
0
ps -AF will give you All processes (not only the ones in your current terminal, or run as your current user), in Full detail, including arguments.
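Since ps -AF shows full command lines, picking out the process running a particular script is just text filtering. A sketch of that idea (the sample ps output below is fabricated for illustration):

```python
def pids_running(script_name, ps_output):
    """Given the text output of `ps -AF`, return the PIDs of processes
    whose command line mentions script_name."""
    pids = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        fields = line.split(None, 10)
        if len(fields) > 1 and script_name in line:
            pids.append(int(fields[1]))  # PID is the second column in -AF output
    return pids

# Fabricated sample of `ps -AF` output:
sample = (
    "UID   PID  PPID  C    SZ   RSS PSR STIME TTY  TIME CMD\n"
    "user 1234     1  0  1000  2000   0 10:00 ?    0:00 python prod.py\n"
    "user 5678     1  0  1000  2000   0 10:01 ?    0:00 python test.py\n"
)
print(pids_running("prod.py", sample))  # [1234]
```

In practice you would feed it `subprocess.check_output(["ps", "-AF"], text=True)`, or just use `pgrep -f prod.py` from the shell.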
2
0
0
I am running two different Python scripts on an Ubuntu VPS. One of them is the production version, and the other one is for testing purposes. From time to time, I need to kill or restart one of them. However, ps does not show which script each Python process is running. What is the conventional way to do this?
How do I stop a process running a certain Python script?
0.197375
0
0
194
28,440,950
2015-02-10T20:17:00.000
0
1
0
0
python,ruby
29,416,346
1
true
0
0
Follow-up: I eventually decided that this was forcing a square peg in a round hole. We now just run the Python dependency as a HTTP service and call it that way.
1
2
0
I have a small Python script that has a dependency on the scipy library and a few others. It exposes a few methods: weather_prediction, current_temperature, and so on. I would like to make this script callable from a Ruby gem, and wrap the Python methods in similar Ruby methods so that consumers of the gem can interface with Ruby, not Python. Ruby has the ability to have C extensions, but that's not really what I'm after here; I'd rather just have a way to talk to the Python directly. Is that possible?
Including a Python dependency in a Ruby gem
1.2
0
0
79
28,441,674
2015-02-10T21:00:00.000
1
0
0
1
python,python-2.7,ubuntu-12.04,openstack,devstack
28,613,401
1
false
0
0
artemdevel is right! Run the following and you should be good to go: sudo rm /usr/lib/python2.7/dist-packages/setuptools.egg-info, then sudo apt-get install --reinstall python-setuptools
1
0
0
I am trying to install the Icehouse OpenStack on Ubuntu 12.04 amd64, which is in VirtualBox. I am using devstack to do so, executing the script "stack.sh". While executing, it throws this error: IOError: [Errno 2] No such file or directory: '/usr/lib/python2.7/dist-packages/setuptools.egg-info'. I even updated and upgraded Ubuntu (by running sudo apt-get update/upgrade) before downloading devstack, and it took a long time, but even after that the error still occurs.
Error while installing Icehouse openstack using devstack
0.197375
0
0
636
28,444,173
2015-02-10T23:57:00.000
1
0
0
1
python,macos,argument-passing,exit-code,py2app
28,468,268
2
true
0
0
Quoting from the py2app 0.6.4 minor feature release: "Issue #15: py2app now has an option to emulate the shell environment you get by opening a window in the Terminal. Usage: python setup.py py2app --emulate-shell-environment. This option is experimental; it is far from certain that the implementation works on all systems." Using this option with py2app solved the problem of blocked communication between the py2app-frozen app and the OS X shell.
2
0
0
I have successfully frozen a Python-based GUI script using py2app, but I run into trouble using this app on Mac. The app is supposed to send arguments/parameters to Clustal, a terminal-based application, but it instead returns an error: non-zero exit status 127, '/bin/sh: clustal: command not found'. I found that my frozen app can send shell commands successfully when I execute the same app from Frozen_apl.app > Contents > MacOS > Frozen_apl (which is a UNIX executable file). Why do these shell commands get blocked when they are passed directly from the app? How can I get around this problem? Note: Clustal is properly installed and its path is properly set. I use OS X 10.9. I have the same script frozen for Ubuntu and Windows and it works just fine there.
Passing shell command to a terminal application from an App in Mac
1.2
0
0
814
28,444,173
2015-02-10T23:57:00.000
1
0
0
1
python,macos,argument-passing,exit-code,py2app
28,464,729
2
false
0
0
[Based on the discussion in comments] This isn't a problem with the arguments; it's due to the spawned shell not being able to find the clustal executable. I'm not sure why this is, since it's in /usr/local/bin/clustal, and since /usr/local/bin is in OS X's default PATH (it's listed in /etc/paths). Using the full path to the executable worked, so it appears the frozen app is spawning a shell with a non-default PATH. Hardcoding the full path (/usr/local/bin/clustal) in the frozen app isn't really an optimal solution; it'd be better to figure out how to get a normal PATH in the spawned shell. But I'm not familiar enough with py2app to know how to do this. (JeeYem: please give the workaround you came up with in a comment or another answer.)
2
0
0
I have successfully frozen a Python-based GUI script using py2app, but I run into trouble using this app on Mac. The app is supposed to send arguments/parameters to Clustal, a terminal-based application, but it instead returns an error: non-zero exit status 127, '/bin/sh: clustal: command not found'. I found that my frozen app can send shell commands successfully when I execute the same app from Frozen_apl.app > Contents > MacOS > Frozen_apl (which is a UNIX executable file). Why do these shell commands get blocked when they are passed directly from the app? How can I get around this problem? Note: Clustal is properly installed and its path is properly set. I use OS X 10.9. I have the same script frozen for Ubuntu and Windows and it works just fine there.
Passing shell command to a terminal application from an App in Mac
0.099668
0
0
814
28,444,272
2015-02-11T00:08:00.000
0
0
0
0
python,csv,numpy
64,579,432
5
false
0
0
While there is no such parameter in numpy.loadtxt to ignore quoted or otherwise escaped commas, one alternative that has not been suggested yet would be the following: perform a find-and-replace in a text editor to replace the commas with tabs, OR save the file from Excel as tab-delimited. When you then use numpy.loadtxt, simply specify delimiter='\t' instead of a comma delimiter. A simple solution that could save you some code...
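One more standard-library alternative, not mentioned in the answer: Python's csv module implements the quoting rules itself, so the comma inside the quotes does not split the field, and the parsed rows can then be handed to numpy if needed.

```python
import csv
import io

# A line shaped like the one in the question:
raw = '10,"Apple, Banana",20\n30,"Cherry",40\n'

# csv.reader understands RFC 4180-style quoting, so the quoted field
# survives as a single value instead of shifting the columns.
rows = list(csv.reader(io.StringIO(raw)))
print(rows[0])  # ['10', 'Apple, Banana', '20']
```

With a real file, replace the io.StringIO with `open("data.csv", newline="")`; numpy's own genfromtxt can also be fed the already-parsed rows.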
1
7
1
I have a csv file where a line of data might look like this: 10,"Apple, Banana",20,... When I load the data in Python, the extra comma inside the quotes shifts all my column indices around, so my data is no longer a consistent structure. While I could probably write a complex algorithm that iterates through each row and fixes the issue, I was hoping there was an elegant way to just pass an extra parameter to loadtxt (or some other function) that will properly ignore commas inside quotes and treat the entire quote as one value. Note, when I load the CSV file into Excel, Excel correctly recognizes the string as one value.
numpy.loadtxt: how to ignore comma delimiters that appear inside quotes?
0
0
0
5,651
28,451,428
2015-02-11T10:01:00.000
1
0
1
0
python,gdb
34,778,855
2
false
0
0
I had trouble with int(value) giving me a signed number even when the type was explicitly unsigned. I used this to solve the problem: int(value) & 0xffffffffffffffff
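The mask works because Python integers are unbounded, so AND-ing with 2**64 - 1 reinterprets a negative two's-complement value as its unsigned 64-bit bit pattern:

```python
# A 64-bit value that came back as signed (e.g. from int(value) in gdb):
signed = -1

# Masking with 2**64 - 1 keeps the low 64 bits, which is exactly the
# unsigned reading of the same two's-complement bit pattern.
unsigned = signed & 0xFFFFFFFFFFFFFFFF
print(unsigned)  # 18446744073709551615, i.e. 2**64 - 1
```

For a 32-bit field the mask would be 0xFFFFFFFF instead.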
1
5
0
I want to cast, in Python code, a gdb.Value of a complicated type to int (this is not a general type, but one defined in our code; a kind of bitfield). When I type "p var" in the gdb shell itself, it prints the value as an int, i.e. gdb knows how to cast that bitfield to int. (A strange thing, in addition, that I got when I try this command: int(var.cast(i.type)), where i is a gdb.Value whose type is int: var indeed becomes an int, but its value becomes the same as the value of i.) So, does someone know how to cast this gdb.Value using gdb's own knowledge (or some other way)? Thanks. This is my first question on StackOverflow; sorry for the confusion and my weak English.
How to cast type of gdb.value to int
0.099668
0
0
3,058
28,453,194
2015-02-11T11:28:00.000
1
1
0
0
php,python,memory,memory-management
28,453,222
1
true
1
0
No. When a process is killed, the operating system releases all operating system resources (memory, sockets, file handles, …) previously acquired by that process.
1
0
0
I am writing a PHP-Python web application for a webApp scanner, where the web application is managed by PHP and the scanning service is managed by Python. My question is: if I kill a running Python process with PHP, does it cause any memory leak or any other trouble? (Functionality-wise I have handled it already.)
PHP-Python - does killing a Python process lead to a memory leak?
1.2
0
0
71
28,453,854
2015-02-11T12:01:00.000
14
0
1
0
python,python-2.7,terminal,pygame,pycharm
48,428,050
5
false
0
0
For PyCharm 2017, do the following: File - Settings. Double click on your project name. Select Project Interpreter. Click on the green + button on the right side of the window. Type Pygame in the search window. Click Install Package. Not that I'm saying the answers above won't work, but it might be frustrating for a newbie to do command-line magic.
5
17
0
I've downloaded pygame-1.9.1release.tar.gz from the Pygame website. I extracted and installed it and it's working fine in the command line Python interpreter in Terminal (Ubuntu). But I want to install it for some IDE, like PyCharm. How can I do it?
Add pygame module in PyCharm IDE
1
0
0
120,657
28,453,854
2015-02-11T12:01:00.000
1
0
1
0
python,python-2.7,terminal,pygame,pycharm
46,529,022
5
false
0
0
I just figured it out! Put the .whl file in C:\Program Files\Anaconda3 While in the folder, click on the blue File tab in the upper left corner of the Window Explorer (assuming you're using Windows) Click on Open Windows PowerShell as administrator Write or just copy and paste: py -m pip install pygame It should start installing Done! I hope it works for you. I know it did for me.
5
17
0
I've downloaded pygame-1.9.1release.tar.gz from the Pygame website. I extracted and installed it and it's working fine in the command line Python interpreter in Terminal (Ubuntu). But I want to install it for some IDE, like PyCharm. How can I do it?
Add pygame module in PyCharm IDE
0.039979
0
0
120,657
28,453,854
2015-02-11T12:01:00.000
22
0
1
0
python,python-2.7,terminal,pygame,pycharm
28,467,215
5
true
0
0
Well, you don't have to download it for PyCharm here. You probably know how PyCharm checks your code: through the interpreter! You don't need to use complex command lines or anything like that. All you need to do is: Download the appropriate interpreter with PyGame included. Open your PyCharm IDE (make sure it is up to date). Go to File. Press Settings (or Ctrl + Alt + S). Double click on the option that looks like Project: Name_of_Project. Click on Project Interpreter. Choose the interpreter you want to use that includes PyGame as a module. Save your options, and you are ready to go! Here is an alternative (I have never done this; please test it): add PyGame to the same folder as your PyCharm files (your PyCharm files are always in a specific folder placed by you during installation/upgrade). Please consider putting your PyCharm files inside a folder for easy access. I hope this helps you!
5
17
0
I've downloaded pygame-1.9.1release.tar.gz from the Pygame website. I extracted and installed it and it's working fine in the command line Python interpreter in Terminal (Ubuntu). But I want to install it for some IDE, like PyCharm. How can I do it?
Add pygame module in PyCharm IDE
1.2
0
0
120,657
28,453,854
2015-02-11T12:01:00.000
4
0
1
0
python,python-2.7,terminal,pygame,pycharm
48,044,770
5
false
0
0
If you are using PyCharm and you are on a Windows 10 machine, use the following instructions: Click on the Windows start menu, type cmd and click on the Command Prompt icon. Use the command pushd to navigate to your PyCharm project, which should be located in your user folder on the C:\ drive. Example: C:\Users\username\PycharmProjects\project name\venv\Scripts. (If you are unsure, go to the settings within PyCharm and navigate to the Python Interpreter settings. This should show you the file path for the interpreter that your project is using. Credit to Anthony Pham for the instructions on navigating to the interpreter settings.) HINT: use copy and paste in the command prompt to paste in the file path. Use the command pip install pygame and the pip program will handle the rest for you. Restart your PyCharm and you should now be able to import pygame. Hope this helps. I had a fun time trying to find the correct way to get it installed, so hopefully this helps someone out in the future.
5
17
0
I've downloaded pygame-1.9.1release.tar.gz from the Pygame website. I extracted and installed it and it's working fine in the command line Python interpreter in Terminal (Ubuntu). But I want to install it for some IDE, like PyCharm. How can I do it?
Add pygame module in PyCharm IDE
0.158649
0
0
120,657
28,453,854
2015-02-11T12:01:00.000
0
0
1
0
python,python-2.7,terminal,pygame,pycharm
65,260,782
5
false
0
0
I already had pygame installed with python38-32, since it works just fine with it, and I used that version of Python as my project interpreter. 1. File - Settings. 2. Depending on your settings layout, look for Project Interpreter. 3. Click on your current project interpreter and click on the add symbol. 4. Choose System Interpreter. 5. Select the Python version that works with pygame for you. Note: some versions of pygame don't work with some versions of Python, so be sure of what you are doing. Hope it works.
5
17
0
I've downloaded pygame-1.9.1release.tar.gz from the Pygame website. I extracted and installed it and it's working fine in the command line Python interpreter in Terminal (Ubuntu). But I want to install it for some IDE, like PyCharm. How can I do it?
Add pygame module in PyCharm IDE
0
0
0
120,657
28,456,349
2015-02-11T14:04:00.000
6
1
0
1
python,linux
28,456,469
3
false
0
0
EDIT: this answer assumes that you plan to write to /mnt. I would just try to write to it and catch the OSError exception to handle the read-only case.
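A sketch of that try-and-catch approach. The mount point is a parameter here, and treating a permission error (EACCES) the same as a read-only filesystem (EROFS) is an assumption; separate the two cases if your application needs to distinguish them.

```python
import errno
import os

def is_writable(mount_point):
    """Probe a mount point by creating and removing a throwaway file.
    Returns False when the filesystem (or permissions) refuse the write."""
    probe = os.path.join(mount_point, ".rw_probe")
    try:
        fd = os.open(probe, os.O_CREAT | os.O_WRONLY)
    except OSError as exc:
        if exc.errno in (errno.EROFS, errno.EACCES):
            return False
        raise  # unexpected errors (e.g. missing mount point) propagate
    os.close(fd)
    os.remove(probe)
    return True
```

Usage on the beaglebone would be `is_writable("/mnt")`. An alternative that avoids writing at all is parsing /proc/mounts for the `ro`/`rw` mount option.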
1
0
0
I have a python application running on a beaglebone. How do I (in Python) check if the "/mnt" partition is mounted as read-only or read-write?
Python - Check if linux partition is read-only or read-write?
1
0
0
2,297
28,456,493
2015-02-11T14:13:00.000
1
0
1
0
python,pip,setuptools
28,460,274
1
true
0
0
I think you can't (at least not without tricks such as you described). The Python package system has (to my knowledge) no notion of "allowed" packages. Someone could invent a different package C and call it B, but with totally different functionality; a blocking mechanism would prohibit users of your package A from using package C (alias B). So I would communicate to users of A that B is no longer needed, and make sure that your newer code does not reference B at all. Then when somebody installs B, it is like a third-party library that has nothing to do with yours. Of course, when the functionality of A and B is intermingled very much, and other user code also has to deal with B directly and has (allowed) side effects on A, you could get into trouble while an old B is still installed; but then your initial design was not the best either. In such a case (and when you really have to merge the packages; see below) I would recommend a totally new package name like "newA", to underscore the fact that something has fundamentally changed (and so an intermingling between the old A and B would also more likely be detected). I would also second the argument of msw that you created this problem yourself: it is normally better to have several smaller packages (of reasonable size, of course) than one big "I manage the world" package, since smaller packages can be combined more flexibly for different applications.
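The first idea from the question (detect a stale B and fail loudly) can be written without actually importing B, using importlib.util.find_spec; the package name "B" below is the placeholder from the question, not a real package.

```python
import importlib.util

def obsolete_package_installed(name):
    """Return True when a package with this name is still importable
    in the current environment."""
    return importlib.util.find_spec(name) is not None

# Typical use near the top of A/__init__.py ("B" is the obsolete name):
if obsolete_package_installed("B"):
    raise RuntimeError(
        "package B is now bundled into A as A.B; please uninstall B"
    )
```

Unlike a bare `import B`, find_spec does not execute B's module code, so a hostile or broken leftover B cannot run side effects during the check.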
1
4
0
I have two Python packages A and B that I would like to merge into A, i.e. all functionality of B becomes reachable as A.B. Previously, A-1.0 depended on B-1.0. Now I want to avoid that users of A-2.0 still have B-1.0 installed, and I don't know how to handle this properly. Different solutions/ideas I came up with: include some code in A-2.0 that tries to import B; if an ImportError is raised, catch the exception and go on, otherwise throw a RuntimeError that B is installed in parallel. Somehow mark B as a blocker for A-2.0 (is this possible?). Create a "fake" successor for B, so people who update their virtual environments or install the "latest" version of B get an empty package that throws an exception upon importing. I welcome your opinions and experiences.
Python dependencies: Merging two packages into one
1.2
0
0
1,572
28,460,426
2015-02-11T17:17:00.000
1
0
1
0
python
28,460,504
2
false
0
0
stdin and stdout are stream representations of the standard input and output that your OS supplies Python with. You can do almost everything with these that you can do with a file, so for many applications they are far more useful than e.g. print, which adds line breaks etc.
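A small illustration of the difference: sys.stdout.write emits exactly the string it is given, while print appends a newline (and converts its arguments to strings for you).

```python
import sys

# print adds a trailing newline; sys.stdout.write adds nothing,
# so consecutive writes land on the same line.
sys.stdout.write("no newline here")
sys.stdout.write(" -- still the same line\n")

# sys.stdin is a readable text stream; uncommented, this would read one
# line (like raw_input, except the trailing newline is kept):
# line = sys.stdin.readline()
```

Because both are file-like streams, they also work with file APIs such as `for line in sys.stdin:` or `csv.writer(sys.stdout)`.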
1
1
0
Can someone explain stdin and stdout? I don't understand what the difference is between using these two objects for user input and output, as opposed to print and raw_input. Perhaps there is some vital information I am missing. Can anyone explain?
Stdin and Stdout
0.099668
0
0
525
28,461,926
2015-02-11T18:39:00.000
1
0
1
0
python,arrays,list,numbers,integer
28,462,100
2
false
0
0
You need to iterate through each respective sublist within your list to accomplish this with your approach. Try iterating over the elements of each sublist, checking whether the element equals 'RND', and printing the following indexed value as you currently do. The only change you have to make is to use a conditional while iterating, so that every occurrence of 'RND' is found, instead of using list.index, which only returns the index of the first match. This would be a good exercise for you to do yourself.
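For reference, one way to react to every occurrence rather than only the first (using the data from the question) is to walk each sublist with enumerate instead of index:

```python
myList = [['1123-TDT', '10593917/5422', 'RND', '20.5', 'QWTQWGSAG', 'RND', '40'],
          ['1156-TDT', '51205915/6166', 'ASDWT', '111551', 'RND', '35']]

# list.index only ever finds the first match; enumerate visits every
# position, so each 'RND' contributes the element that follows it.
# The i + 1 < len(sub) guard protects against a trailing 'RND'.
found = [sub[i + 1]
         for sub in myList
         for i, item in enumerate(sub)
         if item == 'RND' and i + 1 < len(sub)]
print(found)  # ['20.5', '40', '35']
```

The same guard-and-enumerate pattern works inside the question's explicit for loop as well.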
1
1
0
I have a problem getting numbers from a list after 'RND': myList = [['1123-TDT', '10593917/5422', 'RND', '20.5', 'QWTQWGSAG', 'RND', '40'], ['1156-TDT', '51205915/6166', 'ASDWT', '111551', 'RND', '35']] I can get the number after 'RND' if there is only one 'RND' in a list item (myList[1]), but if there are two of them (myList[0]), then only the first is found. Example of what I have: myList = [['1123-TDT', '10593917/5422', 'RND', '20.5', 'QWTQWGSAG', 'RND', '40'], ['1156-TDT', '51205915/6166', 'ASDWT', '111551', 'RND', '35']] listLen = len(myList) for x in range(listLen): if 'RND' in myList[x]: y = myList[x].index('RND') z = myList[x][y+1] print(z) The output looks like this: 20.5 35 But I would like it to be: 20.5 40 35 Thanks in advance!
Getting more than one number after specific string
0.099668
0
0
49
28,467,558
2015-02-12T01:08:00.000
1
0
0
1
python,google-app-engine,capedwarf
28,474,564
2
false
1
0
Yes, as Alex posted - CapeDwarf is Java only.
1
1
0
I'm trying to deploy my GAE application, written in Python, on CapeDwarf (WildFly_2.0.0.CR5), but all the documentation talks only about Java applications. So, can CapeDwarf deploy a Python application? If it can, how do I do it? If not, is there any other application that can?
Is CapeDwarf compatible with Python GAE?
0.099668
0
0
105
28,468,321
2015-02-12T02:36:00.000
0
1
0
0
java,python,sublimetext,sublimetext3,sublime-text-plugin
40,161,595
2
false
1
0
In Windows 10, if you delete this folder --> C:\Users\USER\AppData\Roaming\Sublime Text 3, then Sublime Text will revert to its original state. Maybe set up a batch file to keep different versions of this folder, for example "./Sublime Text 3_Java", "./Sublime Text 3_Python" or "./Sublime Text 3_C++", and then when you want to work on Java, have a batch file rename "./Sublime Text 3_Java" to "./Sublime Text 3" and restart Sublime. If you really want to get fancy, use a symlink to represent "./Sublime Text 3" and have the batch file modify (or delete and recreate) where the symlink points.
2
6
0
I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled. TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text?
Sublime Text 3 - Plugin Profiles
0
0
0
1,302
28,468,321
2015-02-12T02:36:00.000
2
1
0
0
java,python,sublimetext,sublimetext3,sublime-text-plugin
28,504,749
2
false
1
0
The easiest way I can think of doing this on Windows is to have multiple portable installs, each one set up for your programming language and plugin set of your choice. You can then set up multiple icons on your desktop/taskbar/start menu/whatever, each one pointing to a different installation. This way you don't have to mess around with writing batch files to rename things. Alternatively, you could just spring for a new computer :)
2
6
0
I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled. TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text?
Sublime Text 3 - Plugin Profiles
0.197375
0
0
1,302
28,471,198
2015-02-12T06:57:00.000
2
0
1
0
python,visual-studio-2013
28,473,337
2
false
0
0
You can look into PyQt and Tkinter.
1
0
0
I have been writing desktop applications in C#, and I am used to the experience of writing my code in VS, compiling it, and letting the .NET framework do the running and execution. Now I want to develop desktop applications using Python, and I am new to Python. Speaking of UI programming, please suggest which framework is best for rich desktop app programming and why (remember, I'm coming from a tool like WPF). Also, please suggest any useful tutorials for starting to develop desktop applications using Python.
How to create desktop applications using python
0.197375
0
0
1,470
28,473,613
2015-02-12T09:20:00.000
0
0
0
0
python,django,django-forms,mime-types
28,474,553
3
false
1
0
You should not rely on the MIME type provided, but rather the MIME type discovered from the first few bytes of the file itself. This will help eliminate the generic MIME type issue. The problem with this approach is that it will usually rely on some third-party tool (for example, the file command commonly found on Linux systems is great; use it with -b --mime - and pass in the first few bytes of your file to have it give you the MIME type). The other option you have is to accept the file and try to validate it by opening it with a library. If pypdf cannot open the file, and the built-in zip module cannot open the file, and rarfile cannot open the file, it's most likely something that you don't want to accept.
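A dependency-free sketch of the magic-bytes idea, limited to the three types the question accepts. These leading signatures are well known (%PDF for PDF, PK\x03\x04 for ZIP, Rar!\x1a\x07 for RAR); a real deployment would still defer to python-magic or file for broader coverage.

```python
def sniff_mime(first_bytes):
    """Guess among the accepted types from a file's leading bytes.
    Returns None for anything unrecognized, which covers the generic
    application/octet-stream case from the question."""
    if first_bytes.startswith(b"%PDF"):
        return "application/pdf"
    if first_bytes.startswith(b"PK\x03\x04"):
        return "application/zip"
    if first_bytes.startswith(b"Rar!\x1a\x07"):  # matches RAR4 and RAR5
        return "application/x-rar-compressed"
    return None

print(sniff_mime(b"%PDF-1.4 ..."))  # application/pdf
```

In a Django validator you would call this on `uploaded_file.read(8)` (then `seek(0)`) and reject uploads where it returns None, regardless of the browser-supplied Content-Type.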
1
0
0
I'm using Django validators and python-magic to check the MIME type of uploaded documents and accept only pdf, zip and rar files. Accepted MIME types are: 'application/pdf', 'application/zip', 'multipart/x-zip', 'application/x-zip-compressed', 'application/x-compressed', 'application/rar', 'application/x-rar', 'application/x-rar-compressed', 'compressed/rar'. The problem is that sometimes pdf files seem to have 'application/octet-stream' as their MIME type. 'application/octet-stream' means a generic binary file, so I can't simply add that MIME type to the list of accepted files, because then other files, such as Excel files, would also be accepted, and I don't want that to happen. What can I do in this case? Thanks in advance.
Checking file type with django form: 'application/octet-stream' issue
0
0
0
3,271
28,481,471
2015-02-12T15:34:00.000
5
1
0
0
python,python-sphinx
28,481,785
1
true
0
0
sphinx-apidoc only needs to be re-run when the module structure of your project changes. If adding, removing, and renaming modules is an uncommon occurrence for you, it may be easiest to just place the rst files under version control and update them by hand. Adding or removing a module only requires changing a few lines of rst, so you don't even need to use sphinx-apidoc once you've run it once.
1
3
0
The scheme is the following. There exists a package called foo (an API under heavy development, in its first alpha phases) whose rst files are auto-generated with sphinx-apidoc. For the sake of having better documentation for foo, after those files are generated there is some editing. In, say, foo.bar.rst there are some paragraphs added to the contents generated with sphinx-apidoc. How can I avoid losing all that information when a new call of sphinx-apidoc is made? And of course I want potential changes in the API to be reflected, with that manually added information being preserved.
Keeping the API updated in Sphinx
1.2
0
0
1,826
28,483,253
2015-02-12T16:54:00.000
6
1
1
0
python,git
58,271,317
5
false
0
0
In my case apt install python-git fixed the issue.
1
26
0
My laptop has been formatted and a new OS was installed, and since then I get this error: ImportError: No module named git. This refers to a Python script which simply imports git. Location of git before my laptop was formatted: /usr/local/bin/git. Location of git after the laptop was formatted: /usr/bin/git. How / what do I change in my Python code to refer to the right path?
ImportError: No module named git after reformatting laptop
1
0
0
50,377
28,485,685
2015-02-12T19:03:00.000
2
0
0
0
python,django
28,486,071
1
true
1
0
You're going wrong at the very end -- yes, you do need to call manage.py makemigrations <appname> for each of your apps once. It's not automatically done for all apps. Presumably that is because Django has no way of knowing if that is what you want to do (especially if some apps were downloaded from PyPI, etc). And a single command per app can't really be an extreme amount of work, right?
1
0
0
I'm in the process of preparing a Django application for its initial production release, and I have deployed development instances of it in a few different environments. One thing that I can't quite get happening as smoothly as I'd like is the initial database migration. Given a fresh installation of Django, a deployment of my application from version control, and a clean database, manage.py migrate will handle the initial creation of all tables (both Django's and my models'). That's great, but it doesn't actually create the initial migration files for my apps. This leads to a problem down the road when I need to deploy code changes that require a new database migration, because there's no basis for Django to compute the deltas. I've tried running manage.py makemigrations as the first step in the deployment, in the hopes that it would create the migration files, but it reports that there are no changes to migrate. The only way I've found to get the baseline state that I need is to run manage.py makemigrations [appname] for each of my apps. Shouldn't makemigrations, called without a specific app name, pick up all the installed apps and create their migrations? Where am I going wrong?
What's the right way to handle an initial database migration in Django?
1.2
0
0
129
28,486,832
2015-02-12T20:09:00.000
0
0
0
0
python,data-structures
28,486,878
1
false
0
0
Actually I figured it out. I can use PriorityQueue from the Queue module. I didn't realize you could put tuples into it. Sorry!
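For reference, the same tuple trick works with heapq directly: push (priority, node) tuples and the heap orders by the first element. This sketch assumes simple (x, y) grid nodes and a made-up goal node:

```python
import heapq

def manhattan_distance(node, goal):
    # Assumes 2-D grid nodes given as (x, y) pairs.
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

goal = (0, 0)
frontier = []
for node in [(3, 4), (1, 1), (2, 0)]:
    # The priority comes first in the tuple, so the heap orders
    # by distance and the node simply rides along with it.
    heapq.heappush(frontier, (manhattan_distance(node, goal), node))

priority, best = heapq.heappop(frontier)  # the closest node so far
```

One caveat of the tuple approach: on equal priorities the heap falls back to comparing the nodes themselves, so the nodes must be comparable (tuples of ints are).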
1
0
0
So I found the heapq implementation, but that does not seem to work for my purposes. I need a priority queue where the priority is given by a function manhattan_distance(node, end_node) that stores the node. Heapq seems to only work for integers and gives no way to store the nodes? What is my best option for implementing this without having to write my own class? Any advice would be greatly appreciated.
Python Priority Queue for node
0
0
0
156
28,488,559
2015-02-12T21:54:00.000
12
0
0
0
python,networkx
28,488,596
2
true
0
0
You can test it pretty quickly, but it only adds them once. Edges and nodes are represented as dictionaries inside the graph structure, and they are only added if they don't already exist. For already existing edges, adding them again has no effect.
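A toy sketch of why re-adding is a no-op, with plain dicts standing in for networkx's internal storage (in real networkx you would just call G.add_edge(a, b, weight=4) to weight an edge; this is only an illustration of the mechanism):

```python
# graph[u][v] is the attribute dict of edge (u, v), mimicking
# how networkx stores edges internally.
graph = {}

def add_edge(u, v, **attrs):
    # setdefault returns the existing attribute dict if the edge is
    # already there, so adding an edge twice never duplicates it --
    # at most it updates the edge's attributes (e.g. its weight).
    data = graph.setdefault(u, {}).setdefault(v, {})
    data.update(attrs)
    graph.setdefault(v, {})[u] = data  # undirected: both views share one dict

add_edge("a", "b")
add_edge("a", "b")            # no effect: still a single edge
add_edge("a", "b", weight=4)  # updates the one existing edge
```

So a spring layout sees no difference between adding (a, b) twice or four times; only an explicit weight attribute changes how strongly the nodes pull together.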
1
8
1
If the same edge is added twice to the networkx edge data structure, will it then have two edges between the nodes or still just one? For example, would a spring layout show the nodes converge more with edges [(a,b),(a,b),(a,b),(a,b)] than [(a,b),(a,b)]? If I want to weight the edge, how would I go about it?
Networkx duplicate edges
1.2
0
0
13,392
28,488,714
2015-02-12T22:04:00.000
4
0
0
0
python,gensim
40,033,364
2
false
0
0
Sadly, as far as I can tell, you have to start from the very beginning of the analysis knowing that you'll want to retrieve documents by the ids. This means you need to create your own mapping between ids and the original documents and make sure the ids gensim uses are preserved throughout the process. As is, I don't think gensim keeps such a mapping handy. I could definitely be wrong, and in fact I'd love it if someone tells me there is an easier way, but I spent many hours trying to avoid re-running a gigantic LSI model on a wikipedia corpus to no avail. Eventually I had to carry along a list of ids and the associated documents so I could use gensim's output.
1
10
1
I am using Gensim for some topic modelling and I have gotten to the point where I am doing similarity queries using the LSI and tf-idf models. I get back the set of IDs and similarities, eg. (299501, 0.64505910873413086). How do I get the text document that is related to the ID, in this case 299501? I have looked at the docs for corpus, dictionary, index, and the model and cannot seem to find it.
Retrieve string version of document by ID in Gensim
0.379949
0
0
2,031
28,488,905
2015-02-12T22:16:00.000
2
0
1
0
python,optimization,interpreter,py2exe
28,488,931
1
false
0
0
No, it's just a convenience, it has no real bearing on execution speed. Things like Py2exe just bundle a Python interpreter together with your source code into a single package, so it's easier for a user to manage. To speed up execution you can either try using PyPy, a JIT compiler, or try writing the bottlenecks of your program in C. Also see if you can't leverage already existing libraries built for speed, such as NumPy.
1
1
0
I've tried looking for this question but I can't seem to find the answer. I have a pretty computationally intense python program with multiple modules and classes that are used for computer vision applications. My question is: "If I convert the python script(s) to an executable using something like py2exe, will the program run faster and more efficiently than running the original .py files in an interpreter?". Thanks for your time and comments in advance.
Does a python program that has been compiled into an executable run faster than running the python program in an interpreter?
0.379949
0
0
1,529
28,489,542
2015-02-12T23:01:00.000
0
1
1
0
python,cython
29,159,267
1
false
0
1
I usually just throw all the commands I need to build and test into a shell script, and run it when I want to test. It's a lot easier than futzing with crazy Python test runners.
1
4
0
I currently use Cython to build a module that is mostly written in C. I would like to be able to debug quickly by simply calling a python file that imports the "new" Cython module and test it. The problem is that I import GSL and therefore pyximport will not work. So I'm left with "python setup.py build; python setup.py install" and then running my test script. Is this the only way? I was wondering if anyone else uses any shortcuts or scripts to help them debug faster?
Better Way of Debugging Cython Packages
0
0
0
87
28,489,779
2015-02-12T23:21:00.000
0
0
0
0
python,flask,flask-sqlalchemy
28,502,445
1
false
1
0
Yes, it is possible. I was having difficulties debugging because of the opacity of the error, but ran it with app.run(debug=True), and managed to troubleshoot my problem.
1
0
0
Is it possible to connect a Flask app to a database using MySQLdb-python and vertica_python? It seems that the recommended Flask library for accessing databases is Flask-SQLAlchemy. I have an app that connects to MySQL and Vertica databases, and have written a GUI wrapper for it in Flask using flask-wtforms, but am just getting an error when I try to test a Vertica or MySQL connection through the flask app. Is there a reason that I cannot use the prior libraries that I was using within my app?
Can I connect a flask app to a database using MySQLdb-python vertica_python?
0
1
0
224
28,491,219
2015-02-13T01:45:00.000
0
0
1
0
python,windows,directory
70,032,481
2
false
0
0
I had the same problem, and prefixing '\\?\' had not helped either. For me it was a wrong delimiter in the path - a combination of forward slashes and backslashes can cause it. It got fixed when I replaced the forward slashes (/) with backslashes (\).
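A small helper along these lines (illustrative only, and it deliberately skips UNC shares, which need the longer \\?\UNC\ prefix):

```python
def extended_length_path(path):
    r"""Return a Windows extended-length path: backslash separators
    only, with the \\?\ prefix added. The prefix is only honoured on
    fully-qualified paths that use backslashes, which is why a mix of
    / and \ breaks it. (UNC shares are not handled here.)"""
    normalized = path.replace("/", "\\")
    if normalized.startswith("\\\\?\\"):
        return normalized
    return "\\\\?\\" + normalized
```

Passing the result to shutil.rmtree or os.walk on Windows then works for paths past the usual 260-character limit.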
1
5
0
I created a directory above the character limit adding "\\?\" before the directory, but I can't delete it using shutil.rmtree or list it using os.walk. I get the following error with shutil.rmtree("folder"): WindowsError: [Error 3] The system cannot find the path specified: 'folder\CAAAAAAAAAB2iMan9VH4-0fxO4JOiT43bz9XVbQUoCcdOJTk1WRcPA++\BwAAAAAAAACXEWzr-_xJujcfpbaeAa-zNMqou1c_EtOH1lGXEMaL8w++\CAAAAAAAAACq0GkU9kGYNVDcaXAZ78ut8FSHTvE45Ra69qN495R6Fw++\CgAAAAAAAAAsOJ6oX-y6iRcg2F3KB4HGi6kcWnU2QPO2CEKsJUA4-g++' Is there a function I can use to remove that directory? Thanks.
Removing and listing directories above the character limit
0
0
0
865
28,491,923
2015-02-13T03:20:00.000
1
0
1
0
python,django,vim
28,491,949
3
false
1
0
I think the best practice is developing locally and using PyCharm's "sync" to deploy the code from local to remote. If you prefer to code without a GUI or in console mode, you could try emacs+jedi; it works well in console mode. For debugging, the best choice is pydev/pdb.
1
0
0
I started develop python / django code recently. When I do it on my local machine (Ubuntu desktop), I use PyCharm, which I like a lot, because I can set up break points, and, when an exception occurs, it shows me the line in the code to which that exception relates. Are there any tools available for python / django development in the environment without a graphic UI, which would give me break point and debugging features? I know, that VIM with the right settings is an excellent coding tool, but I have no idea if it could be used for interactive debugging.
what tools can be used to debug python / django code on a remote hosts without graphical UI
0.066568
0
0
158
28,492,747
2015-02-13T04:53:00.000
1
0
1
1
python
28,492,782
2
true
0
0
It seems like you are not in the same directory as a.py. If so, you will need the absolute path rather than the relative path. That is probably why python (location of a.py) runs but python a.py will not. Make sure that you are running a.py from the same directory as you saved it in.
2
0
0
Just like I ask in the title: I try to run "python a.py" in cmd but it says no such file or directory, while "python E:\python\python2.79\a.py" can run. I am a newbie in Python... I will appreciate your answers.
why cmd can not run "python a.py",but "python E:\python\python2.79\a.py"is ok..."python a.py"is taught by a python book
1.2
0
0
41
28,492,747
2015-02-13T04:53:00.000
0
0
1
1
python
28,492,784
2
false
0
0
Usually, when you're using a shell, you're positioned in a directory in the filesystem. The first example, python a.py uses a relative path; it says "the file I want to run is a.py in the very same directory I'm currently in". The second example, python E:\python\python2.79\a.py uses an absolute path; it says "no matter where I'm in the filesystem, the complete path to the file I want to run is this one". Then, simply, if you're not in the directory where a.py is, and you run python a.py, python will say it couldn't find that file.
2
0
0
Just like I ask in the title: I try to run "python a.py" in cmd but it says no such file or directory, while "python E:\python\python2.79\a.py" can run. I am a newbie in Python... I will appreciate your answers.
why cmd can not run "python a.py",but "python E:\python\python2.79\a.py"is ok..."python a.py"is taught by a python book
0
0
0
41
28,494,629
2015-02-13T07:36:00.000
0
0
0
0
python,django
28,510,476
2
false
1
0
Instead I ended up using forms.HiddenInput() and did not display the field with null, which fits my case perfectly well!
1
0
0
In a Django model form, how do I display blank for null values, i.e. prevent display of (None) on the form? Using PostgreSQL, Django 1.6.5. I do not wish to add a space in the model instance for the allowed null values.
django modelform display blank for null
0
0
0
432
28,497,204
2015-02-13T10:15:00.000
0
0
0
0
python,protocol-buffers
71,591,118
2
false
0
0
Thanks to @Likor, I can just combine the binaries using cat proto1.bin proto1.bin > combined_proto.bin then de-serialize the binary to string.
1
3
0
I have a lot of data files (almost 150) in binary structure created according the .proto scheme of Protocol Buffer. Is there any efficient solution how to merge all the files to just one big binary data file without losing any information?
Protocol Buffer - merge binary data files with the same .proto file to the one file
0
0
1
720
28,499,705
2015-02-13T12:31:00.000
0
0
1
1
python,package-managers,revert
28,500,816
2
false
0
0
You have two options. The first is to: 1) copy the files into a temp folder, 2) on success, remove the old folder, 3) move the temp folder into place. The second is to: 1) copy the files into a directory versioned by name, something like C:\Programs\v2, v3, v4 etc., 2) if everything is ok, create a junction point or a symlink to the destination you want.
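A rough sketch of the first option as a transactional copy with rollback (function and path names are made up for illustration; all work happens in a staging area, and the destination is only touched in the final swap):

```python
import os
import shutil
import tempfile

def install(src, dest):
    """Copy the tree at src to dest 'transactionally'.

    A failure before the final swap leaves dest untouched; a failed
    swap restores the previous version from a .bak copy.
    """
    staging = tempfile.mkdtemp(prefix="install_")
    backup = dest + ".bak"
    try:
        staged = os.path.join(staging, "payload")
        shutil.copytree(src, staged)      # a failure here leaves dest untouched
        if os.path.exists(dest):
            os.rename(dest, backup)       # keep the old version for rollback
        try:
            shutil.move(staged, dest)     # the 'commit' step
        except Exception:
            if os.path.exists(backup):
                os.rename(backup, dest)   # revert to the old version
            raise
        if os.path.exists(backup):
            shutil.rmtree(backup)
    finally:
        shutil.rmtree(staging, ignore_errors=True)
```

On Windows the final swap is only atomic-ish when staging and destination sit on the same drive, which is one reason the versioned-directory-plus-junction approach is attractive there.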
1
0
0
I am writing a package manager in python for users to install programs we write at work. When 'installing' a new tool (which is just a process of copying files/folders from locations on a server to the users' computer), it may fail before completion for whatever reason. If this happens, I need a way to 'undo' all the changes made on the users' PC (I remove anything that was copied across). What techniques are there to implement this sort of 'revert' functionality? (Windows only solution)
Python - techniques for reverting changes to a folder?
0
0
0
96
28,501,107
2015-02-13T13:49:00.000
0
0
0
1
python,output
28,502,174
3
false
0
0
I cannot comment on your question because my reputation is too low. If you use os.system, subprocess.check_output or os.popen, you will just get the standard output of your xyz program (if it prints something to the screen). To see the files in some directory, you can use os.listdir(). Then you can use those files in your script afterwards. It may also be worth using subprocess.check_call. There may be other better and more efficient solutions.
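A runnable sketch of this pattern, with a tiny Python one-liner standing in for the external xyz program (in the real case the command list would name the actual binary and its input file):

```python
import os
import subprocess
import sys
import tempfile

workdir = tempfile.mkdtemp()

# Stand-in for the external 'xyz' program: a one-liner that writes
# the two output files into its current working directory.
fake_xyz = "open('abc.dat', 'w').write('a'); open('gef.dat', 'w').write('g')"

# cwd= controls where the child process runs, and therefore where its
# output files land; check_call raises if the program exits non-zero.
subprocess.check_call([sys.executable, "-c", fake_xyz], cwd=workdir)

outputs = sorted(os.listdir(workdir))  # pick up abc.dat and gef.dat
```

Running the external tool n times just means repeating the check_call with a fresh (or per-run) working directory and collecting the files each time.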
2
0
0
I am using Python 2.7.3 in Ubuntu 12.04 OS. I have an external program say 'xyz' whose input is a single file and two files say 'abc.dat' and 'gef.dat' are its output. When I used os.system or subprocess.check_output or os.popen none of them printed the output files in the working directory. I need these output files for further calculations. Plus I've to keep calling the 'xyz' program 'n' times and have to keep getting the output 'abc.dat' and 'gef.dat' every time from it. Please help. Thank you
Getting output files from external program using python
0
0
0
100
28,501,107
2015-02-13T13:49:00.000
0
0
0
1
python,output
28,558,974
3
false
0
0
Thank you for answering my question, but the answer to my question is this: import subprocess; subprocess.call("/path/to/software/xyz abc.dat", shell=True), which gave me the desired output. I tried the subprocess-related commands but they returned the error "No such file or directory". The shell=True worked like a charm. Thank you all again for taking the time to answer my question.
2
0
0
I am using Python 2.7.3 in Ubuntu 12.04 OS. I have an external program say 'xyz' whose input is a single file and two files say 'abc.dat' and 'gef.dat' are its output. When I used os.system or subprocess.check_output or os.popen none of them printed the output files in the working directory. I need these output files for further calculations. Plus I've to keep calling the 'xyz' program 'n' times and have to keep getting the output 'abc.dat' and 'gef.dat' every time from it. Please help. Thank you
Getting output files from external program using python
0
0
0
100
28,502,910
2015-02-13T15:25:00.000
3
0
1
0
python,string,pyqt4
28,502,943
1
true
0
1
The & causes PyQt to display an underscore under the letter that serves as the shortcut key for the menu item.
1
1
0
I'm going through some PyQt4 GUI tutorials and often see strings such as "&File" or "&Exit" in menus and the like. My question is what is the purpose of the "&" symbol in this string? I can't see a difference when I miss it out. I've not come across it in other Python tutorials - is it somehow unique to PyQt? Thanks.
What is the purpose of an ampersand (&) in a PyQt4 string?
1.2
0
0
784
28,504,861
2015-02-13T17:04:00.000
0
0
0
0
python,django,import,reference,dependencies
28,505,992
1
true
1
0
PyCharm adds only source folders automatically to the python path. So I marked app1 as a source folder and everything works.
1
0
0
I have a question concerning dependencies between two Django projects. I have two Django Projects, P1 and P2. I want to import a form model from P2 > apps > app1 > forms > form.py. I marked P1 as depending on P2 (using Pycharm) and tried to use from app1.forms import form.py inside my model.py file of an app in P1. PyCharm now says that app1 is an unresolved reference. Is there any step I missed? Thank you for your help.
Dependencies between two Django projects
1.2
0
0
79
28,506,077
2015-02-13T18:16:00.000
0
0
1
0
python,numpy,pycharm
63,116,042
4
false
0
0
I had the same problem; it is basically caused by PyCharm. When I ran the same code in Spyder it worked fine. Uninstall and reinstall the package and it works!
3
6
0
I'm trying to use a thing in numpy.random which I import using from numpy.random import normal. PyCharm tells me this is an unresolved reference despite being able to find other things in numpy.random such as numpy.random.random. Whenever I open up a Python shell and type from numpy.random import normal it runs fine and I can use normal just as I desire in the terminal. Why is this?
PyCharm can't find a reference yet it definitely exists
0
0
0
11,274
28,506,077
2015-02-13T18:16:00.000
1
0
1
0
python,numpy,pycharm
28,506,346
4
false
0
0
One possible problem is that your interpreter setting is wrong. When you have multiple versions of python installed and only one has numpy installed, if pycharm choose the wrong interpreter, then you get the error.
3
6
0
I'm trying to use a thing in numpy.random which I import using from numpy.random import normal. PyCharm tells me this is an unresolved reference despite being able to find other things in numpy.random such as numpy.random.random. Whenever I open up a Python shell and type from numpy.random import normal it runs fine and I can use normal just as I desire in the terminal. Why is this?
PyCharm can't find a reference yet it definitely exists
0.049958
0
0
11,274
28,506,077
2015-02-13T18:16:00.000
10
0
1
0
python,numpy,pycharm
49,817,865
4
false
0
0
Try File->Invalidate Caches/Restart... This worked for me.
3
6
0
I'm trying to use a thing in numpy.random which I import using from numpy.random import normal. PyCharm tells me this is an unresolved reference despite being able to find other things in numpy.random such as numpy.random.random. Whenever I open up a Python shell and type from numpy.random import normal it runs fine and I can use normal just as I desire in the terminal. Why is this?
PyCharm can't find a reference yet it definitely exists
1
0
0
11,274
28,506,217
2015-02-13T18:25:00.000
2
1
0
1
c,xcode4,osx-lion,python-2.6
28,530,379
1
true
0
0
Actually I found the problem. The problem was with ld: library not found for -lgcc_ext.10.5. The gcc version given by Xcode 4.6.3 on Mac OS X Lion is 4.6. I installed the new gcc via Homebrew: brew install gcc. I symlinked my gcc to gcc-4.9 by doing ln -s /usr/local/bin/gcc /usr/local/bin/gcc-4.9. Make sure that in your PATH, /usr/local/bin comes before /usr/bin. Do a ls -l `which gcc` to check that gcc is associated with the 4.9 version. Once this is done, the library is found and Python 2.6 can be installed using pyenv.
1
4
0
I'm trying to install python 2.6 using pyenv but when doing pyenv install 2.6.9I get the following: checking MACHDEP... darwin checking EXTRAPLATDIR... $(PLATMACDIRS) checking machine type as reported by uname -m... x86_64 checking for --without-gcc... no checking for gcc... gcc checking whether the C compiler works... no configure: error: in `/var/folders/r9/771hsm9931sd81ppz31384p80000gn/T/python-build.20150213191018.2121/Python-2.6.9': configure: error: C compiler cannot create executables I've installed Xcode 4.6.3 and installed Command Line Tools as info. Cheers, Ch
checking whether the C compiler works... no when installing python 2.6 (mac os x lion)
1.2
0
0
6,376
28,507,678
2015-02-13T19:53:00.000
0
0
0
0
python,python-3.x,game-engine,blender
28,684,387
1
false
0
1
Procedurally generating terrain on the fly is not that easy, but it is possible. You must create the terrain mesh with its triangles and textures at runtime and then dynamically load/unload the parts the player can see. So you would not create a single big landmass, but rather divide your terrain into smaller chunks that you can load and unload as needed. To create the actual terrain you could use a height-map or a noise algorithm.
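The crudest possible stand-in for such a noise algorithm, just to show the shape of the idea: white noise smoothed by a few neighbour-averaging passes (a real game would use Perlin or simplex noise, and feed the heights into the mesh it builds at runtime):

```python
import random

def heightmap(size, seed=None, passes=2):
    """Generate a size x size grid of heights in [0.0, 1.0].

    Random noise followed by neighbour-averaging passes. A fixed seed
    makes the terrain reproducible; omit it for a new world each run.
    """
    rng = random.Random(seed)
    grid = [[rng.random() for _ in range(size)] for _ in range(size)]
    for _ in range(passes):  # each pass smooths the noise a little
        grid = [
            [
                sum(
                    grid[(y + dy) % size][(x + dx) % size]
                    for dy in (-1, 0, 1)
                    for dx in (-1, 0, 1)
                ) / 9.0
                for x in range(size)
            ]
            for y in range(size)
        ]
    return grid
```

The modulo indexing wraps the edges, so adjacent chunks generated with the same seed tile without visible seams at the grid boundary.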
1
0
0
I am making an first-person open-world game in Blender Game Engine(bge), and intend to have the terrain be generated procedurally. I have looked around from website to website, and have only found various ways for the game to generate terrain in the "edit" mode, but not a way to generate it in blender game engine, eg. I would like it so that every time you play, the terrain is different. Is there any way to do this with python code, or at all? Thanks in advance for your help! Note: When I say procedual, I do not mean for the world to be infinitely large, eg Minecraft(although acquiring a solution that supported that would be awesome)
Procedualy Generateing Terrain in Blender Game Engine
0
0
0
519
28,508,663
2015-02-13T21:01:00.000
1
1
0
0
python,facebook,instagram
28,509,000
1
true
0
0
You can set them up on "any" OS. Just make sure you have an internet connection. Also note that those libraries won't do anything unless you write the code. So you need to create a lightweight wrapper that passes credentials and triggers the necessary functions in a certain order. And that's pretty much it. Could it work just with an UPLOADER.py? Not sure what you are referring to. Or do I need to set up like a webserver? No, you don't; it's not a requirement for the library. Do I need the Json.simple/google? Take a look at the file called requirements.txt; it provides the set of libraries you need in addition to the standard/built-in libs.
1
1
0
I'm using a Raspberry Pi B+ to create some files that I would like to post on FB and Instagram (my account or any account). I have a good industrial computer background but not for the "cloud" stuff. I've seen the libs for Python to connect to Facebook and Instagram (facebook-sdk, python-instagram). I understand the code of the examples etc... I'm just missing the context of where I should put this code to be able to interact with these "social media" sites. Could it work just with an UPLOADER.py? Or do I need to set up something like a webserver? Do I need the Json.simple/google and so on? I understand if it's a dumb question, but I'm a bit lost... A few "architectural" directions will do :). I'll get to understand the technical parts by myself... Thanks in advance! Cheers, Mat
python lib for facebook instagram
1.2
0
0
112
28,510,010
2015-02-13T22:43:00.000
0
0
0
0
python,tkinter,coordinates
28,510,642
2
false
0
1
You cannot change the coordinate system of the canvas. You can scroll the canvas so that 0,0 is in the bottom left, but the y coordinates will be negative going up.
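If flipping the axis in your own code is acceptable, a small conversion helper is usually enough (the 600 here is the canvas height from the question):

```python
CANVAS_HEIGHT = 600  # the height the canvas was created with

def to_canvas(x, y, height=CANVAS_HEIGHT):
    """Map bottom-left-origin coordinates (y grows upward) to tkinter
    canvas coordinates (origin top-left, y grows downward)."""
    return x, height - y
```

Call it on every point before drawing, e.g. canvas.create_line(*to_canvas(0, 0), *to_canvas(100, 50)), and your own code can keep thinking in bottom-left coordinates.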
1
0
0
I already tried to Google it but I couldn't find anything... I created a tkinter canvas with width 800, height 600. Apparently, the top-left will be (0,0); I want to change it so the bottom-left is (0,0). How can I do this?
How can I convert coordinate in tkinter?
0
0
0
209
28,510,059
2015-02-13T22:48:00.000
1
0
0
0
python,numpy,ocaml,export-to-csv,hdf5
28,511,785
1
true
0
0
First of all I would like to mention that there are actually HDF5 bindings for OCaml. But when I was faced with the same problem I didn't find one that suited my purposes and was mature enough. So I wouldn't suggest you use it, but who knows, maybe today there is something more decent. So, in my experience the best way to store numeric data in OCaml is Bigarrays. They are actually wrappers around a C pointer that can be allocated outside of the OCaml runtime. They can also be memory-mapped regions. So, for me this is the most efficient way to share data between different processes (potentially written in different languages). You can share data using memory mapping with OCaml, Python, Matlab or whatever with very little pain, especially if you're not trying to modify it from different processes simultaneously. Other approaches are to use MPI, ZMQ or bare sockets. I would prefer the latter, if only because the former two don't support bigarrays. Also, I would suggest you look at Cap'n Proto; it is also very efficient, has bindings for OCaml and Python, and for your particular use case can work very well.
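For the simplest possible exchange - a flat file of raw little-endian float64s, which numpy.fromfile or numpy.memmap can read directly, and which matches the in-memory layout of a float64 Bigarray - the Python side can be sketched with the stdlib alone (the file name and sample values are illustrative):

```python
import os
import struct
import tempfile

values = [1.5, -2.25, 3.125]  # tiny stand-in for the OCaml time series

fd, path = tempfile.mkstemp(suffix=".f64")
os.close(fd)

# Writer side (what the OCaml program would do): a flat run of
# little-endian float64s, no header, no delimiters.
with open(path, "wb") as f:
    f.write(struct.pack("<%dd" % len(values), *values))

# Reader side: numpy.fromfile(path, dtype="<f8") or numpy.memmap reads
# this directly; the stdlib version below does the same job.
with open(path, "rb") as f:
    raw = f.read()
restored = list(struct.unpack("<%dd" % (len(raw) // 8), raw))
os.remove(path)
```

Memory-mapping the same file from both sides (Bigarray's map_file on the OCaml end, numpy.memmap on the Python end) gives the on-line update behaviour asked about, as long as only one side writes at a time.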
1
0
1
I'm generating time series data in ocaml which are basically long lists of floats, from a few kB to hundreds of MB. I would like to read, analyze and plot them using the python numpy and pandas libraries. Right now, i'm thinking of writing them to csv files. A binary format would probably be more efficient? I'd use HDF5 in a heartbeat but Ocaml does not have a binding. Is there a good binary exchange format that is usable easily from both sides? Is writing a file the best option, or is there a better protocol for data exchange? Potentially even something that can be updated on-line?
data exchange format ocaml to python numpy or pandas
1.2
0
0
245
28,512,929
2015-02-14T06:00:00.000
0
0
1
0
python,c,pycharm,python-2.x
28,513,155
3
false
0
0
Variables are nothing but reserved memory locations to store values. This means that when you create a variable you reserve some space in memory. Based on the data type of a variable, the interpreter allocates memory and decides what can be stored in the reserved memory. Therefore, by assigning different data types to variables, you can store integers, decimals or characters in these variables. Python variables do not have to be explicitly declared to reserve memory space. The declaration happens automatically when you assign a value to a variable. The equal sign (=) is used to assign values to variables. The operand to the left of the = operator is the name of the variable and the operand to the right of the = operator is the value stored in the variable.
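What the question is really about, though, is that Python integers are arbitrary precision: there is no numeric type to choose for integer multiplication, because the interpreter grows the number as needed (Python 2 silently promoted int results to long; Python 3 has one unbounded int type):

```python
# Both products are exact; no overflow, no out-of-range error to detect.
small = 98 * 76
big = 987654321 * 123457789

exact = big // 123457789   # recovers the original factor exactly
huge = 2 ** 100            # far beyond any machine word size
```

This is why the C-style concern about picking int vs long vs float simply does not arise for Python integer arithmetic (floats, by contrast, are fixed-precision doubles).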
1
1
0
In C, I have to set proper type, such as int, float, long for a simple arithmetic for multiplying two numbers. Otherwise, it will give me an incorrect answer. But in Python, basically it can automatically give me the correct answer. I have tried debug a simple 987*456 calculation to see the source code. I set a break point at that line in PyCharm, but I cannot step into the source code, it just finished right away. How can I see the source code? Is it possible? Or how does Python do that multiplication? I mean, how does Python carry out the different of number type in the result of 98*76 or 987654321*123457789, does Python detect some out of range error and try another number type?
How does Python know which number type to use in order to Multiply arbitrary two numbers?
0
0
0
444
28,513,774
2015-02-14T08:18:00.000
4
0
0
1
python,google-app-engine,google-cloud-datastore
28,513,820
2
false
1
0
You can create a key for any entity whether this entity exists or not. This is because a key is simply an encoding of an entity kind and either an id or name (and ancestor keys, if any). This means that you can store a child entity before a parent entity is saved, as long as you know the parent's id or name. You cannot reassign a child from one parent to another, though.
1
1
0
I am working on a web application based on Google App Engine (Python / Webapp2) and Google NDB Datastore. I assumed that if I tried to add a new entity using as parent key the key of a no longer existing entity an exception was thrown. I have instead found the entity is actually created. Am i doing something wrong? I may check before whether the parent still exist through a keys_only query. Does it consume GAE read quotas?
Google NDB: Adding an entity with non existing parent
0.379949
0
0
168
28,514,950
2015-02-14T11:03:00.000
1
0
0
0
python,selenium
28,515,299
1
true
0
0
Each instantiation of WebDriver launches a new browser, which is a very costly operation, so option 1 is not what you want to do. I would also not do option 3 because it is not good coding practice to depend on global variables when it can easily be avoided. This leaves you option 2: instantiate WebDriver once and pass the instance to your function(s).
1
2
0
I am iterating a list of links for screen scraping. The pages have JavaScript so I use Selenium. I have a defined a function to get the source for each page. Should I instantiate the WebDriver inside that function, which will happen once per loop? Or should I instantiate outside the function and pass the WebDriver in? Or assign the WebDriver to a variable that will be visible from inside the function, without explicitly passing it?
Where should I instantiate my WebDriver instance when looping?
1.2
0
1
197
28,515,972
2015-02-14T13:12:00.000
13
0
0
1
python,eclipse,macos,postgresql,psycopg2
60,101,069
8
false
0
0
I was able to fix this on my Mac (running Catalina, 10.15.3) by using psycopg2-binary rather than psycopg2. pip3 uninstall psycopg2 pip3 install psycopg2-binary
2
48
0
Currently I am installing psycopg2 to work within Eclipse with Python. I am encountering a lot of problems: The first problem: sudo pip3.4 install psycopg2 is not working and shows the following message: Error: pg_config executable not found. FIXED WITH: export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH" When I import psycopg2 in my project I obtain: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Library libssl.1.0.0.dylib Library libcrypto.1.0.0.dylib FIXED WITH: sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib Now I am obtaining: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _lo_lseek64 Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Expected in: /usr/lib/libpq.5.dylib in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Can you help me?
Problems using psycopg2 on Mac OS (Yosemite)
1
1
0
17,998
28,515,972
2015-02-14T13:12:00.000
4
0
0
1
python,eclipse,macos,postgresql,psycopg2
28,949,608
8
false
0
0
I am using yosemite, postgres.app & django. this got psycopg2 to load properly for me but the one difference was that my libpq.5.dylib file is in /Applications/Postgres.app/Contents/Versions/9.4/lib. thus my second line was sudo ln -s /Applications/Postgres.app/Contents/Versions/9.4/lib/libpq.5.dylib /usr/lib
2
48
0
Currently I am installing psycopg2 to work within Eclipse with Python. I am encountering a lot of problems: The first problem: sudo pip3.4 install psycopg2 is not working and shows the following message: Error: pg_config executable not found. FIXED WITH: export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH" When I import psycopg2 in my project I obtain: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Library libssl.1.0.0.dylib Library libcrypto.1.0.0.dylib FIXED WITH: sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib Now I am obtaining: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _lo_lseek64 Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Expected in: /usr/lib/libpq.5.dylib in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Can you help me?
Problems using psycopg2 on Mac OS (Yosemite)
0.099668
1
0
17,998
28,517,789
2015-02-14T16:43:00.000
0
0
0
0
python,gdata,google-api-python-client
28,519,427
1
true
0
0
You'll need to continue to use gdata for sheets until a new version that supports Google API is released.
1
2
0
I started writing some python code that adds events to a spreadsheet in Google Sheets using gdata. Everything went well and it works, but now I wanted to add these same events to a calendar and I can't figure out what I'm supposed to use. I see that the calendar API on Google is at V3 and they suggest I install google-api-python-client. Gdata seems to be almost abandoned and I feel like I'm lost in the middle. I would just like to be able to use one python API to add data to a calendar and a spreadsheet and if possible keeping things simple as I'm really not very good at this yet. Any help would be greatly appreciated. Thanks
Python API for Google Calendar and Spreadsheets
1.2
0
1
303
28,521,611
2015-02-14T23:55:00.000
0
0
1
0
python,parsing,pdf,text
28,521,666
1
true
0
0
Short answer: use regular expressions and recast the string sections. Long answer: it's because all of this is coming from a text file, so everything is a string. The date 23/10/90 isn't represented in a .txt as a numerical value; it's a collection of character codes. Depending on exactly what you are trying to get out of that file, your best bet is to regex out the data you want and recast it. So, for dates, try int(dayString), int(monthString), etc.
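A minimal sketch of that regex-and-recast approach (the surrounding text in the sample line is invented):

```python
import re

def parse_date(line):
    """Pull a DD/MM/YY date out of a messy line and recast each part to int."""
    match = re.search(r"(\d{2})/(\d{2})/(\d{2})", line)
    if match is None:
        return None
    day, month, year = (int(part) for part in match.groups())
    return day, month, year

print(parse_date("23/10/90 INSPECTION routine check"))  # (23, 10, 90)
```

The same idea extends to the other columns: write one pattern per field, and recast each captured group to the type you actually want.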
1
0
0
I am reading in information from a text file which has been ripped from a pdf, so everything is a mess. Some example variables (columns) that I'm trying to separate include date, action type and summary. For date, the format is DD/MM/YY, so I know that the first index will always be an int. However, whenever I test the file (using type(xyz)), everything is marked as being an str. How do I get python to recognize what is, and what is not, a str vs. int vs. double... etc.?
How to evaluate data type (i.e. str, int, double... etc.) when reading text file
1.2
0
0
113
28,523,247
2015-02-15T05:19:00.000
8
0
0
0
python,algorithm,sorting,time-complexity,computation-theory
28,523,302
1
true
0
0
The "similarity" (?!) that you see is completely illusory. The elementary, O(N squared), approaches, repeat their workings over and over, without taking any advantage, for the "next step", of any work done on the "previous step". So the first step takes time proportional to N, the second one to N-1, and so on -- and the resulting sum of integers from 1 to N is proportional to N squared. For example, in selection sort, you are looking each time for the smallest element in the I:N section, where I is at first 0, then 1, etc. This is (and must be) done by inspecting all those elements, because no care was previously taken to afford any lesser amount of work on subsequent passes by taking any advantage of previous ones. Once you've found that smallest element, you swap it with the I-th element, increment I, and continue. O(N squared) of course. The advanced, O(N log N), approaches, are cleverly structured to take advantage in following steps of work done in previous steps. That difference, compared to the elementary approaches, is so pervasive and deep, that, if one cannot perceive it, that speaks chiefly about the acuity of one's perception, not about the approaches themselves:-). For example, in merge sort, you logically split the array into two sections, 0 to half-length and half-length to length. Once each half is sorted (recursively by the same means, until the length gets short enough), the two halves are merged, which itself is a linear sub-step. Since you're halving every time, you clearly need a number of steps proportional to log N, and, as each step is O(N), obviously you get the very desirable O(N log N) as a result. Python's "timsort" is a "natural mergesort", i.e, a variant of mergesort tuned to take advantage of already-sorted (or reverse-sorted) parts of the array, which it recognizes rapidly and avoids spending any further work on. 
This doesn't change big-O because that's about worst-case time -- but expected time crashes much further down because in so many real-life cases some partial sortedness is present. (Note that, going by the rigid definition of big-O, quicksort isn't quick at all -- it's worst-case proportional to N squared, when you just happen to pick a terrible pivot each and every time... expected-time wise it's fine, though nowhere as good as timsort, because in real life the situations where you repeatedly pick a disaster pivot are exceedingly rare... but, worst-case, they might happen!-). timsort is so good as to blow away even very experienced programmers. I don't count because I'm a friend of the inventor, Tim Peters, and a Python fanatic, so my bias is obvious. But, consider... ...I remember a "tech talk" at Google where timsort was being presented. Sitting next to me in the front row was Josh Bloch, then also a Googler, and Java expert extraordinaire. Less than mid-way through the talk he couldn't resist any more - he opened his laptop and started hacking to see if it could possibly be as good as the excellent, sharp technical presentation seemed to show it would be. As a result, timsort is now also the sorting algorithm in recent releases of the Java Virtual Machine (JVM), though only for user-defined objects (arrays of primitives are still sorted the old way, quickersort [*] I believe -- I don't know which Java peculiarities determined this "split" design choice, my Java-fu being rather weak:-). [*] that's essentially quicksort plus some hacks for pivot choice to try and avoid the poison cases -- and it's also what Python used to use before Tim Peters gave this one immortal contribution out of the many important ones he's made over the decades. The results are sometimes surprising to people with CS background (like Tim, I have the luck of having a far-ago academic background, not in CS, but in EE, which helps a lot:-). 
Say, for example, that you must maintain an ever-growing array that is always sorted at any point in time, as new incoming data points must get added to the array. The classic approach would use bisection, O(log N), to find the proper insertion point for each new incoming data point -- but then, to put the new data in the right place, you need to shift what comes after it by one slot, that's O(N). With timsort, you just append the new data point to the array, then sort the array -- that's O(N) for timsort in this case (as it's so awesome in exploiting the already-sorted nature of the first N-1 items!-). You can think of timsort as pushing the "take advantage of work previously done" to a new extreme -- where not only work previously done by the algorithm itself, but also other influences by other aspects of real-life data processing (causing segments to be sorted in advance), are all exploited to the hilt. Then we could move into bucket sort and radix sort, which change the plane of discourse -- which in traditional sorting limits one to being able to compare two items -- by exploiting the items' internal structure. Or a similar example -- presented by Bentley in his immortal book "Programming Pearls" -- of needing to sort an array of several million unique positive integers, each constrained to be 24 bits long. He solved it with an auxiliary array of 16M bits -- just 2M bytes after all -- initially all zeroes: one pass through the input array to set the corresponding bits in the auxiliary array, then one pass through the auxiliary array to form the required integers again where 1s are found -- and bang, O(N) [and very speedy:-)] sorting for this special but important case!-)
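Bentley's bit-vector trick from the last paragraph can be sketched in a few lines; the bit width is a parameter here so the demo doesn't need the full 16M-bit array:

```python
def bitmap_sort(nums, bits=24):
    """Sort unique positive ints < 2**bits in O(N) by marking presence bits."""
    present = bytearray(1 << (bits - 3))   # one bit per possible value
    for n in nums:
        present[n >> 3] |= 1 << (n & 7)
    return [n for n in range(1 << bits)
            if present[n >> 3] & (1 << (n & 7))]

print(bitmap_sort([9, 3, 200, 5], bits=8))  # [3, 5, 9, 200]
```

Note that this only works because the inputs are unique and bounded: the "comparison" is replaced entirely by the position of each bit.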
1
1
1
So I just learned about sorting algorithms (bubble, merge, insertion sort, etc.). They all seem to be very similar in their methods of sorting, with what seems to me minimal changes in their approach. So why do they produce such different sorting times, i.e. O(n^2) vs O(n log n), for example?
Sorting algorithm times using sorting methods
1.2
0
0
297
28,524,215
2015-02-15T08:17:00.000
1
0
0
0
java,python,string
28,524,245
3
false
1
0
If you use u"" before the string, which makes it unicode in Python 2.x, len() counts characters rather than bytes, and you would likely get the same result as Java.
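A quick way to see where the two counts come from (Java's String.length counts UTF-16 code units, while len() on a Python 2 byte string counts the UTF-8 bytes), using one of the words from the question:

```python
# -*- coding: utf-8 -*-
word = u"استنفار"                 # 7 Arabic letters as a unicode string

print(len(word))                  # 7  -- code points; Java agrees for BMP text
print(len(word.encode("utf-8")))  # 14 -- each letter is 2 bytes in UTF-8,
                                  #       which is what Python 2's len() on a
                                  #       plain byte string reports
```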
1
4
0
I have a string like the follwoing ("استنفار" OR "الأستنفار" OR "الاستنفار" OR "الإستنفار" OR "واستنفار" OR "باستنفار" OR "لستنفار" OR "فاستنفار" OR "والأستنفار" OR "بالأستنفار" OR "للأستنفار" OR "فالأستنفار" OR "والاستنفار" OR "بالاستنفار" OR "فالاستنفار" OR "والإستنفار" OR "بالإستنفار" OR "للإستنفار" OR "فالإستنفار" OR "إستنفار" OR "أستنفار" OR "إلأستنفار" OR "ألأستنفار" OR "إلاستنفار" OR "ألاستنفار" OR "إلإستنفار" OR "ألإستنفار") (("قوات سعودية" OR "قوات سعوديه" OR "القوات سعودية" OR "القوات سعوديه") OR ("القواتالسعودية" OR "القواتالسعوديه" OR "إلقواتالسعودية" OR "ألقواتالسعودية" OR "إلقواتالسعوديه" OR "ألقواتالسعوديه")("القوات السعودية" OR "إلقوات السعودية" OR "ألقوات السعودية" OR "والقوات السعودية" OR "بالقوات السعودية" OR "للقوات السعودية" OR "فالقوات السعودية" OR "وإلقوات السعودية" OR "بإلقوات السعودية" OR "لإلقوات السعودية" OR "فإلقوات السعودية" OR "وألقوات السعودية" OR "بألقوات السعودية" OR "لألقوات السعودية" OR "فألقوات السعودية") OR ) If I used java string variable and count the number of characters it gives me 923 but if I used the len function of python it gives me 1514 What is the difference here ?
Why java String.length gives different result than python len() for the same string
0.066568
0
0
673
28,529,702
2015-02-15T18:41:00.000
0
0
1
0
python,raspberry-pi,project,raspbian
28,529,774
2
false
0
0
If I understand your question correctly, just write a configuration file, something like a settings.py, then import it wherever you want, like from settings import *

settings.py:

    API_KEY = "asdh31134hasdhe812"
    UNAME = "meUser"

program.py:

    import settings
    print settings.API_KEY
    # >> asdh31134hasdhe812
1
0
0
I am making my first ever python project to be implemented on the raspberry pi. There is a bunch of information, such as database/table names, IDs, project directory path etc. that I have written down in my code. In the interest of flexibility, I don't want it there and would rather pull it out of some place that I can modify. What is the best way to approach this?
Where to store data I don't want in code?
0
0
0
69
28,530,885
2015-02-15T20:38:00.000
0
0
0
0
python,django
28,538,381
2
false
1
0
Django stores time in the database with more precision than seconds. So if your time_at holds exactly 12:45:44, the query will still retrieve rows whose stored time is later only by milliseconds: a row stored as 12:45:44.231 is greater than 12:45:44 and will be retrieved again.
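A small sketch of the effect, using plain datetimes to stand in for the stored time_created values:

```python
from datetime import datetime

# plain datetimes standing in for Notify.time_created values
rows = {
    1: datetime(2015, 2, 15, 12, 45, 44, 231000),  # stored with microseconds
    2: datetime(2015, 2, 15, 12, 45, 45),
}

# the session only kept "12:45:44", so the sub-second part is lost ...
coarse = datetime(2015, 2, 15, 12, 45, 44)
print(sorted(pk for pk, t in rows.items() if t > coarse))  # [1, 2] -- duplicate!

# ... keep the full timestamp of the last fetched row instead
exact = rows[1]
print(sorted(pk for pk, t in rows.items() if t > exact))   # [2]
```

The practical fix is to store the exact time_created of the last row fetched (microseconds included) rather than a string truncated to seconds.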
1
1
0
I want to fetch items from my model by using time_created as criteria. If the latest item I fetched was posted at 12:45:44, I store it in request.session['time_at'] = 12:45:44 and use it to fetch item that are later than the last fetched. new_notice = Notify.objects.filter(time_created__gt = request.session['time_at']) This is supposed to return items with time from 12:45:45 but it still return the ones with 12:45:44 which is making me have duplicate of items I have already fetched. How do I deal with this the right way?
Django __gt filter return the duplicate item
0
0
0
163
28,533,354
2015-02-16T01:33:00.000
2
0
0
0
python,django,django-1.7
28,533,371
2
true
1
0
No, Django doesn't have such a function, so you have to check for the existence of each generated username in the loop.
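A minimal sketch of that loop, with a plain set standing in for the User.objects.filter(username=...).exists() check:

```python
import uuid

def unique_username(taken):
    """Generate guest_<hex> names until one is free; `taken` stands in
    for a database existence check such as queryset .exists()."""
    while True:
        candidate = "guest_" + uuid.uuid4().hex[:8]
        if candidate not in taken:
            return candidate

existing = {"guest_aaaaaaaa", "guest_bbbbbbbb"}
name = unique_username(existing)
print(name.startswith("guest_"))  # True
```

With 8 hex characters a collision is already rare, so the loop almost always finishes on the first pass; it exists purely to make collisions harmless rather than likely.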
1
3
0
I want visitors of my page to be able to create entries without registering first. When they register later I want everything they have created within that session to belong to their user account. To achieve that I just want to create blank users with random usernames when entries from non users are made. What is the most elegant way to create a unique username randomly avoiding any collision? Should I just make a while loop that generates usernames and tries to save them to the db with a break upon success or is there a better way? Most scripts I've seen just create a random string, but that has the danger of a collision. Is there any Django function that creates random usernames based on which usernames are already taken?
How to create a unique random username for django?
1.2
0
0
2,119
28,533,515
2015-02-16T02:00:00.000
0
0
0
1
python,subprocess,dicom,pydicom
28,565,218
2
false
0
0
Oh, this was due to a syntax error using pydicom. I wanted to access the 0019,109c tag. The syntax should be ds[0x0019, 0x109c].value, not ds[aaaa, bbbb].value.
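For the subprocess half of the question, the AFNI call could be wrapped roughly like this (the runner argument is injectable only so the sketch can be exercised without AFNI installed; the tag is the one from the question):

```python
import subprocess

def dicom_tag(path, tag="0019,109c", runner=subprocess.check_output):
    """Call AFNI's dicom_hinfo on a DICOM file and return the tag value."""
    command = ["dicom_hinfo", "-tag", tag, path]
    return runner(command).decode().strip()

# with AFNI on the PATH this runs: dicom_hinfo -tag 0019,109c filename.dcm
print(dicom_tag("filename.dcm", runner=lambda cmd: b"fgre\n"))  # fgre
```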
1
0
0
I am trying to read a dicom header tag in dicom file. Now, there are two ways to read this dicom header tag. 1) Using pydicom package in python which apparently is not working well on my python installed version(python 3). 2) or when i call AFNI function 'dicom_hinfo' through command line, i can get dicom tag value. The syntax to call afni function in terminal is as follows: dicom_hinfo -tag aaaa,bbbb filename.dcm output:fgre Now how should i call this dicom-info -tag aaaa,bbbb filename.dcm in python script. I guess subprocess might work but not sure about how to use it in this case.
How to call command line command (AFNI command)?
0
0
0
411
28,535,671
2015-02-16T06:27:00.000
17
0
1
0
api,python-3.x
28,535,731
1
true
0
0
It means that the creator of the API makes some choices for you that are in her opinion the best. For example, a web application framework could choose to work best with (or even bundle or work exclusively with) a selection of lower-level libraries (for stuff like logging, database access, session management) instead of letting you choose (and then have to configure) your own. In the case of ssl.create_default_context some security experts have thought about reasonably secure defaults to configure SSL connections. In particular, it limits the available algorithms to those that are still considered secure, at the expense of complete compatibility with legacy systems, a trade-off that is beneficial in their (and my) opinion. Essentially they are saying "we have a lot of experience in this domain, and we really think you should do things in the following way". I suppose this is a response to "enterprise" API that claim to work with every implementation of as many standard interfaces as possible (at the expense of complexity in configuration and combination, requiring costly consultants to set up everything). Or a natural extension of "Convention over Configuration". Things should work very well out-of-the-box, so that you only have to twiddle around with expert settings in special cases (and by then you should know what you are doing), as opposed to even a beginner having to make informed decisions about every aspect of the application (which can end in disaster).
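You can see those baked-in opinions directly on the context object ssl.create_default_context() returns:

```python
import ssl

# the "opinions" in the default context: certificate verification and
# hostname checking are on by default, and known-insecure protocol
# versions are disabled, without you configuring anything
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

A non-opinionated API would instead hand you a bare SSLContext and make you pick every setting yourself, secure or not.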
1
8
0
I came across the term "Opinionated API" when reading about the ssl.create_default_context() function introduced by Python 3.4, what does it mean? What's the style of such API? Why do we call it an "opinionated one"? Thanks a lot.
What does "Opinionated API" mean?
1.2
0
1
2,047
28,537,135
2015-02-16T08:19:00.000
1
0
1
0
python
28,537,218
1
false
0
0
I am guessing you need some sort of educational system where the user can submit code, presumable to check that the code performs cf. a exercise. My immediate thoughts about this would to use a web-interface. In this manner, the code of the evaluating system is entirely hidden (unless the student hacks your webserver, but this is an entirely different topic). However, for this to work you must be aware of the pitfalls of allowing others submit code to your service, that is then executed. The code must be rigorously checked for the obvious things, but the issues here are open-bounded. You must also protect your service from the back-end by providing a safe environment for executing (e.g. a sandbox).
1
0
0
I'll try to keep this question objective. What is the canonical way to build a plugin-system for a Python desktop application? Is it possible to have an easy to use (for the developer) system which achieves the following: Users input their code using an in-app editor (the editor could end up dumping their plugins into a subdirectory of the app, if necessary) Keep the source of the application "closed" (Yes, there are pitfalls to obfuscation) Thanks
Canonical way to build a plug-in system in Python
0.197375
0
0
46
28,538,461
2015-02-16T09:43:00.000
0
0
0
0
python,django,django-models,django-admin,django-admin-tools
28,538,705
2
false
1
0
I've tried a couple of approaches locally, including overriding an AdminSite, but given the fact that all admin-related code is loaded when the app is initialized, the simplest approach would be to rely on permissions (and not give everyone superuser access).
1
4
0
Is it possible to conditionally register or unregister models in django admin? I want some models to appear in django admin, only if request satisfies some conditions. In my specific case I only need to check if the logged in user belongs to a particular group, and not show the model if the user (even if superuser) is not in the group. I can not use permissions here because, superusers can not be ruled out using permissions. Or, is there a way to revoke permission from even superusers on model.
unregister or register models conditionally in django admin
0
0
0
1,619
28,541,889
2015-02-16T12:55:00.000
2
0
0
0
python,sockets,networking
28,542,041
2
false
0
0
That depends. Creating a new socket means two computers have to discover each other, so there is name lookup, TCP/IP routing and resource allocation involved. Not really cheap but not that expensive either. Unless you send data more than 10 times/second, you won't notice. If you keep a socket open and don't send data for some time, a firewall between the two computers will eventually decide that this connection is stale and forget about it. The next data packet you send will fail with a timeout. So the main difference between the two methods is whether you can handle the timeout case properly in your code. You will have to handle this every time you write data to the socket. In many cases, the code to write is hidden very deep somewhere and the code doesn't actually know that it's writing to a socket plus you will have more than one place where you write data, so the error handling will leak into your design. That's why most people prefer to create a new socket every time, even if it's somewhat expensive.
2
4
0
Which is the best method for sending a data using sockets: Method 1: Creating a new socket every time when data needs to be sent and closing it when the transfer is complete. Method 2: Using the same socket instead of creating a new socket and maintaining the connection even when waiting for new data.
Which is a larger overhead: Creating a new socket each time or maintaining a single socket for data transfer
0.197375
0
1
2,330
28,541,889
2015-02-16T12:55:00.000
5
0
0
0
python,sockets,networking
28,543,387
2
true
0
0
It depends on the kind of socket, but in the usual cases it is better to keep the socket open unless you have very limited resources. UDP is connectionless: you create the socket and there is no delay from connection setup when sending a packet. There are still system calls, memory allocation, etc. involved, so it is cheap but not free. TCP instead needs to establish the connection before you can even start sending data. How fast this is done depends on the latency, i.e. fast on the local machine, slower in a local network and even slower on the internet. Connections also start slowly because the available bandwidth is not yet known. With SSL/TLS on top of TCP, connection setup is even more expensive because it needs more round trips between client and server. In summary: if you are using TCP you are almost always better off keeping the socket open, closing it only if you lack the resources to keep it open. A good compromise is to keep the socket open for as long as there is activity on it. This is the approach usually taken with HTTP persistent connections.
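One way to get the best of both, as the other answer warns about stale connections, is to reuse the socket but transparently reconnect when a send fails; this is only a sketch of the pattern:

```python
import socket

def send_reusing(sock, address, payload, retries=1):
    """Send on an existing socket; if it has gone stale (e.g. a firewall
    dropped the idle connection), reconnect once and retry."""
    for _ in range(retries + 1):
        try:
            if sock is None:
                sock = socket.create_connection(address, timeout=5)
            sock.sendall(payload)
            return sock            # hand the live socket back for the next call
        except OSError:
            sock = None            # stale: force a fresh connection
    raise OSError("could not deliver payload")
```

The caller keeps the returned socket and passes it to the next call, so reconnection only happens when the old connection actually fails.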
2
4
0
Which is the best method for sending a data using sockets: Method 1: Creating a new socket every time when data needs to be sent and closing it when the transfer is complete. Method 2: Using the same socket instead of creating a new socket and maintaining the connection even when waiting for new data.
Which is a larger overhead: Creating a new socket each time or maintaining a single socket for data transfer
1.2
0
1
2,330
28,543,193
2015-02-16T14:04:00.000
0
1
0
0
python,swig,python-extensions
60,458,804
2
false
0
0
I'd absolutely recommend doing some basic testing of the wrapped code. Even a basic "can I instantiate my objects" test is super helpful; you can always write more tests as you find problem areas. Basically what you're testing is the accuracy of the SWIG interface file, which is code that you wrote in your project! If your objects are at all interesting, it is very possible to confuse SWIG. It's also easy to accidentally skip wrapping something, or for a wrapper to use a different typemap than you hoped.
1
4
0
I need to create python wrapper for the library using SWIG and write unit tests for it. I don't know how to do this. My first take on this problem is to mock dynamic library with the same interface as those library that I'm writing wrapper for. This mock library can log every call or return some generated data. This logs and generated data can be checked by the unit tests.
How should I unit-test python wrapper generated by SWIG
0
0
0
501
28,545,060
2015-02-16T15:41:00.000
0
0
0
0
python,cluster-analysis
28,546,908
1
false
0
0
On skewed data, it can help a lot to go into logspace. You may first want to understand the distribution better, then split them. Have you tried visualizing them, to identify appropriate splitting values? One-dimensional data can be well visualized, and the results of a manual approach are often better than those of some blackbox clustering.
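A rough sketch of splitting in log space, using quantile cuts rather than z-scores (the 20/40/60/80% fractions are just one arbitrary way to get five categories):

```python
import math

def log_quintiles(values):
    """Assign each positive value a category 0-4 using quantile cuts
    computed in log space (helps with right-skewed data)."""
    logs = sorted(math.log(v) for v in values)
    n = len(logs)
    cuts = [logs[int(n * q)] for q in (0.2, 0.4, 0.6, 0.8)]
    return [sum(math.log(v) > c for c in cuts) for v in values]

print(log_quintiles([1, 10, 100, 1000, 10000]))  # [0, 0, 1, 2, 3]
```

Quantile-based cuts guarantee roughly equal-sized categories regardless of skew; as the answer suggests, plotting a histogram of the logged values first often shows better manual split points.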
1
0
1
I have a series of integers. What I would like to do is split them into 5 discrete categories. I tried z-scores with bounds (-oo, -2), [-2, -1), [-1, +1], (+1, +2], (+2, +oo) but it doesn't seem to work probably because of the right-skewed data. So, I though that it might work with some sort of clustering. Any ideas?
How to cluster a series of right-skewed integers
0
0
0
80
28,545,553
2015-02-16T16:06:00.000
29
0
0
0
python,django,rest,django-rest-framework
28,545,657
2
true
1
0
The docs cover this: request.data returns the parsed content of the request body. This is similar to the standard request.POST and request.FILES attributes except that: It includes all parsed content, including file and non-file inputs. It supports parsing the content of HTTP methods other than POST, meaning that you can access the content of PUT and PATCH requests. It supports REST framework's flexible request parsing, rather than just supporting form data. For example you can handle incoming JSON data in the same way that you handle incoming form data. The last two are the important ones. By using request.data throughout instead of request.POST, you're supporting both JSON and Form-encoded inputs (or whatever set of parsers you have configured), and you'll be accepting request content on PUT and PATCH requests, as well as for POST. Is one more flexible? Yes. request.data is more flexible.
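The "flexible request parsing" point can be illustrated with a toy stand-in for what request.data does under the hood (this is an illustration, not DRF's actual parser code):

```python
import json
from urllib.parse import parse_qsl

def parse_body(content_type, body):
    """Toy version of request.data: the parser is chosen from the
    Content-Type, so JSON and form posts look identical to the view."""
    if content_type == "application/json":
        return json.loads(body)
    if content_type == "application/x-www-form-urlencoded":
        return dict(parse_qsl(body))
    raise ValueError("unsupported content type: " + content_type)

print(parse_body("application/json", '{"name": "x"}'))
print(parse_body("application/x-www-form-urlencoded", "name=x"))
# both print {'name': 'x'}
```

request.POST only ever gives you the second branch, and only on POST; request.data dispatches on the content type and works for PUT and PATCH bodies too.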
1
23
0
The Django Rest Frameworks has this to say about POST, quoting a Django dev Requests If you're doing REST-based web service stuff ... you should ignore request.POST. — Malcom Tredinnick, Django developers group As not-so experienced web-developer, why is request.POST (standard) discouraged over request.DATA (non-standard)? Is one more flexible?
Django Rest frameworks: request.Post vs request.data?
1.2
0
0
8,970
28,548,813
2015-02-16T19:21:00.000
0
0
0
0
python,django,image,web
28,550,992
1
true
1
0
Have multiple copies of the same image at different resolutions on the server, and serve the correct one according to screen size using CSS media queries.
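If you go the multiple-copies route, one client-side way to let the browser pick the right copy is an HTML srcset attribute; the filename scheme below is an assumption about how the resized copies are named:

```python
def srcset_attr(basename, widths):
    """Build an HTML srcset value, assuming resized copies named
    <basename>-<width>.jpg were generated ahead of time."""
    return ", ".join(
        "{0}-{1}.jpg {1}w".format(basename, w) for w in widths
    )

print(srcset_attr("hero", [480, 960, 1920]))
# hero-480.jpg 480w, hero-960.jpg 960w, hero-1920.jpg 1920w
```

The resized copies themselves are typically generated once at upload time (e.g. with Pillow or a thumbnailing library) rather than on every request.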
1
0
0
I'm working on a responsive website that uses Django and a lot of the content is static. Most of this content are photos of high resolution and because of this, it takes too much time to load. Is there any way (using python/django) to make it so that to load such an image, a server request is made and then a function automatically resizes the image to the size that it needs to be (bigger for a desktop, smaller for a smartphone, etc) or compresses the image so that it doesn't take so much time to load?
Auto image resizing/compressing using python/django
1.2
0
0
162
28,549,066
2015-02-16T19:37:00.000
0
0
1
0
python,algorithm,blocking,dining-philosopher
28,549,138
3
false
0
0
The dining philosophers problem is a scenario where you have N philosophers sitting around a circular table and there is a fork between each pair of adjacent philosophers. If a philosopher wants to take a bite, then he or she must pick up one of the two forks next to them, and then the other fork. After the philosopher takes a bite, he or she puts both forks down. This scenario will deadlock if each philosopher picks up their left fork: then no one can pick up the right fork to eat and they all starve. One solution is to have every philosopher start by picking up the left fork except one, who starts by picking up the right fork.
2
7
0
I have been asked to write a simple solution to the dining philosophers problem in python. That itself seems quite straight forward but am some what confused since I am asked to write a non-blocking solution. I am unsure what is meant by this in this context. Is anyone able to give any hints or point me in the right direction?
Non-blocking solution to the dining philosophers
0
0
0
3,000
28,549,066
2015-02-16T19:37:00.000
2
0
1
0
python,algorithm,blocking,dining-philosopher
30,385,250
3
false
0
0
In the context of the problem, non-blocking means no deadlock. I.e., a philosopher will not suspend indefinitely waiting for one fork while already holding the other fork. Suspension means that the thread is disabled for scheduling purposes and will not execute until another thread specifically resumes the suspended thread. The solution must avoid indefinite suspension or deadlock (i.e., 2 or more threads suspended waiting on each other to proceed). The solution requires an arbitrator that can atomically grant both forks or reject the request. If the philosopher cannot atomically take both forks, then the philosopher must think about life, the universe and everything else for a random amount of time. After thinking, the philosopher again requests to the arbitrator to acquire atomically both forks. Eating also delays for a random time before relinquishing both forks. All random delays are finite with a common upper limit, say, 10 seconds or 10 minutes, whatever. This design requires a compare-and-swap mechanism to examine and conditionally update a bit mask, with one bit for each fork. The mechanism is atomic. Either both bits are updated or neither are updated. A sample solution in java for an arbitrary number of philosophers that uses only volatile fields and no synchronized() blocks or suspension locks is available at: sourceforge.net/projects/javamutex/
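A minimal sketch of such an arbitrator: a single lock guards the fork state, so a philosopher atomically gets both forks or neither, and never waits while holding one fork:

```python
import threading

N = 5
table = threading.Lock()        # the arbitrator
forks = [False] * N             # False = fork is free

def try_take_both(i):
    """Atomically grab both forks or neither; never blocks holding one fork."""
    left, right = i, (i + 1) % N
    with table:
        if not forks[left] and not forks[right]:
            forks[left] = forks[right] = True
            return True
        return False

def put_down_both(i):
    left, right = i, (i + 1) % N
    with table:
        forks[left] = forks[right] = False

print(try_take_both(0))  # True  -- philosopher 0 eats
print(try_take_both(1))  # False -- fork 1 is in use; 1 goes back to thinking
```

A philosopher whose try_take_both fails simply thinks for a random finite time and retries, which is the non-blocking behaviour the answer describes.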
2
7
0
I have been asked to write a simple solution to the dining philosophers problem in python. That itself seems quite straight forward but am some what confused since I am asked to write a non-blocking solution. I am unsure what is meant by this in this context. Is anyone able to give any hints or point me in the right direction?
Non-blocking solution to the dining philosophers
0.132549
0
0
3,000
28,551,063
2015-02-16T21:58:00.000
0
0
0
0
python,django
28,551,333
1
false
1
0
Depends on how you intend to use them after storing in the database; two methods I can think of are: Option 1) models.IntegerField(unique=True): the trick is loading data and parsing it; you would have to concatenate the two numbers and then have a way to split them back out. Option 2) models.CommaSeparatedIntegerField(max_length=1024, unique=True): not sure how it handles unique values; likely '20,40' is not equal to '40,20', so those two sets would be unique. Or just implement it yourself with a custom field/functions in the model.
1
0
0
I am defining the models for my Django app, and I would like to have a field for a model consisting of a tuple of two (positive) integers. How can I do this? I'm looking at the Django Models API reference but I can't see any way of doing this.
Django - how can I have a field in my model consisting of tuple of integers?
0
0
0
339
28,553,439
2015-02-17T01:52:00.000
0
0
0
0
python,pygame
28,553,503
2
false
0
1
Each of your sprites should have velocity attributes. If your velocity is negative, turn one way. If your velocity is positive, turn the other. Because velocity indicates the direction of movement, this can be used to determine angular direction.
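A sketch of turning the velocity attributes into an angle for pygame.transform.rotate (only the math is shown, since that is the tricky part; screen coordinates have y pointing down, hence the minus sign):

```python
import math

def facing_angle(vx, vy):
    """Degrees to pass to pygame.transform.rotate so a sprite faces its
    velocity; screen y grows downward, so vy is negated."""
    return math.degrees(math.atan2(-vy, vx))

print(facing_angle(1, 0))   # 0.0 -- moving right
print(facing_angle(0, -1))  # about 90 -- moving up the screen
```

In the game loop you would rotate a pristine copy of the original image each frame (rotated = pygame.transform.rotate(original, facing_angle(vx, vy))) rather than re-rotating an already-rotated surface, which degrades quickly.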
1
0
0
Im fairly new to python. I creating a game similar to asteroids, and so far I've gotten the spaceship to move around with the arrow keys. can someone explain what i would do to make the spaceship rotate in the direction its moving? thanks in advance!
python/pygame: how would i get my sprite to rotate in the direction it's moving?
0
0
0
774
28,555,246
2015-02-17T05:25:00.000
3
0
1
0
python,python-3.x
28,555,400
4
false
0
0
If you can get the equation of the line in slope-intercept form, you should be able to set up an inequality. Say we have points (x1, y1), (x2, y2), and the line y = mx + b. You should be able to plug in the x and y values for both points, and if both satisfy y < mx + b or both satisfy y > mx + b, they are on the same side. If either point satisfies the equation (y = mx + b), that point is on the line. if (y1 > m * x1 + b and y2 > m * x2 + b) or (y1 < m * x1 + b and y2 < m * x2 + b): return True # both on same side
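The cross-product variant the question mentions avoids the special case of a vertical line (where no y = mx + b form exists): both points are on the same side exactly when their cross products against the segment share a sign.

```python
def same_side(a, b, p1, p2):
    """True if p1 and p2 lie strictly on the same side of the line
    through points a and b (2D cross-product sign test)."""
    def cross(p):
        # z-component of (b - a) x (p - a)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return cross(p1) * cross(p2) > 0

line_a, line_b = (0, 0), (1, 1)                    # the line y = x
print(same_side(line_a, line_b, (2, 0), (3, 1)))   # True -- both below y = x
print(same_side(line_a, line_b, (2, 0), (0, 2)))   # False
```

A cross product of zero means the point sits exactly on the line, which this strict test reports as "not the same side".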
1
1
0
If I have a definite line segment and then am given two random different points, how would i be able to determine if they are on the same side of the line segment? I am wanting to use a function def same_side (line, p1, p2). I know based on geometry that cross products can be used but am unsure how to do so with python.
Finding out if two points are on the same side
0.148885
0
0
2,656
28,556,452
2015-02-17T07:05:00.000
1
0
0
0
python,sockets,network-programming
28,564,544
1
true
0
0
Recently a setns() call was introduced in pyroute2 that allows you to set the network namespace for the current process. Then you can spawn processes with multiprocessing, set the NS for each, and use multiprocessing.Pipe to communicate between the spawned processes. If anything else is still missing, you're welcome to file an issue at GitHub; we'll try to fix it asap.
1
0
0
I am exploring Python socket support with Linux network-namespaces, I see there is pyroute2, which handles only network-namespace (netns) creation etc, but does not seem to have any APIs for socket IO (say udp). And the Python socket library also does not seem to have any methods related to pick a specific network-namespace . Am I missing something, or its not yet implemented?
socket IO between netns
1.2
0
1
291
28,559,048
2015-02-17T09:50:00.000
0
0
0
0
python,linux,wget
28,559,080
1
false
0
0
Send a unique header from each backend server, e.g. "X-Node-ID: 1" for server #1, and inspect the response headers on the client to see which server answered.
1
0
0
I want to verify Stickiness policy of load balancer. Hence want to verify that when I make two or three subsequent HTTP requests (wget), then the request is satisfied by the same server every time?
How to check if the wget command request was served by a particular server when load balancer is in place?
0
0
1
178
28,560,045
2015-02-17T10:40:00.000
1
1
0
0
python,c++,c
28,560,741
3
true
0
1
You have to construct a PyObject from each element prior to adding it to a sequence. So you have to either add them one by one, or convert them all, then pass to a constructor from PyObject[]. I guess the 2nd way is slightly faster since it doesn't have to adjust the sequence's member variables after each addition.
1
0
0
When dealing with C extensions with Python, what is the best way to move data from a C++ array or vector into a PyObject/PyList so it can be returned to Python? The method I am using right now seems a bit clunky. I am looping through each element and calling Py_BuildValue to append values to a PyList. Is there something similar to memcpy?
Best way to copy C++ buffer to Python
1.2
0
0
983
28,561,598
2015-02-17T12:32:00.000
3
0
0
0
python,django,http,standards
28,561,881
1
true
1
0
I personally don't ever consider it safe to use GET for a request that will have side effects. I try to always follow the practice of POST + redirection to another page. This solves all kinds of problems, such as F5 refreshing the action, the user bookmarking a URL which has a side effect and so on. In your case, the update is harmless, but using POST at least conveys the fact that it's an update and may be useful for tools such as caching software (POST requests usually aren't cached). On top of that, applications often change, and you may at some point need to modify the app in such a way that the update isn't so harmless anymore. It's also generally difficult to guarantee that anything is completely secure in web development, so I prefer to stay on the safe side.
1
3
0
If I have a view which the only purpose is to update a value in my session, is it safe to use it over GET or should I use POST and CSRF protection? The value modified in the session is only used to change the user's context and if somebody manage to change the user's context in his own browser that should be harmless.
Which HTTP method should I use to update a session attribute?
1.2
0
0
131
28,569,765
2015-02-17T19:40:00.000
1
0
1
0
string,python-2.7,parsing,logfile
28,569,852
2
false
0
0
Check the indexes for both characters, then use the lowest index to split your string.
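A sketch of that idea (the helper name split_first is made up for illustration):

```python
def split_first(line, delims=".:"):
    # Collect the position of each delimiter that actually occurs
    positions = [line.find(d) for d in delims]
    positions = [p for p in positions if p != -1]
    if not positions:
        return line, ""           # neither delimiter present
    i = min(positions)            # split at whichever comes first
    return line[:i], line[i + 1:]

assert split_first("host.example: message") == ("host", "example: message")
assert split_first("field: rest.of.line") == ("field", " rest.of.line")
```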
1
0
0
I have some unpredictable log lines that I'm trying to split. The one thing I can predict is that the first field always ends with either a . or a :. Is there any way I can automatically split the string at whichever delimiter comes first?
Split by the delimiter that comes first, Python
0.099668
0
0
628
28,570,438
2015-02-17T20:23:00.000
1
0
1
0
python,shared-memory,wsgi,uwsgi
30,273,392
4
false
0
0
One possibility is to create a C- or C++-extension that provides a Pythonic interface to your shared data. You could memory-map 200MB of raw data, and then have the C- or C++-extension provide it to the WSGI-service. That is, you could have regular (unshared) Python objects implemented in C, which fetch data from some kind of binary format in shared memory. I know this isn't exactly what you wanted, but this way the data would at least appear Pythonic to the WSGI-app. However, if your data consists of many, many very small objects, then it becomes important that even the "entrypoints" are located in the shared memory (otherwise they will waste too much memory). That is, you'd have to make sure that the PyObject* pointers that make up the interface to your data actually themselves point to the shared memory. I.e., the Python objects themselves would have to be in shared memory. As far as I can read the official docs, this isn't really supported. However, you could always try "handcrafting" Python objects in shared memory, and see if it works. I'm guessing it would work, until the Python interpreter tries to free the memory. But in your case, it won't, since it's long-lived and read-only.
1
11
0
I have a python process serving as a WSGI-apache server. I have many copies of this process running on each of several machines. About 200 megabytes of my process is read-only python data. I would like to place these data in a memory-mapped segment so that the processes could share a single copy of those data. Best would be to be able to attach to those data so they could be actual python 2.7 data objects rather than parsing them out of something like pickle or DBM or SQLite. Does anyone have sample code or pointers to a project that has done this to share?
How to store easily python usable read-only data structures in shared memory
0.049958
1
0
1,046
28,571,693
2015-02-17T21:45:00.000
0
0
1
0
python-2.7
28,582,880
3
false
0
0
I found a different approach to solve this problem. I have written a shell script which loads the environment modules, and I am calling that in the Python script. Something like this: import subprocess subprocess.call(['./module_load.sh']) and the script has something like this: module load *** This seems to be working. I have to say, it's pretty simple in Perl: it provides the environment modules package to handle this.
2
2
0
I am new to python and am learning things by writing scripts. I have tried the following and none of them seem to be working. 1) commands.getoutput('module load xxx') 2) subprocess.check_output(['module load', xxx']) None of these change the environment as a side effect of the module call. Can someone tell me what is wrong?
How can i load Module packages of linux using a python script
0
0
0
1,853
28,571,693
2015-02-17T21:45:00.000
0
0
1
0
python-2.7
28,583,181
3
false
0
0
Both commands create a subshell where the module is loaded; the problem is that this subshell is destroyed the moment the Python function terminates.
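A quick way to see the subshell problem on a POSIX system (the variable name is invented for the demo):

```python
import os
import subprocess

# The exported variable exists only inside the child shell;
# it vanishes when that child process exits.
subprocess.call(["sh", "-c", "export MODULE_DEMO=loaded"])
assert "MODULE_DEMO" not in os.environ  # parent environment is untouched
```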
2
2
0
I am new to python and am learning things by writing scripts. I have tried the following and none of them seem to be working. 1) commands.getoutput('module load xxx') 2) subprocess.check_output(['module load', xxx']) None of these change the environment as a side effect of the module call. Can someone tell me what is wrong?
How can i load Module packages of linux using a python script
0
0
0
1,853
28,571,885
2015-02-17T21:57:00.000
-1
0
0
1
python,python-2.7,windows-7,cmd
29,957,535
1
true
0
0
The filepaths I was pulling files from were through ClearCase directories. It turned out that I simply had to reinstall ClearCase. There must have been some configuration issue that was causing the cmd to hang and thus forcing a hard reboot. It's good to point out that I am no longer experiencing this problem and that nothing was wrong with the python scripts.
1
0
0
I am trying to run a python script via cmd prompt on my work PC (Windows 7, Python 2.7). The script requires filepaths from different drives on my PC. I am correctly pulling all necessary filepaths and I press Enter to run the script but the script just hangs. The only thing that shows is a blinking underscore. I try to click the X to close the prompt but nothing happens. I am not able to Ctrl+C out of the program either. I open up Task Manager and I am not able to End Task (nothing happens) or End Process (cmd.exe doesn't even show up in this tab). I also tried Start-->Run-->taskkill /im cmd.exe but nothing happens. The rest of my team has no problem with Python 2.7. The only way to get out of the frozen cmd is to hold down the power button. I do not want to have to keep going through this process especially since this is during work. I'm hoping someone will be able to help me out: Any idea what's wrong with the version of Python I am using? How am I able to kill cmd.exe so that I can continue normal work functions without having to hold down the power button and waiting 5-10 minutes to reboot my PC?
Hanging script and cmd won't close
1.2
0
0
749
28,572,764
2015-02-17T22:55:00.000
5
0
0
0
python,cpython
28,572,917
3
true
1
0
Aside from the "private-by-convention" functions with _leading_underscores, there are: Quite a few imported names; Four class names; Three function names without leading underscores; Two string "constants"; and One local variable (nobody). If __all__ wasn't defined to cover only the classes, all of these would also be added to your namespace by a wildcard from server import *. Yes, you could just use one method or the other, but I think the leading underscore is a stronger sign than the exclusion from __all__; the latter says "you probably won't need this often", the former says "keep out unless you know what you're doing". They both have their place.
2
5
0
I've been reading through the source for the cpython HTTP package for fun and profit, and noticed that in server.py they have the __all__ variable set but also use a leading underscore for the function _quote_html(html). Isn't this redundant? Don't both serve to limit what's imported by from HTTP import *? Why do they do both?
Is there a point to setting __all__ and then using leading underscores anyway?
1.2
0
0
234
28,572,764
2015-02-17T22:55:00.000
5
0
0
0
python,cpython
28,572,885
3
false
1
0
__all__ indeed serves as a limit when doing from HTTP import *; prefixing _ to the name of a function or method is a convention for informing the user that that item should be considered private and thus used at his/her own risk.
2
5
0
I've been reading through the source for the cpython HTTP package for fun and profit, and noticed that in server.py they have the __all__ variable set but also use a leading underscore for the function _quote_html(html). Isn't this redundant? Don't both serve to limit what's imported by from HTTP import *? Why do they do both?
Is there a point to setting __all__ and then using leading underscores anyway?
0.321513
0
0
234
28,572,833
2015-02-17T23:00:00.000
327
0
0
1
python,linux,operating-system
28,583,621
6
true
0
0
It's faster, os.system and subprocess.call create new processes which is unnecessary for something this simple. In fact, os.system and subprocess.call with the shell argument usually create at least two new processes: the first one being the shell, and the second one being the command that you're running (if it's not a shell built-in like test). Some commands are useless in a separate process. For example, if you run os.spawn("cd dir/"), it will change the current working directory of the child process, but not of the Python process. You need to use os.chdir for that. You don't have to worry about special characters interpreted by the shell. os.chmod(path, mode) will work no matter what the filename is, whereas os.spawn("chmod 777 " + path) will fail horribly if the filename is something like ; rm -rf ~. (Note that you can work around this if you use subprocess.call without the shell argument.) You don't have to worry about filenames that begin with a dash. os.chmod("--quiet", mode) will change the permissions of the file named --quiet, but os.spawn("chmod 777 --quiet") will fail, as --quiet is interpreted as an argument. This is true even for subprocess.call(["chmod", "777", "--quiet"]). You have fewer cross-platform and cross-shell concerns, as Python's standard library is supposed to deal with that for you. Does your system have a chmod command? Is it installed? Does it support the parameters that you expect it to support? The os module will try to be as cross-platform as possible and documents when that's not possible. If the command you're running has output that you care about, you need to parse it, which is trickier than it sounds, as you may forget about corner cases (filenames with spaces, tabs and newlines in them), even when you don't care about portability.
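A small demonstration of the filename point, assuming a POSIX system (the filename is contrived on purpose):

```python
import os
import tempfile

# A name a shell would mangle; os.chmod doesn't care.
d = tempfile.mkdtemp()
path = os.path.join(d, "file; rm -rf oops --quiet")
open(path, "w").close()

os.chmod(path, 0o600)                        # no shell, no quoting pitfalls
assert os.stat(path).st_mode & 0o777 == 0o600
# os.system("chmod 600 " + path) would instead execute the embedded `rm`.
```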
3
158
0
I am trying to understand what is the motivation behind using Python's library functions for executing OS-specific tasks such as creating files/directories, changing file attributes, etc. instead of just executing those commands via os.system() or subprocess.call()? For example, why would I want to use os.chmod instead of doing os.system("chmod...")? I understand that it is more "pythonic" to use Python's available library methods as much as possible instead of just executing shell commands directly. But, is there any other motivation behind doing this from a functionality point of view? I am only talking about executing simple one-line shell commands here. When we need more control over the execution of the task, I understand that using subprocess module makes more sense, for example.
Why use Python's os module methods instead of executing shell commands directly?
1.2
0
0
27,738
28,572,833
2015-02-17T23:00:00.000
11
0
0
1
python,linux,operating-system
28,582,402
6
false
0
0
It's far more efficient. The "shell" is just another OS binary which contains a lot of system calls. Why incur the overhead of creating the whole shell process just for that single system call? The situation is even worse when you use os.system for something that's not a shell built-in. You start a shell process which in turn starts an executable which then (two processes away) makes the system call. At least subprocess would have removed the need for a shell intermediary process. This isn't specific to Python: systemd is such an improvement to Linux startup times for the same reason, as it makes the necessary system calls itself instead of spawning a thousand shells.
3
158
0
I am trying to understand what is the motivation behind using Python's library functions for executing OS-specific tasks such as creating files/directories, changing file attributes, etc. instead of just executing those commands via os.system() or subprocess.call()? For example, why would I want to use os.chmod instead of doing os.system("chmod...")? I understand that it is more "pythonic" to use Python's available library methods as much as possible instead of just executing shell commands directly. But, is there any other motivation behind doing this from a functionality point of view? I am only talking about executing simple one-line shell commands here. When we need more control over the execution of the task, I understand that using subprocess module makes more sense, for example.
Why use Python's os module methods instead of executing shell commands directly?
1
0
0
27,738
28,572,833
2015-02-17T23:00:00.000
16
0
0
1
python,linux,operating-system
28,573,425
6
false
0
0
Shell calls are OS-specific, whereas Python's os module functions are not in most cases. They also avoid spawning a subprocess.
3
158
0
I am trying to understand what is the motivation behind using Python's library functions for executing OS-specific tasks such as creating files/directories, changing file attributes, etc. instead of just executing those commands via os.system() or subprocess.call()? For example, why would I want to use os.chmod instead of doing os.system("chmod...")? I understand that it is more "pythonic" to use Python's available library methods as much as possible instead of just executing shell commands directly. But, is there any other motivation behind doing this from a functionality point of view? I am only talking about executing simple one-line shell commands here. When we need more control over the execution of the task, I understand that using subprocess module makes more sense, for example.
Why use Python's os module methods instead of executing shell commands directly?
1
0
0
27,738
28,578,302
2015-02-18T07:33:00.000
2
0
0
0
python,numpy,matrix,vector,matrix-multiplication
55,585,713
3
false
0
0
If you are using numpy: first, make sure you have two vectors. For example, vec1.shape = (10,) and vec2.shape = (26,); in numpy, a row vector and a column vector are the same thing. Second, do res_matrix = vec1.reshape(10, 1) @ vec2.reshape(1, 26). Finally, you should have res_matrix.shape = (10, 26). The numpy documentation says np.matrix() will be deprecated, so it is better not to use it.
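The same idea as a runnable snippet (smaller shapes chosen for brevity):

```python
import numpy as np

vec1 = np.arange(4)              # shape (4,)
vec2 = np.arange(5)              # shape (5,)

# Reshape to an explicit column and row, then matrix-multiply
m = vec1.reshape(4, 1) @ vec2.reshape(1, 5)
assert m.shape == (4, 5)

# np.outer computes the same outer product directly
assert np.array_equal(m, np.outer(vec1, vec2))
```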
1
43
1
In numpy, I have two vectors; let's say vector A is 4X1 and vector B is 1X5. If I do AXB, it should result in a matrix of size 4X5. But I have tried many times, doing many kinds of reshape and transpose; they all either raise an error saying not aligned or return a single value. How can I get the matrix product I want?
How to multiply two vector and get a matrix?
0.132549
0
0
49,959
28,579,257
2015-02-18T08:42:00.000
1
0
1
0
python,mysql,json,database
34,722,784
1
true
0
0
1) filter out the lines you can ignore. 2) work out your table dependency graph and partition rows into multiple files by table. 3) insert all rows for tables without dependencies; optionally, cache these so you don't have to ask the DB what you just told it for lookups. N) use that cache + do any DB lookups required to insert rows that depend on rows inserted in step N-1. Do all this as multiple processes so you can verify each stage. Use bulk inserts and consider disabling FK verification.
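A sketch of the staged approach, using sqlite3 as a stdlib stand-in for MySQL (the schema, sample rows, and the fk cache are illustrative assumptions; with a MySQL driver the pattern is the same, with executemany doing the bulk work):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("CREATE TABLE person (name TEXT, country_id INTEGER)")

rows = [{"name": "a", "country": "US"},
        {"name": "b", "country": "FR"},
        {"name": "c", "country": "US"}]

# Stage 1: insert dependency-free rows once, caching the generated FK ids
fk = {}
for c in sorted({r["country"] for r in rows}):
    cur = conn.execute("INSERT INTO country (name) VALUES (?)", (c,))
    fk[c] = cur.lastrowid

# Stage 2: bulk-insert dependent rows via the cache -- one executemany,
# no per-row SELECT against the database
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(r["name"], fk[r["country"]]) for r in rows])
assert conn.execute("SELECT COUNT(*) FROM person").fetchone()[0] == 3
```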
1
1
0
I have a file that is several G in size and contains a JSON hash on each line. The document itself is not a valid JSON document, however I have no control over the generation of this data so I cannot change it. The JSON needs to be read, lookups need to be performed on certain "fields" in the JSON and then the result of these lookups needs to be inserted into a MySQL database. At the moment, it is taking hours to process this file and I think that it is because I am inserting and committing on each row instead of using executemany, however I'm struggling to work out how best to approach this because I need to do the lookups as part of the process and then insert into multiple tables. The process is effectively as follows: 1) Iterate over the file, reading each line as we go 2) For each line, work out if it needs to be inserted into the database 3) If the line does need to be inserted into the database, look up foreign keys for various JSON fields and replace them with the FK id 4) Insert the "new" line into the database. The issue comes at (3) as there are some cases where the FK id is created by an insert of a subset of the data. In short, I need to do a mass insert of a nested data structure with various parts of the nested data needing to be inserted into different tables whilst maintaining referential integrity. Thanks for all and any help, Matt
Mass insert of data with intermediate lookups using Python and MySQL
1.2
1
0
76
28,579,468
2015-02-18T08:56:00.000
6
0
1
0
python,pycharm
32,341,555
6
false
1
0
I've run into this running PyCharm CE 4.5 on Windows. The workaround I use is to run your program in debug mode; then you get a console tab where you can enter your password when using getpass.
1
33
0
I have found getpass does not work in PyCharm. It just hangs. In fact it seems msvcrt.getch and raw_input also don't work, so perhaps the issue is not with getpass, but instead with the 'i' bit of PyCharm's stdio handling. The problem is, I can't put my personal password into code as it would end up in SVN, which would be visible to other people. So I use getpass to get my password each time. On searching, all I can find is that "Pycharm does some hacking to get Django working with getpass" but no hint as to what that hack is. I've looked at getpass and it uses msvcrt on Windows (so this problem might only be on Windows). My question is: Is there a workaround for this issue?
How to use the Python getpass.getpass in PyCharm
1
0
0
22,724