Dataset columns (name: dtype, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
33,867,992
2015-11-23T09:52:00.000
0
0
1
1
python,azure,pip,python-wheel
33,873,874
3
false
0
0
Have you tried uninstalling and reinstalling? I tried pip wheel azure-mgmt and that installed 0.20.1 for me. The directory for mine is /Users/me/wheelhouse, so you could look there. I found that in the initial log of the build.
1
0
0
I try to run pip wheel azure-mgmt=0.20.1, but whenever I run it I get following pip wheel error, which is very clear: error: [Error 183] Cannot create a file when that file already exists: 'build\\bdist.win32\\wheel\\azure_mgmt-0.20.0.data\\..' So my question is where or how I can find that path? I want to delete that existing file. I have been searching my local computer, searched for default path in Google, but still didn't find any solution. Also is it possible to tell pip wheel to output full log? As you can see that full error path is not displayed. I'm using virtualenv.
Pip Wheel Package Installation Fail
0
0
0
1,089
33,868,684
2015-11-23T10:26:00.000
2
1
0
0
python,drawing,reportlab,pycairo,wand
33,868,968
1
true
0
0
Pygame has some decent drawing capabilities; I'd suggest looking at that and playing with the pygame.draw module. PyCairo is more featureful however, and seems to be the more popular choice. Python Imaging Library (PIL) also might be worth looking into.
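A minimal Pillow (PIL) sketch of the suggestion above: draw a Bezier curve and a semi-transparent filled shape on an existing JPEG. File names and curve points are placeholders, not from the question.

    from PIL import Image, ImageDraw

    def quad_bezier(p0, p1, p2, steps=50):
        # sample a quadratic Bezier curve into a list of points
        pts = []
        for i in range(steps + 1):
            t = i / float(steps)
            x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
            y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
            pts.append((x, y))
        return pts

    base = Image.open("photo.jpg").convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    curve = quad_bezier((20, 200), (150, 20), (280, 200))
    draw.line(curve, fill=(255, 0, 0, 255), width=3)      # the curve itself
    draw.polygon(curve, fill=(255, 0, 0, 80))             # filled shape with partial opacity

    Image.alpha_composite(base, overlay).convert("RGB").save("annotated.jpg")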
1
4
0
I'm working on a Python project and I'm looking for a nice module to do the following : Draw some bezier curves, on an existing JPEG image, from a given list of points. Then use this image to present it in a PDF. I have to be able to draw shapes, fill them, and set the opacity of the fill color. I also have to be able to draw images (from existing files) inside the JPEG. A module that allows me to use drawing paths would be great. I started using Wand, but I'm not satisfied with the results (quality loss on the areas of the image containing the drawn curves, and filling a path doesn't work as I expected (It draws horizontal lines but doesn't entirely fill the shape), or maybe I didn't use it correctly ?). I think I'm going to use ReportLab for the PDF part. ReportLab can be used to draw bezier curves, but I would prefer generating the images with the curves before including them inside the PDF. There are a lot of modules for drawing using Python out there, but it's not easy to determine which module is the best for what I want. I just started looking into pyCairo, but if you know of any other module that can achieve what I want, please feel free to share.
Good python module to draw on images
1.2
0
0
1,376
33,868,806
2015-11-23T10:31:00.000
2
0
1
0
python,pycharm,virtualenv
58,756,423
5
false
0
0
I did what was specified by comiventor in the accepted answer, but also had to do what Brian W commented: mark the root folder as a "Sources Root". This is done as follows: right-click on your root directory, look at the bottom for the option "Mark Directory as", and choose "Sources Root" (the folder icon color should change from gray to blue). That's all!
2
14
0
I am trying to get PyCharm running using an existing virtualenv. I have pointed my PyCharm project to the Python interpreter in the existing virtualenv ~/.virtualenvs/myproj/ via the following path: File -> Default Settings -> Default Project -> Python Interpreter. The project runs fine, but the editor is still glowing RED on packages installed as part of the virtualenv. Any idea what I am missing?
Configuring PyCharm with existing virtualenv
0.07983
0
0
24,483
33,868,806
2015-11-23T10:31:00.000
11
0
1
0
python,pycharm,virtualenv
33,870,399
5
true
0
0
To run PyCharm properly for your project, you need to set both the Python interpreter and the project structure correctly. I had set the Python interpreter correctly but missed the project structure. Go to PyCharm -> Preferences -> your_project -> Project Structure and add the right content root. This has nothing to do with your working directory, which you can set separately in your debug/run configuration; also, don't forget to add the environment variables you need, and you should be good to go. For IntelliJ IDEA 2016.2, the following is the path for adding the site-packages installed in the virtualenv: File -> Project Structure -> Sources -> use the + button and add as Sources. If the hidden directory is not visible, you may either change your view settings or copy-paste the path to site-packages in the virtualenv.
2
14
0
I am trying to get PyCharm running using an existing virtualenv. I have pointed my PyCharm project to the Python interpreter in the existing virtualenv ~/.virtualenvs/myproj/ via the following path: File -> Default Settings -> Default Project -> Python Interpreter. The project runs fine, but the editor is still glowing RED on packages installed as part of the virtualenv. Any idea what I am missing?
Configuring PyCharm with existing virtualenv
1.2
0
0
24,483
33,871,960
2015-11-23T13:11:00.000
2
0
1
0
python
33,872,147
2
false
0
0
No, this is not possible. The result of the expression evaluation is passed to the function rather than the expression itself. You will have to wrap your statement in a function or a lambda expression.
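A minimal sketch of the wrapping approach described above; the function and names here are illustrative, not from the question.

    def run_later(thunk):
        print("doing other work first")
        return thunk()                      # the wrapped expression is only evaluated here

    result = run_later(lambda: 2 ** 10)     # 2 ** 10 is not computed until thunk() is called
    print(result)                           # 1024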
1
3
0
I want to pass a statement to a python function. This statement should only be executed after I do some other stuff in the function. By default, python evaluates the statement and then passes its result as the parameter. Is there any way to change this behavior? The only way I have found is to wrap my statement in a function and then pass the function.
Can python handle unevaluated expression arguments?
0.197375
0
0
770
33,874,877
2015-11-23T15:38:00.000
0
0
0
0
python,quickfix
38,184,550
1
false
1
0
You could get the source for QuickFIX and derive from FileLog. Then override: void onIncoming( const std::string& value ). Change this to check for size/date/something else and roll the log based on your criteria.
1
0
0
I am trying to deal with the issue that QuickFix logs grow indefinitely by scheduling a cron-like job to stop the initiator, copy the log file (which looks like 'FIX.4.2-XXX-YYY.messages.current.log') to a different location, then start the initiator again. This works fine, except that QuickFix does not automatically create a new messages.current.log file to save to. If I create the file manually, QF does not save to it. QF only behaves properly when it is shut down and restarted, in other words, when the Session is destroyed and then created again. Rather than shutting down my entire application and restarting it (which I am not sure I can do automatically very easily) is there some way of destroying and creating the Session objects from within a running QF instance? I am using the Python bindings but should be able to figure out QF/J instructions or those from other languages.
How to destroy and create a Session dynamically in QuickFix?
0
0
0
336
33,876,005
2015-11-23T16:35:00.000
1
0
0
0
python,django
33,891,222
1
true
1
0
No, you can't. The SITE_ID is cached in various places, so you can't change it at runtime. You need a separate process for each site, but you can't bind more than one process to a single port. Neither can the development server act as a reverse proxy for separate processes. Running each site on a different port is the closest you can get. This is what happens with any HTTP-based app server, but in a production environment you use a reverse proxy to forward all requests from port 80 to the appropriate port for that site.
1
0
0
I'm starting with Django and want to try out the features of django.contrib.sites. I have added some aliases for 127.0.0.1 in my /etc/hosts, and can run different sites by providing a DJANGO_SETTINGS_MODULE when running manage.py runserver. What I haven't managed to do is to have both sites available at once, on the same port. I have seen solutions that use WSGI and Apache or similar, but none using the development server. Can the Django Development server serve multiple sites at once, switching by domain name, or is the nearest I'll get to start multiple servers on different ports?
Can Django's development server route multiple sites? If yes, how?
1.2
0
0
49
33,876,657
2015-11-23T17:09:00.000
29
0
1
0
python
51,081,628
8
false
0
0
I downloaded the embeddable zip file from the site, extracted it to the folder of my choice, and then added that folder to the Windows PATH variable (using setx). That worked for me. However, this installs only Python and not other packages like pip. Later I found a better and simpler way with the Python 3.7.0 version for Windows: Download the Windows installer exe. Run the exe. A screen will be shown to choose the installation options. Uncheck the "Install for all users" option. Go for the custom installation. On the next screen, specify a directory path to which your user has full access on the computer. Uncheck the "Create shortcuts for installed applications" option. Make sure the "Add Python to environment variables" option is unchecked. Complete the installation. Add the installation and Scripts folder paths to PATH using setx. This installed all the default components of Python.
2
49
0
The "NO ADMIN PRIVILEGES" part is key. I need to install Python but I do not have access to it in order to run the installation in a proper way. I'm also behind a firewall, so the "pip" option is quite limited. Could you help me figure this out?
How to install Python (any version) in Windows when you've no admin privileges?
1
0
0
121,588
33,876,657
2015-11-23T17:09:00.000
3
0
1
0
python
59,508,269
8
false
0
0
Uncheck 'Install for all users' when the installation starts. The rest will be taken care of by Python 3. I'm using Python 3.7.6 with this method. This installation method automatically updates the current user's PATH for Python, but the application name will be py instead of python. The user has to handle the environment variables themselves if they want to use the pip or python commands.
2
49
0
The "NO ADMIN PRIVILEGES" part is key. I need to install Python but I do not have access to it in order to run the installation in a proper way. I'm also behind a firewall, so the "pip" option is quite limited. Could you help me figure this out?
How to install Python (any version) in Windows when you've no admin privileges?
0.07486
0
0
121,588
33,877,549
2015-11-23T18:02:00.000
1
0
0
1
python,gdb
33,877,733
1
false
0
0
Run gdb with the --batch command line option. This will disable all confirmation requests. You can also run the command "set confirm off"
1
0
0
I'm running GDB with a bash (.sh) script that needs sudo/superuser access and it works well, but there is a problem: every time I run GDB with that script, before GDB loads the executable it asks about running Python as superuser. I want to remove this requirement/question. I want to remove this: WARNING: Phyton has been executed as super user! It is recommended to run as a normal user. Continue? (y/N) I'm using GDB 7.9 on Ubuntu Server 12.x, which I compiled myself. PS: On another Ubuntu server (version 15), GDB (version 7.9) does not ask this question using the same script and access.
GDB and Python: How to disable Y/N requirement about running python as root
0.197375
0
0
723
33,878,291
2015-11-23T18:45:00.000
8
0
0
0
python,sql-server,pyodbc,pypyodbc
53,201,310
3
false
0
0
The "{SQL Server}" doesn't work in my case. "{SQL Server}" works perfectly well if the database is on my local machine. However, if I tried to connect to the remote server, always the error message below would return: pypyodbc.DatabaseError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SSL Security error') For those who are still struggling in the VARCHAR(MAX) truncation, a brilliant workaround my colleague came out with is to CAST the VARCHAR(MAX) to TEXT type. Let's say we have a column called note and its data type is VARCHAR(MAX), instead of using SELECT note FROM notebook, writing in SELECT CAST(note AS TEXT) FROM notebook. Hope it helps!
1
8
0
I have a Python program that connects to an MSSQL database using an ODBC connection. The Python library I'm using is pypyodbc. Here is my setup: Windows 8.1 x64 SQL Server 2014 x64 Python 2.7.9150 PyPyODBC 1.3.3 ODBC Driver: SQL Server Native Client 11.0 The problem I'm having is that when I query a table with a varchar(max) column, the content is being truncated. I'm new to pypyodbc and I've been searching around like crazy and can't find anything on how to prevent this from happening in pypyodbc or even pyodbc. At least not with the search terms I've been using and I don't know what other phrases to try. I even tried adding SET TEXTSIZE 2147483647; to my SQL query, but the data is still being truncated. How do I prevent this from happening? Or can you point me in the right direction, please? UPDATE: So, I tried performing a cast in my SQL query. When I do CAST(my_column as VARCHAR(MAX)) it truncates at the same position. However, if I do CAST(my_column as VARCHAR(8000)) it gives me a larger set of the text, but it's still truncating some of the contents. If I try to do anything larger than 8000 I get an error saying that 8000 is the largest I can use. Anyone know what might be going on here? It seem strange that using MAX won't work.
How to get entire VARCHAR(MAX) column with Python pypyodbc
1
1
0
5,189
33,878,648
2015-11-23T19:05:00.000
0
1
0
1
python,html,bash,jinja2
33,879,045
1
false
0
0
No. That is not possible. Nor desirable, due to the security implications.
1
0
0
Is it possible to create a hyperlink/button that calls a bash/Python script on the user's local machine? I did search on the topic, but there is a lot of discussion about opening a port to a server (even a local port); I don't want to open a port but rather execute everything locally. Is this even possible? Thanks
create a hyperlink that executes a bash/python on the user machine
0
0
0
41
33,881,172
2015-11-23T21:37:00.000
0
1
0
0
android,python
33,881,298
1
false
0
0
Secrets of mobile hackers: what does Google Chrome use? Google uses three approaches; the third approach is your option: find a site that does (or allows) triangulation using cell-tower maps, or serve a mobile web page that gets the location via Google services and then hook that into your Python script on the server side.
1
0
0
I'm trying to do some home/life automation scripting and I'd like to be able to get the location of my Android phone through some API. This would be useful to do things such as turn on my home security camera, or to route my home calls to my phone if I'm away. This would preferably be a RESTful API, or one with good Python interop. However, I'm not averse to using any tool to get the job done. I considered checking my router to see if my phone was connected, which will work for some things, but it would hinder me in implementing other things. I know I could probably write an Android app that would phone home to do this, but I wanted to see if there were any alternatives first. My Google-Fu came up short on this one (if it exists). Thanks in advance!
Android Phone Location API
0
0
1
58
33,885,634
2015-11-24T04:44:00.000
2
0
1
0
python,string,unicode
33,885,811
3
false
0
0
A combining overline is U+0305 (written \u0305 in a Python string literal) and it works quite well with "IV". What you want is, for example: u'I\u0305V\u0305' (gives I̅V̅)
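A small helper building on the answer above; the function name is illustrative.

    def overline(numeral):
        # append the combining overline U+0305 after every character
        return u"".join(ch + u"\u0305" for ch in numeral)

    print(overline(u"IV"))   # each letter is rendered with a line over it, i.e. 4000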
1
2
0
Hello my fellow coders! I'm an absolute beginner to Python and coding in general. Right now, I'm writing a code that converts regular arabic numerals to roman. For numbers larger than 3 999, the romans usually wrote a line over a letter to make it thousand times larger. For example, IV with a line over it represented 4 000. How is this possible in Python? I have understood that you can create an "overscore" by writing "\u203E". How can I make this appear over a letter instead of beside it? Regards
Make overlines in Python
0.132549
0
0
5,244
33,885,975
2015-11-24T05:19:00.000
3
0
1
0
python,python-2.7,python-3.x
33,886,326
2
true
0
0
You can't. Any module you import in a Python 3 codebase needs to be Python 3 compatible. If you can't make the upstream project do it for you, you'll have to do it yourself. As mentioned in the comments, the 2to3 utility should help you with that.
1
4
0
How to use python 2 packages in python 3 project? I have a Python 3 project, but I need some packages which are written in Python 2. I do not want to rewrite these python-2 packages, so forking / 2to3 is not an option.
How to use Python 2 packages in Python 3 project?
1.2
0
0
6,150
33,887,301
2015-11-24T07:00:00.000
1
0
1
0
python,multithreading,python-multithreading
33,888,154
1
false
0
0
Jython doesn't have a GIL, so it is able to use native threads effectively. The GIL remains in CPython because the overhead of fine-grained locking usually defeats the advantage of multiple threads. C extensions such as numpy and numexpr can also make use of multiple threads.
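A small demonstration of the point above: under CPython's GIL, two threads doing pure-Python CPU-bound work take about as long as running the work sequentially (timings vary by machine).

    import threading, time

    def burn(n):
        while n:          # pure-Python CPU-bound loop; holds the GIL while running
            n -= 1

    N = 10000000

    start = time.time()
    burn(N); burn(N)
    print("sequential:", time.time() - start)

    start = time.time()
    threads = [threading.Thread(target=burn, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads:", time.time() - start)   # roughly the same, or slower, on CPython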
1
0
0
I read somewhere on the internet that a single Python process cannot use more than one native thread simultaneously. Why?
Can single python process use two or more native threads simultaneously?
0.197375
0
0
50
33,889,476
2015-11-24T09:08:00.000
0
0
1
1
python,python-2.7
33,889,708
3
false
0
0
I assume you are running the script with the command python file_name.py. You can prevent the cmd window from closing by getting a character from the user: use the raw_input() function to get a character (which can simply be Enter).
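A minimal sketch of that suggestion for the end of the script (raw_input is Python 2, as in the question; in Python 3 it would be input).

    # last lines of your script
    print("done")
    raw_input("Press Enter to close this window...")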
2
1
0
I have installed Python and written a program in Notepad++. Now when I try to type the Python file name in the Run window, all I see is a black window opening for a second and then closing. I can't run the file at all; how can I run this file? I should also mention that I tried being in the same directory as the Python file, but with no success.
Cannot run a Python file from cmd line
0
0
0
162
33,889,476
2015-11-24T09:08:00.000
0
0
1
1
python,python-2.7
33,890,132
3
false
0
0
It sounds like you are entering your script name directly into the Windows Run prompt (possibly Windows XP?). This will launch Python in a black command prompt window and run your script. As soon as the script finishes, the command prompt window will automatically close. You have a number of alternatives: First manually start a command prompt by just typing cmd in the Run window. From here you can change to the directory you want and run your Python script. Create a Windows shortcut on the desktop. Right click on the desktop and select New > Shortcut. Here you can enter your script name as python -i script.py, and a name for the shortcut. After finishing, right click on your new shortcut on the desktop and select Properties, you can now specify the folder you want to run the script from. When the script completes, the Python shell will remain open until you exit it. As you are using Notepad++, you could consider installing the Notepad++ NppExec plugin which would let you run your script inside Notepad++. The output would then be displayed in a console output window inside Notepad++. As mentioned, you can add something to your script to stop it completing (and automatically closing the window), adding the line raw_input() to the last line in your script will cause the Window to stay open until Enter is pressed.
2
1
0
I have installed Python and written a program in Notepad++. Now when I try to type the Python file name in the Run window, all I see is a black window opening for a second and then closing. I can't run the file at all; how can I run this file? I should also mention that I tried being in the same directory as the Python file, but with no success.
Cannot run a Python file from cmd line
0
0
0
162
33,890,020
2015-11-24T09:36:00.000
1
0
0
0
python,django,django-admin
33,892,490
2
false
1
0
I was able to get around this by overriding the save() method for my model in models.py itself.
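For the delete side asked about in the question, a minimal sketch of overriding a model method directly; the model and field names are hypothetical. Note that bulk queryset deletes bypass a model's delete() method.

    # models.py
    from django.db import models

    class A(models.Model):
        name = models.CharField(max_length=100)

    class B(models.Model):
        parent = models.ForeignKey(A, on_delete=models.CASCADE)

        def delete(self, *args, **kwargs):
            # perform the extra backend tasks here, before the row is removed
            print("cleaning up resources for B", self.pk)
            super(B, self).delete(*args, **kwargs)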
1
5
0
I have a model 'B' that is linked to another model 'A' as an inline model, for use in my admin site. Now, whenever I delete an object of model 'B' associated with the corresponding object of model 'A' (via the admin site), I want to perform some more tasks at the backend. I was able to override the save function using a formset and then overriding the save_existing and save_new methods. How do I go about overriding the delete method for the inline admin model?
How do I override delete method for an inline model within django?
0.099668
0
0
1,216
33,896,776
2015-11-24T14:52:00.000
0
0
1
0
python,pycharm,python-3.4
33,913,378
1
true
0
0
PyCharm actually creates 2 virtualenvs-interpreters. One named 'ftv' and one 'ftv(1)'. The second one worked even though on disk I only have 1 virtualenv, named 'ftv'.
1
0
0
Basically PyCharm is underlining everything in red (even the os module), but everything runs fine. How can I fix this? I am using the latest professional version (5.0.1) on Debian 8 with OpenJDK 1.7. Thanks
Pycharm intellisense not working
1.2
0
0
647
33,905,229
2015-11-24T22:38:00.000
0
1
1
0
linux,python-3.x,raspberry-pi,pip
33,905,290
1
false
0
0
Try pip3.4 install tweepy. pip provides version-suffixed commands (pip{version}) so you can install packages for a specific Python version.
1
0
0
I am trying to install tweepy for python 3.4 on my raspberry pi, but when I run pip install tweepy, tweepy installs, but only for python 2.7. What is the command/procedure for installing tweepy for python 3.4? Any help will be appreciated. Thanks.
How to install python34 module tweepy onto raspberry pi with pip
0
0
0
360
33,906,206
2015-11-25T00:01:00.000
2
0
0
0
python,kdb,q-lang
33,919,611
1
true
0
0
If you're using one of the two regular Python APIs for getting data from KDB, I think both implement a Flip class from which you can get the column names (usually there is an x which is an array of strings). Otherwise, cols tableName gives you a list of symbols (which would be deserialized as an array of strings in Python).
1
1
0
How can I get just the column headers of a table in KDB? Is there a special query for this? I am asking because when I pull the data from the table into Python the column headers are lost. Thanks!
Get Column Headers in KDB/q Query
1.2
0
0
3,071
33,909,039
2015-11-25T05:26:00.000
0
0
1
0
python,mysql,python-2.7,mysql-python
34,680,890
1
false
0
0
Are you using MyISAM or InnoDB? I suggest using InnoDB, since it has better table/record-locking flexibility for multiple simultaneous updates.
1
0
0
I have two different Python processes (running from two separate terminals) running at the same time, both accessing and updating MySQL. It crashes when they use the same table at the same time. Any suggestions on how to fix it?
python accessing and updating mysql from simultaneously running processes
0
1
0
41
33,910,358
2015-11-25T07:03:00.000
1
0
1
0
python,datanitro
33,919,992
1
true
0
0
There's no way to share the dataframe (each DataNitro script runs in its own process). You can read the frame each time, or if reading is slow, you can have the first script store it somewhere the other scripts can access it (e.g. as a csv or by pickling it).
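A minimal sketch of the "store it somewhere the other scripts can access it" suggestion above, assuming pandas and using pickle as the shared format; file names are placeholders.

    # script1.py: runs once when the workbook opens
    import pandas as pd

    df = pd.read_csv("source_data.csv")     # the slow read you only want to do once
    df.to_pickle("shared_df.pkl")           # persist it where the other scripts can reach it

    # script2.py: any later button click
    import pandas as pd

    df = pd.read_pickle("shared_df.pkl")    # usually much faster than re-reading the source
    print(df.head())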
1
1
1
Is there a way to have a variable (resulting from one script) accessible to other scripts while Excel is running? I have tried from script1 import df but it runs script1 again to produce df. I have a script that runs when I first open the workbook and it reads a dataframe and I need that dataframe for other scripts (or for other button clicks). Is there a way to store it in memory or should I read it every time I need it?
datanitro - pass variables from one script to others
1.2
0
0
90
33,910,489
2015-11-25T07:11:00.000
7
1
0
1
python,simplehttpserver
33,910,508
3
true
0
0
CTRL + C is usually the right way to kill the process and leave your terminal open.
2
2
0
I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001. I'd like to stop it without having to force quit Terminal. What are the keystrokes required to stop it?
How do you stop a python SimpleHTTPServer in Terminal?
1.2
0
0
10,625
33,910,489
2015-11-25T07:11:00.000
3
1
0
1
python,simplehttpserver
33,910,517
3
false
0
0
Use CTRL+C. This is a filler text because answer must be 30 characters.
2
2
0
I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001. I'd like to stop it without having to force quit Terminal. What are the keystrokes required to stop it?
How do you stop a python SimpleHTTPServer in Terminal?
0.197375
0
0
10,625
33,913,318
2015-11-25T09:53:00.000
0
0
1
0
python,python-2.7,python-3.x,pycharm
33,914,580
1
false
1
0
PyCharm 64-bit does not work with Java 8 and above; the maximum size allowed is 350. Use Java 7 instead.
1
1
0
PyCharm 5 problem: Unrecognized VM option 'MaxPermSize=350m'. Java version: java version "1.9.0-ea" Java(TM) SE Runtime Environment (build 1.9.0-ea-b91) Java HotSpot(TM) 64-Bit Server VM (build 1.9.0-ea-b91, mixed mode). Following a solution found through Google, I commented out the line in the pycharm64.vmoptions file, but unfortunately PyCharm is not opening. How can I get it working?
Unrecognized VM option 'MaxPermSize=350m'
0
0
0
1,193
33,918,043
2015-11-25T13:42:00.000
5
0
1
0
python
33,918,217
3
true
0
0
Casting a float to an integer truncates the value, so if you have 3.999998 and you cast it to an integer, you get 3. The way to prevent this is to round the result: int(round(3.99998)) gives 4, since the round function always returns a precisely integral value.
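A quick illustration of the difference described above:

    for value in (12.0 / 3, 3.999998, 4.000001):
        print(value, int(value), int(round(value)))
    # int() truncates toward zero (3.999998 -> 3), while int(round()) gives the closest integer (-> 4)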
2
10
1
I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3)) it may end up as 3 instead of 4 because the floating point representation of 4 could be 3.9999999593519561 (it's not, just an example). Will this ever happen and can I make sure it doesn't? (I am asking because while reshaping a numpy array, I got a warning saying that the shape must be integers, not floats.)
Will casting an "integer" float to int always return the closest integer?
1.2
0
0
3,905
33,918,043
2015-11-25T13:42:00.000
1
0
1
0
python
33,918,503
3
false
0
0
I ended up using integer division (a//b) since I divided integers. Wouldn't have worked if I divided e.g. 3.5/0.5=7 though.
2
10
1
I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3)) it may end up as 3 instead of 4 because the floating point representation of 4 could be 3.9999999593519561 (it's not, just an example). Will this ever happen and can I make sure it doesn't? (I am asking because while reshaping a numpy array, I got a warning saying that the shape must be integers, not floats.)
Will casting an "integer" float to int always return the closest integer?
0.066568
0
0
3,905
33,925,020
2015-11-25T19:35:00.000
0
0
1
0
python
33,925,736
1
false
0
0
I don't think there is an established paradigm, but I can suggest a pattern that has worked for me in several cases, not only in Python but in Ruby as well. Don't store the flags passed directly on the command line; parse them once and put the results into object properties (you can use a namedtuple, but it's not always the best way; I prefer my own class). Then, instead of passing all your flags into new processes, have them wait for a configuration object in a queue and send it as the first thing to every started process, to hold as its configuration. Such an object can be shared in different ways, but that depends on your scenario. Another option is to pickle such an object and pass it as a file to be loaded by every started process, or to encode the pickled object (base64, for example) and pass that as a single command-line argument to the new process. It's hard to describe a single best pattern for your case without knowing exactly how your code is shaped, how much is shared, etc.
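A minimal sketch of the "parse once, pass a config object" pattern described above, using a namedtuple and multiprocessing; the field names are illustrative.

    from collections import namedtuple
    from multiprocessing import Process

    Config = namedtuple("Config", ["data_path", "verbose", "retries"])

    def worker(name, cfg):
        # each process reads only the fields it cares about, by name
        print(name, "reading from", cfg.data_path, "verbose:", cfg.verbose)

    if __name__ == "__main__":
        cfg = Config(data_path="/tmp/data", verbose=True, retries=3)   # would come from argparse
        procs = [Process(target=worker, args=("proc%d" % i, cfg)) for i in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()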
1
1
0
I have a somewhat large Python program. Running it spawns multiple Processes (multiprocessing.Process) that communicate over various Events and Queues. I also have a growing number of command line flags (handled with argparse) that change various data paths or execution of the Processes. Currently I put all the flags in a list and pass the list to each Process when I create them. Not every Process uses every flag, but this approach means I just have to update the affected Processes when I add or remove a flag. However, this gets complicated as I have to remember where in each list each flag is and the different default values. I've considered making a named tuple to handle these flags or just passing the ArgumentParser. Is there some established paradigm or Pythonic way to handle this sort of situation?
How to pass multiprocessing flags in Python
0
0
0
1,013
33,925,566
2015-11-25T20:10:00.000
4
0
1
0
python,python-3.x,package,pip
68,619,789
7
false
0
0
I had to start the 'XLaunch' display server and it worked. According to @pscheit, pip was waiting for a connection to the X server, and launching one fixed it.
4
25
0
I currently have Python 3.5 on my Windows machine. I'm trying to install a Python package using the command "pip install" but as soon as I hit enter nothing happens. The action hangs for such a long time and when I try to exit the command line, it freezes. How do I get pip install to work?
Pip Install hangs
0.113791
0
0
57,471
33,925,566
2015-11-25T20:10:00.000
1
0
1
0
python,python-3.x,package,pip
69,774,091
7
false
0
0
pip install something was hanging for me when I ssh'd into a Linux machine and ran pip install from that shell. Using -v as suggested in the answers above showed that it was hanging on this step: import 'keyring.backends.macOS' # <_frozen_importlib_external.SourceFileLoader object at 0x7f3d15404d90> This popped up a keyring authentication window on the Linux machine's desktop, waiting for my password. Typing my password allowed it to progress. I have no idea why a macOS package was being imported on a Linux machine.
4
25
0
I currently have Python 3.5 on my Windows machine. I'm trying to install a Python package using the command "pip install" but as soon as I hit enter nothing happens. The action hangs for such a long time and when I try to exit the command line, it freezes. How do I get pip install to work?
Pip Install hangs
0.028564
0
0
57,471
33,925,566
2015-11-25T20:10:00.000
-2
0
1
0
python,python-3.x,package,pip
69,822,927
7
false
0
0
Mellester's solution worked for me. I had trouble using pip list; the output would just hang until I started an X server.
4
25
0
I currently have Python 3.5 on my Windows machine. I'm trying to install a Python package using the command "pip install" but as soon as I hit enter nothing happens. The action hangs for such a long time and when I try to exit the command line, it freezes. How do I get pip install to work?
Pip Install hangs
-0.057081
0
0
57,471
33,925,566
2015-11-25T20:10:00.000
15
0
1
0
python,python-3.x,package,pip
67,744,064
7
false
0
0
If you're using Ubuntu via WSL2 on Windows, it might not work outside a virtualenv. python3 -v -m pip install ... showed me that it was hanging on some OS X keychain import... Hopefully this helps someone else.
4
25
0
I currently have Python 3.5 on my Windows machine. I'm trying to install a Python package using the command "pip install" but as soon as I hit enter nothing happens. The action hangs for such a long time and when I try to exit the command line, it freezes. How do I get pip install to work?
Pip Install hangs
1
0
0
57,471
33,926,704
2015-11-25T21:28:00.000
0
0
0
0
python,numpy,matrix,matplotlib,lidar
33,926,859
2
false
0
0
I am aware that I am not answering half of your questions, but this is how I would do it: create a 2D array of the desired resolution (the "leftmost" cells correspond to the smallest values of x, and so forth); fill the array with the elevation value of the closest match in terms of x and y values; then smooth the result.
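A minimal numpy sketch of gridding the points by binning and averaging Z per cell; random points stand in for the LIDAR data, and the resolution values are placeholders.

    import numpy as np

    points = np.random.rand(10000, 3) * [100.0, 80.0, 500.0]   # stand-in for [x, y, z] LIDAR points
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    nx, ny = 200, 160                                           # desired resolution
    z_sum, xedges, yedges = np.histogram2d(x, y, bins=(nx, ny), weights=z)
    counts, _, _ = np.histogram2d(x, y, bins=(xedges, yedges))
    with np.errstate(divide="ignore", invalid="ignore"):
        z_mean = z_sum / counts             # mean elevation per cell; empty cells become NaN

    # z_mean can then be shown as a heatmap, e.g. with matplotlib's imshow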
1
3
1
I have a set of 3D coordinate points: [lat, long, elevation] ([X, Y, Z]), derived from LIDAR data. The points are not sorted and the step size between the points is more or less random. My goal is to build a function that converts this set of points to a 2D numpy matrix with a constant number of pixels, where each (X,Y) cell holds the Z value, and then plot it as an elevation heatmap. Scales must remain realistic; X and Y should have the same step size. The matrix doesn't have to capture the exact elevation picture; it will obviously need some kind of resolution reduction in order to have a constant number of pixels. The solution I was thinking of is to build a bucket for each pixel, iterate over the points, and put each one in a bucket according to its (X,Y) values. Finally, create a matrix where each cell holds the mean of the Z values in the corresponding bucket. Since I don't have lots of experience in this field, I would love to hear some tips, especially if there are better ways to address this task. Is there a numpy function for converting my set of points to the desired matrix? (maybe meshgrid with steps of a constant value?) If I build a very sparse matrix, where the step size is min[min{Xi,Xj}, min{Yk,Yl}] for all i,j,k,l, is there a way to "reduce" the resolution and convert it to a matrix of the required size? Thanks!
Converting coordinates vector to numpy 2D matrix
0
0
0
2,868
33,928,705
2015-11-26T00:21:00.000
0
0
0
0
python,google-bigquery
33,930,376
2
true
0
0
The fix is now live. You can run gcloud components update to pick up the latest client.
2
0
0
I've just run gcloud components update and now cannot create a view or table via the bq Python command-line tool. It returns an error Unexpected exception in mk operation: '_Make' object has no attribute 'partitioning_specification'. How to deal with this?
Error creating a view or table via CLI : "Unexpected exception in mk operation: '_Make' object has no attribute 'partitioning_specification'"
1.2
0
0
54
33,928,705
2015-11-26T00:21:00.000
0
0
0
0
python,google-bigquery
33,928,706
2
false
0
0
There was a bug in the "bq" tool that was pushed in the most recent build. We are working on adding some additional testing to catch this type of error. In the meantime, please do a gcloud components restore to restore your previous version of the Google Cloud SDK if you encounter this error and need to create a view or table.
2
0
0
I've just run gcloud components update and now cannot create a view or table via the bq Python command-line tool. It returns an error Unexpected exception in mk operation: '_Make' object has no attribute 'partitioning_specification'. How to deal with this?
Error creating a view or table via CLI : "Unexpected exception in mk operation: '_Make' object has no attribute 'partitioning_specification'"
0
0
0
54
33,929,293
2015-11-26T01:38:00.000
0
0
1
0
python,c,memory-leaks
33,929,319
1
false
0
0
Python's integers don't have a size limit; instead they grow until they use up your entire memory.
1
1
0
I usually get memory leaks in C when I want to get an int from a user, but in Python, I can give a really large number and not get a null or segmentation fault or memory leak. Why is this?
Why do I have a "segmentation fault" in C but no memory leak in Python?
0
0
0
75
33,930,490
2015-11-26T04:15:00.000
1
1
0
0
python,c++,numpy,module,shedskin
35,816,211
1
false
0
0
I have found an answer. From the Shed Skin documentation (Library Limitations): Programs to be compiled with Shed Skin cannot freely use the Python standard library. Only about 17 common modules are currently supported. Note that Shed Skin can be used to build an extension module, so the main program can use arbitrary modules (and of course all Python features!). See Compiling an Extension Module. In general, programs can only import functionality that is defined in the Shed Skin lib/ directory. The following modules are largely supported at the moment: bisect, collections, ConfigParser, copy, datetime, fnmatch, getopt, glob, math, os (some functionality missing under Windows), os.path, random, re, socket, string, sys, time.
1
0
0
I'm using Shed Skin to convert a Python file (that depends on numpy) to a C++ file. When executing it through the command prompt I get the error. Any ideas on what might be the problem?
Shedskin can't locate module numpy
0.197375
0
0
611
33,933,195
2015-11-26T07:54:00.000
3
0
0
0
python,treeview,action,odoo-8,function-call
34,075,307
2
true
1
0
There is one way to make it happen: just add that functional field to the tree view and make it invisible, so it will also be called in the tree view.
1
1
0
Is there a way to call a Python function (server action) when a view is being opened? So when I click a menu item, not only does a tree view open (window action) but a Python function also executes (server action). Maybe something like an onload() function? Or a server action from within the tree view? Thanks
Odoo 8 function call on opening (tree) view
1.2
0
0
2,610
33,935,183
2015-11-26T09:43:00.000
2
0
1
0
python,numpy,scipy,equation
33,935,397
1
false
0
0
Just use the power of mathematics! Split this equation into two on different domains and solve each one: solve x^2 = 10 for x > 15 and 15x = 10 for x <= 15.
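If a numeric root-finder is acceptable, SciPy's brentq can handle max() terms directly as long as the function is continuous and changes sign on the bracket. A minimal sketch with an illustrative equation of the same shape (x * max(x, 15) = 100, not the exact equation from the question), chosen so that a sign change exists on the bracket:

    from scipy.optimize import brentq

    f = lambda x: x * max(x, 15.0) - 100.0   # continuous, piecewise-defined via max()
    root = brentq(f, 0.1, 50.0)              # brentq needs f(a) and f(b) to have opposite signs
    print(root)                              # about 6.667, i.e. the 15x = 100 branch where x <= 15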
1
0
0
How can I solve an equation like x * max(x,15) + max(x, 45) / x = 10 with Python libraries? I am forced not to use the SymPy library, because symbolic calculation is very slow. max() means the maximum of the two given arguments. Maybe the SciPy or NumPy libraries are good for this? My equation has a more complicated form, but I want to solve it in a simplified form.
How to solve nonlinear equation without sympy(max and min)?
0.379949
0
0
461
33,935,637
2015-11-26T10:02:00.000
0
0
0
0
python,django
33,936,374
2
false
1
0
You have to create some kind of lock that indicates that the prize is currently assigned to a user and that prevents other users from getting to the same form. You could create a random token, store it in the DB (or Redis), and add it as a hidden field to the form. I suggest you also add an expiration date. As long as a valid token exists, no other user can access the form. When user 1 submits the form, you check that it contains a valid token.
2
0
0
Sorry for the vague question, here's what's going on: I will be giving away "win codes" to people. My django app is written so that the first one to enter a valid code XX hours after the last win will again be a winner. If the user is a winner he will be redirected to a page with a form to claim his prize. 1) User enters code 2) I check the datetime of the last win 3) If it's a winner again, go to the form page The problem is: if someone wins and then another person enters a code before the first one has filled in the form to claim the prize, the second one will get to that form as well because the last winner is still more than XX hours ago. How can I avoid this? Can I somehow check if someone already made it to that form?
Django: can I block new entries until one is completed?
0
0
0
65
33,935,637
2015-11-26T10:02:00.000
0
0
0
0
python,django
33,937,381
2
true
1
0
Another approach is to write the last-win datetime right away in step 3, so: 3) If it's a winner again, create the win record immediately and give the user the form to fill in the remaining fields. As mentioned, after some expiration time you can check for and remove empty win records.
2
0
0
Sorry for the vague question, here's what's going on: I will be giving away "win codes" to people. My django app is written so that the first one to enter a valid code XX hours after the last win will again be a winner. If the user is a winner he will be redirected to a page with a form to claim his prize. 1) User enters code 2) I check the datetime of the last win 3) If it's a winner again, go to the form page The problem is: if someone wins and then another person enters a code before the first one has filled in the form to claim the prize, the second one will get to that form as well because the last winner is still more than XX hours ago. How can I avoid this? Can I somehow check if someone already made it to that form?
Django: can I block new entries until one is completed?
1.2
0
0
65
33,935,892
2015-11-26T10:14:00.000
1
0
0
1
python,movie
34,064,048
1
true
0
0
OpenCV can solve this cross-platform and has Python bindings.
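A minimal sketch of writing frames straight to a video file with OpenCV's Python bindings; the codec, file name, and synthetic frames are placeholders, and codec availability varies per platform.

    import cv2
    import numpy as np

    w, h, fps = 640, 480, 25
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    out = cv2.VideoWriter("animation.avi", fourcc, fps, (w, h))

    for i in range(100):
        frame = np.zeros((h, w, 3), dtype=np.uint8)
        cv2.circle(frame, (50 + 5 * i, h // 2), 20, (0, 255, 0), -1)   # stand-in for the stochastic animation
        out.write(frame)

    out.release()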
1
1
0
I am developing a small application that generates a stochastic animation, and I would want the option to save the animation as a movie. An obvious solution in linux would be to save the images and subprocess a call to ffmpeg or the like, but the program should preferable run on windows as well, without any external dependencies and installations needed (I pack the program with pyinstaller for windows). Is there a solution for this, or will I have to depend on different external applications depending on the platform?
Cross platform movie creation in python
1.2
0
0
35
33,940,518
2015-11-26T13:56:00.000
0
1
1
0
python,unit-testing,testing,asynchronous,tornado
33,975,984
1
true
0
0
In general, using yield gen.moment to trigger specific events is dicey; there are no guarantees about how many "moments" you must wait, or in what order the triggered events occur. It's better to make sure that the function being tested has some effect that can be asynchronously waited for (if it doesn't have such an effect naturally, you can use a tornado.locks.Condition). There are also subtleties to patching IOLoop.time. I think it will work with the default Tornado IOLoops (where it is possible without the use of mock: pass a time_func argument when constructing the loop), but it won't have the desired effect with e.g. AsyncIOLoop. I don't think you want to use AsyncTestCase.stop and .wait, but it's not clear how your test is set up.
1
0
0
I'm using Tornado as a coroutine engine for a periodic process, where the repeating coroutine calls ioloop.call_later() on itself at the end of each execution. I'm now trying to drive this with unit tests (using Tornado's gen.test) where I'm mocking the ioloop's time with a local variable t: DUT.ioloop.time = mock.Mock(side_effect= lambda: t) (DUT <==> Device Under Test) Then in the test, I manually increment t, and yield gen.moment to kick the ioloop. The idea is to trigger the repeating coroutine after various intervals so I can verify its behaviour. But the coroutine doesn't always trigger - or perhaps it yields back to the testing code before completing execution, causing failures. I think should be using stop() and wait() to synchronise the test code, but I can't see concretely how to use them in this situation. And how does this whole testing strategy work if the DUT runs in its own ioloop?
Unit-testing a periodic coroutine with mock time
1.2
0
0
196
33,943,643
2015-11-26T16:49:00.000
1
0
0
0
python,p2p,libtorrent,libtorrent-rasterbar
33,956,031
2
false
0
0
I've run into the same problem as you. Setting a torrent to sequential download means the pieces will be downloaded in a somewhat ordered fashion. This may be the intuitive solution for streaming. However, streaming video is more complicated than just downloading all the pieces in order. Video files come in different containers (e.g. mkv, mp4, avi) and different codecs (h264, theora, etc). Some codecs/containers store metadata/headers in different locations in a file. I can't remember off the top of my head, but a certain container/codec stores all header information at the end of the file. Such a file may not stream well if downloaded sequentially. Unless you write the code for determining which pieces are needed to start streaming, you will have to rely on existing mechanisms. Take for example Peerflix, which spawns a browser video player, VLC, or MPlayer. These applications have a good idea of what byte ranges they need for various containers/codecs. When Peerflix launches VLC to play, let's say, an AVI file, VLC will attempt to read the first several bytes and last several bytes (headers). The genius behind Peerflix is that it tries to serve the video file through its own web server and therefore knows what byte ranges of the file VLC is seeking. It then determines which pieces those byte ranges fall into and prioritizes those pieces. Peerflix uses some Node.js BitTorrent library, whose exact piece prioritization mechanisms are unknown to me. However, in the case of libtorrent-rasterbar, the set_piece_deadline() function allows you to signal to the library which pieces you need. In my experience, once I determined the pieces needed, I would call set_piece_deadline() with a short deadline (50ms or so) and wait for the arrival. Please note that using set_piece_deadline() is incompatible with sequential downloads (just set them to false). One thing to note: libtorrent-rasterbar will not write the piece to the hard drive as soon as it gets it. This is a trap I fell into, because I tried to read that byte range from the file when the piece arrived. For this you will need to run a thread to catch the alerts that libtorrent-rasterbar passes to your application. More specifically, you will receive the raw binary data for that piece in a read_piece_alert.
1
1
0
I'm working on a project to make a streaming client over libtorrent, and I'm using the Python client (Python binding). I searched a lot about the functions set_sequential_download() and set_piece_deadline(), and I couldn't find a good answer on how to force the pieces to be downloaded in order, meaning first piece 1 and then 2, 3, 4, etc. I saw people asking this in forums, but none of them got a good answer on the changes needed in order to succeed. I understood that set_sequential_download() just asks for the pieces in order, but in fact they are downloaded randomly. I tried to change the deadline of the pieces using set_piece_deadline(), incrementing each piece, but it doesn't work for me at all. ** UPDATE The goal I'm trying to accomplish is downloading one piece at a time so I can stream through torrents. I hope some of you can help me. Thanks, Ben.
set_sequential_download() and set_piece_deadline() in libtorrent
0.099668
0
0
1,713
33,946,061
2015-11-26T19:49:00.000
0
0
0
1
java,python,macos,intellij-idea,path
38,203,517
1
false
0
0
Regarding your second question: go to the menu item at the top called "IntelliJ IDEA" and under that you'll find a "Preferences" item.
1
0
0
I'm trying to set a path variable because I'm executing command on my Mac using Runtime.getRuntime().exec();. They work when pressing the "play" button in IntelliJ, and also when running from command line, however, not when double-clicking. I have found that I should set the PATH variable. In terminal, the PATH variable is /Library/Frameworks/Python.framework/Versions/3.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/binsr/bin:/bin Which is also weird, because of the /binsr but that doesn't matter that much. I want to make IntelliJ set the PATH variable of my application to this. The documentation and some other answers say it is in here: File | Settings | Build, Execution, Deployment | Path Variables But there is no "Settings" under "File"!! There is a "Preferences" on Mac, and it does have a Build, Execution and Deployment, but that doesn't have path variables??!!? This is really frustrating me, and I would appreciate any help. Thanks in advance, Sten
IntelliJ setting path variable?
0
0
0
1,199
33,946,694
2015-11-26T20:42:00.000
1
1
0
0
python,imap
33,947,171
1
false
0
0
You pretty much have to brute force it, but there's really only three setups, and it only requires two connects: First connect on port 993 with regular SSL/TLS. If this works: 993/TLS. If this fails: Connect to port 143, and check if CAPABILITY STARTTLS exists. If it does: try StartTLS. If this works: 143/STARTTLS. Else: See if you can log in on port 143. if this fails, no good configuration. This wouldn't be secure anyway, so should be discouraged. SMTP is a bit more complex: You can try 587 with StartTLS, 465 with TLS, or 25 with StartTLS, plain, or no authentication at all. Note: autodetecting STARTTLS is dangerous, as it allows a MITM attack, where the attacker hides the STARTTLS capability so that you attempt to login without it. You may want to ask the user if they wish to connect insecurely, or provide a 'disable security' setting that must be opted into.
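A minimal imaplib sketch of the probing order described above (Python 3; error handling is reduced to the bare minimum, and the function name is illustrative).

    import imaplib

    def autodetect_imap(hostname, username, password):
        try:
            conn = imaplib.IMAP4_SSL(hostname, 993)      # 1) implicit SSL/TLS on 993
            conn.login(username, password)
            return conn, 993, "ssl"
        except Exception:
            pass
        conn = imaplib.IMAP4(hostname, 143)              # 2) plain connection on 143
        if "STARTTLS" in conn.capabilities:
            conn.starttls()                              # upgrade before logging in
            conn.login(username, password)
            return conn, 143, "starttls"
        raise RuntimeError("no secure IMAP configuration found")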
1
0
0
I have these values to connect to an IMAP server: hostname username password I want to auto-detect the remaining details with Python: port ssl or starttls If the port is one of the well-known port numbers there should not be too many possible combinations. Trying all of them would be a solution, but maybe there is a better way?
Autodetect IMAP connection details
0.197375
0
1
237
33,947,510
2015-11-26T21:52:00.000
1
0
0
0
python,rabbitmq,rabbitmq-exchange
33,947,694
2
false
0
0
Just create multiple queues. They are at zero cost from RabbitMQ's point of view & exactly express your requirement.
2
2
0
I would like to know if it is possible to have multiple producers and multiple consumers? For example: -> Consumer A only receives messages from Producer A -> Consumer B only receives messages from Producer B Or do I need to create multiple queues? Can someone post an example?
Using RabbitMQ - Multiple Producer and Multiple Consumers
0.099668
0
0
1,203
33,947,510
2015-11-26T21:52:00.000
1
0
0
0
python,rabbitmq,rabbitmq-exchange
33,947,801
2
false
0
0
Short answer: you need to create multiple queues. A queue is just that, an ordered sequence of messages, where you can access the messages in the order they arrived. This would make it impractical to have messages for specific consumers on the same queue, since if a message isn't for your consumer, you would have to "give it back" so as not to lose it; but then it's foremost in the queue again and you would just get the same message again, unless you are lucky and the actual receiver gets it instead. Multiple consumers on one queue is useful when you want to divide the load of processing the messages between multiple receivers, but if you want the messages to reach a specific endpoint, create a queue dedicated to that endpoint.
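A minimal sketch of the one-queue-per-consumer setup with pika (assuming pika >= 1.0 and a broker on localhost; queue names are placeholders).

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="queue_a")
    ch.queue_declare(queue="queue_b")

    # producer A publishes only to queue_a, producer B only to queue_b
    ch.basic_publish(exchange="", routing_key="queue_a", body="for consumer A")
    ch.basic_publish(exchange="", routing_key="queue_b", body="for consumer B")

    # consumer A subscribes only to queue_a, so it never sees producer B's messages
    def on_a(channel, method, properties, body):
        print("consumer A got:", body)

    ch.basic_consume(queue="queue_a", on_message_callback=on_a, auto_ack=True)
    ch.start_consuming()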
2
2
0
I would like to know if it is possible to have multiple producers and multiple consumers? For example: -> Consumer A only receives messages from Producer A -> Consumer B only receives messages from Producer B Or do I need to create multiple queues? Can someone post an example?
Using RabbitMQ - Multiple Producer and Multiple Consumers
0.099668
0
0
1,203
33,949,964
2015-11-27T03:47:00.000
0
0
1
0
django,ipython
37,856,346
3
false
1
0
You can uninstall IPython from the project virtualenv; that's all you need.
1
5
0
If it is a multi-user environment and uninstalling IPython is not an option, how would you go about launching a Django shell without IPython?
How do I disable IPython when opening a Django shell
0
0
0
769
33,951,619
2015-11-27T06:43:00.000
9
1
0
0
python,python-3.x,boto,boto3
33,959,240
3
true
1
0
It's not clear from the question whether you are talking about boto or boto3. Both allow you to use environment variables to tell them where to look for credentials and configuration files, but the environment variables are different. In boto3 you can use the environment variable AWS_SHARED_CREDENTIALS_FILE to tell boto3 where your credentials file is (by default, it is in ~/.aws/credentials). You can use AWS_CONFIG_FILE to tell it where your config file is (by default, it is in ~/.aws/config). In boto, you can use BOTO_CONFIG to tell boto where to find its config file (by default it is in /etc/boto.cfg or ~/.boto).
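A minimal boto3 sketch of the environment-variable approach above. The paths are placeholders; they must point at valid credential/config files, and the variables must be set before the client is created.

    import os
    import boto3

    os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/opt/myapp/aws/credentials"
    os.environ["AWS_CONFIG_FILE"] = "/opt/myapp/aws/config"

    s3 = boto3.client("s3")   # picks the non-default locations up from the environment
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])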
1
11
0
Is there any way to have Boto look for the configuration files somewhere other than the default location, which is ~/.aws?
Boto3: Configuration file location
1.2
0
1
13,555
33,954,438
2015-11-27T09:41:00.000
1
0
0
0
python,django,opencv
33,954,610
2
false
1
0
Am I right that you dream about a Django application able to capture video from your camera? This will not work (at least not in the way you expect). Did you check any stack traces left by your web server (the one hosting the Django app, or the built-in one started by Django)? I suggest you start playing with OpenCV a bit just from the Python command line. If you're on Windows, use IDLE. Observe the behaviour of your calls from there. A Django application runs inside a WSGI application server, where there are several constraints on what a module of a particular type can and cannot do. I didn't try to repeat what you've done (I don't have a camera I can access). Proper handling of a camera in a web application requires browser-side handling in JavaScript. A small disclaimer at the end: I'm not saying you cannot use OpenCV at all in a Django application, but attempting to access the camera is not the way to go.
2
3
1
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library. When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever. Don't know why but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what is the problem. If anyone can suggest what is wrong ? Is it the right way to use OpenCV with Django ?
Using OpenCV with Django
0.099668
0
0
4,890
33,954,438
2015-11-27T09:41:00.000
2
0
0
0
python,django,opencv
35,443,792
2
false
1
0
Use a separate thread for the cv2 function call and the app should work like a charm. From what I can tell, the infinite loading is probably because the video capture never stops recording, so the code further ahead is never reached, ergo an infinite loading page. Threads should do it. :) :)
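A minimal sketch of moving the blocking cv2 call off the request thread in a Django view; the view and helper names are hypothetical, and the capture loop is deliberately bounded so the camera gets released.

    # views.py
    import threading
    import cv2
    from django.http import HttpResponse

    def _capture_frames():
        cap = cv2.VideoCapture(0)
        for _ in range(100):            # bounded loop instead of capturing forever
            ok, frame = cap.read()
            if not ok:
                break
        cap.release()

    def start_capture(request):
        threading.Thread(target=_capture_frames).start()
        return HttpResponse("capture started")   # the response returns immediately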
2
3
1
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library. When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever. Don't know why but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what is the problem. If anyone can suggest what is wrong ? Is it the right way to use OpenCV with Django ?
Using OpenCV with Django
0.197375
0
0
4,890
33,954,850
2015-11-27T10:01:00.000
0
0
1
0
python,ruby,software-distribution
33,954,923
2
false
0
0
You can use Installshield to prepare installation for Mac, Windows and Linux.
1
0
0
My Ruby application uses some Python scripts for specific calculations. At the moment I use the local Anaconda installation, but this is not convenient for distribution as it requires the user to install Anaconda. I would have the same issue if I used the system Python installation. Is it possible to create a 'local' python environment that I can distribute and use with my app? I am working on Mac but the solution should work on PC as well.
How do I create a python environment for distribution with my app?
0
0
0
46
33,958,132
2015-11-27T13:09:00.000
1
0
1
0
python,django,qpython
34,068,622
1
false
1
0
You can upload Django or the other dependency modules into the phone's /sdcard/com.hipipal.qpyplus/lib/python2.7/site-packages/ directory. By the way, you can easily install Django through QPython's pip_console.py.
1
0
0
I have downloaded Django and other modules for QPython, but here comes the confusion: I have no idea how to manually install Django for QPython; I thought this would happen automatically. How can Django be installed manually? Thanks
how to manually install modules in qpython
0.197375
0
0
2,382
33,960,557
2015-11-27T15:29:00.000
1
0
1
0
python,algorithm
33,964,607
1
false
0
0
Your approach will find an upper bound for the set of forbidden characters. You can use sets and unions of sets to find out whether there is a set of characters that is better than your upper-bound set. The following approach should work, but it will create large sets: Create a dictionary with the 26 letters as keys and with an empty set as value. Read the words and add them to the sets for the letters that they contain. Find the letters with the five smallest word sets. The sum of the set lengths for these letters is your upper bound. Filter all letters whose sets are larger than that upper bound out of the dictionary. Now find the union of all combinations of five of the remaining letters and find the one whose union is smallest. You can do that recursively.
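A brute-force sketch of the idea above, without the upper-bound filtering step; the tiny word list is a stand-in, and checking all 65,780 combinations of 5 letters is cheap at this scale.

    from itertools import combinations
    import string

    words = ["apple", "banana", "cherry", "plum", "fig", "kiwi", "grape"]   # stand-in word list

    # map each letter to the set of words it would exclude
    letter_words = {c: set() for c in string.ascii_lowercase}
    for w in words:
        for c in set(w.lower()):
            if c in letter_words:
                letter_words[c].add(w)

    best = min(
        combinations(string.ascii_lowercase, 5),
        key=lambda combo: len(set().union(*(letter_words[c] for c in combo))),
    )
    excluded = set().union(*(letter_words[c] for c in best))
    print(best, "excludes", len(excluded), "words")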
1
2
0
From "Think Python" - The author provides word filtering exercises (tasks are to include/exclude words from a list based on minimum length, characters required, or forbidden, etc.) An extra question he includes: Can you find a combination of 5 forbidden letters that excludes the smallest number of words? (I found topics here and elsewhere generally related to above exercises, but not an algorithm/answer for this extra question.) Here's my start in working it out, and where I got stuck: For each character in the word list identify the number of words it occupies Build a dictionary with each key = to a given character; each key value = total number of words occupied by that character. Sort by value to identify the 5 characters, in ascending order, that occupy the least number of words. I'm a bit stuck at this point - because if characters jointly occur in some words in various combinations, that can reduce the total number of words they cause to get excluded from this list. I wasn't sure how to follow that reasoning to 'abstract' the problem and figure out a general solution. Any pointers?
Identify 5 "forbidden" characters that result in *fewest* exclusions from word list
0.197375
0
0
257
33,960,828
2015-11-27T15:46:00.000
7
0
1
0
python,types,floating-point,double
33,960,858
1
true
0
0
Python does not have C-style float, it only has a C-style double, which is called float. You must decide yourself whether you need the extra precision of a double when you design your structures. Or, rather, you must decide whether gaining a bit of space efficiency is worth the loss of precision.
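A quick way to see the trade-off is the struct module the question is about: 'f' packs a 4-byte single and 'd' an 8-byte double, and only the latter preserves the full double-precision value.

```python
import struct

x = 0.1
single = struct.pack('f', x)   # 4-byte IEEE 754 single precision
double = struct.pack('d', x)   # 8-byte IEEE 754 double precision

print(len(single), len(double))        # 4 8
print(struct.unpack('f', single)[0])   # 0.10000000149011612 (precision lost)
print(struct.unpack('d', double)[0])   # 0.1
```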
1
4
0
I need to pack numbers in Python using the struct module. I want to check whether a number is a float or a double when I pack it. How can I do that? Are there any built-in functions, or do I need to write one? Or do I need to write a class for single-precision floats instead of using Python's floats (which are actually doubles)? Thanks in advance!
Python: Float or Double
1.2
0
0
14,630
33,961,931
2015-11-27T16:59:00.000
2
0
1
0
python,syntax
33,962,098
2
false
0
0
Python's syntax is defined by the interpreter's implementation. You cannot add your own syntax without modifying it, unless you somehow manage to "preprocess" the source into the "normal" form before it reaches the interpreter. You may have some luck with PyPy, where the interpreter is implemented in Python itself. You would need to make a separate unit of "normal" code that somehow accesses PyPy's objects (e.g. by importing corresponding modules), modifies them, and feeds the "unusual" source to the modified interpreter. Do note that your proposed construct will need to be parsed and thus be distinguishable from other language constructs. Your |x| is, at first sight, indistinguishable from the | operator (bitwise OR).
1
0
0
I'd like to know if there is a way to define an operation that isn't built into Python, rather than creating some member function of an object and calling it. For example, define an "absolute value" operation for a class C, then invoke it as |C|. If not, I know it's possible to import operations (I think absolute value is in math), so could I look up the tags for the operations I import? By "tag" I mean what would replace eq in def __eq__(self):, or is there no "tag" for imported operations?
Define an operator that's not already built into the syntax
0.197375
0
0
45
33,962,664
2015-11-27T17:55:00.000
0
0
1
0
python,multithreading,python-2.7,libgomp
33,995,671
1
false
0
0
Strange. That error is only raised when pthread_key_create fails in [GCC]/libgomp/team.c:initialize_team, which is an __attribute__((constructor)) function, only called (once!) at process/libgomp initialization. So, either your process is doing "strange things" with dlopening libgomp (multiple times?), or you're running into some resource shortage/limitation at the OS level. Or, a libpthread (glibc) bug. Can you find any other reports of pthread_key_create failures for your OS/software distribution?
1
0
0
I am using python and am getting a very odd error I can't seem to find anywhere. It has to do with the libgomp library and threading. The error is: libgomp: could not create thread pool destructor. The odd thing is that the error occurs after a certain amount of time/processes. It does not happen in a certain line of my code. The code I am running is an iterative solver and as such, I can control the amount of time the code runs very easily by relaxing the tolerances. Right now if I set the tolerances so that everything "converges" in ~9 iterations (about 15s) then the code completes just fine. If I increase it to >9 then I get the error. It clearly is not a problem with one part of the code as it happens in different parts every time and there is no traceback info. Any ideas?
Python error unknown: thread pool destructor
0
0
0
963
33,962,771
2015-11-27T18:05:00.000
0
1
1
0
python,performance
33,962,888
2
false
0
0
Picking a better algorithm can often lead to remarkable speed improvements. For example, replacing bubble sort with quicksort; same job, better algorithm, much faster. The faster algorithm may be harder to understand, and there are other costs in time to write and maintain the code; but I have seen a job which took 4 days to run reduced to 25 seconds by a better algorithm. For more detail, search on "Big O Notation" - a way of describing the asymptotic performance of algorithms.
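To make the point concrete, here is a small, purely illustrative timeit comparison of an O(n^2) bubble sort against Python's built-in Timsort on the same data; the data size and repeat count are arbitrary.

```python
# Illustrative only: the algorithm choice dominates the running time.
import random
import timeit

def bubble_sort(items):
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(2000)]
print(timeit.timeit(lambda: bubble_sort(data), number=3))  # seconds, O(n^2)
print(timeit.timeit(lambda: sorted(data), number=3))       # seconds, Timsort
```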
2
0
0
While surfing stack exchange, I have seen many people mentioning speed efficiency in their answers. How is one code faster than the other one which does the same function? What makes the code run faster? Less lines? Does importing mean loss of performance? What things should I keep in mind to write performance efficient code? Why do I need a performance efficient code? I have also seen people writing Loose on speed to gain on beauty? Why are beautiful codes slow?
Write speed efficient code in python
0
0
0
131
33,962,771
2015-11-27T18:05:00.000
3
1
1
0
python,performance
33,963,532
2
true
0
0
I'll try to give you a concrete answer. Your question is too broad and involves a lot of things that are not trivial to explain. How is one code faster than another one which does the same job? For example, you have to find an element in an array. You can look in every cell until the end, or stop as soon as you have found it. This is the most trivial example I can give you, but I think it gives a good initial idea. Next, look up "bubble sort" and "quicksort" as examples of algorithms that sort arrays. They do the same job, but the second one is much faster. What makes the code run faster? Fewer lines? Not really; you may be interested in learning something about the complexity of algorithms. Search for "Big O notation", which is a way of describing the asymptotic performance of algorithms. Does importing mean loss of performance? It depends on the context. If you need to save resources because of the hardware where your code will run, maybe yes. In other situations the difference in performance is so small that you don't have a real problem. What things should I keep in mind to write performance-efficient code? You have to learn different kinds of algorithms (backtracking algorithms, divide-and-conquer algorithms, dynamic programming algorithms, greedy algorithms, branch-and-bound algorithms...). Before that, you will usually write brute-force algorithms, which make your code less efficient. Why do I need performance-efficient code? Your CPU is not God. If the majority of the code actually running on your computer were inefficient, you would have a problem: everything would be slower, and in some cases impossible to maintain. Lose on speed to gain on beauty? Beautiful code is usually code that is easy to understand. Often more efficient algorithms are harder to understand, and there are other costs in time to write and maintain the code. But you can reduce that effect if you get used to writing clean code. Your code will be much better, more beautiful and more efficient in a lot of ways. It's very important to try to make your code understandable, but if you can't, give some information in comments or documentation about what you are doing in those lines. I hope this helps you.
2
0
0
While surfing stack exchange, I have seen many people mentioning speed efficiency in their answers. How is one code faster than the other one which does the same function? What makes the code run faster? Less lines? Does importing mean loss of performance? What things should I keep in mind to write performance efficient code? Why do I need a performance efficient code? I have also seen people writing Loose on speed to gain on beauty? Why are beautiful codes slow?
Write speed efficient code in python
1.2
0
0
131
33,964,851
2015-11-27T21:13:00.000
0
0
1
0
python,python-2.7,menu
33,965,008
3
false
0
0
In order to pass a variable from one function to another, it needs to be global. One easy way to do that is to initialize the variable outside of all functions and let all functions refer to it. This way it will be defined in all functions.
1
0
0
I am having trouble with a menu that allows the user to choose which function to call. Part of the problem is that when I run the program it starts from the beginning (instead of calling the menu function), and the other part is that I don't know how to pass the table and the number of rows and columns from the first function to the rest of them (when I tried it said they were not defined). The program is supposed to encrypt and decrypt text using a table.
Python: function menu not working
0
0
0
158
33,965,603
2015-11-27T22:37:00.000
1
0
1
0
python,warnings,pycharm
33,965,817
1
false
0
0
The program you run expects a virtual environment. This is a nice Python feature that allows you to isolate a Python program within the scope of the specific set of libraries it may need. If you do not run more than one Python program you may not really need to worry about it. However, if you do, I would recommend reading about virtualenv first. P.S. It might also just be a debug log for developers to make sure they switched the virtual env.
1
0
0
I installed PyCharm 2.7 under Windows 8.1. It works fine, but every time I run a program, I get the following warning: WARNING:root:No virtualenv active Why does it happen and how can I fix it?
Python: "No virtualenv active"
0.197375
0
0
72
33,968,542
2015-11-28T06:43:00.000
0
0
0
0
python,tkinter
33,969,234
1
true
0
1
Tkinter is already installed with Python so you just need to import tkinter. Don't import Tkinter as you might see in some tutorials because Python 3+ has a different version.
1
0
0
After researching a little I found some libraries like pygame, tkinter, etc. I was able to install pygame easily but am not able to find any help on how to install tkinter for v3.2. I know many of these types of questions have been answered, but most of them only show errors when importing, or they are for Linux/Mac. I have a Windows-based OS.
Tkinter install in Python version 3.2
1.2
0
0
189
33,972,287
2015-11-28T14:14:00.000
0
0
0
1
python,hadoop,dictionary,cloudera,reduce
33,974,868
2
false
0
0
If that's just for initial analysis purposes, to understand the data and keys, then you would probably like to set the reducer count to 0 and get the map output. -D mapred.reduce.tasks=0 is the way in Java; explore the same for Python.
1
0
0
In the file part-00000 we can find the result of the whole process (map + reduce), but I would like to see the result of the first step (mapping) and then the overall results. I'm working on Hadoop Cloudera with Python map-reduce scripts.
Is it possible to visualise the mapper results in a map-reduce process?
0
0
0
101
33,974,018
2015-11-28T17:06:00.000
0
0
1
0
python,tkinter,textfield
34,096,603
1
false
0
1
tkinter doesn't have anything built-in to support this. Tkinter likely has all of the fundamental building blocks in order to build it yourself, but it will require a lot of work on your part.
1
0
0
So far I am using Tkinter to make textfields in Python. My question is how do I make it so there are placeholders, preferably in the style of mathematica or something similar so that when a user starts a new line, a right and left place holder appear on that line and the user can only enter text in these placeholders? Eventually I would like to be able to make it so all the right placeholders are aligned as well, but that may be too complicated. I can't seem to find a way to do this in Tkinter. Is there possibly a better package for this? I'm not sure how to generate and format "text placeholders" Edit: I think this question is coming down to: how do I dynamically add text placeholders within already existing text fields based on certain key commands?
Adding dotted text placeholders within a textfield in Python?
0
0
0
62
33,975,835
2015-11-28T20:01:00.000
0
0
0
0
python,fft,dft,period
33,980,259
1
false
0
0
Before doing the FFT, you will need to resample or interpolate the data until you get a set of amplitude values equally spaced in time.
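A minimal sketch of that resample-then-FFT idea with NumPy; t and amp are assumed to be 1-D arrays of sorted observation times and amplitudes loaded from the photometry file, and the grid size n is arbitrary.

```python
# Sketch: interpolate onto an even time grid, then take the real FFT.
import numpy as np

def dominant_frequency(t, amp, n=4096):
    t_even = np.linspace(t.min(), t.max(), n)   # evenly spaced time grid
    amp_even = np.interp(t_even, t, amp)        # linear interpolation (t must be sorted)
    amp_even -= amp_even.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(amp_even))
    freqs = np.fft.rfftfreq(n, d=t_even[1] - t_even[0])
    return freqs[np.argmax(spectrum)]           # frequency of the strongest peak
```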
1
0
1
I spent a couple of days trying to solve this problem, but no luck, so I turn to you. I have a file for the photometry of a star with time and amplitude data. I'm supposed to use this data to find period changes. I used Lomb-Scargle from the pysca library, but I have to use Fourier analysis. I tried fft (dft) from scipy and numpy but I couldn't get anything that would resemble a frequency spectrum or Fourier coefficients. I even tried to use nfft from the pynfft library because my data are not evenly sampled, but I did not get anywhere with this. So if any of you know how to get the main frequency of periodic data from Fourier analysis, please let me know.
Fourier series of time domain data
0
0
0
243
33,977,130
2015-11-28T22:26:00.000
2
0
0
1
python,google-app-engine,google-cloud-sql
33,978,178
1
false
1
0
Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value. If you've done that, then presumably you're also using a hefty instance_class in that module (perhaps the default module), considering also that Django is not the lightest-weight or fastest of web frameworks; an F4 class, I imagine. Even so, pushing max concurrent requests above 12 may result in latency spikes, especially if serving each and every request also requires other slow, heavy-weight operations such as MySQL ones. So, consider instead using many more instances, each of a lower (cheaper) class, serving no more than 12 requests each (again, assuming that every request you serve will require its own private connection to Cloud SQL -- pooling those up might also be worth considering). For example, an F2 instance costs, per hour, half as much as an F4 one -- it's also about half the power, but, if serving half as many user requests, that should be OK. I presume, here, that all you're using those connections for is to serve user requests (if not, you could dispatch other, "batch-like" uses to separate modules, perhaps ones with manual or basic scheduling -- but, that's another architectural issue).
1
0
0
I want to connect my App Engine project with Google Cloud SQL, but I get the error that the maximum of 12 connections was exceeded in Python. I have a Cloud SQL D8 instance with 1000 simultaneous connections; how can I change this connection limit? I'm using Django and Python. Thanks.
The limit of 12 connections from App Engine to Cloud SQL
0.379949
1
0
223
33,979,787
2015-11-29T05:42:00.000
0
0
0
0
python,interactive-brokers,ibpy
40,791,643
2
false
0
0
You could manage each leg of the spread seperately for more precision when placing and modifying orders, rather than using combo orders. Place the orders for each leg seperately and add any newly placed orders to a collection of orders. Monitor your orders to see if they have been completely filled or if they require modification. Once each leg of the spread is completely filled, add the spread to a collection of positions.
1
1
0
I'm interested in trying the Python wrapper for Interactive Brokers' API, but I trade option spreads (mostly iron condors) and not just single options. Is there a reasonable way to do this with ibPy?
Option spreads with Interactive Brokers' API?
0
0
0
1,892
33,980,603
2015-11-29T07:52:00.000
0
0
0
0
python,tkinter
33,993,563
1
true
0
1
I figured it out with Furas's help - with Pyhook I can wait for events globally, and then tie in the event with tkinter events.
1
0
0
I'm trying to make an application with a pop-up menu that appears when I type Space + Right Alt on my keyboard, globally across the OS (Windows in my case). When that happens, I want to pop up a window (I know how to do that), and it is crucial that I can be using Chrome or Word, tap Space + Right Alt, and still be able to open this little menu. Tkinter event bindings have two problems: First, when I use an event binding for <Key> and then, in the function, use evt.keysym, I can see that the program can't register both keys at the same time. I could use a timer and then check, but I would prefer one line that fixes it all. Second, I find that tkinter event bindings only work when the bound widget's window (or the window itself) is FOCUSED. I will hide my root and Toplevel at all times, so they are never focused. I would appreciate any help on this. If your suggestion uses another module, I don't really care, as long as it works on Windows 10 (not Mac OS X, not Linux, but Windows). I'm also using Python 3, but any version (aka 2) would also be okay, as I could either try to port YOUR suggestion to Py3, or port MY code to Py2. Thanks!
Check for tkinter events globally (across OS)
1.2
0
0
65
33,993,034
2015-11-30T06:59:00.000
0
0
0
1
python,google-app-engine
34,002,070
3
false
1
0
You can use the datastore (eventually shadowed by memcache for performance) to persist all the necessary APN (or any other) connection/protocol status/context info such that multiple related requests can share the same connection as if your app would be a long-living one. Maybe not trivial, but definitely feasible. Some requests may need to be postponed temporarily, depending on the shared connection status/context, that's true.
2
0
0
What would be the best practice in this scenario? I have an App Engine Python app with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of about 100 pushes per minute. Setting up and tearing down a connection to APNs for every batch is not what I want, and Apple advises against it. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes with no pushes, then close the connection). Reading the GAE documentation, I couldn't figure out whether such a thing is even available. Also, I might need this to be available in different apps and/or modules.
Python APNs background connection
0
0
0
64
33,993,034
2015-11-30T06:59:00.000
0
0
0
1
python,google-app-engine
33,993,066
3
false
1
0
You can put the messages in a pull taskqueue and have a backend instance (or a cron job) to process the tasks
2
0
0
What would be the best practice in this scenario? I have an App Engine Python app with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of about 100 pushes per minute. Setting up and tearing down a connection to APNs for every batch is not what I want, and Apple advises against it. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes with no pushes, then close the connection). Reading the GAE documentation, I couldn't figure out whether such a thing is even available. Also, I might need this to be available in different apps and/or modules.
Python APNs background connection
0
0
0
64
33,995,862
2015-11-30T10:02:00.000
4
0
0
1
python,django,apache
33,995,927
1
true
1
0
mod_wsgi would be the wrong technology if you want to do this. It runs as part of Apache itself, so there literally is nothing to run in the Django container. A better way would be to use gunicorn to run Django in one container, and have the other running the webserver as a proxy - you could use Apache for this, although it's more common to use nginx.
1
5
0
We have an application written in django. We are trying a deployment scenario which will have one docker running apache, the second docker running django and the third docker running the DB server. In most of the documentation it is mentioned that apache and django will sit on the same machine (django in virtualenv to be precise), is there any way we can ask apache to talk to mod_wsgi sitting on a remote machine which has the django application?
Django and apache on different dockers
1.2
0
0
807
33,997,336
2015-11-30T11:16:00.000
0
0
0
0
python,arrays,list,matrix,append
33,997,375
4
false
0
0
Use .append('item-goes-here') to append.
1
0
1
I have an array which is a 1x3 matrix, where: column 1 = x coordinate, column 2 = y coordinate, column 3 = direction of the vector. I am tracking a series of points along a path. At each point I want to store the x, y and direction back into the array as a new row, so in the end my array has grown vertically, with more and more rows representing points along the path. I'm struggling to build this function inside a class. Help please?
How do I append rows to an array in Python?
0
0
0
645
33,997,369
2015-11-30T11:18:00.000
1
0
0
0
django,python-3.x,bdd,lettuce
34,083,462
2
false
1
0
I have used the before/after.each_example() hooks available in aloe_django. Put this piece of code into your terrain.py file: @before.each_example def before_each_example(scenario, outline, steps): call_command(#your command#)
1
1
0
I wanted to do some operations (clear cookies, clear the database, etc.) after each scenario in one feature, but after.each_feature is not available in aloe_django. How did you deal with this problem? Any suggestions to handle this? The following hook is not available in aloe_django: @before.each_scenario def setup_some_scenario(scenario): populate_test_database() I need this because I want to have several scenarios in one feature. When the first scenario is completed I log out from admin and need to log in again in the next scenario (not logging out does not help), but in the next scenario it gives an error telling me that my credentials are not valid (in the first scenario they were valid). When I put these scenarios into separate features and reset and migrate my db, it works fine. I think when it jumps from one scenario to another within the feature it messes up the db or uses a different one, so I need the after.each_scenario() hook to reset and migrate my db.
after.each_scenario hook is not working(not available) in aloe_django
0.099668
0
0
167
34,000,107
2015-11-30T13:46:00.000
2
1
0
0
python,indexing,whoosh
34,003,883
1
true
0
0
In your Python code, you should separate the indexer from the searcher. Configure your PHP file to call the searcher only; run the indexer manually from time to time, when new data is added or old data is altered. The key idea is to index only when you really need to, not on every search operation.
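A hedged sketch of that split with Whoosh follows; the "indexdir" path and the schema fields are assumptions for illustration only.

```python
# Indexing and searching kept in separate functions; only build_index is slow.
import os
from whoosh import index
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(path=ID(stored=True), content=TEXT)

def build_index(docs, dirname="indexdir"):
    """Run occasionally, only when documents are added or changed."""
    if not os.path.exists(dirname):
        os.mkdir(dirname)
    ix = index.create_in(dirname, schema)
    writer = ix.writer()
    for path, text in docs:
        writer.add_document(path=path, content=text)
    writer.commit()

def search(query_string, dirname="indexdir"):
    """Called by the web front end; only opens the existing index."""
    ix = index.open_dir(dirname)
    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse(query_string)
        return [hit["path"] for hit in searcher.search(query)]
```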
1
3
0
When indexing and searching for query words in Whoosh, does the program index every time it is run? I am making a web interface with it so it can display certain results to the user. To do so I am using PHP to call the Python file from the HTML. I have 1 GB of data to index, so is it going to take a long time every time I run the file, or will the first run be long and the rest significantly faster, since the program won't need to index all documents from scratch?
Whoosh indexing
1.2
0
0
637
34,004,281
2015-11-30T17:19:00.000
0
0
0
0
python,oauth-2.0,token,google-api-python-client
34,023,824
1
true
0
0
Okay, found it myself. You have to refresh your token every time it expires, using httplib2. Quick hint: import httplib2; http = httplib2.Http(); http = credentials.authorize(http) where credentials contains what you got from your first authorization flow. Cheers
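For completeness, one common pattern from that era pairs the Google API client with oauth2client's Storage to persist and refresh credentials; the credentials.json file name is an assumption, and the first authorization flow is assumed to have been run and stored already.

```python
# Sketch: load stored credentials, refresh them when expired.
import httplib2
from oauth2client.file import Storage

storage = Storage('credentials.json')   # written once after the first flow
credentials = storage.get()             # None if the flow was never run

if credentials is None or credentials.invalid:
    raise SystemExit("Run the authorization flow once and store the result.")

http = credentials.authorize(httplib2.Http())
if credentials.access_token_expired:
    credentials.refresh(http)           # uses the stored refresh token
```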
1
0
0
I'm a super noob in Python and OAuth2, but still I've wasted days on this one, so if you could give me a hand I would be eternally grateful :') Goal: writing a script that downloads a file from Google Drive every 5 minutes. Achieved: getting the credentials with tokens and downloading the file once. Problem: how do I refresh the token? I managed to get my tokens once, but I don't understand what to do so that I don't need to rebuild a refresh token every time... I don't really know if I'm getting OAuth2 wrong, but I've read that the credentials should be stored (there is a store method, right?). Thanks :)
Google API oauth2 - how to store credentials in order to refresh token later
1.2
0
1
336
34,005,438
2015-11-30T18:27:00.000
0
0
1
0
python-3.x,matplotlib,pdflatex
34,017,212
1
true
0
0
I could solve the problem using rc('text', usetex=False), which apparently makes matplotlib use its internal mathtext instead of my default LaTeX installation. Still, I cannot figure out why my OS LaTeX installation fails.
1
0
1
While something like matplotlib.pyplot.xlabel(r'Wavelenghth [$\mu$m]') works in Python 2, I get an error when I use it in Python 3: TypeError: startswith first arg must be str or a tuple of str, not bytes. Does anyone know what the problem is? Is it from my LaTeX installation?
Can't render latex in matplotlib.pyplot in python3
1.2
0
0
623
34,009,293
2015-11-30T22:34:00.000
0
0
1
0
python,lookup
34,009,340
2
false
0
0
I think you should use regular expressions. There is a module called re in Python for that.
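A small sketch of how re could be applied to the example string from the question; the pattern assumes the URL is always wrapped in double quotes right after "link:".

```python
import re

text = 'blahblahlink:"www.example.com"blahblah'

# Capture whatever sits between the quotes that follow "link:".
match = re.search(r'link:"([^"]+)"', text)
if match:
    print(match.group(1))   # www.example.com
```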
1
0
0
Is there a way to search for a substring in a larger string, and then return a different substring x places left or right of the original substring? I want to look through a string like "blahblahlink:"www.example.com"blahblah for the string "link:" and return the subsequent url. Thanks! Python 3, if that matters.
Equivalent to vlookup for strings in Python?
0
0
0
239
34,010,978
2015-12-01T01:15:00.000
0
0
0
0
python,api,rest,elasticsearch,elasticsearch-py
45,581,138
2
false
0
0
You may also try elasticsearch_dsl; it is a high-level wrapper around elasticsearch.
1
1
0
I am a newbie to the real-time distributed search engine Elasticsearch, but I would like to ask a technical question. I have written a Python module-crawler that parses a web page and creates JSON objects with the native information. The next step for my module-crawler is to store that information using Elasticsearch. The real question is the following: which approach is better for my case, the Elasticsearch RESTful API or the Python API for Elasticsearch (elasticsearch-py)?
Elasticsearch HTTP API or python API
0
0
1
520
34,013,905
2015-12-01T06:24:00.000
0
1
1
0
python-2.7,tornado
34,033,888
2
false
0
0
The two arguments are the tornado.web.Application and the tornado.httputil.HTTPServerRequest. Normally, rather than constructing a RequestHandler directly, Tornado applications are tested via tornado.testing.AsyncHTTPTestCase which will create the handlers as needed. (You could construct the application and request by hand, but I wouldn't recommend it) Do the functions you want to test need the application or request objects? If not, you could move them out of the subclass of RequestHandler to test them in isolation. If they do need either of these objects, then AsyncHTTPTestCase is the simplest way to get them.
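A minimal sketch of what such a test can look like; MyHandler here is a stand-in for the handler described in the question, not an existing class.

```python
# AsyncHTTPTestCase builds the application and issues real HTTP requests.
import tornado.web
from tornado.testing import AsyncHTTPTestCase

class MyHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("ok")

class MyHandlerTest(AsyncHTTPTestCase):
    def get_app(self):
        return tornado.web.Application([(r"/", MyHandler)])

    def test_get(self):
        response = self.fetch("/")
        self.assertEqual(response.code, 200)
        self.assertEqual(response.body, b"ok")
```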
2
1
0
I have a handler (subclass of RequestHandler) which handles GET, POST, PUT and DELETE requests. The class also has independent functions which operates on DB. I am writing unit test for the class, but I am not able to initialize the class since it requires 2 arguments. How can I do it? Note: I don't have issues while testing rest calls.
Python - Tornado - How to write unit test for independent functions?
0
0
0
239
34,013,905
2015-12-01T06:24:00.000
1
1
1
0
python-2.7,tornado
34,308,339
2
true
0
0
I solved the issue by making my testcase class a subclass of the class under testing.
2
1
0
I have a handler (subclass of RequestHandler) which handles GET, POST, PUT and DELETE requests. The class also has independent functions which operates on DB. I am writing unit test for the class, but I am not able to initialize the class since it requires 2 arguments. How can I do it? Note: I don't have issues while testing rest calls.
Python - Tornado - How to write unit test for independent functions?
1.2
0
0
239
34,014,148
2015-12-01T06:41:00.000
3
0
1
0
python-3.x,tkinter,visual-studio-2015
34,078,641
1
false
0
1
Tkinter should be installed by default, so there's no need to install it. If you don't have it, open Programs and Features, find Python and select Change, then make sure that the Tcl/Tk option is selected.
1
2
0
I can't seem to get Tkinter to show up in the pip search menu. This is what I have done so far: View > Python Environments > updated/installed setuptools > pip search menu > input "Tkinter", with no luck. Please help, thanks guys!
How to install Tkinter to visual studio 2015
0.53705
0
0
11,918
34,018,450
2015-12-01T10:46:00.000
0
0
0
0
python,facebook,selenium,xpath
34,019,477
1
false
1
0
This error usually occurs if the element is not present in the DOM, or the element may be inside an iframe.
1
1
0
I'm trying to get the XPath for the Code Generator form field (Facebook) in order to fill it (of course, before that I need to put in a code with "numbers"). In the Chrome console, the XPath I get is: //*[@id="approvals_code"] And then in my test I put: elem = driver.find_element_by_xpath("//*[@id='approvals_code']") if elem: elem.send_keys("numbers") elem.send_keys(Keys.ENTER) With those I get: StaleElementReferenceException: Message: stale element reference: element is not attached to the page document which I take to mean the field reference is wrong. Does anyone know how to properly get an XPath?
Python selenium test - Facebook code generator XPATH
0
0
1
231
34,021,351
2015-12-01T13:15:00.000
0
0
0
0
python,gensim
34,023,424
3
false
0
0
The file "init.py" is trying to import things from gensim.py. It's unable to import one of the classes. As you can see in the last line of your error, it says it was unable to import name parsing. I suggest: -if you downloaded the package from the internet(I'm quite new to python, and still don't know all the downloadable content): -Search the website for what this package means and try redownloading it (re-install the module). Also, try looking after if the versions are compatible. If this package have many versions, find the appropriate version according to your python version. What happens is that part of the package is missing.
1
0
0
I'm having a problem when trying to import gensim in Python. When typing import gensim I get the following error: Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/gensim/__init__.py", line 6, in from gensim import parsing, matutils, interfaces, corpora, models, similarities, summarization ImportError: cannot import name parsing Also, when I view "__init__.py" it contains only the following lines: # bring model classes directly into package namespace, to save some typing from .summarizer import summarize, summarize_corpus from .keywords import keywords Any idea on how to solve this problem is highly appreciated. I'm using: Mac 10.10.5 and Python 2.7. Thank you
importing gensim in mac
0
0
0
1,905
34,025,020
2015-12-01T16:21:00.000
1
0
1
0
python,c++,mingw
34,035,423
1
true
0
0
Just remove the -mno-cygwin option from the Makefile and you are in.
1
0
0
Long version: I have a Python package that is written in C++ (called leven); to install it I need to build the leven package by compiling it in C++. For this I tried using Visual C++ and MinGW. The error in Visual C++ was generic, so I decided to use MinGW instead (which I already had installed since I used to use Code::Blocks). The problem is that when trying to build I get the following error message: "mno-cygwin unrecognized". After some research, it appears that this option has been deprecated (people said finally!). However, I need it to finish installing my package. So my question is, how can I install a previous version of MinGW? Should I uninstall Code::Blocks and the current MinGW version? tl;dr: I need to install a previous version of the MinGW C++ compiler.
How to install older mingw versions to install python package (leven)
1.2
0
0
139
34,025,404
2015-12-01T16:38:00.000
0
0
0
0
python,scikit-learn
34,321,420
1
true
0
0
The answer is: it is possible. However, the feature is only available for the binary case in the stated question, as explained by @AndreasMueller.
1
0
1
Reading the scikit-learn documentation and looking for similar topics I couldn't figure out an answer. Can I apply GridSearchCV having scoring="roc_auc" on Random Forest or Decision Trees without any drawback? Thank you in advance for any clarification.
scoring="roc_auc" on GridSearchCV for RF and DT
1.2
0
0
456
34,027,346
2015-12-01T18:20:00.000
3
0
1
0
python,algorithm,hash,hash-function
34,027,631
1
false
0
0
As others have said here, this is an advanced topic, and you shouldn't try to make a feasible Hash function unless you know what you're doing. However, if you want to understand the basics of hashing, here are some things to think about. Equivalent output: In every Hash function, you should be able to get the same output for every input that is identical to each other, such that, hash(8) = 'y758tff' should be 'y758tff' every time hash(8) is called. Avoiding Collisions: Good Hashing functions give unique outputs for as many inputs as possible. Meaning, Hash(n) and Hash(x), should not give the same Hash output, and if it has to happen, it should be very rare. Irreversibility: A good hash function, will be near impossible to reverse back to its key. Meaning, for every Hash(n) = N, there should be no function so that function(N) = n. As an example, if you had a hash function that simply reverses the input, it would be very easy to make a function that reverses that Hash output. Identical lengths of keys: Regardless of the length of an input for a good hash function, the output must be the same length of all inputs. Such that, Hash('a') = '46fhur78', and Hash('Tomatoes') = 'yfih78rr', both length of 8.
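To make those properties concrete, here is a purely educational toy hash in the FNV-1a style; it is absolutely not suitable for real password storage (use a vetted library for that), and the constants and output alphabet are only illustrative.

```python
# Educational toy only -- never hand-roll a hash for real passwords.
def toy_hash(text, length=8):
    h = 2166136261                         # FNV-1a 32-bit offset basis
    for ch in text.encode('utf-8'):
        h ^= ch
        h = (h * 16777619) & 0xFFFFFFFF    # FNV prime, kept to 32 bits
    alphabet = '0123456789abcdefghijklmnopqrstuvwxyz'
    digest = ''
    for _ in range(length):                # fixed-length output
        digest += alphabet[h % len(alphabet)]
        h = (h * 16777619 + 2166136261) & 0xFFFFFFFF
    return digest

print(toy_hash('heyguys'))   # same input always gives the same 8 characters
print(toy_hash('hayguys'))   # one changed letter gives a very different digest
```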
1
1
0
I am modeling the user sign-in and account creation of a social network and have to create a hash function (not using hashlib) using my own hashing algorithm. The point of it is to take a password and hash it so that it becomes a random string of letters and numbers. The hashed password should also change dramatically when only one letter of the password is changed. For example, if "heyguys" goes to 7h8362, "hayguys" would go to something totally different like "bbb362". A small change in input string should result in a large change in output string. The reason I am doing this is because I am storing user data in a dictionary and it is dangerous to store a password in plaintext. How would I go about doing this? I am a beginner and know hashlib but other than that, I cannot seem to figure out where to even begin.
Implement my own hash algorithm for a hash function in Python
0.53705
0
0
3,508
34,031,206
2015-12-01T22:13:00.000
1
0
0
0
python,matplotlib,interactive
34,044,627
1
true
0
0
Place the axes in a list or dictionary when creating them. Then, when a pick event occurs, match the event's axis against the dictionary. Thank you all.
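A minimal sketch of that dictionary approach follows; the plotted data and the picker tolerance are arbitrary.

```python
# Map each Axes object to its index, then look the axis up on a pick event.
import matplotlib.pyplot as plt
import numpy as np

fig, axes = plt.subplots(3, 1)
axis_index = {ax: i for i, ax in enumerate(axes)}

for ax in axes:
    ax.plot(np.random.rand(10), picker=5)   # picker tolerance in points

def on_pick(event):
    ax = event.artist.axes                  # the Axes the picked artist lives in
    print("picked subplot", axis_index[ax])

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()
```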
1
0
1
So I have three matplotlib subplots. I can use a pick event to pull off and re-plot the data in any one of the subplots. Is it possible to read the pick event and find out which subplot number was selected?
Find Subplot Number from Matplotlib Pick Event
1.2
0
0
531
34,031,327
2015-12-01T22:21:00.000
1
0
0
0
python,django,python-3.4,mezzanine,python-3.5
34,031,385
3
true
1
0
As of today, yes, it is probably best to downgrade to Python 3.4. With Django 1.8, the current release of Django, Python 3.5 is not officially supported. The 1.9 release of Django will officially support Python 3.5, but that is not a guarantee that your 3rd party libraries will as well. Ensuring that will likely come down to a matter of testing, and checking the compatibility of each of your 3rd party apps. EDIT: As noted by knbk, Django 1.8.6 did add official support for Python 3.5. However, this does not invalidate the possibility that your other libraries may not yet support Python 3.5.
1
1
0
I just installed Python 3.5 and created a virtual environment with it. I installed Mezzanine (a Django CMS) and tried to run the manage.py file and migrate and syncdb, etc. I've been getting constant errors with 3.5 and I think the reason is that 3.5 has changed some things that Mezzanine depends on. Is it a good idea to downgrade 3.5 to 3.4? Or will I have more problems when upgrading later if I don't adapt to the changes now? Maybe a very fuzzy question, but I come from 2.7 and I think a lot has changed. I don't know what to do :)
Should I downgrade Python 3.5 to 3.4?
1.2
0
0
1,774
34,031,623
2015-12-01T22:43:00.000
0
1
0
0
python,matlab,python-interactive,interactive-shell
34,031,693
3
false
0
0
Use iPython or some other Python shell. There are plenty. You may even program your own that will do whatever you want.
1
0
0
I recently switched from Matlab to Numpy and love it. However, one really great thing I liked about Matlab was the ability to complete commands. There are two ways that it does this: 1) tab completion. If I have a function called foobar(...), I can do 'fo' and it will automatically fill in 'foobar' 2) "up-button" completion (I'm not sure what to call this). If I recently entered a command such as 'x = linspace(0, 1, 100); A = eye(50);' and then I wish to quickly type in this same command so that I can re-evaluate it or change it slightly, then I simply type 'x =' then press up and it will cycle through all previous commands you typed that started with 'x ='. This was an awesome awesome feature in Matlab (and if you have heard of Julia, it has done it even better by allowing you to automatically re-enter entire blocks of code, such as when you are defining functions at the interactive prompt) Both of these features appear to not be present in the ordinary python interactive shell. I believe tab autocomplete has been discussed before and can probably be enabled using the .pythonrc startup script and some modules; however I have not found anything about "up-button" completion. Python does have rudimentary up-button functionality that simply scrolls through all previous commands, but you can't type in the beginning of the command and have that narrow down the range of commands that are scrolled through, and that makes a huge difference. Anyone know any way to get this functionality on the ordinary python interactive shell, without going to any fancy things like IPython notebooks that require separate installation?
python "up-button" command completion, matlab/julia style
0
0
0
191
34,031,727
2015-12-01T22:51:00.000
0
0
0
0
python,omxplayer
34,075,563
2
false
0
1
My solution is to use hello_font from the hello_pi examples on the Raspberry Pi.
1
0
0
I own a Raspberry Pi 2 and I am starting to learn Python. I would like to do something very basic: show the window of my Python program on top of the omxplayer window, like a notification system. I have been able to make an "always on top" window with Tkinter, but when I launch omxplayer my window is no longer on top. I would appreciate some help! Thanks
Python + TKinter + OMXPlayer window on top
0
0
0
985
34,032,055
2015-12-01T23:17:00.000
3
1
1
0
python,profiler,cprofile
59,113,102
2
false
0
0
I run the Python program with -m cProfile. Example: python -m cProfile <myprogram.py> This requires zero changes to myprogram.py.
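If you prefer to keep profiling inside the code while still passing command-line arguments, a programmatic sketch along these lines should work; loadBMPImage is the function named in the question, and printing the top 10 entries is an arbitrary choice.

```python
# Profile an explicit region of code; sys.argv is available as usual.
import cProfile
import pstats
import sys

profiler = cProfile.Profile()
profiler.enable()

loadBMPImage(sys.argv[1])   # run the code to be measured, with its CLI argument

profiler.disable()
pstats.Stats(profiler).sort_stats('cumulative').print_stats(10)
```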
1
1
0
I have a program in Python that takes in several command line arguments and uses them in several functions. How can I use cProfile (within my code)to obtain the running time of each function? (I still want the program to run normally when it's done). Yet I can't figure out how, for example I cannot use cProfile.run('loadBMPImage(sys.argv[1])') to test the run time of the function loadBMPImage. I cannot use sys.argv[1] as an argument. Any idea how I can use cProfile to test the running time of each function and print to stdout, if each function depends on command line arguments? Also the cProfile must be integrated into the code itself. Thanks
How can I use the cProfile module to time each function in python?
0.291313
0
0
2,680
34,033,149
2015-12-02T01:02:00.000
1
0
0
0
python,pygame,pygame-surface
34,033,205
1
true
0
1
I do not believe the existing API provides a way to do this. I think the intended use is to convert all your surfaces (why wouldn't you?) so you never have to worry about it. Perhaps it is possible to subclass pygame.Surface and override the convert methods to set a flag in the way you wish.
1
1
0
The title says it all, really. I am writing functions that deal with pygame.Surface objects from multiple sources. Among other operations, these functions will ensure that the Surface objects they return have been convert()ed at least once (or, according to user preference, convert_alpha()ed), as is required to optimize them for blitting in the current display mode. But I don't want to run the the convert() or convert_alpha() methods needlessly since they create copies of the surface and therefore take up time and memory. How do I tell whether I need to do it? I have looked at the output of S.get_flags() before and after S = S.convert_alpha() but it doesn't seem to change. The scalar value of S.get_alpha() does change (from 255 to 0) but I'm not convinced that's meaningful or reliable (and it doesn't solve the problem of knowing whether you have to .convert() in the case where alpha blending is not desired).
how can you tell whether convert()/convert_alpha() has already been run on a pygame.Surface?
1.2
0
0
86
34,033,701
2015-12-02T02:07:00.000
18
0
0
0
python,turtle-graphics
42,260,054
2
true
0
1
I want to clarify what various turtle functions do as there are misunderstandings in this discussion, including in the currently accepted answer, as the method names themselves can be confusing: turtle.mainloop() aka turtle.Screen().mainloop() Turns control over to tkinter's event loop. Usually, a lack of turtle.Screen().mainloop() (or turtle.Screen().exitonclick(), etc.) will cause the window to close just because the program will end, closing everything. This, or one of its variants, should be the last statement in a turtle graphics program unless the script is run from within Python IDLE -n. turtle.done() (Does not close the window nor reset anything.) A synonym for turtle.mainloop() turtle.clear() Deletes everything this turtle has drawn (not just the last thing). Otherwise doesn't affect the state of the turtle. turtle.reset() Does a turtle.clear() and then resets this turtle's state (i.e. direction, position, etc.) turtle.clearscreen() aka turtle.Screen().clear() Deletes all drawing and all turtles, resetting the window to its original state. turtle.resetscreen() aka turtle.Screen().reset() Resets all turtles on the screen to their initial state. turtle.bye() aka turtle.Screen().bye() Closes the turtle graphics window. I don't see a way to use any turtle graphics commands after this is invoked. turtle.exitonclick() aka turtle.Screen().exitonclick() After binding the screen click event to do a turtle.Screen().bye(), it invokes turtle.Screen().mainloop(). It's not clear that you can close and reopen the graphics window from within turtle without dropping down to the tkinter level that underpins turtle (and Zelle's graphics.py). For purposes of starting a new hand in your blackjack game, I'd guess turtle.reset() or turtle.resetscreen() are your best bet.
1
7
0
I am a making a blackjack game with cards using turtle and each time I play a hand turtle just prints over the last game instead of clearing the window. Is there a method that closes the window when it is called or is there another why of doing this?
Python: How to reset the turtle graphics window
1.2
0
0
46,320
34,034,812
2015-12-02T04:14:00.000
0
0
1
0
python
35,604,298
3
false
0
0
Hint: to make your classes comparable you can use __cmp__ (Python 2 only). The method is called with the two objects being compared. Return a positive value if the first one is bigger, a negative value if the second one is bigger, and zero if they have the same size. You don't have to use the magic methods for every possibility.
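Since __cmp__ only exists in Python 2, here is a hedged Python 3 sketch using the rich comparison methods, which cover the same ground; the Box class and its size attribute are made up for illustration.

```python
# total_ordering fills in <=, > and >= from __eq__ and __lt__.
import functools

@functools.total_ordering
class Box:
    def __init__(self, size):
        self.size = size

    def __eq__(self, other):
        return self.size == other.size

    def __lt__(self, other):
        return self.size < other.size

print(Box(2) < Box(3))    # True
print(Box(2) >= Box(3))   # False, provided by total_ordering
```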
1
2
0
Based on my understanding, magic methods such as __str__, __next__ and __setattr__ are built-in features of Python. They are called automatically, for example when an instance object is created. They also play a role in overriding behaviour. What other important features of magic methods am I omitting or ignoring?
what is the role of magic method in python?
0
0
0
486
34,035,329
2015-12-02T05:08:00.000
0
0
0
0
android,python,apk,kivy
34,042,147
1
false
0
1
You need to provide information about what goes wrong when you try to run the VM - nobody can help if we know only that something doesn't work. Alternatively, you can make a new VM with any linux distro (a recent Ubuntu is a good choice) and install all the dependencies. Kivy's VM is nothing special except that it has them preinstalled.
1
1
0
I am trying to build an Android app from a Kivy application. I used the Buildozer image given on the kivy.org downloads page, but that virtual machine never runs in VirtualBox. Is there any other way to do this? I have seen some SO questions regarding this, but they seem to be very old and haven't been of much help so far.
Building Android apk from kivy application
0
0
0
145
34,035,729
2015-12-02T05:45:00.000
0
0
1
0
python,debugging,python-idle
34,047,565
1
true
0
0
No, not as you meant the question. One thing on my list of improvements for the debugger is to suppress, by default, reserved __xyz__ names so that the globals list starts empty and only shows names created by user code. However, import is a disguised assignment statement and, as fodma1 says, import * can flood globals with multiple assignments. These are all names 'created with user code'. So yes, you can avoid this. If you don't want to type pylab over and over, abbreviate on input with as P or whatever. Or just import the specific names you need, if there aren't too many. I ran into this same issue yesterday trying to debug SO question code starting with from tkinter import *. len(tkinter.__dict__) == 165. (Fortunately, most begin with a capital letter and the user's names did not.)
1
0
0
I'd like to use IDLE's Debug mode to watch my variables. Unfortunately the Local and Global lists in the Debug window are filled with hundreds of classes, types and functions, which must be from importing pylab. This makes Debug difficult to use since I have to pick through a huge list to find my variables. Is there any way I can simply watch the dozen or so variables I've used in my program? Many thanks. Update I took Terry's good advice and instead of from pylab import * I imported only the individual names the code needed. Now it is possible to watch the relevant variables during Debug.
IDLE for Python - tracing only my variables in Debug mode
1.2
0
0
546
34,043,652
2015-12-02T13:12:00.000
0
0
0
0
python,c++,swig
34,044,494
2
false
0
0
Without more information it's difficult to offer a solution, however: Did you write the library? If so can you rework it to throw a logic_error instead of calling exit? If the library is calling exit, this implies an utterly catastrophic failure. The internal state of the library could well be inconsistent (you ought to assume that it is!) - are you sure you want to continue the process after this? If you didn't write the library you're not in a position to reason about this. If you did, see above. Perhaps you could write a wrapper process around the library, and marshall calls across the process boundary? This will be slower in execution and more painful to write and maintain, but it will allow the parent process (python) to detect the termination of the child (the library wrapper).
1
3
0
I'm writing a SWIG Python wrapper for a C++ library. When a critical error occurs, the library calls exit(err);, which in turn terminates the whole Python script that executes functions from that library. Is there a way to wrap the exit() function so it returns to the script or throws an exception?
Calling exit() in C++ library terminates python script that wraps that library using swig
0
0
0
2,561