Q_Id (int64, 337 to 49.3M) | CreationDate (string, lengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, lengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, lengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, lengths 15 to 29k) | Title (string, lengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
35,726,948 | 2016-03-01T15:32:00.000 | 5 | 0 | 0 | 1 | python,celery,flower | 38,764,411 | 2 | false | 1 | 0 | You can use the persistent option, e.g.: flower -A ctq.celery --persistent=True | 1 | 5 | 0 | I am building a framework for executing tasks on top of the Celery framework.
I would like to see the list of recently executed tasks (for the recent 2-7 days).
Looking at the API I can find the app.backend object, but cannot figure out how to make a query to fetch tasks.
For example I can use backends like Redis or a database, but I do not want to explicitly write SQL queries against the database.
Is there a way to work with task history/results with API?
I tried to use Flower, but it can only handle events and cannot get history before its start. | Celery task history | 0.462117 | 0 | 0 | 4,109 |
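For the question above, a hedged sketch of querying stored results directly from the backend. It assumes a Redis result backend with the default celery-task-meta- key prefix; the myapp module exposing the Celery app is hypothetical:

```python
import json
from myapp import app  # hypothetical module exposing your Celery app

redis = app.backend.client  # the underlying redis-py client of a Redis result backend
for key in redis.scan_iter('celery-task-meta-*'):
    meta = json.loads(redis.get(key))       # stored result metadata for one task
    print(key, meta['status'], meta.get('date_done'))
```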
35,731,339 | 2016-03-01T19:08:00.000 | 0 | 0 | 0 | 0 | python,arrays,performance | 35,731,697 | 2 | true | 0 | 0 | It depends on your requirement. If you pass an array to the function, Python passes a reference to that array, whereas if you pass the array in the form of unpacked arguments, the objects comprising the array are passed.
If the function does not modify the order of your original array, it is better to pass it by reference, i.e. as an argument. | 2 | 0 | 1 | Sorry in advance, I know almost nothing about efficiency, so I might need a little extra help.
I have a function that calls another function and it passes in pretty big arrays. Due to limitations in memory, I've just been saving the arrays to pickle files and unpickling inside the second function. Just for context, I have 64 pickle files that are each 47 megabytes (3 gigs total). I'm calling this function upwards of 100,000 times. I'm assuming that unpickling every time is far less efficient than unpickling once and passing the arrays, but I was wondering on the order of how much time I'd be losing by doing it the way I'm doing it now and whether there are more efficient ways of doing this. | Efficiency of passing arrays to a function vs unpickling the array inside the function in Python? | 1.2 | 0 | 0 | 45 |
35,731,339 | 2016-03-01T19:08:00.000 | 0 | 0 | 0 | 0 | python,arrays,performance | 35,732,141 | 2 | false | 0 | 0 | If your memory limitations don't allow keeping all the files in memory, you will have to read the file each time and pass the list to the other function (it will be passed by reference). | 2 | 0 | 1 | Sorry in advance, I know almost nothing about efficiency, so I might need a little extra help.
I have a function that calls another function and it passes in pretty big arrays. Due to limitations in memory, I've just been saving the arrays to pickle files and unpickling inside the second function. Just for context, I have 64 pickle files that are each 47 megabytes (3 gigs total). I'm calling this function upwards of 100,000 times. I'm assuming that unpickling every time is far less efficient than unpickling once and passing the arrays, but I was wondering on the order of how much time I'd be losing by doing it the way I'm doing it now and whether there are more efficient ways of doing this. | Efficiency of passing arrays to a function vs unpickling the array inside the function in Python? | 0 | 0 | 0 | 45 |
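A minimal sketch contrasting the two approaches from the answers above; the worker function and file name are hypothetical:

```python
import pickle

def crunch(arr):                       # hypothetical worker function
    return sum(arr[:10])

# Approach 1: unpickle inside the function on every call (repeated disk I/O).
def crunch_from_disk(path):
    with open(path, 'rb') as f:
        return crunch(pickle.load(f))

# Approach 2: unpickle once, then pass the reference. No copy is made,
# so the per-call cost is negligible compared to re-reading 47 MB each time.
with open('chunk_00.pkl', 'rb') as f:  # hypothetical file
    arr = pickle.load(f)
for _ in range(100000):
    crunch(arr)
```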
35,731,438 | 2016-03-01T19:13:00.000 | 1 | 0 | 0 | 0 | python,django | 35,732,320 | 2 | false | 1 | 0 | Python/Django is modular.
An app should include just those models which solve one concrete task.
If some of the models from point 1 can be useful in other tasks, it would probably be better to create new apps for those models. That is, if some models are shared between multiple tasks, it makes sense to put those models in their own apps.
For example, say you have a forum app. This forum has features like polls, registration, PMs, etc. Logically everything seems to belong together. However, if your site is just a forum - OK; but if there is other content, for example blogs with comments, then the "registration model" can be made a separate app and shared between parts of the site such as "blogs with comments" and "forum".
Regarding admin/frontend: I've seen apps/projects with more than 10 models together. Based on the forum example above, if the admin part does not do any task out of scope of your app, then I would put admin and front-end inside one app. Otherwise, if admin relates to another task that is out of scope of your main app, admin should be a separate app. | 1 | 2 | 0 | I'm building a Django app with a number of models (5-10). There would be an admin side, where an end-user manages the data, and then a user side, where other end-users can only read the data. I understand the point of a Django app is to encourage modularity, but I'm unsure as to "where the line is drawn" so-to-speak.
In "best practices terms": Should each model (or very related groups of models) have their own app? Should there be an 'admin' app and then a 'frontend' app?
In either case, how do the other apps retrieve and use models/data inside other apps? | Django - What constitutes an app? | 0.099668 | 0 | 0 | 100 |
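On the last point in the question above, one app uses another app's models with a plain import; a hedged sketch with hypothetical app, model, and template names:

```python
# blog/views.py
from django.shortcuts import render
from accounts.models import Profile   # model defined in the separate 'accounts' app

def author_page(request, user_id):
    profile = Profile.objects.get(user_id=user_id)
    return render(request, 'blog/author.html', {'profile': profile})
```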
35,732,758 | 2016-03-01T20:25:00.000 | 1 | 0 | 0 | 0 | mysql,django,python-3.x,django-database | 35,777,867 | 2 | false | 1 | 0 | So here is the answer for all the django (or coding in general) noobs like me.
python manage.py createcachetable
I totally forgot about that and this caused all the trouble with "app_cache doesn't exist". At least in this case...
I changed my database to PostgreSQL, but I am sure it also helps with MySQL... | 2 | 0 | 0 | Hello everybody this is my first post,
I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine.
I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked.
But when I try to register a new account or logging into the admin (made createsuperuser before) I get this Error:
(1146, "Table 'community_db.app_cache' doesn't exist")
I restarted the server and restarted command prompt.
What also confuses me is the next row:
C:\Python34\lib\site-packages\MySQLdb\connections.py in query, line 280
I was reading that there isn't any MySQLdb for Python 3
It would be nice if there is any help. I have already spent such a long time on this website and I tried to solve this problem like all the other ones before, but for this one I can't find any help via Google/Stack Overflow. I don't know what to do. | Django - MySQL : 1146 Table doesn't exist | 0.099668 | 1 | 0 | 2,782 |
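For context on the accepted fix: a database cache configured like this in settings.py is what makes Django look for a cache table; a hedged sketch where the table name is hypothetical:

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'app_cache',   # the table that 'manage.py createcachetable' creates
    }
}
```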
35,732,758 | 2016-03-01T20:25:00.000 | 0 | 0 | 0 | 0 | mysql,django,python-3.x,django-database | 35,733,218 | 2 | false | 1 | 0 | I would assume this is an issue with permissions, as in the web page connects with a database user that doesn't have the proper permissions to create content.
If your tables are InnoDB, you'll get the table doesn't exist message. You need the ib* files in the root of the MySQL datadir (e.g. ibdata1, ib_logfile0 ib_logfile1)
If you don't have these files, you might need to fix permissions by logging directly into your DB | 2 | 0 | 0 | Hello everybody this is my first post,
I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine.
I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked.
But when I try to register a new account or logging into the admin (made createsuperuser before) I get this Error:
(1146, "Table 'community_db.app_cache' doesn't exist")
I restarted the server and restarted command prompt.
What also confuses me is the next row:
C:\Python34\lib\site-packages\MySQLdb\connections.py in query, line 280
I was reading that there isn't any MySQLdb for Python 3
It would be nice if there is any help. I have already spent such a long time on this website and I tried to solve this problem like all the other ones before, but for this one I can't find any help via Google/Stack Overflow. I don't know what to do. | Django - MySQL : 1146 Table doesn't exist | 0 | 1 | 0 | 2,782 |
35,735,669 | 2016-03-01T23:26:00.000 | 4 | 0 | 1 | 0 | python-3.x,python-3.5,decompiler | 46,968,054 | 3 | false | 0 | 0 | To decompile compiled Python 3 .pyc files, I used uncompyle6 on my current Ubuntu OS as follows:
(i) Install uncompyle6:
pip3 install uncompyle6
(ii) To create a .py file from a .pyc file, run:
uncompyle6 -o . your_filename.pyc
(iii) A new .py file will be created automatically with the same name as the existing .pyc file.
Hope this will help. | 1 | 6 | 0 | I have been searching around and haven't been able to find anything that can help me decompile Python 3.5 .pyc files. Does anyone know of a tool? | How do I decompile Python 3.5 .pyc? | 0.26052 | 0 | 0 | 22,171 |
35,737,093 | 2016-03-02T02:03:00.000 | 0 | 0 | 0 | 0 | python,function,language-features,pos-tagger,crf | 38,946,931 | 1 | false | 0 | 0 | I recommend using the CRF tagger; it's very easy. | 1 | 0 | 0 | I am using a CRF POS tagger in Python, trained on the English PTB sample corpus, and the result is quite good.
Now I want to train the CRF on a large Vietnamese corpus. I need to add some Vietnamese features to this tagger, like proper name, date-time, number, etc. I tried for days but cannot figure out how to do that. I already know the format of the data, so that is not a problem.
I am quite new to Python, so any detailed answer would be helpful. Thanks. | How to add specific features to CRF POS Tagger in Python? | 0 | 0 | 0 | 293 |
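A hedged sketch of what such custom features could look like, written in the feature-dict style used by python-crfsuite/sklearn-crfsuite; the regexes and feature names are illustrative only:

```python
import re

def word2features(sent, i):
    word = sent[i]
    return {
        'word.lower': word.lower(),
        'word.istitle': word.istitle(),   # crude proper-name signal
        'word.isdigit': word.isdigit(),   # number feature
        'word.isdate': bool(re.match(r'\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$', word)),
        'prev.word': sent[i - 1].lower() if i > 0 else '<BOS>',
        'suffix3': word[-3:],
    }
```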
35,737,188 | 2016-03-02T02:13:00.000 | 0 | 0 | 1 | 0 | python-2.7,anaconda,pyinstaller | 47,044,598 | 2 | false | 0 | 1 | Try using --hidden-import=matplotlib when calling PyInstaller. For example, in the command prompt you would type:
pyinstaller --hidden-import=matplotlib your_filename_here.py
and you could also try adding tkinter in there as well:
pyinstaller --hidden-import=matplotlib --hidden-import=tkinter your_filename_here.py | 1 | 1 | 0 | I am using PyInstaller (after spending a long time with py2exe) to convert my REAL.py file to a .exe. I used Anaconda to make the .py file, which runs perfectly on my computer. But when I make the .exe file, it shows no error and an application is created in the dist\REAL folder. But when I run the .exe file, the console opens and closes instantly.
It should ideally show a GUI window, take inputs, and use them to make plots. It does so when I run the REAL.py file. I am using Tkinter, Matplotlib, numpy, and scipy, which come with Anaconda.
EDIT: I tried to run a simple piece of code to check compatibility with matplotlib:
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
The same issue persists with this. Opens console window and then closes but no plot is given out. | Pyinstaller with Tkinter Matplotlib numpy scipy | 0 | 0 | 0 | 2,158 |
35,738,199 | 2016-03-02T03:54:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,histogram | 60,751,250 | 9 | false | 0 | 0 | If you are working with pandas, make sure the data you pass to plt.hist() is a 1-d Series rather than a DataFrame. This helped me out. | 3 | 17 | 1 | I'm plotting about 10,000 items in an array. They take around 1,000 unique values.
The plotting has been running for half an hour now, and I made sure the rest of the code works.
Is it that slow? This is my first time plotting histograms with pyplot. | Matplotlib.pyplot.hist() very slow | 0 | 0 | 0 | 21,439 |
35,738,199 | 2016-03-02T03:54:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,histogram | 58,707,409 | 9 | false | 0 | 0 | For me, calling figure.canvas.draw() after the call to hist made it update immediately; i.e. hist was actually fast (discovered after timing it), but there was a delay of a few seconds before the figure was updated. I was calling hist inside a matplotlib callback in a JupyterLab cell (qt5 backend). | 3 | 17 | 1 | I'm plotting about 10,000 items in an array. They take around 1,000 unique values.
The plotting has been running for half an hour now, and I made sure the rest of the code works.
Is it that slow? This is my first time plotting histograms with pyplot. | Matplotlib.pyplot.hist() very slow | 0 | 0 | 0 | 21,439 |
35,738,199 | 2016-03-02T03:54:00.000 | 2 | 0 | 0 | 0 | python,matplotlib,histogram | 56,879,388 | 9 | false | 0 | 0 | For me, the problem was that the dtype of the pd.Series, say S, was 'object' rather than 'float64'. After I use S = np.float64(S), plt.hist(S) is very quick. | 3 | 17 | 1 | I'm plotting about 10,000 items in an array. They take around 1,000 unique values.
The plotting has been running for half an hour now, and I made sure the rest of the code works.
Is it that slow? This is my first time plotting histograms with pyplot. | Matplotlib.pyplot.hist() very slow | 0.044415 | 0 | 0 | 21,439 |
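A hedged sketch reproducing the dtype fix from the last answer; the Series here is synthetic:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

s = pd.Series(np.random.randn(10000)).astype(object)  # simulates the slow object-dtype case
fast = s.astype(np.float64)   # cast once before plotting
plt.hist(fast, bins=100)      # hist over float64 is quick
plt.show()
```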
35,741,698 | 2016-03-02T08:14:00.000 | 1 | 0 | 0 | 0 | python,dxf | 36,529,617 | 2 | true | 0 | 0 | dxfgrabber and ezdxf are just interfaces to the DXF format and do not provide any kind of CAD or calculation functions; the geometrical lengths of DXF entities are not available as attributes in the DXF format. | 1 | 3 | 0 | I am trying to find the total length (perimeter) and area of a spline from a DXF file.
Is there any function in dxfgrabber or ezdxf to find the total length of an entity from a DXF file? | how to find length of entity from dxf file using dxfgrabber or ezdxf packages | 1.2 | 0 | 0 | 3,245 |
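Since the length is not stored in the file, you compute it yourself from the entity's points. A hedged sketch for polylines with ezdxf (API details may differ by version, the file name is hypothetical, and a true spline length needs the curve flattened into points first):

```python
import math
import ezdxf

doc = ezdxf.readfile('drawing.dxf')      # hypothetical file
msp = doc.modelspace()
for pline in msp.query('LWPOLYLINE'):
    pts = list(pline.get_points('xy'))
    # sum of segment lengths between consecutive vertices
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    print(pline.dxf.handle, length)
```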
35,742,024 | 2016-03-02T08:33:00.000 | 5 | 0 | 1 | 0 | python,python-2.7,modulo | 35,742,052 | 1 | false | 0 | 0 | Since * and % have exactly the same precedence, associativity comes into play. Since both operators are evaluated from left to right (i.e. the associativity is from left to right), your expression is equivalent to
(25 * 3) % 4
which is, of course, 75 % 4, which is also 3. | 1 | 2 | 0 | Going through a tutorial, I've learnt that the modulo operator returns the remainder of a division. Thus, for example, 3 % 4 equals 3.
But I don't seem to understand how 25 * 3 % 4 = 3. What happened to the 25?
I've run the expression in PowerShell as well as in the online Google calculator, and both return the same result. Would anyone kindly explain this? | Use of % modulo function | 0.761594 | 0 | 0 | 99 |
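A quick check of the left-to-right evaluation in the interpreter:

```python
print(25 * 3 % 4)    # (25 * 3) % 4 -> 75 % 4 -> 3
print(25 * (3 % 4))  # 25 * 3 -> 75; parentheses change the result
```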
35,742,472 | 2016-03-02T08:56:00.000 | 0 | 0 | 0 | 0 | python,plugins,qgis | 35,810,383 | 2 | false | 0 | 0 | You could add the QGIS plugin folder to your path. That way you should be able to import the plugins as modules. | 1 | 4 | 0 | I would like to access external QGIS plugins through a Python script. I have been able to access the built-in QGIS processing and vector toolboxes, but have been unsuccessful with external plugins such as the topology checker plugin. I have tried this both using the built-in QGIS Python console and an external IDE, but my attempts have failed.
I am sure that there is a way to do this; has someone done this before?
Thank you! | How to run QGIS plugin from python script | 0 | 0 | 0 | 2,798 |
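A hedged sketch, meant for the QGIS Python console, of reaching loaded plugins through qgis.utils; the plugin key and the run() entry point are assumptions about the specific plugin:

```python
import qgis.utils

print(qgis.utils.plugins.keys())            # names of the currently loaded plugins
plugin = qgis.utils.plugins['someplugin']   # hypothetical plugin name
plugin.run()                                # many plugins expose a run() entry point
```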
35,742,708 | 2016-03-02T09:08:00.000 | 1 | 1 | 0 | 0 | python,c,html,websocket,autobahn | 35,795,470 | 1 | false | 1 | 0 | Autobahn|Python provides both a WebSocket implementation for Python and an implementation of a client for the WAMP protocol on top of that.
You can use the WebSocket part on its own to implement your WebSocket server. | 1 | 0 | 0 | I'm researching WebSockets at the moment and just found Autobahn with Autobahn|Python.
I'm not sure that I understand the function of that toolset correctly.
My intention is to use a WebSocket-Server for communication between a C program and a HTML client.
The idea is to let the C program connect via WebSocket to the Server and send the calculation progress of the C program to every HTML client that is connected to that WebSocket-Server.
Am I able to write a WebSocket Server with Autobahn|Python and then connect with an HTML5-client and a C program client? | Is the following principle the right for autobahn python? | 0.197375 | 0 | 1 | 59 |
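A minimal hedged sketch of a broadcast server with Autobahn|Python (Twisted flavor); both the HTML5 client and the C program would connect as ordinary WebSocket clients. The connection bookkeeping is simplified for illustration:

```python
from autobahn.twisted.websocket import (WebSocketServerFactory,
                                        WebSocketServerProtocol)
from twisted.internet import reactor

clients = []

class BroadcastProtocol(WebSocketServerProtocol):
    def onOpen(self):
        clients.append(self)

    def onMessage(self, payload, isBinary):
        for c in clients:                 # relay progress updates to every client
            c.sendMessage(payload, isBinary)

    def onClose(self, wasClean, code, reason):
        if self in clients:
            clients.remove(self)

factory = WebSocketServerFactory(u"ws://127.0.0.1:9000")
factory.protocol = BroadcastProtocol
reactor.listenTCP(9000, factory)
reactor.run()
```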
35,749,102 | 2016-03-02T13:51:00.000 | 0 | 0 | 1 | 0 | python,python-jedi | 36,939,902 | 3 | false | 0 | 0 | This is currently not something that is supported in Jedi. You could certainly do it, but not with the public API. There are two things that Jedi's API is currently missing:
Getting the class/function by position (You can get this by playing with jedi's Parser).
Getting the code once you have your class. This is very easy: node.get_code()
Try to play with jedi.parser.Parser. It's quite a powerful tool, but not yet publicly documented. | 1 | 0 | 0 | Basically I want to use jedi to retrieve a function's or a class' code from the details of its definition(s) (path, line, column). To be more explicit, I want to get the code from a file statically, without executing it. | Get a function/class code from a file knowing the line and the column of it's definition | 0 | 0 | 0 | 53 |
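If Jedi's private parser feels too unstable, the same static extraction can be done with the stdlib ast module instead; a hedged sketch with deliberately crude end-of-definition detection:

```python
import ast

def code_at(path, lineno):
    source = open(path).read()
    lines = source.splitlines()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)) and node.lineno == lineno:
            # crude end detection: deepest line number found inside the definition
            end = max(getattr(n, 'lineno', node.lineno) for n in ast.walk(node))
            return '\n'.join(lines[node.lineno - 1:end])
    return None
```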
35,750,276 | 2016-03-02T14:39:00.000 | 2 | 0 | 0 | 1 | python,macos,pip,homebrew,pyaudio | 35,867,245 | 2 | true | 0 | 0 | You can try export MACOSX_DEPLOYMENT_TARGET='desired value' in Terminal just before you run the installation process. | 1 | 2 | 0 | I used brew to install port audio.
I then tried pip install pyaudio.
I get:
error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.9" but "10.11" during configure
How can I set the MACOSX_DEPLOYMENT_TARGET so that I don't get this error? | Trying to install PyAudio on OS X ( 10.11.3) | 1.2 | 0 | 0 | 345 |
35,755,572 | 2016-03-02T18:45:00.000 | 32 | 0 | 1 | 0 | python,visual-studio,visual-studio-2015 | 37,321,206 | 3 | false | 0 | 0 | Tools -> Options -> Text editor -> Python -> Tabs and set it to Smart | 1 | 18 | 0 | This is probably a simple issue but something is wrong with my Python tools for visual studio. When I first started using VS2015 for Python it would auto-indent whenever I used a colon. Now VS2015 is just acting like a text editor with syntax highlighting. I tried uninstalling and reinstalling the Python tools but that did not work. How do I fix Visual Studio to auto-style as I write Python again? | Visual Studio, Python not auto-indenting | 1 | 0 | 0 | 12,572 |
35,758,928 | 2016-03-02T21:56:00.000 | 1 | 0 | 1 | 0 | python,virtualenv | 35,759,297 | 1 | false | 0 | 0 | virtualenv is a good option if you are transferring the folder between two machines running the same operating system.
In order to include the corresponding site-packages that are already installed on your computer, install them inside the virtualenv context by running pip install in the virtualenv shell.
You could use pip freeze to get a list of the Python packages installed on your computer.
You could then include a .bat file (if it is a Windows system) or a .sh file (if it is a Linux system) so it would run your script within the virtualenv context. | 1 | 1 | 0 | So I am developing a Python application and I plan to copy the whole folder for my friend to use as an end-user.
But my friend does not have Python installed on his computer and I don't want to make him install it since he is not a developer.
In my project I have set up the virtualenv with python.exe inside it but without the site-packages, and I copy the virtualenv together with the project folder.
Is it possible to do this kind of setup so the application on the other end runs without Python installed? | Copy Python project folder to computer without python, can I run it? | 0.197375 | 0 | 0 | 950 |
35,763,357 | 2016-03-03T04:42:00.000 | 3 | 1 | 0 | 1 | python,datetime,unix,timestamp,epoch | 35,763,677 | 5 | false | 0 | 0 | Well, there are 946684800 seconds between 2000-01-01T00:00:00Z and 1970-01-01T00:00:00Z. So, you can just set a constant for 946684800 and add or subtract from your Unix timestamps.
The variation you are seeing in your numbers has to do with the delay in sending and receiving the data, and could also be due to clock synchronization, or lack thereof. Since these are whole seconds, and your numbers are 3 to 4 seconds off, then I would guess that the clocks between your computer and your device are also 3 to 4 seconds out of sync. | 1 | 9 | 0 | I am trying to interact with an API that uses a timestamp that starts at a different time than UNIX epoch. It appears to start counting on 2000-01-01, but I'm not sure exactly how to do the conversion or what the name of this datetime format is.
When I send a message at 1456979510 I get a response back saying it was received at 510294713.
The difference between the two is 946684796 (sometimes 946684797) seconds, which is approximately 30 years.
Can anyone let me know the proper way to convert between the two? Or whether I can generate them outright in Python?
Thanks
Edit
An additional detail I should have mentioned is that this is an API to a Zigbee device. I found the following datatype entry in their documentation:
1.3.2.7 Absolute time
This is an unsigned 32-bit integer representation for absolute time. Absolute time is measured in seconds
from midnight, 1st January 2000.
I'm still not sure of the easiest way to convert between the two. | Conversion from UNIX time to timestamp starting in January 1, 2000 | 0.119427 | 0 | 0 | 13,703 |
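A hedged sketch of the conversion, using 946684800 as the offset between the Unix epoch and the Zigbee absolute-time epoch from the quoted documentation:

```python
import calendar

EPOCH_2000 = calendar.timegm((2000, 1, 1, 0, 0, 0))  # 946684800

def unix_to_zigbee(ts):
    return ts - EPOCH_2000

def zigbee_to_unix(ts):
    return ts + EPOCH_2000

print(unix_to_zigbee(1456979510))  # 510294710, ~3 s off the observed 510294713 (clock skew)
```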
35,769,905 | 2016-03-03T10:43:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,py2exe,cx-freeze | 35,783,366 | 1 | true | 0 | 0 | The main difference between the two is that py2exe is limited to Windows whereas cx_Freeze is cross-platform and works on Windows, Linux, Mac OS X, etc. If you are only planning on supplying your application to those on Windows then py2exe is a valid option. It also has more Windows specific features that may be of benefit in that case. | 1 | 0 | 0 | I want to compile my python program to exe. I know two modules, with the help of which I can do that, they are py2exe and cx-freeze. Can somebody tell me the difference (Does cx-freeze have more features that py2exe because it is from an external source)? Which one is more common among python users? Which one works quicker and more reliably with python 2.7? | Which is more optimal for compiling python 2.7 programs to exe - py2exe or cx-freeze? | 1.2 | 0 | 0 | 84 |
35,773,582 | 2016-03-03T13:31:00.000 | 0 | 0 | 1 | 0 | python | 35,773,683 | 4 | false | 0 | 0 | The % (modulo) sign should help you here:
new = old - (old % 10) | 1 | 1 | 0 | I have the number 67.14, for example.
I need to set another variable as the next multiple of 10 down (60, in this case).
Would it be possible to just get the "7.14" from "67.14" and take it away? | Truncate float to multiple of 10 | 0 | 0 | 0 | 106 |
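Both ways of truncating, for illustration:

```python
old = 67.14
print(old - (old % 10))     # 60.0, the modulo approach from the answer
print(int(old) // 10 * 10)  # 60, an integer floor-division variant
```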
35,775,144 | 2016-03-03T14:39:00.000 | 0 | 1 | 1 | 0 | python,amazon-web-services,m2crypto | 35,782,735 | 1 | false | 1 | 0 | Device Farm requires that your Python test be able to execute on a Linux x86_64 platform.
You could create and package your test bundle on a Linux x86_64 platform, then try to run it on Device Farm. | 1 | 0 | 0 | I'm working on a Python image recognition test for Android devices. It works locally, but when I try to build it for AWS, I always get the following error:
copying M2Crypto\SSL__init__.py -> build\lib.win32-2.7\M2Crypto\SSL
running build_ext building 'M2Crypto.__m2crypto' extension
swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c swig.exe -python
-Ic:\python27\include -Ic:\python27\PC -Ic:\pkg\include -includeall -modern -builtin -outdir build\lib.win32-2.7\M2Crypto -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i error: command 'swig.exe'
failed: No such file or directory
I've already tried almost every solution I found on the Internet but nothing changed. I'm using Windows 8.1 and Python 2.7.
What should I do? How should I fix this problem?
Thank you in advance. | Python Mobile Test on AWS Device Farm (M2Crypto Issue) | 0 | 0 | 0 | 96 |
35,779,145 | 2016-03-03T17:37:00.000 | 1 | 0 | 1 | 0 | python,kivy | 41,786,561 | 5 | false | 0 | 1 | I have encountered a similar setback while trying to install Kivy for Python.
In Python 3, py is used instead of python for launching Python in the command prompt, i.e.:
py (to launch Python 3)
py -m pip install kivy (to install Kivy)
N.b: before installing kivy it is recommended to install cython and other python dependencies using py -m pip install cython | 1 | 0 | 0 | I have tried following the steps on the website but they're not working for me. At least I must be doing something for them not to work.
Firstly, before the steps, the site tells me to open the command line and type python --version. So I typed on the windows search bar 'command line'. Python (command line) came up and I went ahead and typed python --version and I got the error
python is not defined
along with other stuff. So I decided to try the Command Prompt, the second option to come up when I typed 'command line' into my Windows search. It returned saying that it was not recognized as an internal or external command.
I attempted step 1, just to try my luck, on both the Python command line and the command prompt and nothing really happened. I'm not really sure what to do now. | How to install kivy on Windows 10 | 0.039979 | 0 | 0 | 12,655 |
35,783,883 | 2016-03-03T21:59:00.000 | 0 | 0 | 1 | 1 | python | 35,783,966 | 1 | false | 0 | 0 | A very good solution would be to build a web app. You can use django, bottle or flask for example.
Your users just connect to your url with a browser. You are in complete control of the code, and can update whenever you want without any action on their part.
They also do not need to install anything in the first place, and browsers nowadays provide a lot of flexibility and dynamic content. | 1 | 0 | 0 | How can I update a Python script remotely? I have a program which I would like to share; however, it will be frequently updated, therefore I want to be able to update it remotely so that the users do not have to re-install it every day. I have already searched Stack Overflow for an answer but I did not find anything I could understand. Any help will be mentioned in the project's credits! | How can I remotely update a python script | 0 | 0 | 0 | 214 |
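A hedged, minimal Flask sketch of the idea in the answer above: because the code runs on your server, every user always gets the current version without reinstalling anything:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Latest version of the tool, updated server-side'

if __name__ == '__main__':
    app.run()
```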
35,784,155 | 2016-03-03T22:15:00.000 | 0 | 0 | 0 | 0 | python,sql,database,libreoffice,libreoffice-calc | 66,788,273 | 2 | false | 0 | 0 | You can of course use Python for this task, but it might be overkill.
The CSV export/import sequence is likely much faster, less error prone, and needs less ongoing maintenance (e.g. if you change the spreadsheet columns). The sequence is roughly as follows:
select the sheet that you want to import into a DB
select File / Save As... and then Text CSV
select a column separator that will not interfere with your data (e.g. |)
The import sequence into a database depends on your choice of DB, but today many IDEs and database GUI environments will automatically import/introspect your CSV file and create the table/insert the data for you. Things to double-check:
You may have to indicate that the first row is a header
The assigned datatype may need fine tuning if the automated guesses are not optimal | 2 | 0 | 0 | What's the best way to switch to a database management software from LibreOffice Calc?
I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me?
The data I have is well structured: I have about 300 columns of assets, and under every asset there are 0 to ~50 filenames. The asset names are uniform, as are the filenames.
Thank you all! | How to import data from LibreOffice Calc to a SQL database? | 0.099668 | 1 | 0 | 1,516 |
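If you do go the Python route, a hedged sketch of loading the exported CSV into SQLite with only the stdlib; the file, table, and column names are hypothetical:

```python
import csv
import sqlite3

conn = sqlite3.connect('assets.db')
conn.execute('CREATE TABLE IF NOT EXISTS files (asset TEXT, filename TEXT)')

with open('master.csv') as f:
    reader = csv.reader(f, delimiter='|')   # the separator chosen at export time
    next(reader)                            # skip the header row
    conn.executemany('INSERT INTO files VALUES (?, ?)', reader)
conn.commit()
```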
35,784,155 | 2016-03-03T22:15:00.000 | 0 | 0 | 0 | 0 | python,sql,database,libreoffice,libreoffice-calc | 35,784,265 | 2 | true | 0 | 0 | You can create a Python script that will read this spreadsheet row by row and then run insert statements against a database. In fact, it would be even better if you saved the spreadsheet as CSV, for example, if you only need the data. | 2 | 0 | 0 | What's the best way to switch to a database management software from LibreOffice Calc?
I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me?
The data I have is well structured I have about 300 columns of assets and under every asset there is 0 - ~50 filenames. The asset names are uniform as well as the filenames.
Thank you all! | How to import data from LibreOffice Calc to a SQL database? | 1.2 | 1 | 0 | 1,516 |
35,785,178 | 2016-03-03T23:30:00.000 | 0 | 0 | 0 | 0 | python,image,matplotlib,plot | 35,886,350 | 1 | true | 0 | 0 | The array is plotted upside-down, meaning the index (0, 0) is at the bottom left. | 1 | 0 | 1 | Every time I go to plot a 2D array in matplotlib using, for example, pcolormesh, I have the same question: Is the resultant image showing the array rightside-up or upside-down? That is, is index (0, 0) at the top left of the plot or the bottom left?
It's tedious to write a test every six months to remind myself. This should be clearly documented in an obvious place, like SO. | Does pcolormesh plot the 2D array rightside-up or upside-down? | 1.2 | 0 | 0 | 335 |
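A quick self-test you can keep around instead of rewriting it every six months; the single bright cell is at index (0, 0) and renders at the bottom-left:

```python
import numpy as np
import matplotlib.pyplot as plt

a = np.zeros((5, 5))
a[0, 0] = 1                 # mark index (0, 0)
plt.pcolormesh(a)
plt.colorbar()
plt.show()                  # the bright cell appears at the bottom-left
```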
35,786,499 | 2016-03-04T01:42:00.000 | 5 | 0 | 1 | 0 | python,objectinstantiation | 35,786,569 | 3 | true | 0 | 0 | It doesn't matter how complex the class is; when you create an instance, you only store a reference to the class with the instance. All methods are accessed via this one reference. | 1 | 3 | 0 | I have a program that must continuously create thousands of objects off of a class that has about 12–14 methods. Will the fact that they are of a complex class cause a performance hit over creating a simpler object like a list or dictionary, or even another object with fewer methods?
Some details about my situation:
I have a bunch of “text” objects that continuously create and refresh “prints” of their contents. The print objects have many methods but only a handful of attributes. The print objects can’t be contained within the text objects because the text objects need to be “reusable” and make multiple independent copies of their prints, so that rules out just swapping out the print objects’ attributes on refresh.
Am I better off,
Continuously creating the new print objects with all their methods as the application refreshes?
Unraveling the class and turning the print objects into simple structs and the methods into independent functions that take the objects as arguments?
This I assume would depend on whether or not there is a large cost associated with generating new objects with all the methods included in them, versus having to import all the independent functions to wherever they would have been called as object methods. | python: will an class with many methods take longer to initialize? | 1.2 | 0 | 0 | 505 |
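A hedged micro-benchmark of the point in the answer above: per-instance cost barely changes with the number of methods, since methods live on the class object, not on each instance:

```python
import timeit

class Small(object):
    def a(self):
        pass

class Big(object):
    pass

for i in range(14):                      # give Big fourteen do-nothing methods
    setattr(Big, 'm%d' % i, lambda self: None)

print(timeit.timeit(Small))              # timeit accepts a callable; times Small()
print(timeit.timeit(Big))                # roughly the same per-instance cost
```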
35,786,884 | 2016-03-04T02:24:00.000 | 0 | 0 | 1 | 0 | ipython,ibm-cloud,egg,jupyter-notebook | 36,672,569 | 1 | true | 0 | 0 | Create a zip file with a setup script. Put the zip file in Object Storage. Download the zip using HTTP or curl from Object Storage into the Bluemix notebook environment, then pip install the zip file. | 1 | 0 | 0 | How can I add the .egg file to my Jupyter IPython Notebook on Bluemix? I see documentation for adding jars for Scala notebooks, but nothing for Python egg files. | Add .egg file to Jupyter IPython Notebook on Bluemix | 1.2 | 0 | 0 | 313 |
35,788,298 | 2016-03-04T04:58:00.000 | 1 | 0 | 0 | 0 | android,python,optimization,kivy | 35,795,097 | 1 | false | 0 | 1 | The immediate possibility is that you're just seeing the android processor be slower than the desktop one. I'm not sure what the benchmark comparisons are nowadays, but I've seen this be a problem in the past. That said, I'd have guessed the same as you that the difference shouldn't be that big.
I don't know if it would make a difference, but one general thing to try might be to compile for armeabi-v7a (rather than the default armeabi). This enables hardware floating point calculation, amongst other things. I don't know if it makes a difference in generic apps, but it certainly could. You can target this using the python-for-android master branch with --arch=armeabi-v7a, or the android_new target in the buildozer master branch (the rest of buildozer operation is the same, and it automatically uses v7a).
Another question would be, do you have access to a more efficient xml parser? If you can find one in e.g. cython rather than python (I don't know what you're using right now), this could make a difference. I see the other alternative of using a more efficient data structure has already been raised in a comment.
Sorry that neither of these suggestions are very specific. If you ask on the kivy support channels you may find someone who's found and resolved similar issues. | 1 | 1 | 0 | I'm fairly far along developing a Kivy app. Its targeted for android, but will also work (simultaneously with a different skin) on desktops and hopefully iOs eventually.
The basic dependencies I'm heavily using are:-
twisted - using this as an IPC, my app has a server/client
relationship between the data manipulation and the UI
Whoosh - for text search
xmltodict - for easy XML manipulation
I'm having REALLY long app startup times on android, on a relatively recent phone, which doesn't bode well. From my rough timings (based on time.time() and subtracting from my App's init time):-
My app gets control from kivy startup at about 1 second in
My initialization of custom classes etc. is done at the 2.4 second mark
At the 14.4 second mark, I finally complete the bulk of my data loading
At the 17 second mark, I start sending the data out to the client UI using twisted
At the 22 second mark, the UI receives the data
There are multiple points I want to address there. For example, the roughly 5 second gap for the data send can easily be broken into pieces and the UI updated piecemeal, so I'm keeping that for later, but I need to ask about the long 12 second gap for data loading. This data loading involves creating about 1,000 instances of a custom class, with the following steps (cumulative time over 1,000 instances):-
Reading data from 1000 text XML files (0.734 seconds)
Parsing the XML in the read data (9.198 seconds)
Filling the object's variables based on the parsed XML (0.585 seconds)
Directory tree traversal (use this to locate a certain base folder, 0.0824 seconds)
mtime measurement for the xml files (0.12 seconds)
The measured timings surprised me, because the equivalent timings for running the same code on my laptop are 0.041, 0.9, 0.062, 0.009, and 0.016). Everything's about 10 times slower.
What, if anything, can I do about this? The phone being used for testing has 3GB of RAM and a Snapdragon 801 processor, so I'm quite worried about using this app on slower/older models. My initial thoughts were that the slow-down was due to sd cards being inherently slower than my laptop's hard disk drive, but the fact that xml parsing (non IO related) took so long seems to indicate processing problems.
Suggestions/criticisms welcome. | Python on android (kivy) - speed bottlenecks for certain operations? | 0.197375 | 0 | 0 | 1,330 |
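On the parsing suggestion in the answer above, a hedged sketch of swapping xmltodict for the C-accelerated ElementTree parser, which tends to be far cheaper on phone-class CPUs; the flat tag/text field layout is hypothetical:

```python
try:
    import xml.etree.cElementTree as ET   # C implementation where available
except ImportError:
    import xml.etree.ElementTree as ET

def load_print(path):
    root = ET.parse(path).getroot()
    # collect the direct children as a simple field dict
    return {child.tag: child.text for child in root}
```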
35,794,831 | 2016-03-04T11:24:00.000 | 1 | 0 | 0 | 0 | javascript,python,ajax,django,django-templates | 35,795,049 | 1 | false | 1 | 0 | The django.test.Client is not a browser; it just makes HTTP requests and thus doesn't know anything about ajax/JavaScript.
One of the following should help you:
use django.test.Client and assert that
the template has the ajax call to the correct url, i.e. assert that the response contains <script>myAjaxCallTo('/some/url/')</script>
test the ajax endpoint in isolation and check that it returns the correct response
use selenium (together with the django.test.LiveServerTestCase) | 1 | 0 | 0 | I've run into a problem while implementing a test for template rendering.
There are two views:
General
Block with required data
The General (1) template displays some data. This template contains an ajax GET call that retrieves data from the additional view (2). I want to check this data in my template (1) with a test.
I use client.get(url) to invoke my template (1), but it seems the ajax GET request is not invoked during the test and I can't figure out why. | Testing Django template with ajax GET request - ajax request isn't invoking | 0.197375 | 0 | 0 | 277 |
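A hedged sketch of the second suggestion, testing the ajax endpoint in isolation; the URL and expected payload are hypothetical:

```python
from django.test import TestCase

class BlockViewTest(TestCase):
    def test_block_endpoint(self):
        response = self.client.get(
            '/blocks/data/',                         # hypothetical endpoint URL
            HTTP_X_REQUESTED_WITH='XMLHttpRequest')  # marks the request as ajax
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, 'expected value')
```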
35,798,844 | 2016-03-04T14:44:00.000 | 0 | 1 | 1 | 0 | python | 35,801,829 | 1 | false | 0 | 0 | Install them into a subdirectory of your package directory in site-packages. If the subdirectory doesn't have an __init__.py file, or if its name has a dash (-) or another character that isn't valid in a Python identifier, it can't be imported using the import statement, nor can any Python file located under it.
So for example, if your package name is mypackage, you could use site-packages/mypackage/data-files as the location to store your data. | 1 | 0 | 0 | I'm working on a package that uses data from an external git repository. When it's doing its job, it first clones the git repository and then copies files from it to some other location.
Where should I save this repository (and other non-python files) in my filesystem? Is there any standard place for that?
Sure, I could just use site-packages/ directory for my files. But the problem is, git repository could contain python packages too, and I don't want them to be importable.
Is there, maybe, some way to specifically exclude some folder from site-packages/? I think *.dist-info folders are excluded, should I create a fake one for my package?
Thank you very much. | Standard directory for installed Python package's data | 1.2 | 0 | 0 | 76 |
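A hedged sketch of resolving such a data directory at runtime; because 'data-files' contains a dash, nothing under it is importable:

```python
import os

PACKAGE_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_DIR = os.path.join(PACKAGE_DIR, 'data-files')   # not a valid Python identifier
REPO_DIR = os.path.join(DATA_DIR, 'cloned-repo')     # hypothetical layout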
35,800,558 | 2016-03-04T16:03:00.000 | 1 | 0 | 0 | 0 | python,django | 35,800,668 | 1 | false | 1 | 0 | The approach you are following is not correct.
If you want to force the user to change their password, set a flag, and if the flag is true, redirect the user to the change-password page.
If you kill the session, the user will be redirected to the login page by default in any web application.
Thanks. | 1 | 0 | 0 | Periodically, I force some users to log out (because their passwords have to be changed) and for that I delete their session cookie. What happens next is that these users are redirected to the login screen.
How could I redirect them to a template where the only thing they can do is change their password?
Nikos | Change password form while session is deleted | 0.197375 | 0 | 0 | 30 |
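A hedged sketch of the flag approach as old-style Django middleware (pre-1.10); the flag attribute and the URL are hypothetical:

```python
from django.shortcuts import redirect

class ForcePasswordChangeMiddleware(object):
    def process_request(self, request):
        user = request.user
        if (user.is_authenticated() and
                getattr(user, 'must_change_password', False) and
                request.path != '/accounts/password/change/'):
            # every other page redirects to the change-password form
            return redirect('/accounts/password/change/')
```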
35,800,893 | 2016-03-04T16:17:00.000 | 3 | 0 | 0 | 0 | python,deep-learning,face-recognition,torch | 35,881,615 | 2 | true | 0 | 0 | As I posted in the comments, this segfault was caused by compiling dlib with one Python version and running it with another. This was resolved by manually installing dlib rather than using their pip package. | 1 | 2 | 1 | I am new to deep learning and face recognition. After searching, I found this Python package about deep learning applied to face recognition called OpenFace. From its documentation, I think it is built on top of Torch for the neural net computation.
I want to install the package in a virtual environment, so basically these are steps I made:
brew install the necessary system requirements: bash,coreutils,curl,findutils,opencv, python and boost-python
Make a virtual environment and install dlib, numpy, scipy, pandas, scikit-learn, scikit-image
Cloned the openface github repository
Install Torch
curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
git clone https://github.com/torch/distro.git torch --recursive
cd torch
./install.sh
source install/bin/torch-activate
luarocks install csvigo
luarocks install dpnn
luarocks install nn
cd to cloned openface repo and run
python setup.py install
However when I run python:
>>>import openface
I get:
Segmentation Fault: 11
How do I fix this? Also, are there any other tutorials for using openface?
How to properly install OpenFace? | Trouble Installing OpenFace in Python | 1.2 | 0 | 0 | 4,278 |
35,801,424 | 2016-03-04T16:41:00.000 | 0 | 0 | 0 | 0 | python,queue | 35,889,996 | 1 | false | 0 | 0 | I am not sure, but I hope this could help.
Make a priority queue.
The priority will be the timestamp at which an IP was accessed
(if it was accessed more recently, it will have lower priority).
{"ip": "timestamp when it was accessed"}
Keep the queue sorted. At the top you will have the least recently used IPs.
So for 50 days you will have unique IPs, and after that they will start to repeat.
You could keep a list of timestamps against each IP so you know when it was accessed:
{"ip": ["timestamp when it was accessed", "timestamp when it was accessed"], ...}
You can use pickle or MongoDB to dump the data and get it back. | 1 | 0 | 1 | I am trying to build a library of about 3000 IP addresses. They will each be run through a program I have already written separately. It will scan 60 of them a day, so it needs to keep track of which have been scanned and put them at the back of the queue.
I'm not looking for you to write the code, just a little bit of a push in the right direction. | Building a queue in Python | 0 | 0 | 1 | 59 |
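A hedged sketch of the least-recently-scanned queue with only the stdlib; scan() and all_ips are the pieces you already have:

```python
import pickle
import time
from collections import deque

queue = deque(all_ips)                 # all_ips: your ~3,000 addresses
history = {}                           # ip -> list of scan timestamps

for _ in range(60):                    # one day's batch
    ip = queue.popleft()
    scan(ip)                           # your existing scanner
    history.setdefault(ip, []).append(time.time())
    queue.append(ip)                   # goes to the back of the queue

with open('state.pkl', 'wb') as f:     # persist the state between runs
    pickle.dump((list(queue), history), f)
```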
35,801,638 | 2016-03-04T16:52:00.000 | 4 | 0 | 0 | 0 | python,pandas,scikit-learn | 35,953,956 | 1 | true | 0 | 0 | Nearly all scikit-learn estimators will convert input data to float before running the algorithm, regardless of the original types in the array. This holds for the random forest implementation. | 1 | 4 | 1 | I was trying to train a random forest classifier in Python. However, in my original pandas.DataFrame there are float64, object, datetime64, int64 and bool dtypes (nearly all kinds of dtypes allowed in pandas).
Is it necessary to convert a bool to float or int?
For a two-value object column, should I convert it to bool, int, or float? Which one would perform better? Or does it not matter?
Thanks! | Which dtype performs better when training a randomforest in python? | 1.2 | 0 | 0 | 1,673 |
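You can verify that conversion yourself with scikit-learn's own input validation; a hedged sketch assuming the frame holds only numeric/bool columns:

```python
import numpy as np
from sklearn.utils import check_array

X = df.values                          # df: your mixed numeric/bool DataFrame
X_checked = check_array(X, dtype=np.float64)
print(X_checked.dtype)                 # float64, whatever the input dtypes were
```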
35,805,904 | 2016-03-04T20:58:00.000 | 1 | 0 | 0 | 1 | python,visual-studio,opencv,cmake,cmake-gui | 35,806,266 | 1 | false | 0 | 1 | You should look for Python-related variables in the CMake GUI. There may be some variables you could set to force the paths to the Python 2.7 interpreter, libs and include dirs. | 1 | 0 | 0 | Recently, I wanted to install OpenCV (on Win10 64-bit) using CMake 3.5.0-rc3 and Visual Studio 2015. I have Python 3.5 as the root install and 2.7 as python2. The issue is that while configuring, it recognizes Python 3.5 as the main interpreter, but I want it to be 2.7. Is there a possible way to make CMake recognize 2.7 as my main Python while keeping Python 3.5 on my PC? I can probably do it by deleting Python 3.5, but I don't want that. Help is very much appreciated. Thank you,
P.S. If there is a simpler way to install OpenCV along with the extra modules on Windows, please do tell me. Thanks in advance. | Managing OpenCV python Cmake Studio Windows | 0.197375 | 0 | 0 | 102 |
35,809,076 | 2016-03-05T01:57:00.000 | 1 | 0 | 1 | 1 | python | 35,809,112 | 1 | false | 0 | 0 | tempfile.mkstemp creates a file that is normally visible in the filesystem and returns you the path as well. You should be able to use this to create your input and output files; assuming javac atomically overwrites the output file if it exists, there should be no race condition as long as other processes on your system don't misbehave. | 1 | 1 | 0 | I have a string of Java source code in Python that I want to compile, execute, and collect the output (stdout and stderr). Unfortunately, as far as I can tell, javac and java require real files, so I have to create a temporary directory.
What is the best way to do this? The tempfile module seems to be oriented towards creating files and directories that are only visible to the Python process. But in this case, I need Java to be able to see them too. However, I also want the other stuff to be handled intelligently if possible (such as deleting the folder when done or using the appropriate system temp folder) | Python temporary directory to execute other processes? | 1.2 | 0 | 0 | 1,712 |
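A hedged end-to-end sketch using tempfile.mkdtemp, which gives a real on-disk directory that external tools like javac and java can see:

```python
import os
import shutil
import subprocess
import tempfile

src = 'public class Main { public static void main(String[] a) { System.out.println("hi"); } }'
workdir = tempfile.mkdtemp()                     # real, visible directory
try:
    path = os.path.join(workdir, 'Main.java')
    with open(path, 'w') as f:
        f.write(src)
    subprocess.check_output(['javac', path], stderr=subprocess.STDOUT)
    out = subprocess.check_output(['java', '-cp', workdir, 'Main'],
                                  stderr=subprocess.STDOUT)
finally:
    shutil.rmtree(workdir)                       # clean up when done
```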
35,809,944 | 2016-03-05T04:18:00.000 | 0 | 0 | 0 | 0 | python,statistics | 35,810,321 | 1 | false | 0 | 0 | Check wls_prediction_std from statsmodels.sandbox.regression.predstd. | 1 | 0 | 0 | After spending 2 hours of research to no avail, I decided to pose my question here. What is the code to find CI of mean response in python?
I know how to do it in R, but I just don't know what I need to do in Python. I assume statsmodels has a function for that. If so, what is it? | How do I find CI of Mean Response using Python? | 0 | 0 | 0 | 40 |
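A hedged sketch around the function the answer points to; note that wls_prediction_std gives a prediction interval, which is wider than the CI of the mean response:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std

x = np.linspace(0, 10, 50)
X = sm.add_constant(x)
y = 2 * x + 1 + np.random.randn(50)              # synthetic data

res = sm.OLS(y, X).fit()
prstd, lower, upper = wls_prediction_std(res)    # per-observation bounds, alpha=0.05
```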
35,810,213 | 2016-03-05T04:57:00.000 | 1 | 1 | 1 | 0 | python-sphinx | 66,365,397 | 3 | false | 0 | 0 | It is also possible to add this option in the conf.py file.
Search conf.py for the line containing the sphinx-apidoc command string (located in a try section) and add the "--module-first" option.
The new line will look like this:
cmd_line_template = "sphinx-apidoc --module-first -f -o {outputdir} {moduledir}" | 1 | 12 | 0 | I typically put the high-level documentation for a Python package into the docstring of its __init__.py file. This makes sense to me, given that the __init__.py file represents the package's interface with the outside world. (And, really, where else would you put it?)
So, I was really quite surprised when I fired up Sphinx for the first time and saw this content buried near the very end of the package documentation, after the content for all of the submodules.
This seems backward to me. The very first thing the user will see when he visits the page for a package is the documentation of the submodule that just happens to come first alphabetically, and the thing he should see first is right near the bottom.
I wonder if there is a way to fix this, to make the stuff inside of __init__.py come out first, before all of the stuff in the submodules. And if I am just going about this in the wrong way, I want to know that. Thanks! | Can Sphinx emit the 'module contents' first and the 'submodules' last? | 0.066568 | 0 | 0 | 3,613 |
35,811,941 | 2016-03-05T08:34:00.000 | 0 | 0 | 0 | 0 | python-2.7,anaconda,theano | 42,725,755 | 1 | false | 0 | 1 | Get rid of Theano and reinstall it. If that doesn't work, reinstall all of Python. | 1 | 0 | 0 | I am new to Theano; when I tried to use the package I kept getting the following error:
ImportError: ('The following error happened while compiling the node', Dot22(, ), '\n', 'dlopen(/Userdir/.theano/compiledir_Darwin-14.3.0-x86_64-i386-64bit-i386-2.7.11-64/tmpEBdQ_0/eb163660e6e45b373cd7909e14efd44a.so, 2): Library not loaded: libmkl_intel_lp64.dylib\n Referenced from: /Userdir/.theano/compiledir_Darwin-14.3.0-x86_64-i386-64bit-i386-2.7.11-64/tmpEBdQ_0/eb163660e6e45b373cd7909e14efd44a.so\n Reason: image not found', '[Dot22(, )]')
Can someone tell me how to fix this issue? Thanks. | Running Theano on Python 2.7 | 0 | 0 | 0 | 46 |
35,813,667 | 2016-03-05T11:43:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,new-window,tui | 35,813,975 | 4 | false | 0 | 0 | You may not like this, as it's a bit higher-level than a basic two-player board game, but there is always the option of using some sort of GUI.
I personally like tkinter.
You don't want people to be able to scroll up and see printed text, but you can't remove what has already been printed; that's like asking a printer to remove ink from a page. It's going to stay there.
Research a GUI toolkit and try to make the game in that. Otherwise, you could let me take a stab at creating an explanatory piece of code that shows you how to use tkinter. If you do, link me the game you have so I can understand what you want. | 1 | 1 | 0 | I'm creating a simple two-player board game where each player must place pieces on their own board. What I would like to do is handle this by either:
opening a new terminal window (regardless which OS the program is run on) for both players so that the board is saved within a variable but the other player cannot scroll up to see where they placed their pieces.
clearing the current terminal completely so that neither player could scroll and see the other player's board. I am aware of the unix 'clear' command but it doesn't achieve the effect I'm after and doesn't work with all OS's (though this might be something that I'll have to sacrifice to get a working solution)
I have tried clearing the screen but haven't been able to completely remove all the text. I don't have a preference; whichever method is easier. Also, if it would be easier to use a different method that I haven't thought of, all other suggestions are welcome. Thanks in advance!
EDIT: Other solutions give the appearance that text has been cleared but a user could still scroll up and see the text that was cleared. I'd like a way to remove any way that a user could see this text.
EDIT 2: Please read the other answers and the comments as they provide a lot of information about the topic as a whole. In particular, thanks to @zondo. | Python hide already printed text | 0 | 0 | 0 | 6,322 |
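A hedged helper for the 'clear completely' option above: on xterm-compatible terminals the ESC[3J sequence also erases the scrollback buffer, though support varies by terminal emulator:

```python
import sys

def hard_clear():
    # 2J clears the visible screen, 3J clears the scrollback, H homes the cursor
    sys.stdout.write('\033[2J\033[3J\033[H')
    sys.stdout.flush()
```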
35,816,195 | 2016-03-05T15:51:00.000 | 4 | 0 | 0 | 0 | android,python,kivy | 35,817,419 | 1 | true | 0 | 1 | Yes, it'll work. os works on Windows, Linux and Mac, and Android is, well, Linux. If Python runs on it, os will too.
For simple storage you can use os.path.dirname(os.path.abspath(__file__)), and it'll store data in your data/<app>/ or data/data/<app>/ on Android, so a common user will not access it, provided your app is built that way. It's nice to make yourself a folder for such files, because it can become messy when you use too many files.
If it's built in a way that uses the SD card, it'll place data in your sdcard/<app>/, if I remember correctly. | 1 | 2 | 0 | I am using Kivy to create an Android app. However, some important pieces of code are based on the Python os module. Since os is supposed to be system-dependent, I was wondering if it would work on a mobile device running Android. If it doesn't work, is there some other way to achieve the same results?
Also, the app needs to save and retrieve data based on the user's actions. Currently, I am reading and writing plain .txt files to achieve that, but will it work on an Android device? Is there a more flexible alternative? | Will python os module work on android? | 1.2 | 0 | 0 | 2,556 |
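A hedged sketch of the app-relative save folder described in the answer above; it works unchanged on desktop and on Android:

```python
import os

APP_DIR = os.path.dirname(os.path.abspath(__file__))
SAVE_DIR = os.path.join(APP_DIR, 'saves')          # one folder for your data files
if not os.path.isdir(SAVE_DIR):
    os.makedirs(SAVE_DIR)

with open(os.path.join(SAVE_DIR, 'progress.txt'), 'w') as f:
    f.write('level=3')
```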
35,818,003 | 2016-03-05T18:26:00.000 | 2 | 1 | 1 | 0 | python,linux,startup,intel-galileo | 35,972,324 | 1 | true | 0 | 0 | I made myprogram.py run in the background with python myprogram.py & and it worked. The & runs whatever process you want in the background. | 1 | 2 | 0 | I have a Python program that runs an infinite loop and sends some data to my database.
I want this Python script to run when I power on my Intel Galileo. I tried making a sh script containing python myprogram.py and ran it on startup from /etc/init.d. When I restarted my Galileo, nothing happened: Linux didn't load, the Arduino sketch didn't load, and even my computer didn't recognize it.
I guess this happened because the Python program was an infinite loop.
Is there a way that I can run my system without problems and run my python script on startup? | Run python program on startup in background on Intel Galileo | 1.2 | 0 | 0 | 259 |
35,820,328 | 2016-03-05T21:48:00.000 | 0 | 0 | 0 | 0 | python,django,apache,lxml,libxslt | 35,821,295 | 1 | false | 1 | 0 | Fixed by removing the libexslt.so files from /usr/lib64/. | 1 | 0 | 0 | I have a Django app that requires the Python (3.4) lxml package. I had a fair amount of trouble building the C shared libraries libxslt and libxml2 that lxml depends on in my Red Hat server environment. However, pip install lxml now completes successfully and I can import and use lxml in the command-line interpreter.
When I restart apache, importing lxml within my django app causes the error:
ImportError: /usr/local/lib/python3.4/site-packages/lxml/etree.cpython-34m.so: undefined symbol: exsltMathXpathCtxtRegister
I have checked that my LD_LIBRARY_PATH is set the same in both environments (/usr/lib).
I notice that when I reinstall lxml through pip, pip tells me that it is building against libxml2/libxslt found at /usr/lib64. I have removed all libxml2.so and libxslt.so files found at /usr/lib64/ and been confounded to find that pip continues to tell me that it is building against lib64, that the install completes successfully, and that lxml still works correctly at command line but not through apache.
pip also says that the detected version of libxslt that it's using in the install is 1.1.23. However, I've used strace to see that when I import using the interpreter, the library that is loaded is /usr/lib/libxslt.so.1.1.28. I don't know of any tool or technique to find out what library is being loaded through Apache.
Does anyone have any theories as to what is going on or how to debug the issue? Thanks in advance! | lxml runs in interpreter but not through apache/mod_wsgi | 0 | 0 | 0 | 362 |
35,820,336 | 2016-03-05T21:49:00.000 | 1 | 0 | 0 | 0 | python,pygame,gravity | 35,822,375 | 2 | false | 0 | 1 | Quick disclaimer: I do not know multiple ways to incorporate gravity, so I cannot say which is "best". But if you're fighting the performance battle in Python, you're probably fighting the wrong battle.
For gravity, you can use a vector system. Say a character jumps off the ground with an initial velocity of [5, -15] (negative y because positive y is down!); you can move your character's rect by this velocity every frame to simulate movement. To throw gravity into this, you add 9.8 to your y velocity component every second. So 1 second in, the velocity will be about [5, -5]. This will have your character slow to a stop, and then begin moving down.
For key-pressed movement, I recommend using booleans. For example, upon pressing k_U, a variable that says you are moving up becomes True. Then, while this variable is True, you move the character by, say, [0, -5]. Upon keyup, set the variable to False. Do this for north/east/south/west, and then you have a movement system in 4 directions that moves you while you hold the key down. | 1 | 3 | 0 | I'm in the process of making a simple game in pygame. It's looking to be a platformer RPG, but that is neither final nor relevant to this question. So far I have very little functionality in the game; it's just a skeleton at this point, if that. My question is kind of two-fold:
What's the best (in terms of performance and flexibility) way to add gravity to classes in pygame?
What are the best practices for adding gravity in general? For example, do you just simply do a "if keyPressed == k_W then subtract 2pixels per tick from player-y for 20 ticks" or something with velocity in the up or negative-y direction?
I've seen other posts on adding gravity to games after the fact, where adding it really wasn't thought about during initial development. I want to add it in as early as possible so that instead of adding gravity to other things, I can add other things to gravity. I'm going to continue reading up on this, so if you prefer to point me in the direction of some online resources, I'd much appreciate that as well! | Pygame - Gravity Methods | 0 | 0 | 0 | 238 |
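A hedged sketch of the vector approach from the answer, with a per-frame gravity increment at an assumed 60 fps:

```python
FPS = 60
GRAVITY = 9.8 / FPS                 # added to the y velocity every frame

class Player(object):
    def __init__(self):
        self.vel = [0.0, 0.0]       # positive y is down
        self.y = 0.0

    def jump(self):
        self.vel[1] = -15.0         # initial upward velocity

    def update(self):               # call once per frame
        self.vel[1] += GRAVITY
        self.y += self.vel[1]
```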
35,823,618 | 2016-03-06T05:47:00.000 | 1 | 0 | 1 | 0 | python,matplotlib,neural-network,ipython,jupyter-notebook | 35,827,519 | 1 | true | 0 | 0 | It doesn't matter. It just runs a Python kernel in the background which is no different from one you would run from the command line.
The only thing you should avoid, obviously, is displaying huge amounts of data in your notebook (like plotting your whole image set at once). | 1 | 3 | 1 | I'm working on a project involving Neural Networks (using theano) with a big data set 50,000 images of 3072 pixels. The computational process gets expensive when training the Neural Network as you may expect.
I was using PyCharm to debug and write the code, but since I had some trouble using matplotlib and other libraries, I decided to go for IPython Notebook. So far I'm just using it to do dummy plots etc., but my main concern is: is it a good idea to use IPython Notebook to run this kind of computationally expensive project? Are there any drawbacks to using the notebook instead of just running a Python script from the terminal?
I researched good IDEs for data analysis and scientific computation in Python and found that IPython Notebook is the best, but any other recommendations are very much appreciated. | iPython Notebook for big / complex analysis. Good idea or not? | 1.2 | 0 | 0 | 265 |
35,825,802 | 2016-03-06T10:34:00.000 | 10 | 0 | 1 | 0 | python,arrays,list,numpy | 35,825,863 | 1 | true | 0 | 0 | A numpy array is a typed array; the array in memory stores homogeneous, densely packed numbers.
A Python list is a heterogeneous list; the list in memory stores references to objects rather than the numbers themselves.
This means that a Python list requires dereferencing a pointer every time the code needs to access a number, while a numpy array can be processed directly by numpy's vector operations, which makes those operations much faster than anything you can code with a list.
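A quick illustration of the difference (a sketch; the exact speedup is machine-dependent):

import numpy as np

lst = list(range(1000000))
arr = np.arange(1000000)
doubled_lst = [x * 2 for x in lst]  # Python dereferences and re-boxes every element
doubled_arr = arr * 2               # one vectorized operation over packed memory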
The drawbacks of a numpy array are that if you need to access single items in the array, numpy has to box/unbox the number into a python numeric object, which can make it slow in certain situations, and that it can't hold heterogeneous data. | 1 | 13 | 1 | Why do we use numpy arrays in place of lists in python? What is the main difference between them? | What is the difference between a NumPy array and a python list? | 1.2 | 0 | 0 | 17,191
35,826,912 | 2016-03-06T12:38:00.000 | 1 | 0 | 0 | 0 | python,pandas,scikit-learn | 35,827,413 | 7 | false | 0 | 0 | IMO the opposite strategy, identifying categoricals directly, is better, because what works depends on what the data is about. Technically address data can be thought of as unordered categorical data, but usually I wouldn't use it that way.
For survey data, an idea would be to look for Likert scales, e.g. 5-8 values, either strings (which would probably need hardcoded (and translated) levels to look for "good", "bad", ".agree.", "very .*", ...) or int values in the 0-8 range + NA.
Countries and such things might also be identifiable...
Age groups (".-.") might also work. | 4 | 28 | 1 | I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step, I want to treat continuous and categorical data differently. In particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data.
Now, let's assume that we're provided a pandas.DataFrame and have no other information about the data in the DataFrame. What is a good heuristic to use to determine whether a column in the pandas.DataFrame is categorical?
My initial thoughts are:
1) If there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data
2) If some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data
I've found 1) to work fine, but 2) hasn't panned out very well. I need better heuristics. How would you solve this problem?
Edit: Someone requested that I explain why 2) didn't work well. There were some test cases where we still had continuous values in a column but there weren't many unique values in the column. The heuristic in 2) obviously failed in that case. There were also issues where we had a categorical column that had many, many unique values, e.g., passenger names in the Titanic data set. Same column type misclassification problem there. | What is a good heuristic to detect if a column in a pandas.DataFrame is categorical? | 0.028564 | 0 | 0 | 13,722
35,826,912 | 2016-03-06T12:38:00.000 | 5 | 0 | 0 | 0 | python,pandas,scikit-learn | 35,827,781 | 7 | false | 0 | 0 | There are many places where you could "steal" the definitions of formats that can be cast as "number"; ##,#e-# would be one such format, just to illustrate. Maybe you'll be able to find a library to do so.
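In pandas, to_numeric already recognizes most numeric formats; here is a rough sketch of using it to test castability (the helper name is mine):

import pandas as pd

def is_numeric_column(s):
    # errors='coerce' turns anything unparseable into NaN
    coerced = pd.to_numeric(s, errors='coerce')
    return coerced.notnull().all()  # True only if every value parsed as a number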
I try to cast everything to numbers first, and for what is left, well, there's no option but to keep it as categorical. | 4 | 28 | 1 | I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step, I want to treat continuous and categorical data differently. In particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data.
Now, let's assume that we're provided a pandas.DataFrame and have no other information about the data in the DataFrame. What is a good heuristic to use to determine whether a column in the pandas.DataFrame is categorical?
My initial thoughts are:
1) If there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data
2) If some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data
I've found 1) to work fine, but 2) hasn't panned out very well. I need better heuristics. How would you solve this problem?
Edit: Someone requested that I explain why 2) didn't work well. There were some test cases where we still had continuous values in a column but there weren't many unique values in the column. The heuristic in 2) obviously failed in that case. There were also issues where we had a categorical column that had many, many unique values, e.g., passenger names in the Titanic data set. Same column type misclassification problem there. | What is a good heuristic to detect if a column in a pandas.DataFrame is categorical? | 0.141893 | 0 | 0 | 13,722
35,826,912 | 2016-03-06T12:38:00.000 | 1 | 0 | 0 | 0 | python,pandas,scikit-learn | 35,828,098 | 7 | false | 0 | 0 | I think the real question here is whether you'd like to bother the user once in a while or silently fail once in a while.
If you don't mind bothering the user, maybe detecting ambiguity and raising an error is the way to go.
If you don't mind failing silently, then your heuristics are ok. I don't think you'll find anything that's significantly better. I guess you could make this into a learning problem if you really want to. Download a bunch of datasets, assume they are collectively a decent representation of all data sets in the world, and train based on features over each data set / column to predict categorical vs. continuous.
But of course in the end nothing can be perfect. E.g. is the column [1, 8, 22, 8, 9, 8] referring to hours of the day or to dog breeds? | 4 | 28 | 1 | I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step, I want to treat continuous and categorical data differently. In particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data.
Now, let's assume that we're provided a pandas.DataFrame and have no other information about the data in the DataFrame. What is a good heuristic to use to determine whether a column in the pandas.DataFrame is categorical?
My initial thoughts are:
1) If there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data
2) If some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data
I've found 1) to work fine, but 2) hasn't panned out very well. I need better heuristics. How would you solve this problem?
Edit: Someone requested that I explain why 2) didn't work well. There were some test cases where we still had continuous values in a column but there weren't many unique values in the column. The heuristic in 2) obviously failed in that case. There were also issues where we had a categorical column that had many, many unique values, e.g., passenger names in the Titanic data set. Same column type misclassification problem there. | What is a good heuristic to detect if a column in a pandas.DataFrame is categorical? | 0.028564 | 0 | 0 | 13,722
35,826,912 | 2016-03-06T12:38:00.000 | 1 | 0 | 0 | 0 | python,pandas,scikit-learn | 38,108,924 | 7 | false | 0 | 0 | I've been thinking about a similar problem, and the more I consider it, the more it seems that this is itself a classification problem that could benefit from training a model.
I bet if you examined a bunch of datasets and extracted these features for each column / pandas.Series:
% floats: percentage of values that are float
% int: percentage of values that are whole numbers
% string: percentage of values that are strings
% unique string: number of unique string values / total number
% unique integers: number of unique integer values / total number
mean numerical value (non-numerical values considered 0 for this)
std deviation of numerical values
and trained a model, it could get pretty good at inferring column types, where the possible output values are: categorical, ordinal, quantitative.
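Here is a rough sketch of extracting those features from a pandas.Series, following the suggestion of treating non-numerical values as 0 for the mean and standard deviation (the function name and exact feature definitions are mine):

import pandas as pd

def column_features(s):
    n = float(len(s))
    is_float = s.map(lambda v: isinstance(v, float))
    is_int = s.map(lambda v: isinstance(v, int) and not isinstance(v, bool))
    is_str = s.map(lambda v: isinstance(v, str))
    numeric = pd.to_numeric(s, errors='coerce').fillna(0)
    return {
        'pct_float': is_float.mean(),
        'pct_int': is_int.mean(),
        'pct_str': is_str.mean(),
        'pct_unique_str': s[is_str].nunique() / n,
        'pct_unique_int': s[is_int].nunique() / n,
        'mean_numeric': numeric.mean(),
        'std_numeric': numeric.std(),
    }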
Side note: as far as a Series with a limited number of numerical values goes, it seems like the interesting problem would be determining categorical vs ordinal; it doesn't hurt to think a variable is ordinal if it turns out to be quantitative, right? The preprocessing steps would encode the ordinal values numerically anyway, without one-hot encoding.
A related problem that is interesting: given a group of columns, can you tell if they are already one-hot encoded? E.g. in the forest-cover-type-prediction kaggle contest, you would automatically know that soil type is a single categorical variable. | 4 | 28 | 1 | I've been developing a tool that automatically preprocesses data in pandas.DataFrame format. During this preprocessing step, I want to treat continuous and categorical data differently. In particular, I want to be able to apply, e.g., a OneHotEncoder to only the categorical data.
Now, let's assume that we're provided a pandas.DataFrame and have no other information about the data in the DataFrame. What is a good heuristic to use to determine whether a column in the pandas.DataFrame is categorical?
My initial thoughts are:
1) If there are strings in the column (e.g., the column data type is object), then the column very likely contains categorical data
2) If some percentage of the values in the column is unique (e.g., >=20%), then the column very likely contains continuous data
I've found 1) to work fine, but 2) hasn't panned out very well. I need better heuristics. How would you solve this problem?
Edit: Someone requested that I explain why 2) didn't work well. There were some test cases where we still had continuous values in a column but there weren't many unique values in the column. The heuristic in 2) obviously failed in that case. There were also issues where we had a categorical column that had many, many unique values, e.g., passenger names in the Titanic data set. Same column type misclassification problem there. | What is a good heuristic to detect if a column in a pandas.DataFrame is categorical? | 0.028564 | 0 | 0 | 13,722
35,827,446 | 2016-03-06T13:30:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,random-forest,subsampling | 35,847,976 | 4 | false | 0 | 0 | Certainly not all samples are selected for each tree. By default each sample has a 1 - ((N-1)/N)^N ≈ 0.63 chance of being sampled at least once for one particular tree, 0.63^2 for being sampled twice, and 0.63^3 for being sampled 3 times..., where N is the sample size of the training set.
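A quick empirical check of the ~0.63 figure (N here is illustrative):

import random

N = 10000
bootstrap = [random.randrange(N) for _ in range(N)]  # draw N samples with replacement
print(len(set(bootstrap)) / float(N))  # ~0.632 of the samples appear at least once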
Each bootstrap sample is on average different enough from the other bootstraps that the decision trees are adequately different, and the average prediction of the trees is robust to the variance of each tree model. If the sample size were increased to 5 times the training set size, every observation would probably be present 3-7 times in each tree and the overall ensemble prediction performance would suffer. | 1 | 6 | 1 | In the documentation of the SciKit-Learn Random Forest classifier, it is stated that
The sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default).
What I don't understand is: if the sample size is always the same as the input sample size, then how can we talk about a random selection? There is no selection here, because we use all the (and naturally the same) samples at each training.
Am I missing something here? | How can SciKit-Learn Random Forest sub sample size may be equal to original training data size? | 0.099668 | 0 | 0 | 3,033 |
35,834,903 | 2016-03-07T01:39:00.000 | 5 | 1 | 0 | 0 | python,audio | 36,206,031 | 4 | false | 0 | 0 | Hello. After a lot of trial and error I finally solved it: the culprit was pip install sounddevice --user.
You need to remove the --user part so that the command is pip install sounddevice. This installs it throughout the entire system and works. | 2 | 4 | 0 | I have python code running on a Raspberry Pi B+ that uses the sounddevice library, which lets you play and record sounds with python. I have successfully installed the modules. I can confirm through the python command line that import sounddevice as sd works without errors. I have also confirmed by typing help('modules') in the python command line that the sounddevice module appears. Only when I run this code in an independent python program does ImportError: No module named sounddevice appear.
Hope someone can help.
Here is the included code:
import sounddevice as sd
The error:
ImportError: No module named sounddevice | Python import sounddevice as sd (ImportError: No module name sounddevice) | 0.244919 | 0 | 0 | 14,295
35,834,903 | 2016-03-07T01:39:00.000 | 0 | 1 | 0 | 0 | python,audio | 62,490,236 | 4 | false | 0 | 0 | I had this same problem on Windows 10 even after eliminating the --user part of the pip install command. For some reason, installing pyaudio first resolved the problem with sounddevice. Sounddevice continues to work even after uninstalling pyaudio. They're both based on PortAudio, so perhaps there's something shared in there, but I am not sure. | 2 | 4 | 0 | I have python code running on a Raspberry Pi B+ that uses the sounddevice library, which lets you play and record sounds with python. I have successfully installed the modules. I can confirm through the python command line that import sounddevice as sd works without errors. I have also confirmed by typing help('modules') in the python command line that the sounddevice module appears. Only when I run this code in an independent python program does ImportError: No module named sounddevice appear.
Hope someone can help.
Here is the included code:
import sounddevice as sd
The error:
ImportError: No module named sounddevice | Python import sounddevice as sd (ImportError: No module name sounddevice) | 0 | 0 | 0 | 14,295
35,835,274 | 2016-03-07T02:32:00.000 | 2 | 0 | 1 | 0 | python,virtualenv,anaconda | 59,508,454 | 3 | true | 0 | 0 | In case anyone is coming back to this now, for conda 4.7.12, entering export PYTHONNOUSERSITE=True before the conda activate call successfully isolated the conda environment from global/user site packages for me.
On the other hand, entering export PYTHONNOUSERSITE=0 allows the conda environment to be reintroduced to the global/user site packages.
Note: This is instead of the previously suggested export PYTHONNOUSERSITE=1. | 1 | 10 | 0 | I have a project called ABC, I have a conda env just for it in the fold ~/anaconda/envs/ABC, I believe it is a venv, and I want to use some specific packages from the global site packages.
For a normal Python installation it can be done by removing the no-global-site-packages.txt file from the venv folder, or by setting the venv to use global site packages, but I didn't find any equivalent approach for Anaconda. The online documentation does not have an answer either.
How to do this for Anaconda? | how to reuse global site-packages in conda env | 1.2 | 0 | 0 | 14,872 |
35,835,787 | 2016-03-07T03:39:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,svm,libsvm,prediction | 35,836,274 | 1 | false | 0 | 0 | The file "trainingdata.svm.prediction" contains the predicted labels, 1 and 0, for your set (1 means the sample was predicted to be male, 0 female).
The reported accuracy is computed against the labels in your test file; it assumes all the labels belong to class index 0, I believe. | 1 | 0 | 1 | I am using LIBSVM for the first time.
I was able to train on data (for images) and my model, "trainingdata.svm.model", is ready.
Now, when I run my classification against unknown test data, it gives me two outputs:
1. trainingdata.svm.prediction (this file contains 1s and 0s), one per test sample.
2. Accuracy = 8%.
THE QUESTION:
1. How do I interpret the 1s and 0s in my "trainingdata.svm.prediction"? Note: I am classifying genders, where 1 could be male and 0 could be female.
How is Accuracy calculated? How can a program calculate accuracy, since the test data is an unknown entity and we do not know the labels yet?
Thanks | How is "accuracy" calculated using Libsvm - SVM_Predict.exe | 0 | 0 | 0 | 190 |
35,837,243 | 2016-03-07T06:09:00.000 | 6 | 0 | 0 | 0 | python,python-2.7,hdf5,h5py,hdf | 46,668,033 | 1 | false | 0 | 0 | Declaration up front: I help maintain h5py, so I probably have a bias etc.
The wikipedia page has changed since the question was posted, here's what I see:
Criticism
Criticism of HDF5 follows from its monolithic design and lengthy specification.
Though a 150-page open standard, the only other C implementation of HDF5 is just a HDF5 reader.
HDF5 does not enforce the use of UTF-8, so client applications may be expecting ASCII in most places.
Dataset data cannot be freed in a file without generating a file copy using an external tool (h5repack).
I'd say that pretty much sums up the problems with HDF5: it's complex (but people need this complexity, see the virtual dataset support), it's got a long history with backwards compatibility as its focus, and it's not really designed to allow for massive changes in files. It's also not the best on Windows (due to how it deals with filenames).
I picked HDF5 for my research because, of the available options, it had decent metadata support (HDF5 at least allows UTF-8; formats like FITS don't even have that), support for multidimensional arrays (which formats like Protocol Buffers don't really support), and support for more than just 64 bit floats (which is very rare).
I can't comment about known bugs, but I have seen corruption (this happened when I was writing to a file and Linux OOM-killed my script). However, this shouldn't be a concern as long as you have proper data hygiene practices (as mentioned in the hackernews link), which in your case would be to not continuously write to the same file, but to create a new file for each run. You should also not modify the file; instead, any data reduction should produce new files, and you should always back up the originals.
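A sketch of that "new file per run" hygiene with h5py (the file name pattern and dataset shape are illustrative):

import time

import h5py
import numpy as np

run_id = time.strftime('%Y%m%d-%H%M%S')
with h5py.File('run-%s.h5' % run_id, 'w') as f:  # never reopen old runs for writing
    f.create_dataset('images', data=np.random.rand(100, 3072))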
Finally, it is worth pointing out there are alternatives to HDF5, depending on what exactly your requirements are: SQL databases may fit your needs better (and sqlite comes with Python by default, so it's easy to experiment with), as could a simple csv file. I would recommend against custom/non-portable formats (e.g. pickle and similar), as they're no more robust than HDF5, yet more complex than a csv file. | 1 | 8 | 1 | On wikipedia one can read the following criticism about HDF5:
Criticism of HDF5 follows from its monolithic design and lengthy
specification. Though a 150-page open standard, there is only a single
C implementation of HDF5, meaning all bindings share its bugs and
performance issues. Compounded with the lack of journaling, documented
bugs in the current stable release are capable of corrupting entire
HDF5 databases. Although 1.10-alpha adds journaling, it is
backwards-incompatible with previous versions. HDF5 also does not
support UTF-8 well, necessitating ASCII in most places. Furthermore
even in the latest draft, array data can never be deleted.
I am wondering if this applies just to the C implementation of HDF5 or if it is a general flaw of HDF5?
I am doing scientific experiments which sometimes generate gigabytes of data and in all cases at least several hundred megabytes of data. Obviously data loss and especially corruption would be a huge disadvantage for me.
My scripts always have a Python API, hence I am using h5py (version 2.5.0).
So, is this criticism relevant to me and should I be concerned about corrupted data? | HDF5 possible data corruption or loss? | 1 | 0 | 0 | 1,980 |
35,841,555 | 2016-03-07T10:38:00.000 | 2 | 1 | 0 | 0 | python,ionic-framework,backend,hybrid-mobile-app | 35,842,202 | 2 | false | 0 | 0 | Yes, you can use Python with the Django REST framework as a backend for your Ionic app. | 2 | 8 | 0 | Can I use Python as a backend for my Ionic app? I am new to Ionic as well as backend development. If not Python, suggest some good language for backend development. I am working on a hybrid app. | Can I Use Python in Ionic for Backend work | 0.197375 | 0 | 0 | 12,424
35,841,555 | 2016-03-07T10:38:00.000 | 6 | 1 | 0 | 0 | python,ionic-framework,backend,hybrid-mobile-app | 35,842,136 | 2 | true | 0 | 0 | You can certainly work with Python. There is an awesome framework called Django which will easen up your development.
However, if you are new to backend development and are already developing the ionic app, I strongly recommend using NodeJS.
It is JavaScript running on the server machine. The reason is that you will be developing in the same language on both sides, simplifying the learning curve. Node.js is a magnificent platform that works a little differently than others, since it runs in a single process using an event loop to handle incoming requests. It is worth taking a look; you will be building serious functionality in very little time. Take a look at Sequelize to work with SQL databases in an abstracted ORM way (I don't know if you are familiar with databases, but it brings classes and objects to talk to the DB, so you can forget about SQL commands like select, join, ...).
In Node.js there are a lot of modules that you can just import, like libraries in Java or C, and call complex functionality through simple JavaScript code.
Take a look at the Express framework for Node to build the server as a REST API.
Your question was a little broad so I don't know what else you would like to know; if you have any further questions I can certainly help you. | Can I Use Python in Ionic for Backend work | 1.2 | 0 | 0 | 12,424
35,842,899 | 2016-03-07T11:45:00.000 | 1 | 0 | 0 | 0 | python,django-models | 35,843,074 | 1 | false | 1 | 0 | Apps are logical modules. One app can contain several models. Your project could have users and blog apps: users would have User and Group models; blog would have Post, Tag and PostTag models.
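A rough sketch of what the blog app's models.py might look like (the field choices here are illustrative):

from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

class Tag(models.Model):
    name = models.CharField(max_length=50)

class PostTag(models.Model):
    post = models.ForeignKey(Post, on_delete=models.CASCADE)
    tag = models.ForeignKey(Tag, on_delete=models.CASCADE)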
Views within a single app usually share the same URL prefix and have their own URL routing.
Within an app, all database migrations are executed consecutively, whereas it's your responsibility to specify dependencies between migrations from different apps.
Try to keep the logical coupling between apps as loose as possible. | 1 | 1 | 0 | I want to know if I've understood the main point of django app usage.
Every app has a models.py file which creates tables in our database, correct?
For example, I want to create a personal CMS. I should create an app with tables for my post details, and another app with tables for the users that want to sign up to my blog, in order to keep their usernames and passwords in the database, and I could also create another app with separate tables to save other data... Am I thinking about this correctly?! What exactly are django apps for? | What is django app usage? | 0.197375 | 0 | 0 | 73
35,844,303 | 2016-03-07T12:54:00.000 | 1 | 0 | 0 | 0 | python,mysql,django | 35,844,490 | 3 | false | 1 | 0 | The best way to do this is to store the images on your server in some specific, general folder for these images. After that you store a string in your DB with the path to the image that you want to load. This is a more efficient way to do it. | 1 | 3 | 0 | I am trying to create my personal web page. In it I need a recommendations panel, which contains recommendations by ex-employees/friends etc.
So I was planning to create a model in django with the following attributes:
author_name
author_designation
author_image
author_comments
I have the following questions related to the image part:
Is it good practice to store images in the backend database? (A database is for structured information, from what I understand.)
How should I store images so that scaling and managing the content becomes really easy? | Is it a good practice to save images in the backend database in mysql/django? | 0.066568 | 1 | 0 | 1,276
35,847,399 | 2016-03-07T15:20:00.000 | 4 | 0 | 0 | 0 | python,neo4j,py2neo | 35,849,399 | 2 | false | 0 | 0 | You don't. That is, I've not written a way to do that.
The watch function is intended only as a debugging utility for an interactive console session. You shouldn't need to use it in an application. | 1 | 0 | 0 | I run this earlier in the code:
watch("httpstream")
Subsequently, any py2neo commands that trigger HTTP traffic will result in verbose logging. How can I stop the effect of watch() logging without creating a new Graph instance? | How do I stop py2neo watch()? | 0.379949 | 0 | 1 | 183
35,849,754 | 2016-03-07T17:09:00.000 | 0 | 0 | 1 | 0 | python,list,csv | 35,880,616 | 2 | false | 0 | 0 | OK, I should close this. My comment resolved the question above. I reformatted my input csv files to be values, each on a separate "line", so I was able to read them in one by one and append them to a list. This seems really sloppy and wasteful; I was hoping for a method to read a csv file and in one line assign it to a single list, not a list of lists. | 1 | 0 | 1 | I know this has been asked many times, but when I try this, I always get a list of lists.
The data in my input file (col.csv) looks like:
1,2,"black", "orange"
There are NO hard returns in the data (\n) (it's a csv file, right?). When I use the csv module in python to import to a list using reader, I end up with a list of lists, with the first entry, list[0][0], containing all the data.
How do I import the data into a list such that each comma-separated value is a single list entry? The typical method I see uses for row in..., but I don't have rows – there are no returns in the data. Sorry for such a rank amateur question. | I am trying to read several csv files in python 2.7 and assign to a list variable | 0 | 0 | 0 | 55
35,851,455 | 2016-03-07T18:43:00.000 | 0 | 0 | 1 | 0 | python,function,variables,closures,global | 35,851,658 | 3 | false | 0 | 0 | Global variables are discouraged because they make it hard to keep track of the state of the program. If I'm debugging a 1,000-line file, and somewhere in the middle of a function I see some_well_named_flag = False, I'm going to have a lot of hunting to do to see what else in the program it affects.
Functions don't have state. The places where they can modify the program are more or less limited to the parameters and return value.
If you're still concerned about controlling access to functions, there are other languages like Java or C++ that can help you do that. One convention with Python is to prefix functions that shouldn't be used outside of the class with an underscore, and then trust people not to call them from outside the class. | 2 | 5 | 0 | I was writing some Python code and, as usual, I try to make my functions small and give them a clear name (although sometimes a little too long). I get to the point where there are no global variables and everything a function needs is passed to it.
But I thought, in this case, every function has access to any other function. Why not limit their access to other functions just like we limit the access to other variables?
I was thinking of using nested functions, but that implies closures, and that's even worse for my purpose.
I was also thinking about using objects and I think this is the point of OOP, although it'll be a little too much boilerplate in my case.
Has anyone thought about this problem, and what's the solution? | Why it's not ok for variables to be global but it's ok for functions? | 0 | 0 | 0 | 77
35,851,455 | 2016-03-07T18:43:00.000 | 7 | 0 | 1 | 0 | python,function,variables,closures,global | 35,851,566 | 3 | false | 0 | 0 | It is not a good idea to have global mutable data, e.g. variables. The mutability is the key here. You can have constants and functions to your heart's content.
But as soon as you write functions that rely on globally mutable state it limits the reusability of your functions - they're always bound to that one shared state. | 2 | 5 | 0 | I was writing some Python code and, as usual, I try to make my functions small and give them a clear name (although sometimes a little too long). I get to the point where there are no global variables and everything a function needs is passed to it.
But I thought, in this case, every function has access to any other function. Why not limit their access to other functions just like we limit the access to other variables?
I was thinking of using nested functions, but that implies closures, and that's even worse for my purpose.
I was also thinking about using objects and I think this is the point of OOP, although it'll be a little too much boilerplate in my case.
Has anyone thought about this problem, and what's the solution? | Why it's not ok for variables to be global but it's ok for functions? | 1 | 0 | 0 | 77
35,851,862 | 2016-03-07T19:04:00.000 | 0 | 0 | 1 | 0 | python | 35,851,962 | 5 | false | 0 | 0 | You could use a regular expression such as "hours.*minutes", or you could use a simple string search that looks for "hours", notes the location where it is found, then does another search for "minutes" starting at that location. | 1 | 2 | 0 | I'm wondering how to detect if two substrings match a main string in a specific order. For example if we're looking for "hours" and then "minutes" anywhere at all in a string, and the string is "what is 5 hours in minutes", it would return true. If the string was "what is 5 minutes in hours", it would return false. | If multiple substrings match string in specific order | 0 | 0 | 0 | 63 |
35,858,245 | 2016-03-08T03:13:00.000 | 7 | 0 | 0 | 0 | python | 35,858,291 | 1 | true | 0 | 0 | Python dynamically sizes the string; it's not vulnerable to an overflow (though if the input is huge, it could raise a MemoryError when it can't expand the buffer further).
Python reads the input in chunks, and grows the buffer if it fills the buffer without finding a newline before reading another chunk. | 1 | 2 | 0 | Since CPython is implemented in C, when it reads a line from stdin, if the line exceeds whatever is the default size given to the string being read by the interpreter, would it cause a buffer overflow or does Python handle it? | Is Python's raw_input() vulnerable to a buffer overflow? | 1.2 | 0 | 0 | 787 |
35,858,853 | 2016-03-08T04:17:00.000 | 1 | 0 | 1 | 0 | c#,python,visual-studio | 35,859,134 | 2 | false | 0 | 0 | is there any way through which I could create a Visual Studio Project, add files to it
Whilst there is a .NET API for creating/manipulating project files, it's a bit on the undocumented side (I have used it in the past though) and I don't know if you can call it from Python. If you want to see the .NET API just look at the IronPython Custom Project Extension project.
However, VS project files are just XML files so if you know the schema, you can just write to the files from Python using your API of choice. VS won't know any better.
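For example, here is a sketch of emitting a minimal project file from Python with the standard library; the real MSBuild schema has much more to it, so treat the element names as a simplification:

import xml.etree.ElementTree as ET

NS = 'http://schemas.microsoft.com/developer/msbuild/2003'
ET.register_namespace('', NS)
project = ET.Element('{%s}Project' % NS)
items = ET.SubElement(project, '{%s}ItemGroup' % NS)
ET.SubElement(items, '{%s}Compile' % NS, Include='Program.cs')
ET.ElementTree(project).write('My.csproj', xml_declaration=True, encoding='utf-8')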
and build it from within the python program
Ultimately you can just spawn a process to invoke msbuild. Works for Jenkins. | 1 | 0 | 0 | I have a few questions:
Is there any way through which I could create a Visual Studio Project, add files to it, and build it from within a Python program?
Are there built-in commands to do this? If not, are there commands which could be run from the command line?
Thanks for the help. | create a visual studio project and add files to it from a python program | 0.099668 | 0 | 0 | 1,145 |
35,859,441 | 2016-03-08T05:07:00.000 | 0 | 0 | 0 | 0 | python,linux,scripting | 35,860,000 | 3 | false | 0 | 0 | Use Python 3. The number of packages that don't support Python 3 is shrinking every day, and the vast majority of large/important frameworks out there already support both. There are even some projects which have dropped Python 2 entirely, albeit those tend not to be large (since enterprise inertia tends to hold projects back).
Starting a new project today on Python 2, especially as a beginner, is just opening yourself up to more pain, IMO, than running into a package that doesn't support Python 3.
Considering the versatility of Python and the size of the vibrant Python community, there are often multiple packages that solve the same problem. That means even if you find one that doesn't support Python 3, it's often possible to find a similar project that does.
Once you get confident enough with Python 3, if you do run into a package that only supports Python 2, you always have the source and can start contributing patches back! :D | 1 | 0 | 0 | I am a Java developer with more than 10 years of experience.
I started using python a few months back when I had a requirement to create a script which pulls data from a REST service and then generates a report using this data. The fact that python is a multi-purpose language (scripting, web applications, REST services etc.) coupled with very fast development speed has ignited a deep interest of mine in this language. In fact, this is the only language I use when I am in the Linux world.
Currently I am trying to port my (powershell/shell) automation scripts, developed for fully automating the release process of Piston (an open source Java based micro portal technology), to python. However, a major challenge in front of me is which version (2 or 3) of python I should use. Ideally I would prefer 3, as I believe it has many improvements over version 2 and I would like to use this version for all new development. However, my concern is there could be some packages which may not have a version for python 3 yet. This is what is mentioned on the python.org site too -
However, there are some key issues that may require you to use Python 2 rather than Python 3.
Firstly, if you're deploying to an environment you don't control, that may impose a specific version, rather than allowing you a free selection from the available versions.
Secondly, if you want to use a specific third party package or utility that doesn't yet have a released version that is compatible with Python 3, and porting that package is a non-trivial task, you may choose to use Python 2 in order to retain access to that package.
One popular module that doesn't yet support Python 3 is Twisted (for networking and other applications). Most actively maintained libraries have people working on 3.x support. For some libraries, it's more of a priority than others: Twisted, for example, is mostly focused on production servers, where supporting older versions of Python is important, let alone supporting a new version that includes major changes to the language. (Twisted is a prime example of a major package where porting to 3.x is far from trivial.)
So I don't want to be in a situation where there is a package which I think can be very useful for my automation scripts but does not have a version for python 3. | Choosing between Python 2(.7.x) and Python 3(.5.x) | 0.066568 | 0 | 0 | 102 |
35,866,229 | 2016-03-08T11:32:00.000 | -1 | 0 | 0 | 0 | python,svg,pygal | 36,960,515 | 5 | false | 0 | 0 | You need to include <script type="text/javascript" src="/js/pygal-tooltips.js"></script> in your html. | 3 | 3 | 0 | Following my python book, I made a bar graph using pygal. I rendered the information to an .svg file and opened it up in my web browser. My book says that the plot is interactive and will show you the value of each bar if you hover over it. However, whenever I hover my mouse over the graph, nothing happens. I am using a mac and google chrome to view the file.
Thanks! | Tooltips are not working in my pygal bar graph? | -0.039979 | 0 | 0 | 1,833 |
35,866,229 | 2016-03-08T11:32:00.000 | 0 | 0 | 0 | 0 | python,svg,pygal | 66,307,881 | 5 | false | 0 | 0 | Is this issue solved?
The interaction is done using an online script referenced via the href in the svg file.
If you don't have an internet connection, it will not work.
If you have solved this problem, please let us know. | 3 | 3 | 0 | Following my python book, I made a bar graph using pygal. I rendered the information to an .svg file and opened it up in my web browser. My book says that the plot is interactive and will show you the value of each bar if you hover over it. However, whenever I hover my mouse over the graph, nothing happens. I am using a Mac and Google Chrome to view the file.
Thanks! | Tooltips are not working in my pygal bar graph? | 0 | 0 | 0 | 1,833 |
35,866,229 | 2016-03-08T11:32:00.000 | 2 | 0 | 0 | 0 | python,svg,pygal | 36,469,495 | 5 | false | 0 | 0 | I'm having the same problem on my Windows 10 machine. Neither MS Edge nor Google Chrome renders the charts as interactive.
There seems to be something happening in between the script being executed in python and the final render. The reason I note this point is because the interactive examples on the pygal site work with no problem in both browsers but when the example script is pasted into python and executed, it doesn't work as it should. | 3 | 3 | 0 | Following my python book, I made a bar graph using pygal. I rendered the information to an .svg file and opened it up in my web browser. My book says that the plot is interactive and will show you the value of each bar if you hover over it. However, whenever I hover my mouse over the graph, nothing happens. I am using a mac and google chrome to view the file.
Thanks! | Tooltips are not working in my pygal bar graph? | 0.07983 | 0 | 0 | 1,833 |
35,866,453 | 2016-03-08T11:44:00.000 | 1 | 0 | 0 | 0 | python,multithreading,flask | 35,866,759 | 1 | false | 1 | 0 | You can render a simple html page at your default route that makes an Ajax request to a specific route, which starts your script; when the script is finished, return the data and catch it in your Ajax request to display it on your page. While your script is running you can display a loader to show that something is happening. | 1 | 0 | 0 | I am writing an app using Flask that runs a shell script and displays its output in a web page. This works fine. The thing is, when I run the script, it takes a long time and the page is loading the whole time the script is executing. What I want is for the script to run in the background and display the result when it ends.
Is there a way to do that? | Run a script in background in flask | 0.197375 | 0 | 0 | 1,153 |
35,866,967 | 2016-03-08T12:07:00.000 | 0 | 0 | 1 | 0 | python,anaconda,vpython | 39,803,190 | 1 | false | 0 | 0 | The graph functions now live in the main vpython library when using Jupyter. So,
from vpython import *
should be sufficient. (P.S. I'd recommend not importing * but rather importing the functions you plan to use or just import vpython.)
Note, however, that some functions change name in the Jupyter-compatible version of VPython: display becomes canvas and gdisplay becomes graph, and you have to explicitly use vector(x,y,z) rather than (x,y,z) and obj.pos.x rather than obj.x. | 1 | 4 | 1 | I try to import vpython into anaconda. It seems to work so far, but if I call
from visual import * it gives me an error.
However, it does work when I type from vpython import *, which is really weird since in all programs I only see the from visual import * command.
Now to the real problem: I can't draw graphs. I have to call from visual.graph import * but this does not work (from vpython.graph import * doesn't work either).
I am receiving the error below:
ImportError Traceback (most recent call last)
in ()
----> 1 from visual import *
ImportError: No module named visual | Issues with importing vpython for anaconda | 0 | 0 | 0 | 3,831
35,869,226 | 2016-03-08T13:52:00.000 | 0 | 0 | 0 | 0 | python,rabbitmq,messages,bytestream,cloudamqp | 35,869,733 | 1 | false | 0 | 0 | The message body is a buffer; you can put whatever you prefer inside.
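For example, publishing a file's raw bytes with pika, one common Python client (the connection URL, queue name, and file name are illustrative):

import pika

conn = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/'))
channel = conn.channel()
channel.queue_declare(queue='audio')
with open('clip.wav', 'rb') as f:
    channel.basic_publish(exchange='', routing_key='audio', body=f.read())
conn.close()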
JSON, ASN.1, XML, or raw audio bytes (as in the sketch above) are all fine. | 1 | 0 | 0 | How do I send an audio file as a message in CloudAMQP?
I'm guessing I need its byte stream and to send it as JSON, but I'm not sure if that is possible. Or do I just send a link to the location of the audio file for download? | RabbitMQ and Audiofiles | 0 | 0 | 1 | 404
35,869,666 | 2016-03-08T14:12:00.000 | 0 | 0 | 1 | 0 | python,logging | 35,869,928 | 1 | true | 0 | 0 | If the logger is not named, it just means it is the root logger. You can get it by calling logging.getLogger().
So to set the log level, do this:
logging.getLogger().setLevel(logging.INFO) | 1 | 1 | 0 | To change the logging level of a dependent package that properly names its logger (log = logging.getLogger(__name__)) is easy: logging.getLogger("name.of.package").setLevel(logging.WARNING).
But if the 3rd party package doesn't name their logger and just logs messages using logging.info("A super loud annoying message!"), how do I change that level? Adding the getLogger(...).setLevel(..) doesn't seem to work because the logger isn't named. Is it possible to change the logging level output of just one package without changing the level for the entire logging module? | How to change python log level for unnamed logger? | 1.2 | 0 | 0 | 133 |
35,871,850 | 2016-03-08T15:49:00.000 | 0 | 0 | 1 | 0 | python,anaconda,conda | 35,872,466 | 1 | false | 0 | 0 | If you have already used pip and virtualenv, conda is like both at the same time. It's a package manager and also creates virtual environments.
To answer your question: conda creates a new environment, exporting python paths for this environment and installing all packages there. You can always switch between environments, but after a reboot, all your virtual environments will be deactivated and you'll have your default system python path (2.7). | 1 | 2 | 0 | I have installed anaconda with python 3.5, but I am curious to know how conda manages between the system python (2.7.6) and python 3.5 (installed with anaconda).
Particularly, if I make a new environment with conda's help containing python 3.5 and don't switch back to my root env before restarting the system, does the system start with python 3 as the default or with python 2.7.6?
I need an answer to this, as one of my friends installed Anaconda with python 3.5 as the system default, which broke the system dependencies, and it did not start.
I am using Ubuntu 14.04. | How conda manages the environment with system python and python installed with this | 0 | 0 | 0 | 642 |
35,872,623 | 2016-03-08T16:24:00.000 | 0 | 0 | 1 | 1 | python,linux,ubuntu,installation,environment-variables | 35,872,702 | 1 | false | 0 | 0 | Try installing the 2.7 version: apt-get install python2.7-dev | 1 | 0 | 0 | I have the newest version of python (2.7.11) installed in my home directory. To compile the YouCompleteMe plugin, I need python-dev to be installed. However, the global python of my environment is 2.7.11, which means that if I install python-dev via apt-get, it would be incompatible with python 2.7.11, because it is built for python 2.6.
I re-compiled python 2.7.11 with the --enable-shared flag, but don't know how to add its lib and header files to the system's default search path (if such a path environment variable exists).
So, my question is: how do I manually install the locally compiled python library system-wide? | how to manually install the locally compiled python library (shared python library) to system? | 0 | 0 | 0 | 170
35,873,048 | 2016-03-08T16:43:00.000 | 1 | 0 | 0 | 0 | python,dictionary,report,execution,quickfix | 35,889,301 | 1 | false | 0 | 0 | It sounds like what you've changed in the data dictionary makes the ExecID optional rather than mandatory. If you wanted to remove "the requirement" altogether then you'd have to remove the ExecID from the fields making up an execution report in the data dictionary. However, if you did that and your cpty still sent it in the exec report (because it's still configured in their data dictionary) then it would (provided you're using your own DD validation) fail validation.
Why don't you want the ExecID field?
Why can't you ignore it if it's sent to you? | 1 | 1 | 0 | I am connecting to an order session but I get execution reports without the ExecID field. I changed the requirement to "no" for the ExecID field in ExecutionReport messages in the data dictionary, but quickfix still sends a reject message. Thanks for any help. | Quickfix python data dictionary | 0.197375 | 0 | 0 | 601
35,877,333 | 2016-03-08T20:25:00.000 | 0 | 0 | 0 | 0 | python,flask,url-routing,werkzeug | 35,878,062 | 1 | true | 1 | 0 | It turns out that 9000 was right: the '@' sign is a perfectly legal character in the URL. As such, that shouldn't be what Flask was complaining about. Much less obvious than the conversion of '@' to '%40' in the redirected URL is that the trailing slash was missing from the initial request. When writing my question, I was so focused on the change from '@' to '%40' (which is, as it turns out, the same thing in URL terms) that I didn't notice the missing trailing slash at the end of the first URL and mistakenly included it when writing this question.
Adding the trailing slash to the POST URL, regardless of whether that URL contained '@' or '%40', fixed the issue. If Flask replaces '@' with '%40' when redirecting, that is nothing to worry about. The real problem is likely caused by something else entirely. | 1 | 1 | 0 | I am re-implementing a legacy system as a Flask app and must keep url patterns as they are. One of the urls includes a user's full email address directly (in other words, the email address is part of the url and not as a GET parameter).
When I send requests to this url, Flask automatically responds with a redirect to the same url except that the '@' sign in the email address is replaced with '%40'. For example, a request to /users/new/[email protected]/ is redirected to /users/new/user%40example.com/. I even receive this response from Flask when I send up POST requests directly to the second url, so I'm assuming that the '%40' is automatically translated into an '@' character when processed for the request.
How do I get Flask to accept requests to urls that include the '@' sign without redirecting? This may be Werkzeug's fault, as Flask's URL resolving system is built on Werkzeug.
EDIT: I incorrectly included a trailing slash in the initial request URL listed in this question. My problem was in fact caused by the absence of the slash, not the replacement of '@' with '%40'. | Flask redirecting requests to urls containing '@' | 1.2 | 0 | 0 | 511
35,879,103 | 2016-03-08T22:15:00.000 | 1 | 0 | 1 | 0 | python,collections | 35,879,173 | 2 | false | 0 | 0 | You could have a data structure that maps interval start or end points to positions. In order to compute the interval you need to look up, either do some appropriate rounding on the time value in question (if the intervals can be considered regular enough for that), or use the bisect module to look up the closest start or end point in the list of all occurring intervals. | 1 | 0 | 0 | I've got a situation where I've got finer time granularity than I do position granularity. Let's say that I'm measuring position at 10 Hz, but am making other measurements at 100 Hz. I'm wondering if anyone is aware of a clever/efficient way of associating a position with a time interval? That is, given a time that falls within that interval the lookup would return an appropriate position. It may just be that a straightforward implementation involving a list of tuples (start_time, end_time, position) and looping won't be disastrous, but I'm curious to know how other people have dealt with this kind of problem. | Efficiently associating a single value with an interval | 0.099668 | 0 | 0 | 17 |
35,879,106 | 2016-03-08T22:15:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,random | 35,879,315 | 5 | false | 0 | 0 | You could declare a fixed list of approximately 1000 strings (i.e. ['000', '001', ..., '999'], omitting whatever values you like), then call random.choice() on that list.
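A programmatic version of this, using exclude_me from the question (the example values are illustrative):

import random

exclude_me = ['312', '534', '434', '999', '123']
candidates = ['%03d' % i for i in range(1000) if '%03d' % i not in exclude_me]
print(random.choice(candidates))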
If you don't want to type 1000 strings, you could programmatically generate the list, as in the sketch above, using something like range() and then removing your banned values. | 1 | 0 | 0 | In Python 2.7, how do I most efficiently produce a unique, random string of len=3 - formed only of digits - where values contained in a list (called exclude_me) are not considered while the random string is being calculated?
E.g. exclude_me=['312','534','434','999',...........,'123'] | Generating a random string of fixed length where certain values are prohibited | 0 | 0 | 0 | 344 |
35,880,417 | 2016-03-08T23:56:00.000 | 0 | 0 | 0 | 0 | python,video,ffmpeg,video-streaming,http-live-streaming | 35,886,856 | 2 | false | 1 | 0 | You could use FFmpeg to encode the video stream to H.264 and mux it into an mp4 container; that can then be used directly in an HTML5 video element. | 1 | 6 | 0 | I am trying to show a live webcam video stream on a webpage and I have a working draft. However, I am not satisfied with the performance and am looking for a better way to do the job.
I have a webcam connected to a Raspberry Pi and a web server which is a simple python-Flask server. Webcam images are captured using OpenCV and formatted as JPEG. Later, those JPEGs are sent to one of the server's UDP ports. What I have done up to this point is something like homemade MJPEG (motion-JPEG) streaming.
On the server side I have a simple python script that continuously reads the UDP port and puts the JPEG image into the HTML5 canvas. That is fast enough to create the perception of a live stream.
Problems:
This compresses the video very little. Actually, it does not compress the video; it only decreases the size of a frame by formatting it as JPEG.
The FPS is low and the quality of the stream is not that good.
It is not a major point for now but UDP is not a secure way to stream video.
The server is busy picking images from UDP; it needs a threaded server design.
Alternatives:
I have used FFmpeg before to convert video formats and also to stream pre-recorded video. I guess it is possible to encode (let's say H.264) and stream webcam live video using ffmpeg or avconv. (Encoding)
Is this feasible on a Raspberry Pi?
VLC is able to play live video streamed over the network. (Stream)
Is there any media player that can be embedded in HTML/JavaScript to handle a network stream the way VLC does?
I have read about HLS (HTTP Live Streaming) and MPEG-DASH.
Do these apply to this case? If so, how should I use them?
Is there any other way to show a live stream on a webpage?
RTSP is a secure protocol.
What is the best practice for the transport layer protocol in video streaming? | Live Video Encoding and Streaming on a Webpage | 0 | 0 | 1 | 8,409
35,881,832 | 2016-03-09T02:30:00.000 | 0 | 0 | 1 | 0 | python,arrays,numpy,pandas,dataframe | 35,882,190 | 3 | false | 0 | 0 | Here's a general overview (partial credit to online documentation and Mark Lutz and Wes McKinney O'Reilly books):
list: A general sequence object available in Python's standard library. Lists are positionally ordered collections of arbitrarily typed objects, and have no fixed size. They are also mutable (strings, for example, are not).
numpy.ndarray: Stores a collection of items of the same type. Every item takes up the same size block of memory (not necessarily the case in a list). How each item in the array is to be interpreted is specified by a separate data-type object (dtype, not to be confused with type). Also, unlike lists, ndarrays can't have items appended in place (numpy.append returns a new array with the appended items, whereas list.append modifies the list).
A single ndarray is a vector; an ndarray of same-sized ndarrays is a 2-d array (a.k.a. a matrix), and so on. You can make arbitrary n-dimensional objects by nesting.
pandas.Series: A one-dimensional array-like object containing an array of data (of any dtype) and an associated array of data labels, called its index. It's basically a glorified numpy.ndarray, with labels (stored inside the Series as an Index object) for each item and some handy extra functionality. Also, a Series can contain multiple objects of different dtypes (more like a list).
pandas.DataFrame: A collection of multiple Series, forming a table-like object, with a lot of very handy functionality for data analysis. | 3 | 1 | 1 | I am a Python beginner and I'm getting confused by these different forms of storing data. When should one use which? Also, which of these is suitable for storing a matrix (and a vector)? | Can someone consolidate the definition and the differences between a list, an array, a numpy array, a pandas dataframe , series? | 0 | 0 | 0 | 80
35,881,832 | 2016-03-09T02:30:00.000 | 0 | 0 | 1 | 0 | python,arrays,numpy,pandas,dataframe | 35,882,181 | 3 | false | 0 | 0 | list - the original Python way of storing multiple values
array - a little used Python module (let's ignore it)
numpy array - the closest thing in Python to the arrays, matrices and vectors used in mathematics and languages like MATLAB
dataframe, series - pandas structures, generally built on numpy, better suited for the kind of data found in tables and databases.
To be more specific, you need to give us an idea of what kinds of problems you need to solve. What kind of data are you using, and what do you need to do with it?
lists can change in size, and can contain a wide mix of elements.
numpy.array is fixed in size, and contains a uniform type of elements. It is multidimensional, and implements many mathematical functions. | 3 | 1 | 1 | I am a Python beginner and I'm getting confused by these different forms of storing data. When should one use which? Also, which of these is suitable for storing a matrix (and a vector)?
35,881,832 | 2016-03-09T02:30:00.000 | 0 | 0 | 1 | 0 | python,arrays,numpy,pandas,dataframe | 35,882,258 | 3 | false | 0 | 0 | Lists: lists are very flexible and can hold completely heterogeneous, arbitrary data, and they can be appended to very efficiently.
Array: The array.array type, on the other hand, is just a thin wrapper on C arrays. It can hold only homogeneous data, all of the same type, and so it uses only sizeof(one object) * length bytes of memory.
Numpy arrays: However, if you want to do math on a homogeneous array of numeric data, then you're much better off using NumPy, which can automatically vectorize operations on complex multi-dimensional arrays.
Pandas: Pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool.
Pandas provides a bunch of C or Cython optimized routines that can be faster than numpy "equivalents" (e.g. reading text). For something like a dot product, pandas DataFrames are generally going to be slower than a numpy array.
FYI: Taken from different web sources | 3 | 1 | 1 | I am a Python beginner and I'm getting confused by these different forms of storing data. When should one use which? Also, which of these is suitable for storing a matrix (and a vector)? | Can someone consolidate the definition and the differences between a list, an array, a numpy array, a pandas dataframe , series? | 0 | 0 | 0 | 80
35,881,949 | 2016-03-09T02:45:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,keras | 43,664,585 | 1 | false | 0 | 0 | Keras assumes that if you are using tensorflow, you are going with (samples, channels, rows, cols) | 1 | 1 | 0 | I have a question about the 4D input tensor for Keras Convolution2D layers.
The Keras doc says:
4D tensor with shape: (samples, channels, rows, cols) if dim_ordering='th' or 4D tensor with shape: (samples, rows, cols, channels) if dim_ordering='tf'.
I use 'tf'; what about my input? When I use (samples, channels, rows, cols) it is OK, but when I use (samples, rows, cols, channels) as input, I run into problems. | the input shape of array about Keras on Tensorflow | 0 | 0 | 0 | 1,225
35,882,062 | 2016-03-09T02:58:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 35,889,624 | 1 | false | 0 | 0 | In the test phase you should use the same model names as you used in the training phase. This way you will be able to use the model parameters derived in the training phase. Here is an example:
First, give a name to your vectorizer and to your predictive algorithm (it is NB in this case):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = TfidfVectorizer()
classifier = MultinomialNB()
Then, use these names to vectorize and predict your data:
trainingdata_counts = vectorizer.fit_transform(trainingdata.values)  # learn vocabulary and idf from the training text
classifier.fit(trainingdata_counts, trainingdatalabels)
testdata_counts = vectorizer.transform(testdata.values)  # reuse the fitted vocabulary/idf; no refitting
predictions = classifier.predict(testdata_counts)
This way, your code will be able to process the training and test phases continuously. | 1 | 0 | 1 | In text mining/classification, when a vectorizer is used to transform text into numerical features, TfidfVectorizer(...).fit_transform(text) or TfidfVectorizer(...).fit(text) is used during training. In testing it is supposed to reuse the training information and just transform the data following the training fit.
In the general case the test run(s) are completely separate from the training run. But they need some info about the fit obtained during the training stage, otherwise the transformation fails with the error sklearn.utils.validation.NotFittedError: idf vector is not fitted. It's not just a dictionary, it's something else.
What should be saved after the training is done, to make the test stage pass smoothly?
In other words, training and testing are separated in time and space; how can the test run make use of the training results?
A deeper question would be what 'fit' means in the scikit-learn context, but that's probably out of scope | Vectorizer where or how fit information is stored? | 0 | 0 | 0 | 172
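One common answer is to persist the whole fitted vectorizer object (it carries the vocabulary and the idf vector) and reload it in the test run. A minimal sketch with pickle; the file name and texts are made up:
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
# training run: fit, then serialize the fitted object
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(["some training text", "more text"])
with open("vectorizer.pkl", "wb") as f:
    pickle.dump(vectorizer, f)
# test run (later, elsewhere): deserialize and only transform
with open("vectorizer.pkl", "rb") as f:
    vectorizer = pickle.load(f)
X_test = vectorizer.transform(["unseen test text"])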
35,886,892 | 2016-03-09T08:51:00.000 | 2 | 0 | 1 | 0 | python,command-line | 35,887,027 | 3 | true | 0 | 0 | If you are using Windows, add the Python 3 folder to the PATH variable, then rename its python.exe to python3.exe, and you can easily use it from the command line.
Also, you will have two IDLE editors, so you can select the one that uses the version you want and then run code as you usually do.
If you have Linux, then you already have python and python3 on the system. | 3 | 0 | 0 | I started writing code in python 2, but am now doing a course that runs with python 3, so I have both installed on my Windows computer. Python 2 is my default.
Is there a way to launch python 3 from the command line if python 2 is my default?
Thanks! | Running python 3 from command line when I have python 2 & 3 both installed | 1.2 | 0 | 0 | 208 |
35,886,892 | 2016-03-09T08:51:00.000 | 0 | 0 | 1 | 0 | python,command-line | 35,887,098 | 3 | false | 0 | 0 | Try:
cd C:\Python34\
python.exe [path_to_your_script]
example:
cd C:\Python34\
python.exe "C:\Python34\000\my_script.py" | 3 | 0 | 0 | I started writing code in python 2, but am now doing a course that runs with python 3, so I have both installed on my Windows computer. Python 2 is my default.
Is there a way to launch python 3 from the command line if python 2 is my default?
Thanks! | Running python 3 from command line when I have python 2 & 3 both installed | 0 | 0 | 0 | 208 |
35,886,892 | 2016-03-09T08:51:00.000 | 1 | 0 | 1 | 0 | python,command-line | 35,886,996 | 3 | false | 0 | 0 | This may be helpful for others encountering the same problem.
You can type:
py -3
to launch python 3 if you have python 2 installed as your default. | 3 | 0 | 0 | I started writing code in python 2, but am now doing a course that runs with python 3, so I have both installed on my Windows computer. Python 2 is my default.
Is there a way to launch python 3 from the command line if python 2 is my default?
Thanks! | Running python 3 from command line when I have python 2 & 3 both installed | 0.066568 | 0 | 0 | 208 |
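A quick usage sketch of the Windows py launcher, assuming it was installed along with Python (it usually is):
py -3 myscript.py   # run a script with the newest installed Python 3
py -2 myscript.py   # run it with Python 2 instead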
35,887,212 | 2016-03-09T09:09:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,deep-learning | 35,887,508 | 1 | true | 0 | 0 | For deep learning to work on this you would have to develop a large dataset, most likely manually; some of the largest natural language processing datasets were, in fact, created manually.
But even if you were able to find a dataset that a model could learn from, then a model such as gradient boosted trees would be one, amongst others, well suited to multi-class classification like this. A classic library for this is xgboost. | 1 | 0 | 1 | I have about 3,000 words and I would like to group them into about 20-50 different categories. My words are typical phrases you might find in company names. "Face", "Book", "Sales", "Force", for example.
The libraries I have been looking at so far are pandas and scikit-learn. I'm wondering if there is a machine-learning or deep-learning algorithm that would be well suited for this?
The topics I have been looking at are Classification: identifying which category an object belongs to, and Dimensionality Reduction: reducing the number of random variables to consider.
When I search for putting words into categories on Google, it brings up kids puzzles such as "things you do with a pencil" - draw. Or "parts of a house" - yard, room. | Sorting words into categories in Python | 1.2 | 0 | 0 | 717 |
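A minimal sketch of the multi-class idea, using scikit-learn's GradientBoostingClassifier as a stand-in for xgboost (the fit/predict API is similar); the words and category labels are invented for illustration:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import GradientBoostingClassifier
words = ["face", "book", "sales", "force"]
labels = ["body", "object", "business", "business"]   # hand-made categories
X = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(words).toarray()
clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict(X))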
35,894,125 | 2016-03-09T14:19:00.000 | 3 | 0 | 0 | 1 | python | 35,894,316 | 1 | true | 0 | 0 | Instead of calling the Powershell script from inside the Python script, you should run both the scripts using the task scheduler itself.
Assuming that the command you gave to the scheduler was something like python script.py, you should change it to cmd_script.cmd, where the contents of cmd_script.cmd would be python script.py & powershell.exe script.ps1 | 1 | 0 | 0 | I have two scripts: one is Python based and the other is PowerShell based.
My requirement is that I need to first run the Python script and then the powershell script on startup.
Using Task Scheduler I can run the Python script, but I need to find a way to run the powershell script after the python script finishes.
Some research online shows that I can add something like:
os.system("powershell.exe script.ps1") in my Python script
but that is throwing an error: (unicode error) 'unicodeescape' codec can't decode bytes in position.....
Any suggestions? | How to make a Python script run a powershell script after it executes | 1.2 | 0 | 0 | 163 |
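The unicodeescape error usually comes from Windows backslashes in a plain string literal; a raw string avoids it. A hedged sketch of calling the PowerShell script at the end of the Python script (the path is made up):
import subprocess
# the raw string keeps "\U", "\n", etc. from being read as escape sequences
subprocess.call(["powershell.exe", "-File", r"C:\mypath\script.ps1"])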
35,900,622 | 2016-03-09T19:15:00.000 | 5 | 0 | 0 | 1 | python,windows,scheduled-tasks | 35,901,175 | 2 | true | 0 | 0 | Simply save your script with .pyw extension.
As far as I know, a .pyw extension is the same as .py; the only difference is that .pyw was implemented for GUI programs, and therefore no console window is opened.
If there is more to it than this I wouldn't know; perhaps somebody more informed can edit this post or provide their own answer. | 1 | 3 | 0 | Windows 7 Task Scheduler is running my Python script every 15 minutes. The command line is something like c:\Python\python.exe c:\mypath\myscript.py. It all works well, the script is called every 15 minutes, etc.
However, the task scheduler pops up a huge console window titled taskeng.exe every time, blocking the view for a few seconds until the script exits.
Is there a way to prevent the pop-up? | Windows Task Scheduler running Python script: how to prevent taskeng.exe pop-up? | 1.2 | 0 | 0 | 3,605 |
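As an alternative to renaming the script, the scheduled command itself can point at pythonw.exe, which runs the script without opening a console (a sketch reusing the paths from the question):
c:\Python\pythonw.exe c:\mypath\myscript.py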
35,900,628 | 2016-03-09T19:16:00.000 | 0 | 1 | 0 | 1 | python,windows,python-3.x,atom-editor | 35,900,933 | 2 | true | 0 | 0 | Right-click the Start menu and select System. Then hit "Advanced system settings" > "Environment Variables". Click on Path and hit Edit. Select "New" and add the folder that your Python executable is in. That should fix the problem.
Your other option is to reinstall Python and select "add PYTHON to PATH" as Carpetsmoker suggested. | 2 | 0 | 0 | I am trying to run simple Python code in Atom using the atom-runner package, but I am getting the following error:
Unable to find command: python
Are you sure PATH is configured correctly?
How can I configure PATH? (The path to my Python is C:\Python34.) | Run python3 in atom with atom-runner | 1.2 | 0 | 0 | 4,265
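A command-line alternative to the GUI steps above, assuming the C:\Python34 install directory from the question (note that setx only affects newly opened shells):
setx PATH "%PATH%;C:\Python34"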
35,900,628 | 2016-03-09T19:16:00.000 | 0 | 1 | 0 | 1 | python,windows,python-3.x,atom-editor | 37,861,176 | 2 | false | 0 | 0 | If this does not work, uninstall Python and Atom. While reinstalling Python, make sure you click on "Add Python to Path" so you will not have any problems with setting the paths at all! | 2 | 0 | 0 | I am trying to run simple Python code in Atom using the atom-runner package, but I am getting the following error:
Unable to find command: python
Are you sure PATH is configured correctly?
How can I configure PATH? (The path to my Python is C:\Python34.) | Run python3 in atom with atom-runner | 0 | 0 | 0 | 4,265
35,901,246 | 2016-03-09T19:48:00.000 | 2 | 0 | 0 | 0 | python,linux,opencv,terminal,raspberry-pi | 35,901,608 | 1 | false | 0 | 1 | You need to use a windowing system to display images using imshow.
(That can be enabled by running sudo raspi-config.)
If you absolutely, positively need to display images without using a windowing system, consider providing an html/web interface. Two options that come to mind when serving a web interface are:
Creating an HTTP video stream (serve the output image as if it's an IP camera, kind of)
Stream the output matrix as a jpg blob via websockets | 1 | 2 | 1 | I get a gtk-WARNING when trying:
cv2.imshow("WindowName", image)
I'm using this to watch a live stream one frame at a time. Are there any alternative libraries I could use? I tried several other options like PIL and Tkinter as well as wand, but could get none of them to work for various different reasons. | Show image using OpenCV in Python on a Raspberry Pi terminal | 0.379949 | 0 | 0 | 1,687
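A hedged sketch of the HTTP-video-stream option from the answer, serving MJPEG frames with Flask (an assumption; any web framework would do) and cv2.imencode; camera index 0 is a guess:
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)

def frames():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)   # encode the frame as JPEG bytes
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/stream")
def stream():
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

app.run(host="0.0.0.0", port=8000)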