Q_Id (int64, 2.93k to 49.7M) | CreationDate (string, length 23) | Users Score (int64, -10 to 437) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | DISCREPANCY (int64, 0 to 1) | Tags (string, length 6 to 90) | ERRORS (int64, 0 to 1) | A_Id (int64, 2.98k to 72.5M) | API_CHANGE (int64, 0 to 1) | AnswerCount (int64, 1 to 42) | REVIEW (int64, 0 to 1) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 15 to 5.1k) | Available Count (int64, 1 to 17) | Q_Score (int64, 0 to 3.67k) | Data Science and Machine Learning (int64, 0 to 1) | DOCUMENTATION (int64, 0 to 1) | Question (string, length 25 to 6.53k) | Title (string, length 11 to 148) | CONCEPTUAL (int64, 0 to 1) | Score (float64, -1 to 1.2) | API_USAGE (int64, 1 to 1) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 15 to 3.72M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31,196,818 | 2015-07-03T00:40:00.000 | 23 | 0 | 1 | 0 | 0 | python,debugging,ipdb | 0 | 32,097,568 | 0 | 4 | 0 | false | 0 | 0 | You can use j <line number> (jump) to go to another line.
for example, j 28 to go to line 28. | 2 | 25 | 0 | 0 | Is there a command to step out of cycles (say, for or while) while debugging in ipdb, without having to set breakpoints outside of them?
I use the until command to step out of list comprehensions, but I don't know how I could do a similar thing, if possible, for entire loop blocks. | ipdb debugger, step out of cycle | 0 | 1 | 1 | 0 | 0 | 15,509 |
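To make the jump workflow above concrete, here is a minimal sketch; the line numbers in the comments are hypothetical and depend on your own file, and the j/until lines are debugger commands typed at the ipdb prompt, not Python code.

```python
# Hypothetical script: pause inside a loop, then escape it from the prompt.
import ipdb

def process(items):
    total = 0
    for item in items:    # suppose this is line 6 of your file
        ipdb.set_trace()  # drops to the ipdb prompt on every iteration
        total += item
    return total          # suppose this is line 10 of your file

process([1, 2, 3])

# At the ipdb prompt:
#   j 10     jump straight to line 10, skipping the rest of the loop body
#   until    run until a line greater than the current one is reached
```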
31,225,861 | 2015-07-04T23:17:00.000 | 1 | 0 | 1 | 0 | 0 | python,igraph,anaconda,pkg-config | 0 | 31,227,686 | 0 | 1 | 0 | false | 0 | 0 | I faced some installation issues with Anaconda and my fix was to manually download the components of the Anaconda package.
If you use sudo apt-get install python3-numpy,
for example, it will download the package as well as all of its dependencies.
So all you have to do is download the major libraries.
That said, I don't believe pkg-config causes conflicts with Anaconda. Give it a shot; it should be easy to resolve any issues, if there are any at all. | 1 | 1 | 0 | 0 | I'm using anaconda python 2.7, and keep finding problems installing python libraries using pip that seem to rely on pkg-config. In particular, python-igraph (although the author of that library kindly added a patch to help conda users) and louvain (which I have yet to fix).
Would installing pkg-config lead to conflicts with anaconda? Is there a way to set them up to play nice?
Thanks! | Anaconda and pkg-config on osx 10.10: how to prevent pip installation problems? | 0 | 0.197375 | 1 | 0 | 0 | 365 |
31,235,059 | 2015-07-05T21:14:00.000 | -1 | 0 | 0 | 1 | 0 | python,python-2.7,centos,sha | 0 | 31,235,259 | 1 | 2 | 0 | true | 0 | 0 | You can always install a different version of Python alongside the system one (CPython's make altinstall does this without overwriting the system interpreter), and then run it either in a virtual environment, or just invoke it explicitly with the python<version> command.
A considerable amount of CentOS is written in Python so changing the core version will most likely break some functions. | 1 | 1 | 0 | 0 | I have a dedicated web server which runs CentOS 6.6
I am running a script that uses the Python sha module, and I think that this module is deprecated in the current Python version.
I am considering downgrading my Python installation so that I can use this module.
Is there a better option? If not, how should I do it?
These are my Python installation details:
rpm-python-4.8.0-38.el6_6.x86_64
dbus-python-0.83.0-6.1.el6.x86_64
gnome-python2-2.28.0-3.el6.x86_64
gnome-python2-canvas-2.28.0-3.el6.x86_64
libreport-python-2.0.9-21.el6.centos.x86_64
gnome-python2-applet-2.28.0-5.el6.x86_64
gnome-python2-gconf-2.28.0-3.el6.x86_64
gnome-python2-bonobo-2.28.0-3.el6.x86_64
python-urlgrabber-3.9.1-9.el6.noarch
python-tools-2.6.6-52.el6.x86_64
newt-python-0.52.11-3.el6.x86_64
python-ethtool-0.6-5.el6.x86_64
python-pycurl-7.19.0-8.el6.x86_64
python-docs-2.6.6-2.el6.noarch
gnome-python2-libegg-2.25.3-20.el6.x86_64
python-iwlib-0.1-1.2.el6.x86_64
libxml2-python-2.7.6-17.el6_6.1.x86_64
gnome-python2-gnome-2.28.0-3.el6.x86_64
python-iniparse-0.3.1-2.1.el6.noarch
gnome-python2-libwnck-2.28.0-5.el6.x86_64
libproxy-python-0.3.0-10.el6.x86_64
python-2.6.6-52.el6.x86_64
gnome-python2-gnomevfs-2.28.0-3.el6.x86_64
gnome-python2-desktop-2.28.0-5.el6.x86_64
gnome-python2-extras-2.25.3-20.el6.x86_64
abrt-addon-python-2.0.8-26.el6.centos.x86_64
at-spi-python-1.28.1-2.el6.centos.x86_64
python-libs-2.6.6-52.el6.x86_64
python-devel-2.6.6-52.el6.x86_64 | How to downgrade python version on CentOS? | 0 | 1.2 | 1 | 0 | 0 | 8,690 |
31,263,032 | 2015-07-07T08:04:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,python-3.x,internationalization | 0 | 31,278,044 | 0 | 1 | 0 | false | 1 | 0 | I just needed to add 'django.middleware.locale.LocaleMiddleware' to my settings.py file in the MIDDLEWARE_CLASSES section. I figured that if internationalization was already on, this wouldn't be necessary. | 1 | 0 | 0 | 0 | I have a Django 1.8 project that I would like to internationalize. I have added the code to do so in the application, and when I change the LANGUAGE_CODE tag, I can successfully see the other language used, but when I leave it on en-us, no other languages show up. I have changed my computer's language to the language in question (German), but calls to the site are still in English. What am I doing wrong?
Other things:
USE_I18N = true
LOCALE_PATHS works correctly (since changing the
LANGUAGE_CODE works)
I have also tried setting the LANGUAGES attribute, although I don't think I have to anyway.
EDIT: I have also confirmed that the GET call has the header: Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4, which contains de like I want. My locale folder has a folder de in it. | Django i18n Problems | 0 | 0 | 1 | 0 | 0 | 59 |
31,281,119 | 2015-07-07T23:29:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-templates | 0 | 31,282,108 | 0 | 1 | 1 | true | 1 | 0 | This type of logic does not belong in a template tag. It belongs in a view that will respond to AJAX requests and return a JSONResponse. You'll need some javascript to handle making the request based on the input as well. | 1 | 0 | 0 | 0 | Is it possible to modify data through custom template tag in Django? More specifically, I have a model named Shift whose data I want to display in a calendar form. I figured using a custom inclusion tag is the best way to go about it, but I also want users to be able to click on a shift and buy/sell the shift (thus modifying the database). My guess is that you can't do this with an inclusion tag, but if I were to write a different type of custom template tag from the ground up, would this be possible? If so, can you direct me to a few resources that address how to write such a tag?
Thank you in advance. | Django: modifying data with user input through custom template tag? | 1 | 1.2 | 1 | 0 | 0 | 333 |
31,281,539 | 2015-07-08T00:17:00.000 | 4 | 0 | 1 | 0 | 0 | python,opencv,virtualenv | 0 | 31,281,670 | 0 | 1 | 0 | false | 0 | 0 | I'm not sure I got your question right, but probably your virtualenv has been created without specifying the option --system-site-packages, which gives your virtualenv access to the packages you installed system-wise.
If you run virtualenv --system-site-packages tutorial_venv instead of just virtualenv tutorial_venv when creating your tutorial virtualenv, you might be fine.
FYI, using a virtualenv with only local dependencies is a fairly widespread practice, which:
gives you isolation and reproducibility in production scenarios
makes it possible for users without the privilege of installing packages system-wide to run and develop a python application
The last benefit might be the reason why your tutorial suggested a virtualenv based approach. | 1 | 1 | 0 | 0 | I've recently installed opencv3 on ubuntu 14.04. The tutorial I followed was for some reason using a virtualenv. Now I want to move opencv from the virtual to my global environment. The reason for this is that I can't seem to use the packages that are installed on my global environment which is getting on my nerves. So how can I do that? | I want my already created virtualenv to have access to system packages | 0 | 0.664037 | 1 | 0 | 0 | 895 |
31,283,419 | 2015-07-08T04:14:00.000 | 0 | 0 | 0 | 0 | 1 | python,pdf,graphicsmagick | 0 | 31,310,493 | 0 | 1 | 0 | false | 1 | 0 | Future readers of this, if you're experiencing the same dilemma in GraphicsMagick. Here's the easy solution:
Simply write a big number to represent the "last page".
That is: something like:
convert file.pdf[4-99999] +adjoin file%02d.jpg
will work to convert from the 5th pdf page to the last pdf page, into jpgs.
Note: "+adjoin" & "%02d" have to do with getting all the images rather than just the last. You'll see what i mean if you try it. | 1 | 1 | 0 | 0 | To convert a range of say the 1st to 5th page of a multipage pdf into single images is fairly straight forward using:
convert file.pdf[0-4] file.jpg
But how do I convert, say, the 5th to the last page when I don't know the number of pages in the pdf?
In ImageMagick "-1" represents the last page, so:
convert file.pdf[4--1] file.jpg works, great stuff,
but it doesn't work in GraphicsMagick.
Is there a way of doing this easily, or do I need to find the number of pages?
PS: I need to use GraphicsMagick instead of ImageMagick.
Thank you so much in advance. | Convert to PDF's Last Page using GraphicsMagick with Python | 0 | 0 | 1 | 0 | 0 | 191 |
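Since the question is about driving GraphicsMagick from Python, here is a hedged sketch of the big-upper-bound trick from the answer, called via subprocess; it assumes the gm binary is on PATH and that file.pdf exists.

```python
# Convert from the 5th PDF page to the "last" page with GraphicsMagick.
import subprocess

subprocess.check_call([
    "gm", "convert",
    "file.pdf[4-99999]",   # 99999 is just any number larger than the page count
    "+adjoin",             # write each page to its own output file
    "file%02d.jpg",        # file00.jpg, file01.jpg, ...
])
```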
31,284,225 | 2015-07-08T05:31:00.000 | 6 | 0 | 0 | 0 | 0 | python,flask,flask-httpauth | 0 | 31,305,421 | 0 | 1 | 0 | true | 1 | 0 | The way I intended that to be handled is by creating two HTTPAuth objects. Each gets its own verify_password callback, and then you can decorate each route with the decorator that is appropriate. | 1 | 1 | 0 | 0 | Working on a Flask application which will have separate classes of routes to be authenticated against: user routes and host routes (think Airbnb'esque, where users and hosts differ substantially).
Creating a single verify_password callback and login_required combo is extremely straightforward; however, that isn't sufficient, since some routes will need host authentication and other routes will necessitate user authentication. Essentially I will need one verify_password/login_required for users and one for hosts, but I can't seem to figure out how that would be done, since it appears that the callback is global with respect to auth's scope. | Multiple verify_password callbacks on flask-httpauth | 0 | 1.2 | 1 | 0 | 0 | 228
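A minimal sketch of the two-object approach from the accepted answer; the hard-coded credentials are placeholders for real user/host lookups.

```python
from flask import Flask
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
user_auth = HTTPBasicAuth()   # guards user-facing routes
host_auth = HTTPBasicAuth()   # guards host-facing routes

@user_auth.verify_password
def verify_user(username, password):
    # placeholder: look the credentials up in your user table
    return username == "user" and password == "secret"

@host_auth.verify_password
def verify_host(username, password):
    # placeholder: look the credentials up in your host table
    return username == "host" and password == "secret"

@app.route("/user/dashboard")
@user_auth.login_required
def user_dashboard():
    return "user-only content"

@app.route("/host/dashboard")
@host_auth.login_required
def host_dashboard():
    return "host-only content"
```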
31,295,352 | 2015-07-08T14:15:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,postgresql,python-2.7,django-1.8 | 0 | 31,309,910 | 0 | 1 | 0 | true | 1 | 0 | I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads it as normal. | 1 | 0 | 0 | 0 | I'm making an application that will fetch data from a/n (external) postgreSQL database with multiple tables.
Any idea how I can use inspectdb only on a SINGLE table? (I only need that table)
Also, the data in the database would be changing continuously. How do I manage that? Do I have to continuously run inspectdb? But what will happen to junk values then? | Django 1.8 and Python 2.7 using PostgreSQL DB help in fetching | 1 | 1.2 | 1 | 1 | 0 | 78
31,295,836 | 2015-07-08T14:34:00.000 | 0 | 0 | 1 | 0 | 0 | python,ncurses,getstring,python-curses | 0 | 31,301,635 | 0 | 1 | 0 | false | 0 | 0 | Not with getstr(), but it's certainly possible with curses. You just have to read each keypress one at a time, via getch() -- and, if you want an editable buffer, you have to recreate something like the functionality of getstr() yourself. (I'd post an example, but what I have is in C rather than Python.) | 1 | 0 | 0 | 0 | I am reading user input text with getstr(). Instead of waiting for the user to press enter, I would like to read the input each time it is changed and re-render other parts of the screen based on the input.
Is this possible with getstr()? How? If not, what's the simplest/easiest alternative? | Python ncurses - how to trigger actions while user is typing? | 1 | 0 | 1 | 0 | 0 | 145 |
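A minimal sketch of the recreate-getstr-yourself approach the answer describes: read one keypress at a time with getch() and re-render other parts of the screen after every change.

```python
import curses

def main(stdscr):
    buf = ""
    while True:
        stdscr.erase()
        stdscr.addstr(0, 0, "Input: " + buf)
        stdscr.addstr(2, 0, "Live preview: " + buf.upper())  # re-rendered part
        stdscr.refresh()
        ch = stdscr.getch()
        if ch in (10, 13):                        # Enter finishes input
            break
        elif ch in (curses.KEY_BACKSPACE, 127, 8):
            buf = buf[:-1]                        # editable buffer
        elif 32 <= ch < 127:                      # printable ASCII
            buf += chr(ch)
    return buf

print(curses.wrapper(main))
```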
31,331,862 | 2015-07-10T03:15:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,python-internals | 0 | 31,334,203 | 0 | 2 | 1 | true | 0 | 0 | Python has two basic modes: normal and interactive. The normal mode is the mode where the scripted and finished .py files are run in the Python interpreter. Interactive mode is a command line shell which gives immediate feedback for each statement, while running previously fed statements in active memory. As new lines are fed into the interpreter, the fed program is evaluated both in part and in whole.
The same occurs with .py files run interactively: interactive mode basically does the entire process for each line. I highly doubt that there's a more efficient way to do so.
The iPython notebook works in a similar way. | 1 | 3 | 0 | 0 | I want to know how Python interactive mode works. Usually when you run Python script on CPython it will go trough the process of lexical analysis, parsing, gets compiled into .pyc file, and finally the .pyc file is interpreted.
Does this 4-step process happen while using interactive mode also, or is there a more efficient way of implementing it? | How does Python's interactive mode work? | 1 | 1.2 | 1 | 0 | 0 | 1,991
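You can watch the compile step yourself: 'single' is the compile mode the interactive loop uses for each statement it reads. A small illustrative sketch (note the REPL does not write a .pyc file; the code object lives only in memory):

```python
import dis

code_obj = compile("x = 1 + 2", "<stdin>", "single")  # lex, parse, compile
exec(code_obj)            # interpret the resulting bytecode
print(x)                  # 3
dis.dis(code_obj)         # peek at the compiled bytecode
```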
31,365,093 | 2015-07-12T06:36:00.000 | 0 | 0 | 1 | 0 | 0 | python,ipython,pycharm,ipython-notebook | 0 | 42,631,771 | 0 | 1 | 0 | true | 0 | 0 | You must start IPython Notebook from Pycharm's run
Find the IPython path (ex which ipython on linux). Copy the resulting path, we will need it!
On PyCharm go to Run > Edit Configuration > + button on top left most corner (add configuration) > Choose Python.
Give your configuration a name.
On the configuration tab, in the Script textbox, paste the path from step 1. In the Script parameters field, write notebook.
Apply then Ok.
This is essentially like calling ipython notebook from the terminal
Now place your breakpoints and run the notebook from PyCharm (Shift+F10 or click the play button). | 1 | 2 | 0 | 0 | I'd like to know how to set breakpoints in IPython notebook on PyCharm.
If it's possible, please let me know. | Can PyCharm set breakpoints on ipython notebook? | 0 | 1.2 | 1 | 0 | 0 | 447 |
31,369,633 | 2015-07-12T15:49:00.000 | 2 | 0 | 0 | 0 | 0 | python,ssl,https,python-requests,client-certificates | 0 | 31,370,879 | 0 | 1 | 0 | false | 0 | 0 | This is actually trivial... CA_BUNDLE can be any file that you append certificates to, so you can simply append the output of ssl.get_server_certificate() to that file and it works. | 1 | 2 | 0 | 0 | I'm using requests to communicate with remote server over https. At the moment I'm not verifying SSL certificate and I'd like to fix that.
Within requests documentation, I've found that:
You can pass verify the path to a CA_BUNDLE file with certificates of
trusted CAs. This list of trusted CAs can also be specified through
the REQUESTS_CA_BUNDLE environment variable.
I don't want to use the system's certs, but to generate my own store.
So far I'm grabbing server certificate with ssl.get_server_certificate(addr), but I don't know how to create my own store and add it there. | Adding server certificates to CA_BUNDLE in python | 0 | 0.379949 | 1 | 0 | 1 | 5,838 |
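A sketch of the append-and-verify approach from the answer, assuming www.test.com:443 is the server to trust. Note that ssl.get_server_certificate fetches the leaf certificate without verifying it, so do this over a connection you already trust.

```python
import ssl
import requests

bundle_path = "my_ca_bundle.pem"
cert = ssl.get_server_certificate(("www.test.com", 443))  # PEM-encoded string

with open(bundle_path, "a") as bundle:
    bundle.write(cert)   # CA_BUNDLE can be any file you append certs to

# Point requests at the custom bundle instead of the system store.
response = requests.get("https://www.test.com", verify=bundle_path)
print(response.status_code)
```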
31,370,534 | 2015-07-12T17:25:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,oauth-2.0,google-oauth,django-allauth | 0 | 31,370,619 | 0 | 2 | 0 | false | 1 | 0 | One option is that the primary form pops up social auth in a new window then uses AJAX to poll for whether the social auth has completed. As long as you are fine with the performance characteristics of this (it hammers your server slightly), then this is probably the simplest solution. | 2 | 2 | 0 | 0 | I'm using django 1.8.3 and django-allauth 0.21.0 and I'd like the user to be able to log in using e.g. their Google account without leaving the page. The reason is that there's some valuable data from the page they're logging in from that needs to be posted after they've logged in. I've already got this working fine using local account creation, but I'm having trouble with social because many of the social networks direct the user away to a separate page to ask for permissions, etc. Ideally, I'd have all this happening in a modal on my page, which gets closed once authentication is successful.
The only possible (though not ideal) solution I can think of at the moment is to force the authentication page to open up in another tab (e.g. using target="_blank" in the link), then prompting the user to click on something back in the original window once the authentication is completed in the other tab.
However, the problem here is that I can't think of a way for the original page to know which account was just created by the previously-anonymous user without having them refresh the page, which would cause the important data that needs to be posted to be lost.
Does anyone have any ideas about how I could accomplish either of the two solutions I've outlined above? | Social login in using django-allauth without leaving the page | 1 | 0 | 1 | 0 | 0 | 610 |
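A hypothetical polling endpoint for the popup flow described in the answer (Django 1.8 style, where is_authenticated is still a method); URL wiring and the JavaScript poller are omitted.

```python
from django.http import JsonResponse

def auth_status(request):
    # The popup shares the session cookie with the opener page, so once
    # allauth logs the user in, this starts returning authenticated=True.
    authed = request.user.is_authenticated()
    return JsonResponse({
        "authenticated": authed,
        "username": request.user.get_username() if authed else None,
    })
```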
31,370,534 | 2015-07-12T17:25:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,oauth-2.0,google-oauth,django-allauth | 0 | 32,250,705 | 0 | 2 | 0 | true | 1 | 0 | I ended up resolving this by using Django's session framework. It turns out that the session ID is automatically passed through the oauth procedure by django-allauth, so anything that's stored in request.session is accessible on the other side after login is complete. | 2 | 2 | 0 | 0 | I'm using django 1.8.3 and django-allauth 0.21.0 and I'd like the user to be able to log in using e.g. their Google account without leaving the page. The reason is that there's some valuable data from the page they're logging in from that needs to be posted after they've logged in. I've already got this working fine using local account creation, but I'm having trouble with social because many of the social networks direct the user away to a separate page to ask for permissions, etc. Ideally, I'd have all this happening in a modal on my page, which gets closed once authentication is successful.
The only possible (though not ideal) solution I can think of at the moment is to force the authentication page to open up in another tab (e.g. using target="_blank" in the link), then prompting the user to click on something back in the original window once the authentication is completed in the other tab.
However, the problem here is that I can't think of a way for the original page to know which account was just created by the previously-anonymous user without having them refresh the page, which would cause the important data that needs to be posted to be lost.
Does anyone have any ideas about how I could accomplish either of the two solutions I've outlined above? | Social login in using django-allauth without leaving the page | 1 | 1.2 | 1 | 0 | 0 | 610 |
31,377,196 | 2015-07-13T07:01:00.000 | 0 | 1 | 0 | 0 | 1 | python,frameworks,erpnext,frappe | 0 | 31,379,032 | 0 | 3 | 0 | false | 1 | 0 | bench clear-cache will clear the cache. After doing this, refresh and check. | 3 | 0 | 0 | 0 | ERPNext + frappe: I need to change the layout (footer & header) of the front-end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is due to the fact that the html files need to be compiled somehow. Maybe someone has info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | Customize frappe framework html layout | 0 | 0 | 1 | 0 | 0 | 1,663 |
31,377,196 | 2015-07-13T07:01:00.000 | 0 | 1 | 0 | 0 | 1 | python,frameworks,erpnext,frappe | 0 | 58,808,805 | 0 | 3 | 0 | false | 1 | 0 | It seems you're not in your bench folder.
When you create a new bench with, for example, bench init mybench, it creates a new folder: mybench.
All bench commands must be run from this folder.
Could you try to run bench --help in this folder? You should see the clear-cache command. | 3 | 0 | 0 | 0 | ERPNext + frappe: I need to change the layout (footer & header) of the front-end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is due to the fact that the html files need to be compiled somehow. Maybe someone has info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | Customize frappe framework html layout | 0 | 0 | 1 | 0 | 0 | 1,663 |
31,377,196 | 2015-07-13T07:01:00.000 | 0 | 1 | 0 | 0 | 1 | python,frameworks,erpnext,frappe | 0 | 68,402,268 | 0 | 3 | 0 | false | 1 | 0 | If anyone stumbles on this: the command needed is bench build. That will compile any assets related to the build.json file in the public folder. (NOTE: You usually have to create build.json yourself.) | 3 | 0 | 0 | 0 | ERPNext + frappe: I need to change the layout (footer & header) of the front-end. I tried to change base.html (frappe/templates/base.html) but nothing happened. Probably this is due to the fact that the html files need to be compiled somehow. Maybe someone has info on how to do it?
UPDATE:
No such command "clear-cache".
Commands:
backup
backup-all-sites
config
get-app
init
migrate-3to4
new-app
new-site
patch
prime-wheel-cache
release
restart
set-default-site
set-mariadb-host
set-nginx-port
setup
shell
start
update | Customize frappe framework html layout | 0 | 0 | 1 | 0 | 0 | 1,663 |
31,387,762 | 2015-07-13T15:44:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,django,timer | 0 | 31,387,864 | 0 | 3 | 0 | true | 1 | 0 | The only secure way would be to put the time-checking logic on the server. Make an Ajax call to the server. If the time is under 5 seconds, do not return the HTML; if it is greater than 5, return the HTML to show.
Another option is to have the link point to your server: if the time is less than five seconds it redirects them to a different page, and if it is greater than 5, it redirects them to the correct content.
Either way, it requires you to keep track of session time on the server and remove it from the client. | 2 | 0 | 0 | 0 | I know how to do that with javascript but I need a secure way to do it.
Anybody can view page source, get the link and do not wait 5 seconds.
Is there any solution? I'm working with javascript and django.
Thanks! | Wait 5 seconds before download button appear | 0 | 1.2 | 1 | 0 | 0 | 1,293 |
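A minimal Django-flavored sketch of the server-side check; it assumes the page view stored request.session["page_loaded_at"] = time.time() when the page was rendered.

```python
import time
from django.http import HttpResponse, HttpResponseForbidden

def download(request):
    loaded_at = request.session.get("page_loaded_at")
    if loaded_at is None or time.time() - loaded_at < 5:
        # Viewing the page source doesn't help: the server owns the clock.
        return HttpResponseForbidden("Please wait 5 seconds.")
    return HttpResponse("here is your download link")
```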
31,387,762 | 2015-07-13T15:44:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,django,timer | 0 | 31,388,190 | 0 | 3 | 0 | false | 0 | 0 | Use a server-side timeout: whenever there is an (AJAX) request from the client for the download link, carrying a timestamp, compare the client-sent timestamp with the current time and derive how long the request must be held on the server side to make up ~5 seconds. By comparing timestamps you can achieve near-accurate waiting times, since network delays are taken into account automatically.
Anybody can view page source, get the link and do not wait 5 seconds.
Is there any solution? I'm working with javascript and django.
Thanks! | Wait 5 seconds before download button appear | 0 | 0 | 1 | 0 | 0 | 1,293 |
31,392,285 | 2015-07-13T19:56:00.000 | 3 | 0 | 0 | 0 | 0 | python,postgresql,sqlalchemy,alembic | 0 | 31,392,595 | 0 | 3 | 0 | false | 0 | 0 | This works for me:
1) Access your session, in the same way you did session.create_all, do session.drop_all.
2) Delete the migration files generated by alembic.
3) Run session.create_all and initial migration generation again. | 1 | 7 | 0 | 0 | Everything I found about this via searching was either wrong or incomplete in some way. So, how do I:
delete everything in my postgresql database
delete all my alembic revisions
make it so that my database is 100% like new | Clear postgresql and alembic and start over from scratch | 0 | 0.197375 | 1 | 1 | 0 | 7,991 |
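A sketch of steps 1 and 3 from the answer, assuming a SQLAlchemy declarative Base in a hypothetical myapp.models module; step 2 (deleting the generated migration files) is done by hand.

```python
from sqlalchemy import create_engine
from myapp.models import Base  # hypothetical module holding your models

engine = create_engine("postgresql://user:pass@localhost/mydb")

Base.metadata.drop_all(engine)    # 1) drop every table the models define
# 2) delete alembic/versions/*.py by hand (and drop the alembic_version
#    table too, since alembic created it outside of your metadata)
Base.metadata.create_all(engine)  # 3) recreate the schema from scratch
```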
31,417,091 | 2015-07-14T20:51:00.000 | 0 | 0 | 1 | 0 | 0 | python,list,pickle | 1 | 31,429,440 | 0 | 1 | 0 | false | 0 | 0 | If you really want to keep it simple and use something like pickle, the best thing is to use cPickle. This library is written in C and can handle bigger files and is faster than pickle. | 1 | 0 | 1 | 0 | I have a very large list that I want to write to file. My list is 2 dimensional, and each element of the list is a 1 dimensional list. Different elements of the 2 dimensional list has 1 dimensional lists of varying size.
When my 2D list was small, pickle dump worked great. But now it just gives me memory error.
Any suggestions on how to store and reload such arrays to disk?
Thanks! | Python pickle dump memory error | 0 | 0 | 1 | 0 | 0 | 1,136 |
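Beyond switching to cPickle, dumping the outer list one row at a time keeps peak memory low, because the serializer never has to buffer the whole 2-D structure at once. A sketch (Python 2 style import with a fallback):

```python
try:
    import cPickle as pickle   # C implementation on Python 2
except ImportError:
    import pickle

def dump_rows(rows, path):
    with open(path, "wb") as f:
        pickle.dump(len(rows), f, pickle.HIGHEST_PROTOCOL)
        for row in rows:                       # one 1-D list at a time
            pickle.dump(row, f, pickle.HIGHEST_PROTOCOL)

def load_rows(path):
    with open(path, "rb") as f:
        n = pickle.load(f)
        return [pickle.load(f) for _ in range(n)]
```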
31,420,095 | 2015-07-15T01:39:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,pandas | 0 | 31,422,570 | 0 | 1 | 0 | true | 0 | 0 | It seems that you have a list of ints in each cell of your data frame. To convert it, you need to select the value inside each list and rebuild the data frame.
I suggest this code to do the conversion:
for col in df:
    df[col] = df[col].apply(lambda x: x[0])
sex F M
year
1880 [38] [14]
1881 [38] [14]
When I want to use diversity.plot() to draw some pictures, there is TypeError:
Empty 'DataFrame': no numeric data to plot
So, my question is: how do I deal with this dataframe to make it numeric? | How to make dataframe in pandas as numeric? | 0 | 1.2 | 1 | 0 | 0 | 352
31,441,307 | 2015-07-15T21:18:00.000 | 1 | 1 | 0 | 0 | 0 | java,android,python,android-studio,admob | 0 | 31,441,510 | 0 | 1 | 0 | false | 1 | 0 | To access the already-implemented Java version you can use pyjnius. I tried to use it for something else and I didn't succeed; I gave up pretty quickly because it wasn't necessary for my project.
Otherwise, I am afraid, you will have to implement it yourself from scratch.
I never heard of a finished solution for your problem.
If you succeed in using PGU, it won't be so hard.
If not, well, I wish you luck; do put your solution online for others.
There is an Eclipse plug-in for Python. I think that Android Studio does not support PGS4A. Never needed it. The console is the queen.
Thank you in advance. | Admob Ads with Python Subset For Android (PGS4A) | 0 | 0 | 1 | 0 | 0 | 199 |
31,443,662 | 2015-07-16T00:51:00.000 | -1 | 0 | 1 | 0 | 0 | python,macos,python-2.7 | 0 | 31,443,715 | 0 | 2 | 0 | false | 0 | 0 | When creating your virtual environment (you are using a virtual environment, right?) use pyvenv <foo> instead of virtualenv <foo>, and that will create a Python 3 virtual environment, free of Python 2. Then you are free to use pip and it will install the modules into that venv. | 2 | 0 | 0 | 0 | I'm fairly new to python and want to start doing some more advanced programming in python 3. I installed some modules using pip on the terminal (I'm using a mac) only to find out that the modules only installed for python 2. I think that it's because I only installed it to the python 2 path, which I think is because my system is running python 2 by default.
But I have no idea how to get around this. Any ideas? | Python Modules Only Installing For Python 2 | 0 | -0.099668 | 1 | 0 | 0 | 33 |
31,443,662 | 2015-07-16T00:51:00.000 | 0 | 0 | 1 | 0 | 0 | python,macos,python-2.7 | 0 | 31,443,681 | 0 | 2 | 0 | true | 0 | 0 | You need to use pip3. OS X will default to Python 2 otherwise. | 2 | 0 | 0 | 0 | I'm fairly new to python and want to start doing some more advanced programming in python 3. I installed some modules using pip on the terminal (I'm using a mac) only to find out that the modules only installed for python 2. I think that it's because I only installed it to the python 2 path, which I think is because my system is running python 2 by default.
But I have no idea how to get around this. Any ideas? | Python Modules Only Installing For Python 2 | 0 | 1.2 | 1 | 0 | 0 | 33 |
31,447,971 | 2015-07-16T07:30:00.000 | 0 | 1 | 0 | 1 | 0 | python,debian,remote-server,directory-structure | 1 | 31,448,678 | 0 | 1 | 0 | false | 0 | 0 | Basically you're stuffed.
Your problem is:
You have a script, which produces no error messages, no logging, and no other diagnostic information other than a single timestamp, on an output file.
Something has gone wrong.
In this case, you have no means of finding out what the issue was. I suggest any of the following:
Add logging or diagnostic information to the script.
Contact the developer of the script and get them to find a way of determining the issue.
Delete the evidently worthless script if you can't do either option 1 or 2 above, and consider an alternative way of doing your task.
Now, if the script does have logging, or other diagnostic data, but you delete or throw them away, then that's your problem and you need to stop discarding this useful information.
EDIT (following comment).
At a basic level, you should print to either stdout or stderr; that alone will give you a huge amount of information. Just things like "Discovered 314 records, we need to save 240 records", "Opened file name X.csv, open file succeeded (or failed, as the case may be)", "Error: whatever", "Saved 2315 records to CSV". You should be able to determine if those numbers make sense. (There were 314 records, but it determined 240 of them should be saved, yet it saved 2315? What went wrong!? Time for more logging or investigation!)
Ideally, though, you should take a look at the logging module in python as that will let you log stack traces effectively, show line numbers, the function you're logging in, and the like. Using the logging module allows you to specify logging levels (eg, DEBUG, INFO, WARN, ERROR), and to filter them or redirect them to file or the console, as you may choose, without changing the logging statements themselves.
When you have a problem (a crash, or whatever), you'll be able to identify roughly where the error occurred, giving you information to either increase the logging in that area, or to reason about what must have happened (though you should probably then add enough logging so that the logging will tell you what happened clearly and unambiguously). | 1 | 0 | 0 | 0 | I have written a python script that is designed to run forever. I load the script into a folder that I made on my remote server, which is running Debian Wheezy 7.0. The code runs, but it will only run for 3 to 4 hours and then it just stops; I do not have any log information on it stopping. I come back and check the running process and it's not there. Is this a problem with where I am running the python file from? The script simply has a while loop and writes to an external csv file. The file runs from /var/pythonscript. The folder is a custom folder that I made. There is no error that I receive, and the only way I know how long the code runs is by the timestamp on the csv file. I run the .py file by ssh-ing to the server and running sudo python scriptname. I also would like to know the best place in the Linux Debian directory tree to run python files from, and any limitations concerning that. Any help would be much appreciated. | Where to run python file on Remote Debian Server | 0 | 0 | 1 | 0 | 0 | 85
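A minimal sketch of the logging set-up the answer recommends: timestamps, levels, line numbers, and a log file you can inspect after a crash. The risky_step function is hypothetical, there only to show exception logging.

```python
import logging

logging.basicConfig(
    filename="script.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s",
)
log = logging.getLogger(__name__)

log.info("Discovered %d records, need to save %d", 314, 240)
try:
    risky_step()  # hypothetical function that may blow up
except Exception:
    log.exception("risky_step failed")  # records the full stack trace
```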
31,466,493 | 2015-07-17T00:02:00.000 | 0 | 0 | 0 | 0 | 0 | java,http,post,httpclient,python-requests | 0 | 31,466,587 | 0 | 1 | 0 | false | 0 | 0 | If you try to PUT without any knowledge of the server, this request will "fail" (or not; it depends on the implementation, e.g. it can redirect you to the main page).
Failure is indicated by the server response code along with headers, e.g. 405 Method Not Allowed or 400 Bad Request, etc. Or it may redirect you to the main page: 302 Found
You, as a client, must adapt to the server's API.
Moreover different requests to the same API may give you different specs e.g.
One response is gzipped & with ETag & cached, the other one is not.
Or plain GET / will give you HTML and GET /?format=json will give you JSON. | 1 | 0 | 0 | 0 | Please bear with me as I have been reading and trying to understand HTTP and the different requests available in its protocol but there are still a few loose connections here and there.
Specifically, I have been using Apache's HttpClient to send requests, but I'm unsure of a few things. When we make a request to a URI, how can we know before hand how to properly format say a PUT request? You might be trying to transmit data to fill out a form, or send an image, etc. How would you know if the server is capable of receiving that format of request? | How to determine how to format an HTTP request to some server | 0 | 0 | 1 | 0 | 1 | 25 |
31,481,253 | 2015-07-17T17:14:00.000 | 2 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 31,481,601 | 0 | 2 | 0 | false | 0 | 0 | The trouble is, strip is not defined in any module. It is not a part of the standard library at all, but a method on str, which in turn is a built in class. So there isn't really any way of iterating through modules to find it. | 1 | 1 | 0 | 0 | Given a method name, how to determine which module(s) in the standard library contain this method?
E.g. If I am told about a method called strip(), but told nothing about how it works or that it is part of str, how would I go and find out which module it belongs to? I obliviously mean using Python itself to find out, not Googling "Python strip" :) | How to determine which Python standard library module(s) contain a certain method? | 0 | 0.197375 | 1 | 0 | 0 | 71 |
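A sketch of how to ask Python itself rather than a search engine (Python 2 here, matching the question's tags): introspection on the bound method points back at the owning type, and you can scan the built-in namespace for any class that defines the name.

```python
s = "  hello  "
m = s.strip
print(type(m))              # <type 'builtin_function_or_method'>
print(repr(m.__self__))     # the str instance the method is bound to
print(str.strip.__doc__)    # the method's own documentation

# Scan classes in __builtin__ for any that define "strip":
import __builtin__
owners = [name for name, obj in vars(__builtin__).items()
          if isinstance(obj, type) and "strip" in dir(obj)]
print(owners)               # e.g. ['str', 'unicode', 'bytearray']
```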
31,485,636 | 2015-07-17T22:30:00.000 | 0 | 0 | 0 | 0 | 0 | python,pygame | 0 | 31,485,743 | 0 | 1 | 0 | false | 0 | 1 | There are some answers already here. Anyway, use PGU (Pygame GUI Utilities); it's available on pygame's site. It turns pygame into a GUI toolkit. There is an explanation of how to combine it with your game. Otherwise, program it yourself using key events. It's not hard, but it is time consuming and boring. | 1 | 0 | 0 | 0 | I need a user input for my pygame program, but I need it on my GUI (pygame.display.set_mode etc.), not just like: var = input("input something"). Does anybody have suggestions on how to do this? | Pygame, user input on a GUI? | 0 | 0 | 1 | 0 | 0 | 113 |
31,495,357 | 2015-07-18T20:39:00.000 | 2 | 0 | 1 | 0 | 0 | python,packages | 1 | 31,495,419 | 0 | 2 | 0 | false | 0 | 0 | A package manager solves things like dependencies and uninstalling.
Additionally, when using pip to install packages, packages are usually built with a setup.py script. While it might not be an issue for pure Python modules, if a package contains any extension modules or some other custom stuff, copying files to site-packages just won't work (I'm actually not sure why it worked in your case with numpy, since it does contain C extension modules). | 2 | 2 | 0 | 0 | Whenever I google 'importing X package/module' I always see a bunch of tutorials about using pip or the shell commands. But I've always just taken the downloaded file and put it in the site-packages folder, and when I just use 'import' in PyCharm it has worked just fine.
The reason I was wondering was because I was downloading NumPy today, and when I just copied the file the same way I'd been doing, PyCharm didn't show any errors. I was just wondering if I'm misunderstanding this whole concept of installing packages.
EDIT: Thank you for your answers! I am off to learn how to use pip now. | Installing Packages in Python - Pip/cmd vs Putting File in Lib/site-packages | 0 | 0.197375 | 1 | 0 | 0 | 308 |
31,495,357 | 2015-07-18T20:39:00.000 | 2 | 0 | 1 | 0 | 0 | python,packages | 1 | 31,495,433 | 0 | 2 | 0 | false | 0 | 0 | One of the points of using a package manager (pip) is portability. With pip, you just include a requirements.txt in your project and you can work on it on any machine, be it Windows, Linux, or Mac. When moving to a new environment/OS, pip will take care of installing the packages properly for you; note that packages can have OS-specific steps, so your copy-pasted Windows set-up might not work when you move to another OS.
Moreover, with your copy-paste method, you carry the bulk of your dependencies everywhere. I imagine that if you want to switch machines (not necessarily OS), you copy everything from project code to dependencies. With pip, you can keep your working directories leaner, all at the cost of a single requirements.txt. | 2 | 2 | 0 | 0 | Whenever I google 'importing X package/module' I always see a bunch of tutorials about using pip or the shell commands. But I've always just taken the downloaded file and put it in the site-packages folder, and when I just use 'import' in PyCharm it has worked just fine.
The reason I was wondering was because I was downloading NumPy today, and when I just copied the file the same way I'd been doing, PyCharm didn't show any errors. I was just wondering if I'm misunderstanding this whole concept of installing packages.
EDIT: Thank you for your answers! I am off to learn how to use pip now. | Installing Packages in Python - Pip/cmd vs Putting File in Lib/site-packages | 0 | 0.197375 | 1 | 0 | 0 | 308 |
31,506,425 | 2015-07-19T22:06:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,deployment,development-environment | 0 | 31,506,797 | 0 | 2 | 0 | false | 1 | 0 | Sounds like the quickest (if not most elegant) solution would be to call 'python manage.py runserver' at the end of your script. | 1 | 0 | 0 | 0 | There is a set of functions that I need to carry out during the start of my server, regardless of path, whether that be "/", "/blog/", or "/blog/post". For development purposes I'd love for this script to run every time I run python manage.py runserver, and for production purposes I would love this script to run during deployment. Anyone know how this can be done?
My script scrapes data and makes a call to Facebook's Graph API with Python and some of its libraries. | Django app initialization process | 0 | 0 | 1 | 0 | 0 | 835
31,508,612 | 2015-07-20T03:54:00.000 | 3 | 0 | 1 | 1 | 0 | python,linux,pip | 0 | 31,508,671 | 0 | 8 | 0 | false | 0 | 0 | You need to install the development package for libffi.
On RPM based systems (Fedora, Redhat, CentOS etc) the package is named libffi-devel.
Not sure about Debian/Ubuntu systems, I'm sure someone else will pipe up with that. | 3 | 90 | 0 | 0 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? | PIP install unable to find ffi.h even though it recognizes libffi | 1 | 0.07486 | 1 | 0 | 0 | 93,372 |
31,508,612 | 2015-07-20T03:54:00.000 | 266 | 0 | 1 | 1 | 0 | python,linux,pip | 0 | 31,508,663 | 0 | 8 | 0 | false | 0 | 0 | You need to install the development package as well.
libffi-dev on Debian/Ubuntu, libffi-devel on Redhat/Centos/Fedora. | 3 | 90 | 0 | 0 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? | PIP install unable to find ffi.h even though it recognizes libffi | 1 | 1 | 1 | 0 | 0 | 93,372 |
31,508,612 | 2015-07-20T03:54:00.000 | 24 | 0 | 1 | 1 | 0 | python,linux,pip | 0 | 38,077,173 | 0 | 8 | 0 | false | 0 | 0 | To add to mhawke's answer, usually the Debian/Ubuntu based systems are "-dev" rather than "-devel" for RPM based systems
So, for Ubuntu it will be apt-get install libffi libffi-dev
RHEL, CentOS, Fedora (up to v22) yum install libffi libffi-devel
Fedora 23+ dnf install libffi libffi-devel
OSX/MacOS (assuming homebrew is installed) brew install libffi | 3 | 90 | 0 | 0 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? | PIP install unable to find ffi.h even though it recognizes libffi | 1 | 1 | 1 | 0 | 0 | 93,372 |
31,515,583 | 2015-07-20T11:41:00.000 | 4 | 0 | 1 | 0 | 1 | python,debugging,python-3.4,python-idle | 0 | 31,515,714 | 0 | 1 | 0 | true | 0 | 0 | You must be opening the code window, not the shell window.
Try opening the shell window.
The shell window has a Debug menu, but the code window does not have one. | 1 | 0 | 0 | 0 | I thought it came by default with IDLE, but I don't have it.
By the way, I installed Python 3.4.
A few searches on the net proved unfruitful. Any idea what's going on and how to fix this? | My Python IDLE is missing the Debugging menu | 1 | 1.2 | 1 | 0 | 0 | 2,270
31,520,331 | 2015-07-20T15:21:00.000 | 1 | 0 | 0 | 0 | 0 | python,animation,pygame,2d,sprite | 0 | 31,527,282 | 0 | 1 | 1 | false | 0 | 1 | You should assume that when blitting a pygame.Surface, the position gets converted to an int via int() | 1 | 3 | 0 | 0 | I use Python 2.x and Pygame to code games. Pygame has a built-in rect (Rectangle) class that only supports ints instead of floats. So I have made my own rect class (MyRect) which supports floats. Now my question is as follows:
A 2D platformer char moves its position (x, y -> both floats). Now when I blit the char onto the screen, is the position rounded to an int (int(round(x))) or just converted into an int (int(x))? I know this might sound a bit stupid, but I've got an issue with this and I'd like to know how this is usually handled. | Movement in 2D Games (round position when blitting?) | 0 | 0.197375 | 1 | 0 | 0 | 68 |
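A tiny sketch of the usual pattern: keep the float position for physics in your own rect, and round explicitly only at draw time, since blit() itself truncates.

```python
import pygame

x, y = 10.6, 20.4                              # floats from your MyRect
pos_truncated = (int(x), int(y))               # (10, 20): what plain int() gives
pos_rounded = (int(round(x)), int(round(y)))   # (11, 20): usually smoother motion
# screen.blit(player_image, pos_rounded)       # pass the rounded ints to blit
```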
31,527,206 | 2015-07-20T22:10:00.000 | 1 | 0 | 0 | 0 | 0 | python,audio,signal-processing,fft | 0 | 31,528,542 | 0 | 1 | 0 | false | 0 | 0 | FFT data is in units of normalized frequency where the first point is 0 Hz and one past the last point is fs Hz. You can create the frequency axis yourself with linspace(0.0, (1.0 - 1.0/n)*fs, n). You can also use fftfreq but the components will be negative.
These are the same if n is even. You can also use rfftfreq I think. Note that this is only the "positive half" of your frequencies, which is probably what you want for audio (which is real-valued). Note that you can use rfft to just produce the positive half of the spectrum, and then get the frequencies with rfftfreq(n,1.0/fs).
Windowing will decrease sidelobe levels, at the cost of widening the mainlobe of any frequencies that are there. N is the length of your signal and you multiply your signal by the window. However, if you are looking in a long signal you might want to "chop" it up into pieces, window them, and then add the absolute values of their spectra.
"is it correct" is hard to answer. The simple approach is as you said, find the bin closest to your frequency and check its amplitude. | 1 | 3 | 1 | 0 | I have a WAV file which I would like to visualize in the frequency domain. Next, I would like to write a simple script that takes in a WAV file and outputs whether the energy at a certain frequency "F" exceeds a threshold "Z" (whether a certain tone has a strong presence in the WAV file). There are a bunch of code snippets online that show how to plot an FFT spectrum in Python, but I don't understand a lot of the steps.
I know that wavfile.read(myfile) returns the sampling rate (fs) and the data array (data), but when I run an FFT on it (y = numpy.fft.fft(data)), what units is y in?
To get the array of frequencies for the x-axis, some posters do this where n = len(data):
X = numpy.linspace(0.0, 1.0/(2.0*T), n/2)
and others do this:
X = (numpy.fft.fftfreq(n) * fs)[range(n/2)]
Is there a difference between these two methods and is there a good online explanation for what these operations do conceptually?
Some of the online tutorials about FFTs mention windowing, but not a lot of posters use windowing in their code snippets. I see that numpy has a numpy.hamming(N), but what should I use as the input to that method and how do I "apply" the output window to my FFT arrays?
For my threshold computation, is it correct to find the frequency in X that's closest to my desired tone/frequency and check if the corresponding element (same index) in Y has an amplitude greater than the threshold? | FFT in Python with Explanations | 0 | 0.197375 | 1 | 0 | 0 | 1,122 |
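Putting the pieces of the answer together, here is a hedged sketch of the whole pipeline: window, real FFT, matching frequency axis, and an amplitude check at a target tone F against a threshold Z (the F and Z values are arbitrary examples).

```python
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read("myfile.wav")
if data.ndim > 1:
    data = data[:, 0]                       # keep one channel

n = len(data)
window = np.hamming(n)                      # taper to reduce sidelobe leakage
spectrum = np.fft.rfft(data * window)       # positive half only (real input)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)      # matching frequency axis in Hz
amplitude = np.abs(spectrum)

F, Z = 440.0, 1000.0                        # example tone and threshold
bin_idx = np.argmin(np.abs(freqs - F))      # bin closest to F
print("tone present:", amplitude[bin_idx] > Z)
```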
31,536,863 | 2015-07-21T10:44:00.000 | 0 | 0 | 1 | 1 | 1 | python,qt,io,hard-drive,child-process | 0 | 31,543,489 | 0 | 2 | 0 | false | 0 | 0 | There are no guarantees as to fairness of I/O scheduling. What you're describing seems rather simple: the I/O scheduler, whether intentionally or not, gives a boost to new processes. Since your disk is tapped out, the order in which the processes finish is not under your control. You're most likely wasting a lot of disk bandwidth on seeks, due to parallel access from multiple processes.
TL;DR: Your expectation is unfounded. When I/O, and specifically the virtual memory system, is saturated, anything can happen. And so it does. | 1 | 1 | 0 | 0 | Not sure this is the best title for this question but here goes.
Through python/Qt I started multiple processes of an executable. Each process is writing a large file (~20GB) to disk in chunks. I am finding that the first process to start is always the last to finish and continues on much, much longer than the other processes (despite having the same amount of data to write).
Performance monitors show that the process is still using the expected amount of RAM (~1GB), but the disk activity from the process has slowed to a trickle.
Why would this happen? It is as though the first process started somehow gets its disk access 'blocked' by the other processes and then doesn't recover after the other processes have finished...
Would the OS (windows) be causing this? What can I do to alleviate this? | Why do multiple processes slow down? | 0 | 0 | 1 | 0 | 0 | 1,550 |
31,545,025 | 2015-07-21T16:49:00.000 | 1 | 0 | 0 | 0 | 1 | python,django,django-models | 0 | 31,545,842 | 1 | 2 | 0 | true | 1 | 0 | You can dump the db directly with mysqldump as allcaps suggested, or run manage.py migrate first and then it should work. It's telling you there are migrations that you have yet to apply to the DB. | 1 | 1 | 0 | 0 | I used to use manage.py sqlall app to dump the database to sql statements. However, after upgrading to 1.8, it doesn't work any more.
It says:
CommandError: App 'app' has migrations. Only the sqlmigrate and
sqlflush commands can be used when an app has migrations.
It seems there is not a way to solve this.
I need to dump the database to a sql file so I can use it to clone the whole database elsewhere; how can I accomplish this? | Django: How to dump the database in 1.8? | 1 | 1.2 | 1 | 1 | 0 | 309
31,551,135 | 2015-07-21T23:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,excel,pivot-table,xlsx,openpyxl | 0 | 31,556,316 | 0 | 1 | 0 | false | 0 | 0 | This is currently not possible with openpyxl. | 1 | 0 | 0 | 0 | I'm working with XLSX files with pivot tables and writing an automated script to parse and extract the data. I have multiple pivot tables per spreadsheet with cost categories, their totals, and their values for each month etc. Any ideas on how to use openpyxl to parse each pivot table? | Extracting data from excel pivot tables using openpyxl | 0 | 0 | 1 | 1 | 0 | 1,122 |
31,553,322 | 2015-07-22T03:27:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,directory,pycharm | 0 | 31,554,899 | 0 | 1 | 0 | true | 1 | 0 | Open the File > Settings menu, then go to Project: foo > Project Structure and press Add Content Root, then select the destination directory.
After the folder is added to the list, right-click on the folder and mark it as a source; in the last step press OK... | 1 | 1 | 0 | 0 | I want to use a folder that is not in the base directory of my django project without adding it to the base directory. | PyCharm - how to use a folder that is not in the base directory | 0 | 1.2 | 1 | 0 | 0 | 83
31,580,478 | 2015-07-23T07:10:00.000 | 1 | 0 | 0 | 0 | 0 | python,file,data-structures,disk | 0 | 31,581,102 | 0 | 2 | 0 | true | 0 | 0 | There are quite a number of problems you have to solve. Some are quite straightforward and some are a little more elaborate, but since you want to do it yourself I don't think you'll mind filling out the details yourself (so I'll skip some parts).
First simple step is to serialize and deserialize nodes (in order to be able to store on disk at all). That could be done in an ad hoc manner by having your nodes having an serialize/deserialize method - in addition you might want to have the serialized data to have an type indicator so you can know which class' deserialize you should use to deserialize data. Note that on disk representation of a node must reference other nodes by file offset (either directly or indirectly).
The actual reading or writing of the data is done by ordinary (binary) file operations, but you have to seek to the right position in the file first.
The second step is to have the possibility to allocate space in the file. If you only want write-once behaviour it's quite straightforward to just grow the file, but if you want to modify the data in the file (adding and removing nodes or even replacing them) you will have to cope with situations where regions in the file are no longer in use, and either reuse them or even repack the layout of the file.
Further steps could involve making the update atomic in some sense. One solution is to have a region where you write enough information so that the update can be completed (or abandoned) if it were terminated prematurely; in its simplest form it might just be a list of idempotent operations (operations that yield the same result if you repeat them, e.g. writing particular data to a particular place in the file).
Note that while (some of) the built-in solutions do indeed handle writing and reading the entire graph to/from disk, they do not really handle the situation where you want to read only part of the graph, or modify the graph, very efficiently (you have to read mostly the whole graph and write the complete graph in one go). Databases are the exception, where you may read/write smaller parts of your data in a random manner. | 1 | 1 | 1 | 0 | I couldn't find any resources on this topic. There are a few questions with good answers describing solutions to problems which call for data stored on disk (pickle, shelve, databases in general), but I want to learn how to implement my own.
1) If I were to create a disk based graph structure in Python, I'd have to implement the necessary methods by writing to disk. But how do I do that?
2) One of the benefits on disk based structures is having the efficiency of the structure while working with data that might not all fit on memory. If the data does not fit in memory, only some parts of it are accessed at once. How does one access only part of the structure at once? | Creating a disk-based data structure | 0 | 1.2 | 1 | 0 | 0 | 1,607 |
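A toy sketch of the first step the answer describes: fixed-size nodes addressed by file offset, so a node can reference other nodes (and you can read just part of the structure) without loading the whole graph. The binary layout here is an illustrative assumption, not a standard format.

```python
import struct

NODE_FMT = "qqd"                       # left offset, right offset, payload
NODE_SIZE = struct.calcsize(NODE_FMT)  # bytes per node on disk

def write_node(f, offset, left, right, value):
    f.seek(offset)                     # seek to the right position first
    f.write(struct.pack(NODE_FMT, left, right, value))

def read_node(f, offset):
    f.seek(offset)
    left, right, value = struct.unpack(NODE_FMT, f.read(NODE_SIZE))
    return left, right, value          # follow left/right with read_node again

with open("graph.bin", "w+b") as f:
    write_node(f, 0, NODE_SIZE, -1, 1.5)   # root references node at NODE_SIZE
    write_node(f, NODE_SIZE, -1, -1, 2.5)  # leaf (-1 means "no child")
    print(read_node(f, 0))                 # only this node is read, not the graph
```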
31,582,012 | 2015-07-23T08:28:00.000 | 0 | 1 | 0 | 0 | 0 | python,curl | 0 | 31,584,263 | 0 | 1 | 0 | false | 0 | 0 | Can you stay on the command line?
If yes, try the Python lib named "pexpect". It's pretty useful, and lets you run commands as you would on a terminal, from a Python program, and interact with the terminal!
curl -b /tmp/admin.cookie --cacert /some/cert/location/serverapache.crt --header "X-Requested-With: XMLHttpRequest" --request POST "https://www.test.com"
I am relatively new to Python and am not sure how to use the urllib library or whether I should use the requests library. The curl options are especially tricky for me to convert. Any help will be appreciated. | curl and curl options to python conversion | 0 | 0 | 1 | 0 | 1 | 205
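As an alternative to the pexpect route, here is a hedged translation of the question's curl command into the requests library (Python 2), assuming /tmp/admin.cookie is in the Netscape cookie-file format that curl reads with -b.

```python
import cookielib
import requests

cookies = cookielib.MozillaCookieJar("/tmp/admin.cookie")  # curl -b <file>
cookies.load()

response = requests.post(                                  # --request POST
    "https://www.test.com",
    cookies=cookies,
    headers={"X-Requested-With": "XMLHttpRequest"},        # --header
    verify="/some/cert/location/serverapache.crt",         # --cacert
)
print(response.status_code, response.text)
```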
31,585,809 | 2015-07-23T11:18:00.000 | 1 | 0 | 1 | 0 | 0 | python,code-generation,distutils | 0 | 31,618,879 | 0 | 1 | 0 | true | 0 | 0 | I solved this by subclassing build_py instead of build. It turns out build_py has a build_lib attribute that will be the path to the "build" directory.
By looking at the source code I think there is no better way. | 1 | 2 | 0 | 0 | I am generating some Python files in my setup.py as part of the build process. These files should be part of the installation. I have successfully added my code generator as a pre-build step (by implementing my own Command and overriding the default build to include this).
How do I copy my generated files from the temporary directory into the build output? Should I copy it myself using e.g. copy_file? If so, how do I get the path to the build output? Or should I declare it as part of the build somehow?
I'd rather not clutter the source directory tree with my generated files, hence I prefer to avoid copying the files there and then declaring them as part of the package. | Add generated Python file as part of build | 0 | 1.2 | 1 | 0 | 0 | 67 |
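A sketch of the build_py-subclass approach the asker settled on; generate_sources is a stand-in for the real code generator, and "mypackage" is a hypothetical package name.

```python
import os
from distutils.core import setup
from distutils.command.build_py import build_py

def generate_sources(target_dir):
    # hypothetical generator: write one generated module as an example
    with open(os.path.join(target_dir, "_generated.py"), "w") as f:
        f.write("VERSION = '1.0'\n")

class BuildWithGeneratedCode(build_py):
    def run(self):
        build_py.run(self)                        # copy the normal sources
        target = os.path.join(self.build_lib, "mypackage")
        self.mkpath(target)                       # ensure the directory exists
        generate_sources(target)                  # emit straight into build output

setup(
    name="mypackage",
    packages=["mypackage"],
    cmdclass={"build_py": BuildWithGeneratedCode},
)
```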
31,606,659 | 2015-07-24T09:17:00.000 | 1 | 0 | 0 | 1 | 0 | python,c,unix,gcc | 0 | 31,606,702 | 0 | 1 | 0 | false | 0 | 0 | Answer to your first paragraph: Use MinGW for the compiler (google it, there is a -w64 version if you need that) and MSYS for a minimal environment including shell tools the Makefile could need. | 1 | 0 | 0 | 0 | I have a c-program which includes a make file that works fine on unix systems. Although I would like to compile the program for windows using this make file, how can i go around doing that?
Additionally I have python scripts that call this c-program using ctypes, I don't imagine I will have to much of an issue getting ctypes working on windows but i heard its possible to include all the python and c scripts in one .exe for windows, has anyone heard of that? | Compiling a unix make file for windows | 0 | 0.197375 | 1 | 0 | 0 | 69 |
31,611,089 | 2015-07-24T12:54:00.000 | 0 | 0 | 0 | 0 | 0 | python,tree,kdtree | 0 | 31,646,627 | 0 | 1 | 0 | false | 0 | 0 | A typical KD tree node contains a reference to the data point.
A KD tree that only keeps the coordinates is much less useful.
This way, you can easily identify them. | 1 | 1 | 1 | 0 | I've been studying KD Trees and KNN searching in 2D & 3D space. The thing I cannot seem to find a good explanation of is how to identify which objects are being referenced by each node of the tree.
An example would be an image comparison database. If you generated descriptors for all the images, would you push all the descriptor data onto one tree? If so, how do you know which nodes are related to which original images? If not, would you generate a tree for each image, and then do some type of KD-Tree Random Forest nearest-neighbor queries to determine which trees are closest to each other in 3-D space?
The image example might not be a good use case for KD-Trees since it's a highly dimensional space, but I'm mostly using it to help explain the question I'm asking.
Any guidance on practical applications of KD-Tree KNN queries for comparing objects is greatly appreciated.
Thanks! | How to identify objects related to KD Tree data? | 0 | 0 | 1 | 0 | 0 | 245 |
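A small sketch of the identification scheme the answer hints at: the tree stores only coordinates, and a parallel list maps each row index back to the object it came from. The descriptors here are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

descriptors = np.random.rand(100, 3)         # one 3-D descriptor per image
image_ids = ["img_%03d.jpg" % i for i in range(100)]

tree = cKDTree(descriptors)
query = np.random.rand(3)
dist, idx = tree.query(query, k=5)           # 5 nearest neighbours

for d, i in zip(dist, idx):
    print(image_ids[i], d)                   # idx indexes back into image_ids
```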
31,612,074 | 2015-07-24T13:39:00.000 | 0 | 0 | 1 | 0 | 0 | python,theano,deep-learning | 0 | 31,778,280 | 0 | 1 | 0 | false | 0 | 0 | When pickling models, it is always better to save the parameters, and when loading, re-create the shared variables and rebuild the graph out of them. This allows swapping the device between CPU and GPU.
But you can pickle Theano functions. If you do that, pickle all associated functions at the same time. Otherwise, each of them will have a different copy of the shared variables. Each call to load() will create new shared variables if they were pickled. This is a limitation of pickle.
classifier = my_classifier()
cost = ()
updates = []
train_model = theano.function(...)
eval_model = theano.function(...)
best_accuracy = 0
while (epoch < n_epochs):
    train_model()
    current_accuracy = eval_model()
    if current_accuracy > best_accuracy:
        # save classifier or save theano functions?
        best_accuracy = current_accuracy
    else:
        # load saved classifier or saved theano functions?
        # if we saved the classifier previously, do we need to
        # redefine the train_model and eval_model functions?
        pass
    epoch += 1
# training is finished
# save classifier
I want to save the current trained model if it has higher accuracy than previously trained models, and load the saved model later if the current trained model accuracy is lower than the best accuracy.
My questions are:
When saving, should I save the classifier, or theano functions?
If the classifier needs to be saved, do I need to redefine the theano functions when loading it, since the classifier has changed?
Thanks, | Theano continue training | 0 | 0 | 1 | 0 | 0 | 246 |
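A minimal sketch of the parameter-saving approach from the answer, assuming the classifier exposes a params list of Theano shared variables (a common convention, not a fixed API):
import pickle

def save_params(classifier, path):
    with open(path, 'wb') as f:
        pickle.dump([p.get_value() for p in classifier.params], f)

def load_params(classifier, path):
    with open(path, 'rb') as f:
        for p, v in zip(classifier.params, pickle.load(f)):
            p.set_value(v)   # train_model/eval_model keep working unchanged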
31,626,044 | 2015-07-25T11:34:00.000 | 0 | 0 | 1 | 0 | 1 | raspberry-pi,ipython,ipython-notebook | 0 | 36,211,430 | 0 | 1 | 0 | false | 0 | 0 | On my Raspberry Pi the .json files are located in /home/<username>/.config/ipython/profile_default/security/ | 1 | 1 | 0 | 0 | I'm trying to set up a remote kernel on my Raspberry Pi right now, using IPython as my remote kernel and trying to connect to this kernel using Spyder.
Using Spyder to create local kernels and use them to interpret code is working perfectly fine. Starting a kernel on my Raspberry Pi also works well using ipython kernel.
As described by many other users before, the .JSON file with the connection details I have to hand to Spyder is located at /home/<username>/.ipython/profile_default/security/kernel-<id>.json. Unfortunately I can't find this .JSON file on my Raspberry Pi, but if I try to connect to an existing kernel on my local PC I can find all local kernels.
What is the problem with the kernels on my Raspberry Pi? Why aren't they saved as .JSON files?
Another question: I accidentally created another profile in IPython; how can I remove this profile? | Can't find IPython Kernel .JSON file | 0 | 0 | 1 | 0 | 0 | 809 |
31,630,636 | 2015-07-25T20:09:00.000 | 3 | 0 | 0 | 0 | 1 | python,http | 0 | 31,630,829 | 0 | 3 | 0 | false | 0 | 0 | Is it possible for a HTTP request to be that big ?
Yes it's possible but it's not recommended and you could have compatibility issues depending on your web server configuration. If you need to pass large amounts of data you shouldn't use GET.
If so how do I fix the OptionParser to handle this input?
It appears that OptionParser has set its own limit well above what is considered a practical implementation. I think the only way to 'fix' this is to get the Python source code and modify it to meet your requirements. Alternatively write your own parser.
UPDATE: I possibly mis-interpreted the question and the comment from Padraic below may well be correct. If you have hit an OS limit for command line argument size then it is not an OptionParser issue but something much more fundamental to your system design that means you may have to rethink your solution. This also possibly explains why you are attempting to use GET in your application (so you can pass it on the command line?) | 3 | 3 | 0 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is a HTTP request.
The input in this specific case for the http request was 269KB in size.
So my Python program fails with "Argument list too long". (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is a HTTP Get request of size 269KB allowed? | 1 | 0.197375 | 1 | 0 | 1 | 82 |
31,630,636 | 2015-07-25T20:09:00.000 | 0 | 0 | 0 | 0 | 1 | python,http | 0 | 31,630,668 | 0 | 3 | 0 | true | 0 | 0 | Typical limit is 8KB, but it can vary (like, be even less). | 3 | 3 | 0 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is a HTTP request.
The input in this specific case for the http request was 269KB in size.
So my Python program fails with "Argument list too long". (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is a HTTP Get request of size 269KB allowed? | 1 | 1.2 | 1 | 0 | 1 | 82 |
31,630,636 | 2015-07-25T20:09:00.000 | 2 | 0 | 0 | 0 | 1 | python,http | 0 | 31,630,678 | 0 | 3 | 0 | false | 0 | 0 | A GET request, unlike a POST request, contains all its information in the URL itself. This means you have a URL of 269KB, which is extremely long.
Although there is no theoretical limit on the size allowed, many servers don't allow URLs longer than a couple of KB and should return a 414 response code in that case. A safe limit is 2KB, although most modern software will allow a bit more than that.
But still, for 269KB, use POST (or PUT if that is semantically more correct), which can contain larger chunks of data as the content of a request rather than the url. | 3 | 3 | 0 | 0 | I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing and one of the options is a HTTP request.
The input in this specific case for the http request was 269KB in size.
So my Python program fails with "Argument list too long". (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.)
Is it possible for an HTTP request to be that big?
If so, how do I fix the OptionParser to handle this input? | Is a HTTP Get request of size 269KB allowed? | 1 | 0.132549 | 1 | 0 | 1 | 82 |
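If you control both ends, switching to POST sidesteps both the URL-length limit and the OS command-line-argument limit. A minimal sketch using the requests library; the URL and payload file are hypothetical:
import requests

payload = open('request_body.json', 'rb').read()   # the 269KB of data
resp = requests.post('http://example.com/api', data=payload)
print(resp.status_code)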
31,636,454 | 2015-07-26T11:30:00.000 | 2 | 0 | 0 | 1 | 0 | python,celery,python-asyncio | 0 | 43,289,761 | 0 | 2 | 0 | false | 0 | 0 | I implemented the on_finish function of the celery worker to publish a message to Redis;
then the main app uses aioredis to subscribe to the channel; once notified, the result is ready. | 1 | 2 | 0 | 1 | I have a Python application which offloads a number of processing tasks to a set of celery workers. The main application then has to wait for results from these workers. As results become available from a worker, the main application processes them and schedules more workers to be executed.
I would like the main application to run in a non-blocking fashion. As of now, I have a polling function to see whether results are available from any of the workers.
I am looking at the possibility of using asyncio to get notified about result availability so that I can avoid the polling. But I could not find any information on how to do this.
Any pointers on this will be highly appreciated.
PS: I know with gevent, I can avoid the polling. However, I am on python3.4 and hence would prefer to avoid gevent and use asyncio. | Collecting results from celery worker with asyncio | 1 | 0.197375 | 1 | 0 | 0 | 2,413 |
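A minimal sketch of that pattern written for Python 3.4 (pre-async/await), assuming an aioredis 0.x/1.x-style API (verify the exact calls against your installed version); the 'results' channel name and handle_result() are hypothetical:
import asyncio
import aioredis

@asyncio.coroutine
def wait_for_results():
    sub = yield from aioredis.create_redis(('localhost', 6379))
    channel, = yield from sub.subscribe('results')
    while (yield from channel.wait_message()):
        msg = yield from channel.get()   # the worker's on_finish published this
        handle_result(msg)               # hypothetical: process and schedule more work

asyncio.get_event_loop().run_until_complete(wait_for_results())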
31,639,596 | 2015-07-26T16:59:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,heroku,scrapy | 0 | 31,639,931 | 0 | 1 | 0 | false | 1 | 0 | It's impossible for apps in the same project to be on different Python versions; the server has to run on one or the other. But it would be possible to have two projects, with your models in a shared app that is installed in both projects, and the configuration pointing to the same database. | 1 | 1 | 0 | 0 | I have been building a project on Ubuntu 15.04 with Python 3.4 and django 1.7. Now I want to use scrapy djangoitem, but that only runs on python 2.7. It's easy enough to have separate virtualenvs to do the developing in, but how can I put these different apps together in a single project, not only on my local machine, but later on heroku?
If it was just content, I could move the scrapy items over once the work was done, but the idea of djangoitem is that it uses the django model. Does that mean the django model has to be on python 2.7 also in order for djangoitem to access it? Even that is not insurmountable if I then port it to python 3, but it isn't very DRY, especially when I have to run scrapy for frequent updates. Is there a more direct solution, such as a way to have one app be 2.7 and another be 3.4 in the same project? Thanks. | multiple versions of django/python in a single project | 1 | 0.53705 | 1 | 0 | 0 | 67 |
31,642,940 | 2015-07-26T23:19:00.000 | 1 | 0 | 1 | 0 | 0 | python,regex,string | 0 | 53,998,316 | 0 | 6 | 0 | false | 0 | 0 | You could split the string and check to see if it contains at least one first/last name that is correct. | 1 | 22 | 1 | 0 | I want to find out if two strings are almost similar. For example, a string like 'Mohan Mehta' should match 'Mohan Mehte' and vice versa. Another example: a string like 'Umesh Gupta' should match 'Umash Gupte'.
Basically one string is correct and the other one is a misspelling of it. All my strings are names of people.
Any suggestions on how to achieve this.
The solution does not have to be 100 percent effective. | Finding if two strings are almost similar | 0 | 0.033321 | 1 | 0 | 0 | 17,280 |
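A minimal sketch using Python's standard difflib, which scores how similar two strings are; the 0.8 threshold is an assumption to tune on your own data:
import difflib

def almost_similar(a, b, threshold=0.8):
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(almost_similar('Mohan Mehta', 'Mohan Mehte'))  # True (ratio ~0.91)
print(almost_similar('Umesh Gupta', 'Umash Gupte'))  # True (ratio ~0.82)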
31,644,834 | 2015-07-27T04:08:00.000 | 0 | 1 | 1 | 0 | 0 | python-2.7,module | 0 | 36,118,903 | 0 | 4 | 0 | false | 0 | 0 | To make this work consistently, you can put the module into the lib folder inside the python folder, then you can import it regardless of what directory you are in | 2 | 0 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | Importing a python module I have created | 0 | 0 | 1 | 0 | 0 | 31 |
31,644,834 | 2015-07-27T04:08:00.000 | 0 | 1 | 1 | 0 | 0 | python-2.7,module | 0 | 36,118,957 | 0 | 4 | 0 | false | 0 | 0 | You could create a .pth file with the path to your module and put it into your Python site-packages directory. | 2 | 0 | 0 | 0 | I know how to import a module I have created if the script I am working on is in the same directory. I would like to know how to set it up so I can import this module from anywhere. For example, I would like to open up Python in the command line and type "import my_module" and have it work regardless of which directory I am in. | Importing a python module I have created | 0 | 0 | 1 | 0 | 0 | 31 |
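A minimal sketch of the .pth approach, assuming a system Python where site.getsitepackages() is available; the /home/me/mymodules directory and the my_modules.pth filename are hypothetical:
import site, os

# a .pth file in site-packages: each line is a directory appended to sys.path
pth = os.path.join(site.getsitepackages()[0], 'my_modules.pth')
with open(pth, 'w') as f:
    f.write('/home/me/mymodules\n')
# from now on, any Python process can simply `import my_module`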
31,649,314 | 2015-07-27T09:21:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,wordpress | 0 | 31,649,614 | 0 | 2 | 0 | false | 1 | 0 | There are many ways to do this. You will have to provide more info about what you are trying to accomplish so I can give the right advice.
make a page with a redirect (this is an ugly solution from an SEO and user perspective)
handle this at the server level.
load your Django data with an AJAX call | 1 | 0 | 0 | 0 | I need to do such a thing, but I don't even know if it is possible to accomplish, and if so, how to do it.
I wrote a Django application which I would like to 'attach' to my wordpress blog. However, I need a permalink (but no page in the wordpress pages section) which would point to the Django application on the same server. Is that possible? | How to create empty wordpress permalink and redirect it into django website? | 1 | 0.197375 | 1 | 0 | 0 | 138 |
31,660,214 | 2015-07-27T18:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,gnuradio,gnuradio-companion | 0 | 31,667,553 | 0 | 1 | 0 | true | 0 | 0 | The "QT GUI Frequency Sink" block will display the frequency domain representation of a signal. You can save a static image of the spectrum by accessing the control panel using center-click and choosing "Save". | 1 | 1 | 1 | 0 | I have generated the spectrogram with GNU Radio and want to save the output graph but have no idea how to do it. | How to save a graph that is generated by GNU Radio? | 0 | 1.2 | 1 | 0 | 0 | 791 |
31,661,138 | 2015-07-27T18:58:00.000 | 0 | 0 | 0 | 0 | 0 | python,wxpython | 0 | 31,680,789 | 0 | 1 | 0 | false | 0 | 1 | I don't think the regular wx.PopupMenu will work that way. However, if you look at the wxPython demo, you will see a neat widget called wx.PopupWindow that claims it can be used as a menu, and it appears to work the way you want. The wx.PopupTransientWindow might also work. | 1 | 1 | 0 | 0 | I am using wxPython to write an app. I have a menu that pops up. I would like to know how to keep it on the screen after the user clicks an item on the menu. I only want it to go away after they click off it or if I tell it to in the code. Does anyone know how to do this?
I am using RHEL 6 and wxPython 3.01.1 | Keep menu up after clicking in wxPython | 0 | 0 | 1 | 0 | 0 | 55 |
31,675,839 | 2015-07-28T12:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,odoo | 0 | 31,680,404 | 0 | 2 | 0 | false | 1 | 0 | It's pretty basic and simple: any Python class can be called from its namespace, so import your class from its namespace and instantiate it.
Even the Model class, or any class inherited from Model, can be called and instantiated like this.
self.pool is just the ORM registry cache used to access the framework's persistence layer.
Bests | 1 | 0 | 0 | 0 | I am aware that you can get a reference to an existing model from within another model by using self.pool.get('my_model')
My question is, how can I get a reference to a model from a Python class that does NOT extend 'Model'? | Access ORM models from different classes in Odoo/OpenERP | 0 | 0 | 1 | 0 | 0 | 714 |
31,684,375 | 2015-07-28T18:29:00.000 | 0 | 0 | 1 | 0 | 0 | python,dependencies,python-import,requirements.txt | 0 | 72,116,250 | 0 | 21 | 0 | false | 0 | 0 | To help solve this problem, always generate requirements.txt from only local packages. By local packages I mean packages installed locally in your environment, not globally. To do this, run:
pip freeze --local > requirements.txt
Not pip freeze > requirements.txt.
Note that it's a double dash before local.
However, installing pipreqs helps too:
pip install pipreqs
The perfect solution though is to have a Pipfile. The Pipfile updates on its own whenever you install a new local package. It also has a Pipfile.lock, similar to package-lock.json in JavaScript.
To do this, always install your packages with pipenv, not pip.
So we do pipenv install <package> instead of pip install <package>. | 1 | 778 | 0 | 0 | Sometimes I download Python source code from GitHub and don't know how to install all the dependencies. If there is no requirements.txt file, I have to create it by hand.
The question is:
Given the Python source code directory, is it possible to create requirements.txt automatically from the import statements? | Automatically create requirements.txt | 0 | 0 | 1 | 0 | 0 | 795,098 |
31,685,048 | 2015-07-28T19:05:00.000 | 2 | 0 | 1 | 0 | 0 | python-2.7,web2py,anaconda,gensim | 1 | 31,687,769 | 0 | 1 | 0 | true | 1 | 0 | The Windows binary includes its own Python interpreter and will therefore not see any packages you have in your local Python installation.
If you already have Python installed, you should instead run web2py from source. | 1 | 0 | 0 | 0 | I am new to Web2Py and Python stack. I need to use a module in my Web2Py application which uses "gensim" and "nltk" libraries. I tried installing these into my Python 2.7 on a Windows 7 environment but came across several errors due to some issues with "numpy" and "scipy" installations on Windows 7. Then I ended up resolving those errors by uninstalling Python 2.7 and instead installing Anaconda Python which successfully installed the required "gensim" and "nltk" libraries.
So, at this stage I am able to see all these "gensim" and "nltk" libraries resolving properly without any error in "Spyder" and "PyCharm". However, when I run my application in Web2Py, it still complains about "gensim" and gives this error: <type 'exceptions.ImportError'> No module named gensim
My guess is if I can configure Web2Py to use the Anaconda Python then this issue would be resolved.
I need to know if it's possible to configure Web2Py to use Anaconda Python and if it is then how do I do that?
Otherwise, if someone knows of some other way to resolve that "gensim" error in Web2Py, kindly share your thoughts.
All your help would be highly appreciated. | Configure Web2Py to use Anaconda Python | 0 | 1.2 | 1 | 0 | 0 | 712 |
31,711,555 | 2015-07-29T21:34:00.000 | 0 | 0 | 0 | 0 | 0 | python,r,machine-learning,random-forest | 1 | 31,742,947 | 0 | 2 | 0 | false | 0 | 0 | You can do a grid search over the 'regularization' parameters to best match your target behavior.
Parameters of interest:
max depth
number of features | 1 | 1 | 1 | 0 | Using the randomForest package in R, I was able to train a random forest that minimized overall error rate. However, what I want to do is train two random forests: one that first minimizes false positive rate (~0) and then overall error rate, and one that first maximizes sensitivity (~1) and then overall error. Another construction of the problem would be: given a false positive rate and a sensitivity rate, train two different random forests that each satisfy one of the rates, and then minimize overall error rate. Does anyone know if there's an R package or Python package, or any other software out there, that does this and/or how to do this? Thanks for the help. | random forest with specified false positive and sensitivity | 0 | 0 | 1 | 0 | 0 | 1,366 |
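A minimal Python sketch of that grid-search idea using scikit-learn (an assumption, since the question was about R's randomForest). It scores candidates with a custom metric that heavily penalizes false positives; the penalty weight and the parameter grid are illustrative, and X_train/y_train are assumed to exist:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, confusion_matrix

def low_fpr_accuracy(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    fpr = fp / float(fp + tn)
    acc = (tp + tn) / float(len(y_true))
    return acc - 10.0 * fpr          # heavy penalty on false positives

grid = GridSearchCV(
    RandomForestClassifier(n_estimators=200),
    param_grid={'max_depth': [3, 5, 10, None],
                'max_features': ['sqrt', 0.3, 0.7]},
    scoring=make_scorer(low_fpr_accuracy))
grid.fit(X_train, y_train)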
31,760,059 | 2015-08-01T08:53:00.000 | 0 | 0 | 0 | 0 | 0 | android,python,ios,django | 0 | 31,760,094 | 0 | 2 | 0 | false | 1 | 0 | Sure. I've done this for my first app and for others since. The backend technology is totally up to you, so feel free to pick whatever you like.
The connection between the backend and your apps should (but doesn't have to) be something JSON-based. Standard REST works fine; WebSockets work too, but have some issues on iOS. | 1 | 1 | 0 | 0 | I want to develop an online mobile app. I am thinking about using native languages to develop the front-ends, so Java for Android and Objective-C for iOS. However, for the back-end, can I use something like Django?
I have used Django for a while, but the tutorials are really lacking, so can anyone point me to something that will help me understand how to show data handled by Django models on a front-end developed in Java for an Android device (that is, by using XML I suppose). | Is it possible to develop the back-end of a native mobile app using the python powered framework Django? | 1 | 0 | 1 | 0 | 0 | 3,574 |
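A minimal sketch of a JSON endpoint the native apps could call, assuming Django 1.7+ (which ships JsonResponse) and a hypothetical Item model:
# views.py
from django.http import JsonResponse
from .models import Item   # hypothetical model

def item_list(request):
    data = list(Item.objects.values('id', 'name', 'price'))
    return JsonResponse({'items': data})

# urls.py would map e.g. r'^api/items/$' to item_list;
# the Android/iOS client then parses the JSON it receives.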
31,799,087 | 2015-08-04T01:01:00.000 | 2 | 1 | 0 | 1 | 0 | python,macos,vim,osx-yosemite | 1 | 31,800,107 | 0 | 1 | 0 | true | 0 | 0 | Vim doesn't check Python syntax out of the box, so a plugin is probably causing this issue.
Not sure why an OS upgrade would make a Vim plugin suddenly start being more zealous about things, of course, but your list of installed plugins (however you manage them) is probably the best place to start narrowing down your problem. | 1 | 1 | 0 | 0 | Overview
After upgrading to 10.11 Yosemite, I discovered that vim (on the terminal) highlights a bunch of errors in my python scripts that are actually not errors.
e.g.
This line:
from django.conf.urls import patterns
gets called out as an [import-error] Unable to import 'django.conf.urls'.
This error is not true because I can open up a python shell from the command line and import the supposedly missing module. I'm also getting a bunch of other errors all the way through my python file too: [bad-continuation] Wrong continued indentation, [invalid-name] Invalid constant name, etc.
All of these errors are not true.
Question
Anyway, how do I turn off these python error checks?
vim Details
vim --version:
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 5 2014 21:00:28)
Compiled by [email protected]
Normal version without GUI. Features included (+) or not (-):
-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path
+find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse
+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg -osfiletype
+path_extra -perl +persistent_undo +postscript +printer -profile +python/dyn
-python3 +quickfix +reltime -rightleft +ruby/dyn +scrollbind +signs
+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title
-toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
+vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp
-xterm_clipboard -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe
Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses | How Do I Turn Off Python Error Checking in vim? (vim terminal 7.3, OS X 10.11 Yosemite) | 0 | 1.2 | 1 | 0 | 0 | 421 |
31,849,867 | 2015-08-06T07:50:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,django-admin,django-grappelli | 0 | 31,850,151 | 0 | 4 | 0 | false | 1 | 0 | If you want to change the appearance of the admin in general, you should override the admin templates. This is covered in detail here: Overriding admin templates. Sometimes you can just extend the original admin file and then overwrite a block like {% block extrastyle %}{% endblock %} in django/contrib/admin/templates/admin/base.html, for example.
If your style is model-specific, you can add additional styles via the inner Media class in your admin.py. See an example here:
class MyModelAdmin(admin.ModelAdmin):
    class Media:
        js = ('js/admin/my_own_admin.js',)
        css = {
            'all': ('css/admin/my_own_admin.css',)
        } | 1 | 1 | 0 | 0 | In Django grappelli, how can I add my own css files to all the admin pages? Or is there a way to extend admin's base.html template? | Django (grappelli): how add my own css to all the pages or how to extend admin's base.html? | 1 | 0.099668 | 1 | 0 | 0 | 3,792 |
31,855,794 | 2015-08-06T12:28:00.000 | 24 | 0 | 1 | 0 | 0 | python,jupyter-notebook,jupyter | 0 | 33,249,008 | 0 | 6 | 0 | false | 0 | 0 | Michael's suggestion of running your own nbviewer instance is a good one I used in the past with an Enterprise Github server.
Another lightweight alternative is to have a cell at the end of your notebook that does a shell call to nbconvert so that it's automatically refreshed after running the whole thing:
!ipython nbconvert <notebook name>.ipynb --to html
EDIT: With Jupyter/IPython's Big Split, you'll probably want to change this to !jupyter nbconvert <notebook name>.ipynb --to html now. | 1 | 179 | 0 | 0 | I am trying to wrap my head around what I can/cannot do with Jupyter.
I have a Jupyter server running on our internal server, accessible via VPN and password protected.
I am the only one actually creating notebooks but I would like to make some notebooks visible to other team members in a read-only way. Ideally I could just share a URL with them that they would bookmark for when they want to see the notebook with refreshed data.
I saw export options but cannot find any mention of "publishing" or "making public" local live notebooks. Is this impossible? Is it maybe just a wrong way to think about how Jupyter should be used? | How can I share Jupyter notebooks with non-programmers? | 1 | 1 | 1 | 0 | 0 | 147,878 |
31,860,630 | 2015-08-06T16:05:00.000 | 2 | 0 | 1 | 1 | 1 | python,hdfs,race-condition,ioerror | 0 | 31,934,576 | 0 | 1 | 0 | true | 0 | 0 | (Setting aside that it sounds like HDFS might not be the right solution for your use case, I'll assume you can't switch to something else. If you can, take a look at Redis, or memcached.)
It seems like this is the kind of thing where you should have a single service that's responsible for computing/caching these results. That way all your processes will have to do is request that the resource be created if it's not already. If it's not already computed, the service will compute it; once it's been computed (or if it already was), either a signal saying the resource is available, or even just the resource itself, is returned to your process.
If for some reason you can't do that, you could try using HDFS for synchronization. For example, you could try creating the resource with a sentinel value inside which signals that process A is currently building this file. Meanwhile process A could be computing the value and writing it to a temporary resource; once it's finished, it could just move the temporary resource over the sentinel resource. It's clunky and hackish, and you should try to avoid it, but it's an option.
You say you want to avoid expensive recalculations, but if process B is waiting for process A to compute the resource, why can't process B (and C and D) be computing it as well for itself/themselves? If this is okay with you, then in the event that a resource doesn't already exist, you could just have each process start computing and writing to a temporary file, then move the file to the resource location. Hopefully moves are atomic, so one of them will cleanly win; it doesn't matter which if they're all identical. Once it's there, it'll be available in the future. This does involve the possibility of multiple processes sending the same data to the HDFS cluster at the same time, so it's not the most efficient, but how bad it is depends on your use case. You can lessen the inefficiency by, for example, checking after computation and before upload to the HDFS whether someone else has created the resource since you last looked; if so, there's no need to even create the temporary resource.
TLDR: You can do it with just HDFS, but it would be better to have a service that manages it for you, and it would probably be even better not to use HDFS for this (though you still would possibly want a service to handle it for you, even if you're using Redis or memcached; it depends, once again, on your particular use case). | 1 | 5 | 0 | 0 | So I have some code that attempts to find a resource on HDFS...if it is not there it will calculate the contents of that file, then write it. And next time it goes to be accessed the reader can just look at the file. This is to prevent expensive recalculation of certain functions
However, I have several processes running at the same time on different machines in the same cluster. I SUSPECT that they are trying to access the same resource and I'm hitting a race condition that leads to a lot of errors where I either can't open a file or a file exists but can't be read.
Hopefully this timeline will demonstrate what I believe my issue to be
Process A goes to access resource X
Process A finds resource X exists and begins writing
Process B goes to access resource X
Process A finishes writing resource X
...and so on
Obviously I would want Process B to wait for Process A to be done with Resource X and simply read it when A is done.
Something like semaphores come to mind but I am unaware of how to use these across different python processes on separate processors looking at the same HDFS location. Any help would be greatly appreciated
UPDATE: To be clear..process A and process B will end up calculating the exact same output (i.e. the same filename, with the same contents, to the same location). Ideally, B shouldn't have to calculate it. B would wait for A to calculate it, then read the output once A is done. Essentially this whole process is working like a "long term cache" using HDFS. Where a given function will have an output signature. Any process that wants the output of a function, will first determine the output signature (this is basically a hash of some function parameters, inputs, etc.). It will then check the HDFS to see if it is there. If it's not...it will write calculate it and write it to the HDFS so that other processes can also read it. | Sharing a resource (file) across different python processes using HDFS | 1 | 1.2 | 1 | 0 | 0 | 132 |
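A minimal sketch of the write-temp-then-rename pattern from the answer, assuming the hdfs PyPI package (InsecureClient); the namenode URL and compute() are hypothetical, and the rename is relied on to be atomic so exactly one writer wins:
import os
from hdfs import InsecureClient

client = InsecureClient('http://namenode:50070')

def get_or_compute(path):
    if client.status(path, strict=False):        # already there: just read it
        with client.read(path) as reader:
            return reader.read()
    data = compute()                              # hypothetical expensive function
    tmp = path + '.tmp.%s' % os.getpid()
    client.write(tmp, data)
    try:
        client.rename(tmp, path)                  # atomic move; one writer wins
    except Exception:
        client.delete(tmp)                        # someone else won the race
    with client.read(path) as reader:
        return reader.read()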
31,866,429 | 2015-08-06T21:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,scrapy,virtualenv | 0 | 33,439,385 | 0 | 3 | 0 | true | 1 | 0 | It's not possible to do what I wanted to do on the GoDaddy plan I had. | 1 | 1 | 0 | 0 | Here's my problem,
I have a shared hosting (GoDaddy Linux Hosting package) account and I'd like to create a .py file to do some scraping for me. To do this I need the scrapy module (scrapy.org). Because of the shared account I can't install new modules, so I installed VirtualEnv and created a new virtual env. that has pip, wheel, etc. preinstalled.
Running pip install scrapy does NOT complete successfully because scrapy has a lot of dependencies like libxml2, and it also needs the python-dev tools. If I had access to 'sudo apt-get ...' this would be easy, but I don't. I can only use pip and easy_install.
So how do I install the python dev tools? And how do I install the dependencies? Is this even possible?
Cheers | Installing Scrapy on Python VirtualEnv | 1 | 1.2 | 1 | 0 | 0 | 2,964 |
31,866,507 | 2015-08-06T21:57:00.000 | 13 | 0 | 0 | 0 | 0 | python,windows | 0 | 31,866,538 | 0 | 2 | 0 | true | 0 | 0 | As long as the computer doesn't get put to sleep, your process should continue to run. | 2 | 14 | 0 | 0 | I am running a Python script that uses the requests library to get data from a service.
The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping.
If it will eventually halt anything, how do I keep this from happening? Thanks. | Keep Python script running after screen lock (Win. 7) | 0 | 1.2 | 1 | 0 | 1 | 27,171 |
31,866,507 | 2015-08-06T21:57:00.000 | 7 | 0 | 0 | 0 | 0 | python,windows | 0 | 31,866,586 | 0 | 2 | 0 | false | 0 | 0 | Check "Power Options" in the Control panel. You don't need to worry about the screen locking or turning off as these wont affect running processes. However, if your system is set to sleep after a set amount of time you may need to change this to Never. Keep in mind there are separate settings depending on whether or not the system is plugged in. | 2 | 14 | 0 | 0 | I am running a Python script that uses the requests library to get data from a service.
The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping.
If it will eventually halt anything, how do I keep this from happening? Thanks. | Keep Python script running after screen lock (Win. 7) | 0 | 1 | 1 | 0 | 1 | 27,171 |
31,870,616 | 2015-08-07T05:52:00.000 | -3 | 0 | 1 | 0 | 0 | python,python-2.7,time | 0 | 31,870,657 | 0 | 4 | 0 | false | 0 | 0 | If the program knew how much data it was getting, you could set it up to function like a progress bar. | 1 | 1 | 0 | 0 | I created a Python file that collects data. After collecting all the data, it prints out "Done.". Sometimes it might take at least 3 minutes to collect all the data.
I would like to know how to print something like "Please wait..." every 30 seconds, and have it stop after collecting all the data.
Can anyone help me please? | Python Printing Out Something While Waiting For Long Output | 0 | -0.148885 | 1 | 0 | 0 | 1,834 |
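A minimal sketch using a background thread and an Event from the standard library; wait(30) doubles as the 30-second timer and lets the main thread stop the messages cleanly. collect_data() is a hypothetical stand-in for the long-running work:
import threading

done = threading.Event()

def nag():
    while not done.wait(30):     # returns False every 30s until done is set
        print("Please wait...")

threading.Thread(target=nag).start()
data = collect_data()            # hypothetical long-running collection
done.set()
print("Done.")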
31,892,531 | 2015-08-08T11:17:00.000 | 1 | 1 | 0 | 0 | 0 | python,unit-testing | 0 | 31,892,566 | 0 | 1 | 0 | true | 0 | 0 | When unit testing, you test a particular unit (function/method...) in isolation, meaning that you don't care whether the other components your function uses work (since there are other unit test cases that cover those).
So to answer your question - it's outside the scope of your unit tests whether an external service like Google oAuth works. You just need to test that you make a correct call to it, and here's where Mock comes in handy. It remembers the call for you to inspect and make some assertions about it, but it prevents the request from actually going out to the external service / component / library / whatever.
Edit: If you find your code is too complex and difficult to test, that might be an indication that it should be refactored into smaller, more manageable pieces. | 1 | 0 | 0 | 0 | I am fairly new to unit testing. At the moment I am having trouble trying to unit test Google oAuth Picasa authentication. It involves major changes to the code if I would like to unit test it (yeah, I develop unit tests after the app works).
I have read that Mock Object is probably the way to go. But if I use Mock, how do I know that the functionality (that is Google oAuth Picasa authentication), is really working?
Or, aside from the fact that I develop unit tests after the app is finished, did I make other mistakes in understanding Mock? | How can mock object replace all system functionality being tested? | 0 | 1.2 | 1 | 0 | 0 | 32 |
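A minimal sketch of the idea, assuming the mock library (unittest.mock on Python 3) and a hypothetical myapp.auth module whose OAuth flow calls requests.post; both the module and get_token() are assumptions:
import mock                # on Python 3: from unittest import mock
import myapp.auth          # hypothetical module under test

@mock.patch('myapp.auth.requests.post')
def test_oauth_call(mock_post):
    mock_post.return_value.json.return_value = {'access_token': 'fake'}
    token = myapp.auth.get_token('client-id', 'secret')   # hypothetical function
    url = mock_post.call_args[0][0]    # inspect the call instead of hitting Google
    assert 'accounts.google.com' in url
    assert token == 'fake'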
31,893,477 | 2015-08-08T13:11:00.000 | 0 | 0 | 0 | 1 | 1 | python,linux,qt,ubuntu,pyqt | 0 | 42,756,312 | 0 | 2 | 0 | false | 0 | 0 | This is a hacky solution.
Install qt4-qtconfig: sudo apt-get install qt4-qtconfig
Run sudo qtconfig or gksudo qtconfig.
Change GUI Style to GTK+.
Edited. | 1 | 0 | 0 | 0 | Ok the title explains it all. But just to clarify.
I have Ubuntu and programmed a GUI app with Qt Designer 4 and PyQt4. The program works fine running python main.py in a terminal.
Last week I made an update and now the program needs sudo privileges to start. So I type sudo python main.py.
But oh my GODDDDDDD. What an ugly interface came up. O.o
And I don't know how to get the really nice normal-mode interface in my program and all of my other programs I'll make. Is there any way to set a variable for Python? Do I need to execute any command line code?
The program is deployed only in Linux machines.
P.S.
I searched a lot on the web and couldn't find a working solution. | How to run PyQt4 app with sudo privileges in Ubuntu and keep the normal user style | 0 | 0 | 1 | 0 | 0 | 1,223 |
31,906,949 | 2015-08-09T17:32:00.000 | 2 | 0 | 1 | 0 | 0 | qpython | 0 | 41,871,252 | 0 | 4 | 0 | false | 0 | 0 | Go to Settings -> Input method and select word-based. | 3 | 2 | 0 | 0 | Very basic question. I'm trying to use qpython. I can type things in the console but there is no obvious way to enter a return (or enter). | in qpython, how do I enter a "return" character | 0 | 0.099668 | 1 | 0 | 0 | 3,088 |
31,906,949 | 2015-08-09T17:32:00.000 | 0 | 0 | 1 | 0 | 0 | qpython | 0 | 32,237,684 | 0 | 4 | 0 | false | 0 | 0 | The console works just like a normal Python console. You can use a function if you want to write a script in the console. | 3 | 2 | 0 | 0 | Very basic question. I'm trying to use qpython. I can type things in the console but there is no obvious way to enter a return (or enter). | in qpython, how do I enter a "return" character | 0 | 0 | 1 | 0 | 0 | 3,088 |
31,906,949 | 2015-08-09T17:32:00.000 | 0 | 0 | 1 | 0 | 0 | qpython | 0 | 33,434,430 | 0 | 4 | 0 | false | 0 | 0 | There is no way of doing it.
The console will automatically insert a line break when the line of code ends, so you can continue inputting on the screen without any scroll bars.
For complex code, you should use the editor. | 3 | 2 | 0 | 0 | Very basic question. I'm trying to use qpython. I can type things in the console but there is no obvious way to enter a return (or enter). | in qpython, how do I enter a "return" character | 0 | 0 | 1 | 0 | 0 | 3,088 |
31,907,080 | 2015-08-09T17:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,algorithm,data-structures,queue | 0 | 31,908,093 | 0 | 1 | 0 | false | 0 | 0 | You can do something like this:
Start a 60-minute timer
Get the pages that people visit
Save the pages
If the timer has not ended, do steps 2-3 again; if the timer has ended:
Count which one is the most visited
Count which one is the second most visited
Etc | 1 | 0 | 0 | 0 | If there were a data structure like a container/queue based on time, I could use it this way: add items (possibly duplicates) one by one, pop out those added earlier than 60 minutes ago, count the queue, and get the top 10 most-added items over a dynamic period, say 60 minutes.
How to implement this time-based container? | python, data structures, algorithm: how to rank top 10 most visited pages in latest 60 minutes? | 0 | 0 | 1 | 0 | 1 | 347 |
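A minimal Python sketch of such a time-based container using only the standard library; visits are stored as (timestamp, page) pairs and stale entries are evicted on each query:
import time
from collections import deque, Counter

class SlidingTopPages(object):
    def __init__(self, window=3600):      # 60 minutes
        self.window = window
        self.visits = deque()             # (timestamp, page), oldest first

    def add(self, page):
        self.visits.append((time.time(), page))

    def top(self, n=10):
        cutoff = time.time() - self.window
        while self.visits and self.visits[0][0] < cutoff:
            self.visits.popleft()         # evict entries older than the window
        return Counter(p for _, p in self.visits).most_common(n)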
31,914,900 | 2015-08-10T08:26:00.000 | 0 | 0 | 1 | 0 | 0 | python,garbage-collection | 0 | 35,272,896 | 0 | 1 | 0 | false | 0 | 0 | Unless you are overriding the __del__ methods, you should not worry about circular dependencies, as Python is able to properly cope with them. | 1 | 8 | 0 | 0 | I have some python code where gc.collect() seems to free a lot of memory. Given Python's reference counting nature, I am inclined to think that my program contains a lot of cyclical references. Since some data structures are rather big, I would like to introduce weak references. Now I need to find the circular references. Having found a few of the obvious ones, I wonder whether one can explicitly detect circular references and the objects that form the cycle. So far I have only seen tutorials on how to call gc.collect et al. | How to find out which specific circular references are present in code | 0 | 0 | 1 | 0 | 0 | 909 |
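A minimal sketch of how to surface collectable cycles with the standard gc module: with DEBUG_SAVEALL set, every object the collector finds unreachable is kept in gc.garbage instead of being freed, so you can inspect what formed the cycles:
import gc

gc.set_debug(gc.DEBUG_SAVEALL)
gc.collect()                       # anything collected here was part of a cycle

for obj in gc.garbage:
    print(type(obj), repr(obj)[:80])
    # gc.get_referrers(obj) shows who still points at it

gc.set_debug(0)
del gc.garbage[:]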
31,923,606 | 2015-08-10T15:32:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7,windows-10,spyder | 0 | 32,023,483 | 0 | 1 | 0 | false | 0 | 0 | First, one correction: the problem was with starting Spyder, not with running .py or .pyw files. Anyway, things work all right now after uninstalling Spyder and Python, and reinstalling the Python(x,y) package (instead of Anaconda's). Then, when starting Spyder from the Python(x,y) start window, it behaves normally. | 1 | 0 | 0 | 0 | Can't open Spyder2 in Windows 10.0 (# 10240): the icon just appears briefly. Python 2.7.10 and Spyder 2.3.1 were loaded with Anaconda 2.3.0 (64-bit). The Python console works fine - but I can't get my *.py or *.pyw files running. There is probably some message in the Python console when attempting to open Spyder, but I don't know how to capture it. | Can't run Spyder or .py(w) scripts with Windows 10 | 0 | 0.197375 | 1 | 0 | 0 | 1,096 |
31,931,087 | 2015-08-11T00:04:00.000 | 1 | 0 | 1 | 1 | 0 | linux,python-2.7,build,compilation,mod-wsgi | 0 | 31,931,647 | 0 | 1 | 0 | false | 0 | 0 | I'll document this here as the fix, also to hopefully get a comment from Graham as to why this might be needed;
Changing
make
to
LD_RUN_PATH=/usr/local/lib make
was the answer, but i had to use this for building both python2.7.10 and mod_wsgi. Without using LD_RUN_PATH on mod_wsgi I still got the dreaded;
[warn] mod_wsgi: Compiled for Python/2.7.10.
[warn] mod_wsgi: Runtime using Python/2.7.3. | 1 | 2 | 0 | 0 | System : SMEServer 8.1 (CentOS 5.10) 64bit, system python is 2.4.3
There is an alt python at /usr/local/bin/python2.7 (2.7.3) which was built some time ago.
Goal : build python2.7.10, mod_wsgi, django. First step is python 2.7.10 to replace the (older and broken) 2.7.3
What happens:
When I build the latest 2.7 Python as shared, the wrong executable is built.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure && make && ./python -V
2.7.10 <- as expected
... but this won't work with mod_wsgi - we have to --enable-shared.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --enable-shared && make && ./python -V
2.7.3 <- Wrong version!
I'm deleting the entire build directory each time to isolate things and ensure I'm not polluting the folder with each attempt. Somehow the (years old) install of 2.7.3 is being 'found' by configure but only when '--enable-shared' is on.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --prefix=/usr/local/ && make && ./python -V
2.7.10
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --enable-shared --prefix=/usr/local/ && make && ./python -V
2.7.3 <- ???
Where do I look to find how make is finding old versions? | "make" builds wrong python version | 0 | 0.197375 | 1 | 0 | 0 | 364 |
31,932,218 | 2015-08-11T02:25:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,heroku | 0 | 31,932,292 | 0 | 2 | 0 | false | 1 | 0 | To my knowledge you cannot get a fixed IP for a Heroku application. You could create a proxy with a known IP that serves as a middleman for the application. Otherwise you might want to look at whether Heroku is still the correct solution for you. | 1 | 18 | 0 | 0 | So in my Django application, I'm running a task that will request some data in the form of JSON from an API.
In order for me to get this data, I need to give the IP address the requests are going to come from (my Heroku app);
how do I get the IP address from which my Heroku application will make requests? | what is the IP address of my heroku application | 0 | 0.291313 | 1 | 0 | 0 | 40,929 |
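A quick way to see the current outbound IP of a dyno (keeping in mind Heroku dynos have no fixed IP, so the value can change between restarts) is to ask an echo service; httpbin.org is used here as one example:
import requests

# run from the dyno: prints the outbound IP the remote API will see
print(requests.get('https://httpbin.org/ip').json()['origin'])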
31,939,714 | 2015-08-11T10:48:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,django-admin,virtualenv | 0 | 56,271,468 | 0 | 4 | 0 | false | 1 | 0 | I had the same problem. Could be related to your zsh/bash settings.
I realized that using zsh (my default) I would get django-admin version 1.11 even though the Django version was 2.1! When I tried the same thing with bash I would get django-admin version 2.1 (the correct version). Certainly a misconfiguration.
So, I strongly suggest you check your zsh or bash settings for paths you might have. | 2 | 1 | 0 | 0 | I've created a new directory, a virtualenv, and installed a django-toolbelt inside it. The Django version should be 1.8, but when I call 'django-admin.py version' it says 1.6. So when I start a new project it creates a 1.6 project. I thought virtualenv was supposed to prevent this. What am I doing wrong?
Edit: I think it has to do with the PATH (?). Like it's calling the wrong django-admin version. I'm on Windows 7. Still don't know how to fix it. | Django-admin creates wrong django version inside virtualenv | 0 | 0 | 1 | 0 | 0 | 1,969 |
31,939,714 | 2015-08-11T10:48:00.000 | 3 | 0 | 0 | 0 | 1 | python,django,django-admin,virtualenv | 0 | 47,748,881 | 0 | 4 | 0 | false | 1 | 0 | I came across this problem too. In the official documentation, I found that, in a virtual environment, if you use the command 'django-admin', it searches PATH (usually '/usr/local/bin' on Linux) to find 'django-admin.py', which may be a symlink to another version of Django. This is, in the end, the reason for what happened.
So there are two methods to solve this problem:
re-symlink your current version's django-admin (site-packages/django/bin/django-admin.py) to '/usr/local/bin/django-admin' or '/usr/local/bin/django-admin.py'
NOTE: This is a kind of global change, so it will affect your other Django projects; that's why I recommend the second method
cd to your_virtual_env/lib/python3.x/site-packages/django/bin/ (of course you should activate your virtual environment first), and then use 'python django-admin.py startproject project_name project_full_path' to create the Django project | 2 | 1 | 0 | 0 | I've created a new directory, a virtualenv, and installed a django-toolbelt inside it. The Django version should be 1.8, but when I call 'django-admin.py version' it says 1.6. So when I start a new project it creates a 1.6 project. I thought virtualenv was supposed to prevent this. What am I doing wrong?
Edit: I think it has to do with the PATH (?). Like it's calling the wrong django-admin version. I'm on Windows 7. Still don't know how to fix it. | Django-admin creates wrong django version inside virtualenv | 0 | 0.148885 | 1 | 0 | 0 | 1,969 |
31,942,911 | 2015-08-11T13:18:00.000 | 1 | 0 | 0 | 0 | 0 | python,ckan | 0 | 32,591,030 | 0 | 1 | 0 | true | 0 | 0 | I'm not aware of any extensions that do this.
You could write one to add this info in a dataset extra field. You may wish to store it as JSON and record the ratings given by each user.
Alternatively you could try the rating_create API function - this is old functionality which has no UI, but it may just do what you want. | 1 | 0 | 1 | 0 | I have used ckanext-qa, but it seems it doesn't meet my requirement. I am looking for an extension that lets logged-in users rate each dataset on CKAN from 1 to 5.
Does anybody have an idea how to do that? | How to show star rating on ckan for datasets | 0 | 1.2 | 1 | 0 | 0 | 261 |
32,015,987 | 2015-08-14T17:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,pycharm,ipython-notebook | 1 | 32,101,098 | 0 | 1 | 0 | true | 0 | 0 | I spoke to JetBrains support. This is a known issue with Python 2.7.9 via Anaconda (maybe just with Anaconda in general, they did not specify). They said it will be fixed with the next release, which should be coming out in the next few days. | 1 | 0 | 0 | 0 | This is really silly, but it's driving me nuts!
Normally when I run ipython notebook through pycharm, the first time I click on the 'play' button to run a cell, PyCharm asks me if I want to start the kernel. When I say yes, it gives me a nice kernel window that shows me output from commands and errors.
I really like this feature for debugging, but somehow it went away. PyCharm no longer asks if I would like to start the kernel, and I no longer can find the kernel window. My notebook is still running just fine, so the ipython kernel must be started somewhere.
Can someone please tell me how to view the kernel window?
Thanks so much! | PyCharm: lost ipython kernel window | 0 | 1.2 | 1 | 0 | 0 | 371 |
32,027,621 | 2015-08-15T17:57:00.000 | 3 | 1 | 1 | 0 | 0 | python,documentation | 0 | 32,031,704 | 0 | 1 | 0 | true | 0 | 0 | When I do similar things, if it is a small class I will put everything in the same class, but if it is bigger, I typically make a class that only contains the fields, and then a subclass of that with functions. Then you can have a docstring for your fields class and a separate docstring for your simulation functions.
YMMV, but I would never consider adding getters and setters for the sole purpose of making the documentation conform to some real or imaginary ideal. | 1 | 1 | 0 | 0 | I am working on a python module that is a convenience wrapper for a c library. A python Simulation class is simply a ctypes structure with a few additional helper functions. Most parameters in Simulation can just be set using the _fields_ variable. I'm wondering how to document these properly. Should I just add it to the Simulation docstring? Or should I write getter/setter methods just so I can document the variables? | Documenting ctypes fields | 1 | 1.2 | 1 | 0 | 0 | 124 |
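A minimal sketch of the fields/functions split the answer describes, using ctypes; the field names, docstrings, and the _lib handle for the loaded C library are all hypothetical:
import ctypes

class SimulationFields(ctypes.Structure):
    """Raw C fields. dt: timestep in seconds; n: number of bodies."""
    _fields_ = [('dt', ctypes.c_double),
                ('n', ctypes.c_int)]

class Simulation(SimulationFields):
    """Convenience wrapper: helpers live here, fields are documented above."""
    def step(self, steps=1):
        """Advance the simulation by `steps` timesteps (calls into the C library)."""
        _lib.sim_step(ctypes.byref(self), steps)   # _lib: the loaded C library (assumed)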
32,059,711 | 2015-08-17T21:06:00.000 | 0 | 0 | 0 | 0 | 0 | python,scipy,hierarchical-clustering | 0 | 70,254,029 | 0 | 2 | 0 | false | 0 | 0 | The parameter 'method' is used to measure the similarities between clusters through the hierarchical clustering process. The parameter 'metric' is used to measure the distance between two objects in the dataset.
The 'metric' is closely related to the nature of the data (e.g., you could want to use 'euclidean' distance for objects with the same number of features, or Dynamic Time Warping for time series with different durations).
The thing is that there are two ways of using the linkage function. The first parameter is y, and it can be either the data itself or a distance matrix produced previously by a given measure.
If you choose to feed 'linkage' with a distance matrix, then you won't need the 'metric' parameter, because you have already calculated all the distances between all objects. | 1 | 3 | 1 | 0 | There is one distance function I can pass to pdist to create the distance matrix that is given to linkage. There is a second distance function that I can pass to linkage as the metric.
Why are there two possible distance functions?
If they are different, how are they used? For instance, does linkage use the distances in the distance matrix for its initial iterations, i.e. to see if any two original observations should be combined into a cluster, and then use the metric function for further combinations, i.e. of two clusters or of a cluster with an original observation? | In scipy, what's the point of the two different distance functions used in hierarchical clustering? | 1 | 0 | 1 | 0 | 0 | 206 |
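A minimal sketch showing the two equivalent call styles; with a precomputed condensed distance matrix the metric argument is simply unused:
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

X = np.random.rand(10, 3)

Z1 = linkage(X, method='average', metric='cityblock')   # linkage runs pdist itself
Z2 = linkage(pdist(X, 'cityblock'), method='average')   # same result, precomputed distances
print(np.allclose(Z1, Z2))                              # True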
32,064,356 | 2015-08-18T05:41:00.000 | 5 | 0 | 1 | 0 | 0 | python,decimal,division,modulus | 0 | 32,065,934 | 0 | 2 | 0 | false | 0 | 0 | The tried and true method here is to multiply your divisor and dividend by a power of 10. Effectively, 54.10 becomes 541 and 0.10 becomes 1. Then you can use standard modulo or ceiling and floor to achieve what you need. | 1 | 1 | 0 | 0 | I'm trying to test if a float, e.g. 54.10, is divisible by 0.10. 54.10 % .10 returns .10 and not 0; why is that, and how can I get it to do what I want it to do? | how to test if a number is divisible by a decimal less than 1? (54.10 % .10) | 0 | 0.462117 | 1 | 0 | 0 | 557 |
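A minimal sketch of the scale-by-a-power-of-10 idea from the answer, plus the standard decimal module for comparison:
from decimal import Decimal

# scale both numbers by 10 so the arithmetic is exact integer math
print(int(round(54.10 * 10)) % int(round(0.10 * 10)))   # 541 % 1 == 0

# or keep the values exact from the start with the decimal module
print(Decimal('54.10') % Decimal('0.10'))               # Decimal('0.00')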
32,100,003 | 2015-08-19T15:38:00.000 | 0 | 1 | 0 | 1 | 0 | python-2.7 | 0 | 32,107,551 | 0 | 1 | 0 | false | 0 | 0 | Unless you are significantly compressing before download, and decompressing the image after download, the problem is your 115,200 baud transfer rate, not the speed of reading from a file.
At the standard N/8/1 line encoding, each byte requires 10 bits to transfer, so you will be transferring 11,520 bytes per second.
In 10 minutes, you will transfer 11,520 * 60 * 10 = 6,912,000 bytes. At 3 bytes per pixel (for R, G, and B), this is 2,304,000 pixels, which happens to be the number of pixels in a 1920 by 1200 image.
The answer is to (a) increase the baud rate; and/or (b) compress your image (using something simple to decompress on the FPGA like RLE, if it is amenable to that sort of compression). | 1 | 0 | 0 | 0 | I have an FPGA board and I wrote VHDL code that can get images (in binary) from a serial port and save them in an SDRAM on my board. The FPGA then displays the images on a monitor via a VGA cable. My problem is that filling the SDRAM takes too long (about 10 minutes with a 115200 baud rate).
On my computer I wrote Python code to send images (in binary) to the FPGA via the serial port. My code reads a binary file saved on my hard disk and sends it to the FPGA.
My question is: if I use a buffer to save my images instead of a binary file, do I get a better result? If so, can you help me with how to do that, please? If not, can you suggest a solution, please?
Thanks in advance, | IS reading from buffer quicker than reading from a file in python | 0 | 0 | 1 | 0 | 0 | 96 |
32,109,319 | 2015-08-20T03:58:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,machine-learning,neural-network | 0 | 62,125,080 | 0 | 9 | 0 | false | 0 | 0 | ReLU(x) is also equal to (x + abs(x)) / 2 | 1 | 91 | 1 | 0 | I want to make a simple neural network which uses the ReLU function. Can someone give me a clue about how I can implement the function using numpy. | How to implement the ReLU function in Numpy | 0 | 0.022219 | 1 | 0 | 0 | 165,706 |
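A few equivalent NumPy one-liners, including the identity from the answer:
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])

relu1 = np.maximum(0, x)       # the usual formulation
relu2 = (x + np.abs(x)) / 2    # the identity above
relu3 = x * (x > 0)            # mask trick
print(relu1, relu2, relu3)     # all give [0. 0. 0. 1.5]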
32,146,943 | 2015-08-21T18:19:00.000 | 0 | 0 | 1 | 0 | 0 | python,printing,wxpython,receipt | 0 | 32,464,557 | 0 | 1 | 0 | true | 0 | 0 | Well, I figured out some sort of solution:
receipt printing is impossible with wxPython, so raw printing with escape sequences would be the better option
os.system("echo ' some text ' | lpr -o raw" )
first initialize printer
os.system("echo ' \x1B\x40' | lpr -o raw" )
for bold letters with ESC code :
os.system("echo ' \x1BE some text \x1BF ' | lpr -o raw" )
for double width :
os.system("echo ' \x1BW\01 some text ' | lpr -o raw" )
for underline
os.system("echo ' \x1B\035 some text \x1B\034' | lpr -o raw" )
and many more options can be used with ESC codes | 1 | 0 | 0 | 0 | I have prepared a small program for a retail shop, and have to print out receipts (using a TVS MSP Star 240 dot matrix printer with a paper roll).
I use the wx.Printout() class for printing; the print preview is OK, but the actual printing is different and awkward:
1. I'm using a paper roll and don't know how to call end printing / OnEndPrinting() / cut the paper?
2. How do I correct the text shape, or which font should I use for the actual printout?
I'm new to programming.
Please help and suggest appropriate code for this.
Thanks in advance!! | Printout for receipt printer | 0 | 1.2 | 1 | 0 | 0 | 1,031 |
32,172,766 | 2015-08-23T23:58:00.000 | 0 | 1 | 0 | 1 | 0 | c#,ironpython,keil | 1 | 32,383,582 | 0 | 1 | 0 | false | 0 | 0 | It could be a problem with CR/LF line endings.
A binary diff of the parsed and newly created files would be helpful. You could get more help if you post a few lines of a binary diff here. | 1 | 0 | 0 | 0 | I need to change some settings in a Keil uVision project. I did not find a way to disable/enable project options through the command line.
So I tried to do this by simply parsing the .uvproj and .uvopt files with System.Xml in IronPython:
import clr
clr.AddReference('System.Xml')
import System.Xml  # import the namespace after the assembly reference is added
xml_file = System.Xml.XmlDocument()
xml_file.Load(PATH_TO_UVPROJ_FILE)
xml_file.Save(PATH_TO_UVPROJ_FILE)
The problem is that I can't open the parsed .uvproj file in uVision (I get the error "Cannot read project file").
If I copy all the text from the parsed .uvproj and paste it into a newly created file (New -> Text Document in Windows Explorer -> rename the extension to .uvproj -> paste the copied text -> save the file), uVision opens it without error.
Why does this happen? | IronPython: Can't open Keil uVision .uvproj file edited with System.Xml | 0 | 0 | 1 | 0 | 0 | 1,224 |
32,188,979 | 2015-08-24T18:14:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-apps | 0 | 32,213,209 | 0 | 1 | 0 | true | 1 | 0 | My suggestion is to create a third model, called ArtEvent, and make this model point to Art and Event; this way you can create a specific app to manage events and then link everything. For example, when creating a new ArtEvent you redirect the user to the Event app so they can create a new event. Then you redirect back to the Art app with the created event, create a new ArtEvent, and link those objects.
In the future, suppose you want to add events to another model, like User; if you follow the same strategy you can separate what is UserEvent-specific, and maintain what is common between ArtEvent and UserEvent. | 1 | 0 | 0 | 0 | I am implementing a project using Django. It's a site where people can view different Art courses and register. I am having trouble implementing the project as reusable applications. I already have a standalone app which takes care of all the aspects of Arts. Now I want to create another application where an admin creates various events for the Arts in the system. Conceptually these two should be standalone apps. Event scheduling is a pretty general use case and I want to implement it in a way where it can be used for scheduling any kind of Event.
In my case, those Events are Art-related events. I don't want to put a foreign key to the Art model in my Event model. How can I make it reusable so that it would work for scheduling Events related to any kind of object? | regarding Django philosophy of implementing project as reusable applications | 0 | 1.2 | 1 | 0 | 0 | 49 |
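A minimal sketch of the intermediate model the answer suggests, assuming Django 1.x-style models; the 'arts'/'events' app labels are hypothetical, and the string references keep the events app free of a hard import of Art:
from django.db import models

class ArtEvent(models.Model):
    # string references keep the events app decoupled from a concrete Art model
    art = models.ForeignKey('arts.Art')       # hypothetical app labels
    event = models.ForeignKey('events.Event')

    class Meta:
        unique_together = ('art', 'event')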
32,209,554 | 2015-08-25T16:41:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,command-prompt,enthought | 0 | 43,220,221 | 0 | 3 | 0 | false | 0 | 0 | After editing each path and creating a new variable for each Python version, be sure to rename python.exe to a unique name, e.g. "python3x"; then you can call it on the command line as "python3x". I am assuming that the original Python installed (2.x) retains its python.exe, so that when you call "python" on the command line, it will run the 2.x version | 2 | 0 | 0 | 0 | I have uninstalled Python 2.7 and installed Python 3. But, when I type Python on my command prompt I get this :
"Enthought Canopy Python 2.7.9 ........."
How can I run Python 3 from command line or how can I make it default on my computer? I asked Enthought Canopy help and I was told that I can "have Canopy be your default Python only in a "Canopy Command Prompt". Not sure what it means.
edit : Thanks everyone. As suggested, I had to uninstall everything and install Python again. | How to make Python 3 my default Python at command prompt? | 1 | 0 | 1 | 0 | 0 | 4,199 |
32,209,554 | 2015-08-25T16:41:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,command-prompt,enthought | 0 | 61,764,881 | 0 | 3 | 0 | false | 0 | 0 | You can copy python.exe to python3.exe.
If you are using Anaconda, then you will find it in the subdirectory of your environment, for instance, c:\Anaconda\envs\myenvironment. | 2 | 0 | 0 | 0 | I have uninstalled Python 2.7 and installed Python 3. But, when I type Python on my command prompt I get this :
"Enthought Canopy Python 2.7.9 ........."
How can I run Python 3 from command line or how can I make it default on my computer? I asked Enthought Canopy help and I was told that I can "have Canopy be your default Python only in a "Canopy Command Prompt". Not sure what it means.
edit : Thanks everyone. As suggested, I had to uninstall everything and install Python again. | How to make Python 3 my default Python at command prompt? | 1 | 0 | 1 | 0 | 0 | 4,199 |
32,230,294 | 2015-08-26T15:07:00.000 | 1 | 1 | 0 | 0 | 0 | python,jira | 0 | 32,234,002 | 0 | 2 | 0 | false | 1 | 0 | Take a look at JIRA webhooks calling a small Python-based web server. | 1 | 1 | 0 | 0 | Let's say I'm creating an issue in Jira and write the summary and the description. Is it possible to call a python script after these are written that sets the value for another field, depending on the values of the summary and the description?
I know how to create an issue and change fields from a Python script using the jira-python module. But I have not found a solution for running a Python script while editing/creating the issue manually in Jira. Does anyone have an idea of how to manage that? | Call python script from Jira while creating an issue | 1 | 0.099668 | 1 | 0 | 0 | 1,311 |
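A minimal sketch of the small web server the answer suggests, using Flask; the /jira-hook route is arbitrary, and the payload keys follow JIRA's webhook format but should be verified against your JIRA version:
from flask import Flask, request

app = Flask(__name__)

@app.route('/jira-hook', methods=['POST'])
def jira_hook():
    event = request.get_json()
    fields = event.get('issue', {}).get('fields', {})
    # inspect summary/description, then update the issue via jira-python here
    print(fields.get('summary'), fields.get('description'))
    return '', 200

if __name__ == '__main__':
    app.run(port=5000)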
32,245,227 | 2015-08-27T09:13:00.000 | 0 | 0 | 0 | 1 | 0 | python,websocket,tornado | 0 | 32,245,768 | 0 | 1 | 0 | false | 1 | 0 | The on_close event can only be triggered when the connection is closed.
You can send a ping and wait for an on_pong event.
Timeouts are typically hard to detect since you won't even get a message that the socket is closed. | 1 | 0 | 0 | 0 | I'm running a Python Tornado server with a WebSocket handler.
We've noticed that if we abruptly disconnect a client (disconnect a cable, for example) the server has no indication the connection was broken. No on_close event is raised.
Is there a workaround?
I've read there's an option to send a ping, but didn't see anyone use it in the examples online and not sure how to use it and if it will address this issue. | Tornado websocket pings | 0 | 0 | 1 | 0 | 1 | 1,215 |
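A minimal sketch of the ping/pong approach in a Tornado WebSocketHandler; the 30-second interval is an assumption, and a missed pong is treated as a dead connection:
import tornado.ioloop
import tornado.websocket

class PingingHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.alive = True
        # ping every 30s; a missed pong marks the client dead
        self.pinger = tornado.ioloop.PeriodicCallback(self.do_ping, 30000)
        self.pinger.start()

    def do_ping(self):
        if not self.alive:
            self.pinger.stop()
            self.close()        # forces on_close even after a silent drop
            return
        self.alive = False
        self.ping(b'keepalive')

    def on_pong(self, data):
        self.alive = True

    def on_close(self):
        if hasattr(self, 'pinger'):
            self.pinger.stop()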