Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths, 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths, 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths, 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths, 15 to 29k) | Title (stringlengths, 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
22,639,768 | 2014-03-25T15:53:00.000 | 0 | 1 | 0 | 1 | python,linux,bash | 22,640,013 | 1 | false | 0 | 0 | The submitted script is most likely using the system Python installation and not your own. To confirm, try submitting a shell script with only one command: 'which python'.
The fix is to prepend the path to your Python interpreter to your system PATH. On my machine, the right Python is installed at /Users/mbatchkarov/anaconda/bin/python, so I added export PATH="/Users/mbatchkarov/anaconda/bin:$PATH" to ~/.bash_profile.
EDIT: Add the same line to ~/.bashrc. | 1 | 1 | 0 | I have an account on a computing cluster that uses Scientific Linux. Of course I only have user access. I'm working with Python and I need to run Python scripts, so I need to import some Python modules. Since I don't have root access, I installed a local Python copy in my $HOME with all the required modules. When I run the scripts on my account (the hosting node), they run correctly. But in order to submit jobs to the computing queues (to process on much faster machines), I need to submit a bash script that has a line that executes the scripts. The computing cluster uses Sun Grid Engine. However, when I submit the bash script, I get an error that the modules I installed can't be found!
So my understanding of the problem is that the modules are somehow not available on the machine that executes the script. My question is: is it possible to include all the modules in the script, or something similar?
EDIT: I just created a bash script that runs 'which python' and I noticed that the output was NOT my Python copy. But when I run 'which python' in my ssh session, I get my Python copy correctly. | Loading python modules through a computing cluster | 0 | 0 | 0 | 109 |
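A minimal sketch of the kind of submission script the accepted fix implies, assuming an SGE-style job script; all paths are placeholders for your local install:

```bash
#!/bin/bash
# Hypothetical SGE job script: prepend the locally installed Python to PATH
# so the job uses it instead of the system interpreter (paths are placeholders).
export PATH="$HOME/anaconda/bin:$PATH"
which python                      # sanity check: should print the local copy
python "$HOME/scripts/my_script.py"
```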
22,647,454 | 2014-03-25T22:16:00.000 | 0 | 0 | 0 | 1 | java,php,python,google-app-engine,go | 22,649,967 | 1 | true | 0 | 0 | A query will be strongly consistent only if it is an ancestor query. Otherwise it is not strongly consistent, even if the index contains key entries.
This is because writes are applied in two phases: one to write your data and another to update the indexes. A get by key never uses an index, so it is always correct.
I assume that you are generating semi-sequential keys, otherwise a query by key wouldn't be useful. Beware, however, that App Engine now recommends spreading your keys so they cover a large space and are thus better distributed in Bigtable. | 1 | 2 | 0 | I know that, in general, queries on the GAE datastore are eventually consistent. However, I don't see why queries on __key__ should not be strongly consistent, as I presume this is what the datastore Get function uses.
Can anyone confirm querying by __key__ is strongly consistent? | Are queries on the "__key__" property strongly consistent with GAE datastore? | 1.2 | 0 | 0 | 128 |
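A small sketch of the distinction drawn in the answer above, using the old GAE Python db API; the Task model and the numeric id are hypothetical:

```python
from google.appengine.ext import db

class Task(db.Model):
    name = db.StringProperty()

k = db.Key.from_path('Task', 12345)

entity = db.get(k)  # lookup by key: strongly consistent

# A query on __key__ is still a query: it runs against an index and is
# therefore only eventually consistent (unless it is an ancestor query).
results = list(db.GqlQuery("SELECT * FROM Task WHERE __key__ = :1", k))
```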
22,650,764 | 2014-03-26T03:10:00.000 | 0 | 0 | 0 | 0 | python,ftp,tar,tarfile | 22,652,307 | 1 | true | 0 | 0 | This answer is not specific to Python, because the problem is not specific to Python: in theory you can read just the part of the tar file where your data are. With FTP (and also with Python's ftplib) this is possible by first issuing a REST command to specify the start position in the file, then RETR to start the download, and once you have received the amount of data you need you can close the data connection.
But tar is a file format without a central index; each file in a tar archive is prefixed with a small header carrying its name, size and other metadata. So to get a specific file you must read the first header, check whether it is the matching file, and if it is not, skip over the size of the unwanted file and try the next one. With lots of small files in the tar this will be less efficient than downloading the complete file (or at least downloading up to the relevant part; you might parse the file while downloading), because all these new data connections for each read cause lots of overhead. But if you have large files in the tar this might work.
But you are completely out of luck if it is not a TAR (*.tar) but a TGZ (*.tgz or *.tar.gz) file. These are compressed tar files, and to get any part of one you would need to decompress everything that comes before it. So in that case there is no way around downloading the file, or at least everything up to the relevant part. | 1 | 0 | 0 | I have an FTP server that contains all of my tar files. The tar files can be 500 MB or more, there are many of them, and all I need to do is get a single file out of a tar archive that contains multiple files.
My initial idea was to download each tar file and extract the single file I need, but that seems inefficient.
I'm using Python as the programming language. | Python: Get single file in a TAR from FTP | 1.2 | 0 | 1 | 981 |
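A rough sketch of the REST-based approach from the accepted answer, for plain .tar archives only (not .tar.gz). The host, credentials and file name are placeholders, and Python 2's tarfile.TarInfo.frombuf(buf) signature is assumed; a robust version would also consume the server's reply for the aborted transfer:

```python
import ftplib
import tarfile

ftp = ftplib.FTP('ftp.example.com', 'user', 'password')

def read_block(name, offset, size):
    # Open a data connection starting at byte `offset` (REST), read `size`
    # bytes, then drop the connection.
    conn = ftp.transfercmd('RETR ' + name, rest=offset)
    data = b''
    while len(data) < size:
        chunk = conn.recv(size - len(data))
        if not chunk:
            break
        data += chunk
    conn.close()
    return data

header = read_block('archive.tar', 0, 512)   # first member's 512-byte header
info = tarfile.TarInfo.frombuf(header)
print(info.name)                             # decide whether this is the file you need

# The next header starts after the member's data, padded to a 512-byte boundary:
next_offset = 512 + 512 * ((info.size + 511) // 512)
```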
22,652,367 | 2014-03-25T16:01:00.000 | 1 | 1 | 0 | 1 | linux,python,distributed-computing | 22,656,636 | 3 | false | 0 | 0 | You could simply call your Python program from the bash script with something like: PYTHONPATH=$HOME/lib/python /path/to/my/python my_python_script
I don't know how Sun Grid Engine works, but if it uses a different user than yours, you'll need global read access to your $HOME, or at least to the Python libraries. | 1 | 5 | 0 | I have an account on a computing cluster that uses Scientific Linux. Of course I only have user access. I'm working with Python and I need to run Python scripts, so I need to import some Python modules. Since I don't have root access, I installed a local Python copy in my $HOME with all the required modules. When I run the scripts on my account (the hosting node), they run correctly. But in order to submit jobs to the computing queues (to process on much faster machines), I need to submit a bash script that has a line that executes the scripts. The computing cluster uses Sun Grid Engine. However, when I submit the bash script, I get an error that the modules I installed can't be found! I can't figure out what is wrong. I hope you can help. | Loading python modules through a computing cluster | 0.066568 | 0 | 0 | 2,305 |
22,653,330 | 2014-03-26T06:38:00.000 | 0 | 0 | 0 | 1 | python,django,terminal | 22,659,787 | 1 | false | 0 | 0 | The most secure way to execute functionality on a remote machine is to use the subprocess module to open an ssh session on the remote site; remember to set up ssh on both machines, firewalls, etc. When you start ssh, you tell it which command to execute and the parameters for that command.
I wouldn't open up a general terminal session that you type commands into; that sounds like a really bad idea.
I am not going to include code here, for obvious reasons. | 1 | 0 | 0 | I am doing a project in Django. I want to invoke a terminal on another machine to execute code. How can I pass the arguments to that terminal using Python? | how to invoke a terminal/powershell in a remote machine using python | 0 | 0 | 0 | 103 |
22,655,420 | 2014-03-26T08:39:00.000 | 0 | 0 | 0 | 0 | python,django,hosting | 22,655,516 | 3 | false | 1 | 0 | Hosting yourself can be cheaper, but you will have to spend some time maintaining the system to keep it safe. Choosing a service may be a bit more expensive, but you don't have to deal with the system itself.
Choose what suits you best. | 1 | 0 | 0 | I am looking for options on places to host a Django site.
Should I find a service that already has the proper programs and dependencies installed?
Or can I gain access to a server and install them myself? | Good places to deploy a simple Django website | 0 | 0 | 0 | 125 |
22,655,420 | 2014-03-26T08:39:00.000 | 1 | 0 | 0 | 0 | python,django,hosting | 22,655,606 | 3 | false | 1 | 0 | Webfaction
Heroku
Google App Engine
AWS Elastic Beanstalk
Windows Azure
But it is cheaper to do it yourself. VPSs these days are quite cheap (digitalocean.com, $5/month). An easy-to-manage combination: Ubuntu + Nginx + Gunicorn; follow some tutorials about how to secure and update your VPS. | 2 | 0 | 0 | I am looking for options on places to host a Django site.
Should I find a service that already has the proper programs and dependencies installed?
Or can I gain access to a server and install them myself? | Good places to deploy a simple Django website | 0.066568 | 0 | 0 | 125 |
22,656,024 | 2014-03-26T09:07:00.000 | 3 | 1 | 1 | 0 | python,access-modifiers | 56,082,181 | 3 | false | 0 | 0 | What difference do access modifiers in C# and Java make? If I have the source code, I can simply change the access from private to public if I want to reach a member variable. It is only when I have a compiled library that access modifiers can't be changed, and perhaps that is where they provide some useful functionality in restricting the API. However, Python is not shared as compiled libraries, so sharing libraries necessitates sharing the source code. Thus, until someone creates a Python compiler, access modifiers would not really achieve anything. | 1 | 15 | 0 | Why does Python not have access modifiers like C# and Java, i.e. public, private, etc.? What are the alternative ways of encapsulation and information hiding in Python? | why python does not have access modifier?And what are there alternatives in python? | 0.197375 | 0 | 0 | 7,428 |
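For the "alternatives" half of the question, a short illustration (not from the answer above) of the naming conventions Python uses instead of access modifiers; the Account class is a made-up example:

```python
class Account(object):
    def __init__(self):
        self.balance = 0       # public
        self._rate = 0.05      # single underscore: "internal, please don't touch"
        self.__pin = 1234      # double underscore: name-mangled to _Account__pin

a = Account()
print(a._rate)                 # still accessible: it is only a convention
print(a._Account__pin)         # even mangling is an obstacle, not real security
```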
22,660,190 | 2014-03-26T11:53:00.000 | 2 | 0 | 0 | 0 | python,django,django-models,django-admin | 22,660,278 | 1 | true | 1 | 0 | I suggest you do something like what I am doing below.
The Django admin lets you define a method named after a field that you have put in list_display.
In that method you override the content returned for that field, like below.
class AAdmin(admin.ModelAdmin):
    list_display = ('id', 'email_settings')

    def email_settings(self, obj):
        # Link the column to the related EmailSetting's admin page.
        return '<a href="%s">%s</a>' % ('/admin/core/emailsetting/?id=' + str(obj.email_setting.id), obj.email_setting.id)
    email_settings.allow_tags = True
    email_settings.short_description = "Email Setting Link"
Here you can see the URL is hardcoded.
You can use _meta to get the app label and model name.
Example: obj._meta.app_label | 1 | 0 | 0 | I want to implement something like this:
I have a model A admin with a status field which is a link to the model B admin.
Now, when I click the column in a row that links to the model B admin, it goes to the model B admin (which it currently does), but it should display only the single model B record I clicked, out of all the records.
Model A contains a foreign key to model B's record, and that is the record which should be displayed in the admin view. | Display single record in django modeladmin | 1.2 | 0 | 0 | 166 |
22,662,456 | 2014-03-26T13:26:00.000 | 1 | 0 | 0 | 0 | python,database,django,sqlite,postgresql | 22,664,876 | 2 | false | 1 | 0 | Use PostgreSQL. Our team worked with sqlite3 for a long time; however, when you import data into the db, it often gives the message 'database is locked!'
The advantages of sqlite3:
it is small and, as you put it, no server setup is needed
max_length in models.py is not strictly enforced: if you set max_length=10 and put 100 chars in the field, sqlite3 never complains about it and never truncates it.
But PostgreSQL is faster than sqlite3, and if you use sqlite3,
some day you may want to migrate to PostgreSQL. That matters because sqlite3 never truncates strings, but PostgreSQL complains about them! | 1 | 1 | 0 | I am working on a Python/Django application. The core logic rests in a Python application, and the web UI is taken care of by Django. I am planning on using ZMQ for communication between the core and UI apps.
I am using a time-series database, which uses PostgreSQL in the background to store string data, and another time-series tool to store time-series data. So I already have a PostgreSQL requirement as part of the time-series db. But I need another db to store data (other than time-series), and I started work using PostgreSQL.
sqlite3 was a suggestion from one of my team members. I have not worked on either, and I understand there are pros and cons to each one of them, I would like to understand the primary differences between the two databases in question here, and the usage scenarios. | Database engine choice for Django/ Python application | 0.099668 | 1 | 0 | 759 |
22,666,642 | 2014-03-26T16:01:00.000 | 0 | 0 | 1 | 0 | django,python-2.7,django-forms | 22,666,767 | 1 | false | 1 | 0 | request.POST is a dictionary-like object. When it is not empty, it evaluates as True.
request.method == 'POST' (note the upper case for POST and the double equals sign ==) checks the method.
I believe that you wrote request.method = 'post', which is clearly not what you meant. | 1 | 0 | 0 | I am adding data from a ModelForm to the db, but "if request.POST" returns a true, and "if request.method = 'post'" returns a false. How can that be? From what I understand it is supposed to work the other way around. | request.method == "post" returns a false, but | 0 | 0 | 0 | 164 |
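A minimal sketch of the distinction the answer makes, in a hypothetical Django view:

```python
def my_view(request):
    if request.method == 'POST':   # checks the HTTP method; the string is upper-case
        pass                       # handle the submitted form here
    if request.POST:               # truthiness: True only when the POST data is non-empty
        pass
```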
22,668,907 | 2014-03-26T17:37:00.000 | 0 | 0 | 0 | 0 | python,telnet | 22,669,160 | 3 | false | 0 | 0 | Have you looked into using expect (there should be a python binding); basically, what I think you want to do is:
From your python script, use telnetlib to connect to server A (pass in username/password).
Within this "socket", send the remaining commands, e.g. "telnet serverB" and use expect (or some other mechanism) to check that you get back the expected "User:" prompt; if so, send user and then password and then whatever commands, and otherwise handle errors.
This should be very much doable and is fairly common with older stuff that doesn't support a cleaner API. | 2 | 2 | 0 | Is it possible to telnet to a server and from there telnet to another server in python?
There is a controller which I telnet into using a username and password, and from the controller command line I need to log in as root to run Linux commands. How would I do that using Python?
I use telnetlib to log into the router controller, but from the router controller I need to log in again to get into a shell. Is this possible using Python?
Thanks! | Telnet from telnet session in python | 0 | 0 | 1 | 1,456 |
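A rough sketch of the nested-login flow the answer describes, using telnetlib; host names, prompt strings and credentials are all placeholders that must match your real devices:

```python
import telnetlib

tn = telnetlib.Telnet('controller.example.com')
tn.read_until('User:')
tn.write('admin\n')
tn.read_until('Password:')
tn.write('secret\n')

# From inside the controller's shell, start the second telnet session.
tn.write('telnet 192.168.0.2\n')
tn.read_until('User:')        # the inner device's login prompt
tn.write('root\n')
tn.read_until('Password:')
tn.write('rootpass\n')

tn.write('uname -a\n')        # now running commands on the inner host
print(tn.read_until('$ '))    # read up to the inner shell prompt
```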
22,668,907 | 2014-03-26T17:37:00.000 | 1 | 0 | 0 | 0 | python,telnet | 22,669,249 | 3 | false | 0 | 0 | Just checked it with the hardware I have in hand & telnetlib. Saw no problem.
When you are connected to the first device, just send all the necessary commands using telnet.write('cmd'). It may be 'sudo su\n', 'telnet 192.168.0.2\n' or whatever else. telnetlib keeps track only of its own telnet connection; all secondary connections are handled by the corresponding controllers. | 2 | 2 | 0 | Is it possible to telnet to a server and from there telnet to another server in python?
There is a controller which I telnet into using a username and password, and from the controller command line I need to log in as root to run Linux commands. How would I do that using Python?
I use telnetlib to log into the router controller, but from the router controller I need to log in again to get into a shell. Is this possible using Python?
Thanks! | Telnet from telnet session in python | 0.066568 | 0 | 1 | 1,456 |
22,674,128 | 2014-03-26T22:12:00.000 | 5 | 0 | 0 | 0 | python,django,postgresql,heroku | 22,693,845 | 1 | true | 1 | 0 | Have you set your DJANGO_SETTINGS_MODULE environment variable? I believe what is happening is this: by default Django is using your local.py settings, which is why it's trying to connect on localhost.
To make Django detect and use your production.py settings, you need to do the following:
heroku config:set DJANGO_SETTINGS_MODULE=settings.production
This will make Django load your production.py settings when you're on Heroku :) | 1 | 3 | 0 | I'm making a Django app with the Two Scoops of Django template. Getting this Heroku error, are my Postgres production settings off?
OperationalError at /
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
Exception Location: /app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py
foreman start works fine
Procfile: web: python www_dev/manage.py runserver 0.0.0.0:$PORT --noreload
local.py settings:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'www',
        'USER': 'amyrlam',
        'PASSWORD': '*',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
production.py settings: commented out local settings from above, added standard Heroku Django stuff:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
ALLOWED_HOSTS = ['*']
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
STATIC_ROOT = 'staticfiles'
STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
UPDATE: production settings, tried changing:
import dj_database_url
DATABASES['default'] = dj_database_url.config(default=os.environ["DATABASE_URL"])
(named my Heroku color URL to DATABASE_URL, same link in heroku config) | Can't get Django/Postgres app settings working on Heroku | 1.2 | 1 | 0 | 1,923 |
22,675,084 | 2014-03-26T23:18:00.000 | 6 | 0 | 0 | 0 | python,flask,session-variables | 22,688,640 | 1 | true | 1 | 0 | The problem was that I had made the key static in my init, which caused it to work in dev, but in production the .wsgi was still using a dynamic key. I have changed this and all seems to be working now. | 1 | 5 | 0 | I have recently deployed my first Flask application (my first web application ever, actually); one problem I am running into and haven't had luck tracking down is related to sessions.
What I am doing is when the user logs in I set session['user'] = user_id and what is happening is I occasionally get a key error when making a request involving that session key. If I try to make the request again the session key is there and the request works fine. I have done research and set the app.config['SERVER_NAME'] to my domain and made sure the secret_key was static, it was dynamic before.
This does not happen on my local development server, so I am a bit stumped at this point. | Flask Session will not Persist | 1.2 | 0 | 0 | 2,289 |
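A sketch of the fix described in the accepted answer: the secret key must be one fixed value shared by every worker process, not regenerated at import time (the literal key below is a placeholder):

```python
from flask import Flask

app = Flask(__name__)

# Wrong: a fresh key per process/restart invalidates existing session cookies.
# app.secret_key = os.urandom(24)

# Right: one static value, e.g. loaded from config or an environment variable.
app.secret_key = 'replace-with-a-long-random-constant'
```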
22,677,222 | 2014-03-27T02:34:00.000 | 4 | 0 | 1 | 0 | python,git,visual-studio,azure,azure-devops | 22,678,022 | 1 | false | 0 | 0 | With VSO there isn't a 1-1 relationship between projects and repositories; a project may contain more than 1 repo. I would suggest a single team project with multiple repositories would work better for you. If you want to logically contain work items to align with the repos, I'd use area paths. | 1 | 2 | 0 | Previously I have been using Bitbucket with git; and I might have a dozen or so repositories for each project.
Python modules work especially well with this layout, as you can add each dependent module to requirements.txt; it also reduces merge issues when working in a team and helps with decoupling and increasing cohesion.
I am starting to use Visual Studio Online with Azure (still git, though), and I was wondering if this approach to software engineering is still viable.
Would you recommend setting up a "Visual Studio Online Project" for each of my Python modules? | Multiple teams for one project in Visual Studio - bad practice? | 0.664037 | 0 | 0 | 360 |
22,679,857 | 2014-03-27T06:26:00.000 | 0 | 0 | 0 | 1 | performance,python-2.7,32-bit,windows-server-2012 | 22,773,695 | 1 | true | 1 | 0 | The issue was in the 32-bit version of Python shipped with Zoo.
Installing a 64-bit version and modifying the Zoo engine to use it has boosted things significantly. | 1 | 0 | 0 | We migrated to Helicon Zoo on Windows 2012 (from ISAPI on 2008). The problem is that users started complaining about random slowdowns and timeouts with the application.
The Python is 2.7 32-bit (due to Zoo requirements).
That said, the problem is not Zoo-related, as the runserver seems to exhibit the same issues.
The CPU shows the highest usage, practically reaching 80-90% on every request.
On Linux, same application works just fine.
Are there any known caveats with Python 2.7 32-bit on Windows 2012? | High CPU usage for DJango 1.4 on Windows 2012 | 1.2 | 0 | 0 | 94 |
22,681,832 | 2014-03-27T08:21:00.000 | 3 | 0 | 1 | 0 | python,ms-word | 22,681,895 | 2 | false | 0 | 0 | To fix this, instead of directly copying and pasting, use Insert -> Object -> OpenDocument Text. The second option is to create a style for your code. | 1 | 28 | 0 | I need to get my code (Python 2.7, written in the Python IDE) into a Word document for my dissertation, but I am struggling to find a way of copying it in while keeping the formatting. I've tried Paste Special and had no luck. The only way I've found so far is screenshotting, but with just over 1000 lines of code this is proving very laborious. | Copying code into word document and keeping formatting | 0.291313 | 0 | 0 | 104,473 |
22,683,308 | 2014-03-27T09:30:00.000 | 3 | 0 | 1 | 0 | python,root,pycharm | 22,683,631 | 2 | true | 0 | 0 | I guess the best way is to create a virtualenv, either in the terminal or in PyCharm, including the correct Python version 2.7, and install pyroot via pip into this virtualenv. Then you can simply ssh into the remote host, activate the venv and start your project from the terminal. Or you can ssh into it with X-forwarding and start PyCharm itself from your client. | 1 | 1 | 0 | I have a working Python project on my PC, which I am running from PyCharm.
It uses Pyroot (an interface to Root C++ library), whose C++ lib path I have added in Project Settings/Python Interpreter/Paths in Pycharm. It also needs to use the 2.7 Python interpreter, instead of 3., which is a default python in my terminal.
I want to run this project remotely on another desktop, so I need to be able to run it from terminal specifying the path to Root and the interpreter version.
Is there a way to easily extract from Pycharm the exact run command it is using when I'm running the code via run button?
Alternatively, if that's impossible, how should I specify the path to Root and the interpreter version when running from terminal? | Is it possible to easily extract python run configuration (with additional path) from Pycharm? | 1.2 | 0 | 0 | 387 |
22,683,308 | 2014-03-27T09:30:00.000 | 0 | 0 | 1 | 0 | python,root,pycharm | 22,685,042 | 2 | false | 0 | 0 | If you select the correct project and go to File > Settings, under the Project Settings you can see the Project Interpreter which tells you which interpreter is being used.
Hope this is what you are looking for. | 2 | 1 | 0 | I have a working Python project on my PC, which I am running from Pycharm.
It uses Pyroot (an interface to Root C++ library), whose C++ lib path I have added in Project Settings/Python Interpreter/Paths in Pycharm. It also needs to use the 2.7 Python interpreter, instead of 3., which is a default python in my terminal.
I want to run this project remotely on another desktop, so I need to be able to run it from terminal specifying the path to Root and the interpreter version.
Is there a way to easily extract from Pycharm the exact run command it is using when I'm running the code via run button?
Alternatively, if that's impossible, how should I specify the path to Root and the interpreter version when running from terminal? | Is it possible to easily extract python run configuration (with additional path) from Pycharm? | 0 | 0 | 0 | 387 |
22,684,906 | 2014-03-27T10:33:00.000 | 14 | 0 | 1 | 0 | python-3.x,web2py | 22,687,810 | 1 | false | 0 | 0 | Currently, web2py only works with Python 2.6 - 2.7. Due to the promise of backward compatibility, web2py will not migrate to Python 3 only. However, work is underway on making web2py run under both Python 2 (specifically, 2.7) and Python 3 (specifically, >= 3.5).
UPDATE: As of the 2.15.1 release, web2py now supports both Python 2 and Python 3. | 1 | 11 | 0 | Does web2py function with python 3.3 or python 3.4? I have installed web2py but it cannot run with the python3.4 that I use.
I get an error after trying to run 'web2py.exe -S welcome' that says: syntax error | Web2py and python 3 | 1 | 0 | 0 | 5,443 |
22,686,976 | 2014-03-27T12:00:00.000 | 2 | 0 | 1 | 0 | python,pydev | 28,541,098 | 2 | true | 0 | 0 | Shift-Alt-Up (Select enclosing scope) works quite well, although it selects the text. But you can then press the right arrow once to deselect, and the cursor should be right at the end of the scope.
An actual "Jump to end of scope" doesn't exist yet. | 1 | 3 | 0 | I'm looking for a shortcut to jump to the other end of a long code block (can be if, for, while, end of function, class, etc.).
Similar to "go to matching brace" in Visual Studio / C++.
(I've been searching for this for some time now: StackOverflow, Google, ...)
Thanks!
Edit: with "go to matching brace", I meant { and } in C++ and Java
Edit2: I'm also happy with confident answers that say there is no such feature :-) | PyDev, shortcut "jump to beginning/end of block" | 1.2 | 0 | 0 | 361 |
22,688,151 | 2014-03-27T12:48:00.000 | 2 | 0 | 0 | 0 | python,django | 54,917,188 | 2 | false | 1 | 0 | I just ran into this as well, and it caught me by surprise; I thought my page was sending all my env variables to the server. I use the env to store credentials, so I was concerned.
Any application running in your environment has access to your env variables; therefore the server has access to your env variables. Bottom line: the browser is not sending all your env variables to the server; the request object is built on the server side. | 1 | 11 | 0 | Why do I see all my environment variables in request.META when using the dev server? | Django dev server request.META has all my env vars | 0.197375 | 0 | 0 | 703 |
22,689,597 | 2014-03-27T13:47:00.000 | 0 | 1 | 1 | 0 | python,c++,excel,automation,ms-word | 22,691,619 | 1 | false | 0 | 0 | MS Word has extensive programming capabilities built in ("Visual Basic for Applications", VBA). These exact same programming capabilities are available to applications you write in any language, including C++, that can access Word via COM. Depending on your needs, it could be possible to fill in an entire Word document with one click by running such a program. | 1 | 0 | 0 | OK, for my work we handle a lot of calculation documents. All of these have coversheets and revlogs that must be generated. We can easily create an Excel file that has most of the information needed to fill in the forms, but automating the actual process of filling in these forms, which are premade in Word, has proven tricky. I used macros with some success, but if something about a specific form differed too greatly the entire thing would mess up, and I still had to open each Word file individually and then run the macro.
Some of this process isn't going to be automatable, as it requires pulling information from PDFs that isn't always in a standard format, but any fast automation would be better than none. I have a good bit of C++ experience (by a good bit I just mean several courses on data structures etc., nothing too high-level). I have also used Python some and stumbled my way through Visual Basic a tad.
Any idea how to go about automating the generation of sometimes 100+ of these forms? | What would be the best way to automate filling in a premade form in word | 0 | 0 | 0 | 602 |
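A hedged sketch of the COM route the answer points at, driven from Python via the pywin32 package; the file paths and the bookmark name are hypothetical:

```python
import win32com.client

word = win32com.client.Dispatch('Word.Application')
word.Visible = False

doc = word.Documents.Open(r'C:\forms\coversheet_template.docx')
doc.Bookmarks('ProjectNumber').Range.Text = 'CALC-0042'   # fill a named bookmark
doc.SaveAs(r'C:\forms\coversheet_CALC-0042.docx')
doc.Close()
word.Quit()
```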
22,690,404 | 2014-03-27T14:19:00.000 | 3 | 0 | 1 | 1 | python,variables,subprocess,global | 22,690,448 | 3 | false | 0 | 0 | No, global variables are not visible to a sub-process. Variables are private to each process. If you want to share variables then you need to use some form of inter-process communication. | 3 | 1 | 0 | A quick question for Python 2.7
Are global variables visible to a subprocess?
Can a subprocess change the values of global variables?
Many thanks. | are global variables visible to subprocess and changable by subprocess? | 0.197375 | 0 | 0 | 3,561 |
22,690,404 | 2014-03-27T14:19:00.000 | 2 | 0 | 1 | 1 | python,variables,subprocess,global | 22,690,516 | 3 | true | 0 | 0 | Processes don't share variables, in general operating-system terms. Use a communication mechanism like message passing, shared memory, etc. to achieve inter-process communication. | 1 | 1 | 0 | A quick question for Python 2.7
Are global variables visible to a subprocess?
Can a subprocess change the values of global variables?
Many thanks. | are global variables visible to subprocess and changable by subprocess? | 1.2 | 0 | 0 | 3,561 |
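A small sketch of the shared-memory alternative the accepted answer mentions, using multiprocessing: the child does not see the parent's globals, but an explicitly shared Value works:

```python
from multiprocessing import Process, Value

def worker(counter):
    counter.value += 1         # visible to the parent via shared memory

if __name__ == '__main__':
    counter = Value('i', 0)    # shared integer, initial value 0
    p = Process(target=worker, args=(counter,))
    p.start()
    p.join()
    print(counter.value)       # prints 1
```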
22,690,404 | 2014-03-27T14:19:00.000 | 0 | 0 | 1 | 1 | python,variables,subprocess,global | 22,858,917 | 3 | false | 0 | 0 | Maybe the simplest way is to write those into a file and read the file from the other process, although this might take extra time. | 1 | 1 | 0 | A quick question for Python 2.7
Are global variables visible to a subprocess?
Can a subprocess change the values of global variables?
Many thanks. | are global variables visible to subprocess and changable by subprocess? | 0 | 0 | 0 | 3,561 |
22,692,291 | 2014-03-27T15:33:00.000 | 16 | 0 | 1 | 0 | javascript,python,coffeescript | 22,692,398 | 1 | true | 1 | 0 | Try with [a, b, c] = ['this', 'is', 'variables']. | 1 | 6 | 0 | Can I assign multiple variables in coffee like in python:
a, b, c = 'this', 'is', 'variables'
print c >>>variables | Multiple assignment of variables in coffee | 1.2 | 0 | 0 | 3,111 |
22,693,665 | 2014-03-27T16:30:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller,antivirus | 70,388,992 | 2 | false | 0 | 1 | I have the same issue.
If you are using Avast, do not uninstall it.
This will make your computer vulnerable to threats.
If you are sure, you can click 'add exclusion' when the Avast Hardened Mode 'blocked a program' prompt comes up.
If this isn't correct, please comment or edit. | 1 | 3 | 0 | I made a simple GUI (wxpython) application (some static text, buttons, comboboxes etc) and made exe file with pyinstaller, but the Avast antivirus says it's a virus. How can I fix this? | python executables alarms antivirus | 0 | 0 | 0 | 1,891 |
22,695,359 | 2014-03-27T17:47:00.000 | 0 | 0 | 0 | 0 | excel,python-2.7,ms-access-2010 | 22,695,473 | 2 | false | 0 | 0 | In Access select "External Data", then under "Import & Link" select Excel. You should be able to just use the wizard to choose the Excel file and import the data into a new table. | 1 | 0 | 0 | What is the code syntax to import an Excel file into an MS Access database IN PYTHON?
I have already tried making it a text file, but with no success. | Import Excel spread sheet into Access | 0 | 1 | 0 | 993 |
22,697,210 | 2014-03-27T19:21:00.000 | 0 | 1 | 0 | 0 | python,upload,flask,rabbitmq,tornado | 43,458,666 | 1 | false | 0 | 0 | If all the files are available at the start, I would zip them into a single file first. It is not about the compression but about the number of files.
There are certain IO operations (open/close and network start/end) that will happen hundreds of times, once for each file, and which you can easily avoid. Compression will help too.
As for sockets versus HTTP: it won't matter much if you have a single file (or, technically, a stream). | 1 | 1 | 0 | We have a problem with sending, in the most efficient way, about 1000 (or even more) 2 MB chunks over the network. We want to avoid pure sockets (if that won't be possible, we will use them). So far we've tested:
rabbitmq client -> server: about 39 sec/GB on localhost (very slow)
requests client -> flask server: still about 40 sec/GB on localhost
flask on tornado, creating threads for each IO write operation: still 40 sec/GB onto an SSD flash drive
raw tornado: still 40 sec/GB
We are running out of ideas. The best solution for us would be something lightweight, maybe HTTP. | What's the fastest way to send 1000 2MB's files using Python? | 0 | 0 | 1 | 965 |
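A sketch of the bundling suggestion in the answer above: pack the chunks into one archive before transfer so the per-file open/close and request overhead is paid once. The directory name is a placeholder; ZIP_STORED skips compression in case the 2 MB chunks are already compressed:

```python
import os
import zipfile

with zipfile.ZipFile('bundle.zip', 'w', zipfile.ZIP_STORED) as zf:
    for name in os.listdir('chunks'):
        zf.write(os.path.join('chunks', name), arcname=name)
```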
22,698,687 | 2014-03-27T20:37:00.000 | 4 | 0 | 0 | 0 | python,arrays,sorting,numpy | 22,698,775 | 2 | false | 0 | 0 | sorted(Data, key=lambda row: row[1]) should do it. | 1 | 25 | 1 | I'm trying to convert all my code to Python. I want to sort an array which has two columns so that the sorting is based on the 2nd column in ascending order. Then I need to sum the first column's data (from the first line to, for example, the 100th line). I used "Data.sort(axis=1)", but it doesn't work. Does anyone have any idea how to solve this problem? | How to sort 2D array (numpy.ndarray) based to the second column in python? | 0.379949 | 0 | 0 | 75,440 |
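A numpy-native equivalent of the sorted() call in the answer, plus the column sum the question asks for; the small Data array is made up:

```python
import numpy as np

Data = np.array([[5.0, 2.0], [1.0, 9.0], [3.0, 4.0]])

Data_sorted = Data[Data[:, 1].argsort()]   # rows ordered by the 2nd column, ascending
total = Data_sorted[:100, 0].sum()         # sum of the 1st column over the first 100 rows
print(total)
```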
22,702,147 | 2014-03-28T00:45:00.000 | 0 | 0 | 1 | 0 | python,pdf | 27,494,267 | 1 | true | 0 | 0 | I am all set now.
I figured out that my server already has a PDF tool installed, and the only thing I had to do was call it. Sorry for the confusion.
This ticket can be closed. | 1 | 0 | 0 | I have a task to convert a simple text file into PDF format. I also need to add a header to the newly created PDF file.
The server which holds this text file and will convert it does not have Microsoft Office or other tools for conversion. Someone suggested using Python for the task, since the server has it installed.
Could you please help me get started with converting text to PDF using Python?
P.S. My system does not have the pyPdf module, and I failed to install it.
Thanks
Here is some update:
I run a program which at the end generates a manifest. The manifest is a simple text file which looks like a .csv file, but with columns separated by whitespace. I ship this manifest to a client. My current task is to additionally ship the client another file which has the same content plus a header with the client name, in PDF format. | Convert text file into pdf | 1.2 | 0 | 0 | 886 |
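Not the poster's eventual solution (they called a tool already on the server), but a hedged pure-Python sketch with the reportlab package, for completeness; file names and the header text are placeholders:

```python
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def text_to_pdf(txt_path, pdf_path, header):
    c = canvas.Canvas(pdf_path, pagesize=letter)
    height = letter[1]
    y = height - 40
    c.drawString(40, y, header)            # client-name header at the top
    y -= 24
    for line in open(txt_path):
        if y < 40:                         # naive page break
            c.showPage()
            y = height - 40
        c.drawString(40, y, line.rstrip('\n'))
        y -= 14
    c.save()

text_to_pdf('manifest.txt', 'manifest.pdf', 'Client: ACME')
```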
22,702,428 | 2014-03-28T01:16:00.000 | 1 | 0 | 0 | 0 | python-2.7,numpy,scipy,cluster-analysis | 22,731,897 | 2 | false | 0 | 0 | k-means is exclusively for coordinates. And more precisely: for continuous and linear values.
The reason is the mean function. Many people overlook the role of the mean in k-means (despite it being in the name...).
On non-numerical data, how do you compute the mean?
There exist some variants for binary or categorical data. IIRC there is k-modes, for example, and there is k-medoids (PAM, partitioning around medoids).
It's unclear to me what you want to achieve overall... your data seems to be 1-dimensional, so you may want to look at the many questions here about 1-dimensional data (as the data can be sorted, it can be processed much more efficiently than multidimensional data).
In general, even if you projected your data into unix time (seconds since 1.1.1970), k-means will likely only return mediocre results for you. The reason is that it will try to make the three intervals have the same length.
Do you have any reason to suspect that "before", "during" and "after" have the same duration? If not, don't use k-means.
You may, however, want to have a look at KDE and plot the estimated density. Once you have understood the role of density for your task, you can start looking at appropriate algorithms (e.g. take the derivative of your density estimate and look for the largest increase/decrease, or estimate an "average" level and look for the longest above-average interval). | 1 | 2 | 1 | I have a list of dates I'd like to cluster into 3 clusters. Now, I can see hints that I should be looking at k-means, but all the examples I've found so far are related to coordinates, in other words, pairs of list items.
I want to take this list of dates and append them to three separate lists indicating whether they were before, during or after a certain event. I don't have the time for this event, but that's why I'm guessing it by breaking the date/times into three groups.
Can anyone please help with a simple example on how to use something like numpy or scipy to do this? | Clustering a list of dates | 0.099668 | 0 | 0 | 7,440 |
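A small sketch of the density idea from the answer above: convert the dates to seconds, estimate the density, and read off where a dense "during" interval plausibly sits; the timestamps are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical event times in epoch seconds: sparse, then dense, then sparse.
t = np.array([0, 50, 300, 310, 315, 320, 325, 330, 600, 660], dtype=float)

kde = gaussian_kde(t)
grid = np.linspace(t.min(), t.max(), 200)
density = kde(grid)

# The dense "during" region is where density stays above its mean; points
# before/after that interval fall into the other two groups.
during = grid[density > density.mean()]
print(during.min())
print(during.max())
```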
22,703,149 | 2014-03-28T02:34:00.000 | 0 | 0 | 1 | 0 | python,class,oop | 22,703,202 | 5 | false | 0 | 0 | You've covered functions, right?
Classes act like functions that produce new objects. Class() means you're calling it, and every time you call it, you get a new object. That's just what classes do when called.
x = Class is very different from x = Class(). The former will, indeed, just make an alias for the class.
As for "why", well, it's actually pretty handy at times to be able to substitute a function for a class or vice versa. For example, the int() function isn't a function at all; you're just creating a new int object.
As for =, well, there's no excuse for that :) Most languages use a = b to mean "take b and store it in a", not to mean a and b are equal. Historical reasons, I suppose. | 2 | 0 | 0 | This is my first programming language, so be gentle. I was doing swimmingly in reading my book before OOP came up and I've been terribly lost. I bought a new book just on OOP in Python and I still can't grasp the basics.
First, I was struggling with the "self" concept, but I'm conceptually lost on an even more fundamental level.
Why does x = Class() create a new instance of that class? Wouldn't it just refer to class?
If I write y = Class(), too, how come you don't wind up with two different variables that refer to the same thing even though I defined them as the same thing? Why not have language like "Instantiate("name_of_new_instance") Class()"?
I don't understand what's going on here.
Edit: A lot of answers so quickly! So am I to understand that the equals sign here is arbitrary, like the programming equivalent of a homophone? (homograph?) Why was it chosen that way, it doesn't seem very intuitive. I'm not criticizing it, is there a historical reason? Is there some logic going on underneath that is lost on beginners? | Why does x = Class() create a new object in python? | 0 | 0 | 0 | 1,706 |
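A tiny illustration of the alias-versus-call distinction made in the answer above:

```python
class Dog(object):
    pass

x = Dog       # alias: x now IS the class object itself
y = Dog()     # call: y is a brand-new instance
z = Dog()     # another call: a second, distinct instance

print(x is Dog)   # True
print(y is z)     # False: two separate objects
```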
22,703,149 | 2014-03-28T02:34:00.000 | 0 | 0 | 1 | 0 | python,class,oop | 22,703,630 | 5 | false | 0 | 0 | You ask:
Why not have language like "Instantiate("name_of_new_instance")
Class()"?
The answer is that Python is exactly like that language - except that it leaves out the unnecessary word Instantiate and instead uses an equal sign to indicate that assignment is taking place. Other than that small (and meaningless) difference in syntax the languages are the same.
While I like the use of the keyword Instantiate in your language because it's very clear about what's happening, I also think Python's design has a number of advantages:
It's less verbose.
It is clearer that an assignment is taking place.
It provides a more obvious place to place any arguments required when initializing a new instance of Class
It will be more familiar to most programmers coming from c-descended languages.
Once you have experience with a number of different languages, I hope you'll share my appreciation for the clever decisions that the designer of Python made and that make (good) Python code both clear and extremely concise. Of course, you may feel otherwise in which case you'll find a world of syntaxes available in many different languages or, perhaps, you'll find a need to develop your own. | 2 | 0 | 0 | This is my first programming language, so be gentle. I was doing swimmingly in reading my book before OOP came up and I've been terribly lost. I bought a new book just on OOP in Python and I still can't grasp the basics.
First, I was struggling with the "self" concept, but I'm conceptually lost on an even more fundamental level.
Why does x = Class() create a new instance of that class? Wouldn't it just refer to class?
If I write y = Class(), too, how come you don't wind up with two different variables that refer to the same thing even though I defined them as the same thing? Why not have language like "Instantiate("name_of_new_instance") Class()"?
I don't understand what's going on here.
Edit: A lot of answers so quickly! So am I to understand that the equals sign here is arbitrary, like the programming equivalent of a homophone? (homograph?) Why was it chosen that way, it doesn't seem very intuitive. I'm not criticizing it, is there a historical reason? Is there some logic going on underneath that is lost on beginners? | Why does x = Class() create a new object in python? | 0 | 0 | 0 | 1,706 |
22,703,236 | 2014-03-28T02:43:00.000 | 1 | 0 | 0 | 0 | python,lxml | 22,703,299 | 3 | false | 0 | 0 | Exactly like you have. lxml.etree.parse() accepts a string filename and will read the file for you. | 1 | 0 | 0 | I have the following line of code: xml = BytesIO("<A><B>some text</B></A>") for the file named test.xml.
But I would like to have something like xml = "/home/user1/test.xml"
How can I use the file location instead of having to put in the file content? | XML file as input | 0.066568 | 0 | 1 | 265 |
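The one-line version of the answer, as a runnable sketch (the path is the one from the question):

```python
from lxml import etree

tree = etree.parse('/home/user1/test.xml')   # parse() takes a filename directly
root = tree.getroot()
print(root.tag)
```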
22,714,606 | 2014-03-28T13:45:00.000 | 1 | 1 | 1 | 0 | python,module,keyboard,operating-system,key | 22,715,757 | 1 | false | 0 | 0 | Python probably isn't the best language for this. In fact I'm pretty sure it's not possible under most circumstances. You'd need the script running all the time, I assume, which is a problem in and of itself. But a further problem is that AFAIK python can't arbitrarily modify keyboard input across the whole computer.
So you'll probably need something that can work on a lower level, such as C or C++. | 1 | 0 | 0 | I'm using Python to create a simple program to trick my brother.
The idea of my program is to read any key input that he writes and output another one. For example, I press 's' letter and it outputs 'o'.
I do have the character converter working; however, I now need to catch the pressed key and instantaneously return the new key to the screen.
How can I achieve this?
Thank you very much for your time | Python: get pressed keyboard keys and return | 0.197375 | 0 | 0 | 313 |
22,715,768 | 2014-03-28T14:34:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 22,716,613 | 1 | true | 0 | 0 | As far as I understand you process any new file with a separate thread, so it behaves like a server processing multiple requests with a single routine.
1) I think that time-triggered creation isn't good in your case, because it doesn't depend on either system performance or the number of files to process. You may run a few threads as daemons and have a main thread that assigns tasks to these threads as soon as they come in. If there are too many at the same time, you just drop new tasks. On the other hand, you may create a new thread that does the processing each time a new file appears, and then join it when the processing has finished.
2) You may start a new thread explicitly, giving it the file name. Whether or not it's possible for a few threads to work with a single file simultaneously would depend on what exactly you do with the file. In general it becomes way more complicated than a single file per thread. | 1 | 2 | 0 | I have a thread I use to process data. Right now it triggers every time it detects a new file in a folder. I am coding in Python, but maybe it is more of a general programming question?
My question is two-fold:
Should I use a trigger like that (event-driven, more or less), or should I be using time based (every 3 minutes, create a new thread)?
If I go with time-based and create a new thread, wouldn't it cause problems if the two threads are processing the same data? Is there a way to tell them to work together or to not spawn a second one if one exists?
I apologize for the probable naivety of my question; I am still quite new to multi-threading and multiple processes, so I still don't know when to use what. | Python threading -- How to know if thread already is running? | 1.2 | 0 | 0 | 512 |
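A hedged sketch of the daemon worker-pool pattern from the first suggestion: a fixed set of threads pulls file names from a queue, so no two threads ever pick up the same file; process_file stands in for your existing (assumed) processing function:

```python
import threading
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

tasks = queue.Queue()

def worker():
    while True:
        path = tasks.get()
        process_file(path)   # assumed: your existing processing function
        tasks.task_done()

for _ in range(4):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

# Whenever the watcher detects a new file:
tasks.put('/data/new_file.csv')
tasks.join()   # optional: wait for all queued files to finish
```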
22,715,888 | 2014-03-28T14:39:00.000 | 2 | 0 | 0 | 0 | python,mongodb,security,amazon-web-services,amazon-ec2 | 22,716,431 | 2 | false | 1 | 0 | EC2 security policies by default block all incoming ports except the ones you have sanctioned; as such, the firewall will actually stop someone from getting directly to your MongoDB instance, so yes, it is secure enough.
Since the instances are physically isolated, there is no chance of the problem you would get on shared hosting of someone being able to route through the back of their instance to yours (though some things, like the IO read head, are still shared). | 2 | 1 | 0 | I'm using Flask on an EC2 instance as a server, and on that same machine Flask talks to a MongoDB.
For the EC2 instance I only leave ports 80 and 22 open, not the mongo port (27017), because all the clients are supposed to talk to the Flask server via HTTP calls. Only in Flask do I have code to insert into or query the database.
What I'm wondering is
Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb
If not, what shall I do?
Thanks! | Security concerning MongoDB on ec2? | 0.197375 | 1 | 0 | 182 |
22,715,888 | 2014-03-28T14:39:00.000 | 2 | 0 | 0 | 0 | python,mongodb,security,amazon-web-services,amazon-ec2 | 22,716,299 | 2 | true | 1 | 0 | It should be secure enough. If I understand correctly, you don't have port 27017 open to the world, i.e. you have blocked it (or should block it) through your AWS security group and perhaps your local firewall on the EC2 instance; then the only access to that port will be from calls originating on the same server.
Nothing is 100% secure, but I don't see any holes in what you have done. | 1 | 0 | 0 | I'm using Flask on an EC2 instance as a server, and on that same machine Flask talks to a MongoDB.
For the EC2 instance I only leave ports 80 and 22 open, not the mongo port (27017), because all the clients are supposed to talk to the Flask server via HTTP calls. Only in Flask do I have code to insert into or query the database.
What I'm wondering is
Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb
If not, what shall I do?
Thanks! | Security concerning MongoDB on ec2? | 1.2 | 1 | 0 | 182 |
22,717,414 | 2014-03-28T15:45:00.000 | 3 | 0 | 1 | 1 | python,google-app-engine | 22,719,539 | 2 | true | 1 | 0 | If you are using the App Engine Launcher then by clicking on the Logs you can see all the logs and errors.
An alternative way is to start the development server via the command line (as already mentioned); you will see all the logs there, which makes it much easier to work with, because the Logs window is not that flexible. | 1 | 1 | 0 | While using Google App Engine, if there is an error in Python, the result is a blank page. It is difficult to debug Python since you don't get the line number on which there is an error. It is extremely frustrating when you get a blank page because of an indentation error. Is there any way to execute a Python Google App Engine script in the Python interpreter so I get the Python error there itself? | How to check if there are errors in python | 1.2 | 0 | 0 | 182 |
22,717,928 | 2014-03-28T16:11:00.000 | 3 | 0 | 1 | 0 | javascript,python,tornado | 22,718,068 | 1 | false | 1 | 0 | No.
What you are doing in Tornado is constructing some HTML and javascript as text, ready to be sent to the user's browser to be interpreted. On the server, it is only text. You can put values from Python into the text, because the Python is running on the server. There is a clear and complete separation between what happens on the server (Tornado, python) and what happens later on the client (HTML, Javascript). | 1 | 1 | 0 | While working on Tornado template, I know we can use/work with Python variables in HTML/Javascript using {{python_variable}}.
Similarly, is it possible to use Javascript variable in Python code, without passing to another file? | Using Javascript variables in Python | 0.53705 | 0 | 0 | 202 |
22,718,185 | 2014-03-28T16:23:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pyinstaller | 51,217,755 | 1 | false | 0 | 0 | Four years later! Try --hiden-imports=pycountry on creating the pyinstaller exe | 1 | 2 | 0 | So I have been creating a python script to handle some user access
and need to have this script distributed to the office so that we are all able to use it.
However, when I try to create the .exe file using PyInstaller, it completes, but somehow it then continues to crash every time I try to launch the .exe.
I have narrowed the problem down to a dependency, a module called PyCountry.
The module is used to convert a country into the ALPHA3 ISO standard.
If the module is imported, the app crashes every time.
If the module is not imported, the app runs just fine.
Are there alternatives to PyCountry or a way which I can make it work with PyCountry?
I have already tried adding the path of PyCountry directly into a .spec file but that doesn't seem to do anything. | A dependency is crashing the .exe created with PyInstaller | 0 | 0 | 0 | 445 |
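For reference, PyInstaller's actual option is spelled --hidden-import (the "--hiden-imports" in the answer above is a typo); a hedged example invocation, with the script name as a placeholder:

```bash
pyinstaller --hidden-import=pycountry my_script.py
```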
22,718,646 | 2014-03-28T16:46:00.000 | 1 | 0 | 0 | 0 | python,postgresql,openerp,erp,openerp-7 | 22,756,295 | 1 | true | 1 | 0 | Encodings are a complicated thing, and it is difficult to answer an encoding-related question without precise facts. ANSI is not an encoding, I assume you actually mean ASCII. And ASCII itself can be seen as a subset of UTF-8, so technically ASCII is valid UTF-8.
OpenERP 7.0 only exports CSV files in UTF-8, so if you do not get the expected result you are probably facing a different kind of issue, for example:
The original data was imported using a wrong encoding (you can choose the encoding when you import, but again the default is UTF-8), so it is actually corrupted in the database, and OpenERP cannot do anything about it
The CSV file might be exported correctly in UTF-8 but you are opening it with a different encoding (for example on Windows most programs will assume your files are ISO-8859-1/Latin-1/Windows-1252 encoded). Double-check the settings of your program.
If you need more help you'll have to be much more specific: what result do you get (what does the data look like), what did you expect, etc. | 1 | 0 | 0 | I can export a CSV with openERP 7, but it is encoded in ANSI. I would like to have it as a UTF-8 encoded file. How can I achieve this? The default export option in openERP doesn't have any extra options. What files should be modified? Or is there an app for this matter? Any help would be appreciated. | openERP 7 need to export data in UTF-8 CSV , but how? | 1.2 | 0 | 0 | 574 |
22,719,863 | 2014-03-28T17:48:00.000 | 0 | 0 | 0 | 0 | python,networkx | 24,051,792 | 1 | false | 0 | 0 | I had to change the seed inside every class I used. | 1 | 1 | 1 | I am using the community module to extract communities from a networkx graph. For the community module, the order in which the nodes are processed makes a difference. I tried to set the seed of random to get consistent results but that is not working. Any idea on how to do this?
thanks | Fix the seed for the community module in Python that uses networkx module | 0 | 0 | 1 | 165 |
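A hedged sketch of what "changing the seed" can look like, assuming the python-louvain community package, which relies on Python's global random module; the karate club graph is just a stand-in:

```python
import random
import networkx as nx
import community

random.seed(42)                 # fix the node-ordering randomness before the call
G = nx.karate_club_graph()
partition = community.best_partition(G)
print(partition)
```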
22,720,349 | 2014-03-28T18:17:00.000 | 1 | 0 | 0 | 0 | python,pandas | 22,724,963 | 2 | true | 0 | 0 | I've made it work with records.groupby('product_name').filter(lambda x: len(x['url']) == 1). Note that simply using len(x) doesn't work. With a dataframe with more than two columns (which is probably most of the real-life dataframes), one has to specify a column for x: any column, except the one to group by with. Also, this code initially didn't work for me because my index on the dataframe was not unique. I'm not sure why this should interfere with the function of filtering, but it did. After reindexing the dataframe, I finally got it to work. | 1 | 0 | 1 | With pandas I can do grouping using df.groupby('product_name').size(). But if I'm only interested rows whose "product_name" is unique, i.e. those records with groupby.size equal to one, how can I filter the df to see only such rows? In other words, can I perform filtering on a database using pandas, based on the number of times an attribute occurs in the database? (I could do that with SQL alright.) | Can I select rows based on group size with pandas? Or do I have to use SQL? | 1.2 | 0 | 0 | 2,961 |
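A self-contained version of the accepted filter; the column names mirror the question ('product_name', 'url') and the data is made up:

```python
import pandas as pd

records = pd.DataFrame({
    'product_name': ['a', 'a', 'b', 'c'],
    'url': ['u1', 'u2', 'u3', 'u4'],
})

unique_only = records.groupby('product_name').filter(lambda x: len(x['url']) == 1)
print(unique_only)   # keeps only the rows for 'b' and 'c'
```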
22,723,052 | 2014-03-28T20:56:00.000 | 1 | 0 | 1 | 0 | python,ruby | 22,724,452 | 3 | false | 0 | 0 | Python:
Maybe you could write a logging decorator that would satisfy some of your requirements. You can find examples of logging decorators for Python by googling "logging decorator python". | 1 | 2 | 0 | I work a lot in both Ruby and Python. I am quite comfortable with the usual debuggers, but I still find myself entering puts or print statements in the code.
Why? Because I want to inspect variables without having to stop and start the code. This is especially important for long sequences of code such as background processing.
The problem, of course, is that the code is then littered with these statements. Sometimes I need to add a note, or unpack an array for some additional logic. Later, after the code goes to production and a new problem is found, I might need to put the print/puts back in. It would be good to be able to store them externally to the actual program code.
Are there any tools that allow the creation of variable logging at specific points in the code, as well as the ability to run short snippets of code for print presentation? | Logging without altering code in Python and Ruby | 0.066568 | 0 | 0 | 87 |
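One possible shape of the decorator hinted at in the answer; this is a sketch, not any specific library's API:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG)

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.debug('%s called with %r %r', func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.debug('%s returned %r', func.__name__, result)
        return result
    return wrapper

@log_calls
def add(a, b):
    return a + b

add(2, 3)
```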
22,725,003 | 2014-03-28T23:35:00.000 | 1 | 0 | 0 | 0 | python,pyinstaller | 22,725,485 | 1 | true | 0 | 0 | If you want no additional installation under any circumstances, one way or another you need to ship a Python interpreter with it. So, in fact, you are making the user install the interpreter in an unofficial fashion.
Most Linux distributions come with Python nowadays, and you would very rarely get problems originating from version conflicts if the script is executed under the same major version. I'd say just chmod the file appropriately, set the shebang, and ship it.
If you still insist on your way, PyInstaller ought to have a way to bundle only the interpreter and binary dependencies together while the original script stays in plain text, although I have only done it under Windows and am not 100% sure it exists under Linux. | 1 | 0 | 0 | I'd like to release one Python script to other users on Linux.
I need to make sure the released script doesn't require other users to install a compatible version of Python (maybe there is no Python installed at all).
This released script should also be modifiable and runnable by other users.
I tried pyinstaller. It gave me two options:
1. Release an executable, but the executable is not modifiable.
2. Release a directory (I'm not quite sure of the method) with a spec file, but that looks very complicated.
Is there any other, better method I can use to release the script?
Thanks, | How to release python script as a product? | 1.2 | 0 | 0 | 622 |
22,725,031 | 2014-03-28T23:40:00.000 | 1 | 1 | 0 | 1 | python,django,logging,openshift | 22,736,609 | 1 | true | 0 | 0 | The deploy output is logged to stdout if I recall correctly. If you git push from the command line you should see where things are failing. | 1 | 0 | 0 | I wanna be able to find the output text of the "deploy" script, that openshift calls automatically.
Suppose I have the following line in that "deploy" script:
python "$OPENSHIFT_REPO_DIR"wsgi/openshift/manage.py syncdb --noinput
Then I wanna see the output text of that call...
I already looked at openshifthome/python/logs, and there are access* and error* logs, but not the output that I want.
I want this because sometimes, after I push to the master git branch, it fails, and I want to know where it fails...
Thanks | Where can i find the deploy log in python app deployed in Openshift | 1.2 | 0 | 0 | 127 |
22,726,572 | 2014-03-29T03:20:00.000 | 0 | 0 | 1 | 0 | python,macos,compiler-construction,python-idle,coderunner | 22,727,390 | 1 | true | 0 | 0 | The different programs probably have different definitions of tabs and whitespace. In IDLE, you can use the Format -> Tabify/Untabify menu options to change between tabs and spaces. When you click on these menu options, you are prompted for the columns per tab, which will default to 4 (at least on my machine). Hope this helps! | 1 | 0 | 0 | I'm using Mac OS X Mavericks (fully updated) and, while looking for a PyScripter alternative, I decided to download CodeRunner from the App Store.
I've noticed that certain programs I create in one application, don't always run the same in the other environment.
For example, there are times when I create a program using CodeRunner, but when I open the exact same program in the IDLE environment, it spits back an error. Usually complaining about the syntax or logic of the program. And the other way around can sometimes occur as well.
Is this normal behavior? Should I be saving the program in a different format? I assumed that since I'm coding in Python, the code would behave the same regardless of the environment I use. | Compiler vs. IDLE Environment | 1.2 | 0 | 0 | 569 |
22,728,758 | 2014-03-29T08:23:00.000 | 1 | 0 | 0 | 0 | python,django,messenger | 22,729,332 | 2 | false | 1 | 0 | Or you can install an XMPP server (like eJabberd) and write a server-side interface over it. It will be an easier, faster and more optimal solution.
Gmail and Facebook both use the XMPP protocol. People using your application will also be able to send chat requests to their friends on Gmail.
You won't even have to write a website interface; there are JavaScript libraries (like Converse.js) available which you can plug directly into your website, and you will be good to go. | 1 | 0 | 0 | I'm a learning Python/Django programmer and want to try to create a simple web messenger. Is it realistic to write a web messenger with Django? And do any modules for that exist, or do any open-source protocols support Python?
22,729,223 | 2014-03-29T09:21:00.000 | 4 | 0 | 0 | 0 | python,math,numpy,artificial-intelligence | 22,730,167 | 3 | false | 0 | 0 | @Paul already gave you the answer to the computational question.
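For completeness, a minimal sketch (my own illustration, not necessarily @Paul's exact approach) of evaluating e^(-x) for huge x with the standard decimal module, which is not limited to the ~1e-308 range of a float:

from decimal import Decimal, getcontext

getcontext().prec = 50      # 50 significant digits
x = Decimal(300000)
result = (-x).exp()         # e^(-300000), far too small for a float
print(result.adjusted())    # decimal exponent of the result, about -130289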
However, from a neural network point of view, your problem is an indication that you are doing something wrong. There is no reasonable use of neural networks where you have to compute such a number. You seem to have forgotten at least one of:
Input data scaling/normalization/standardization
small weight bounds initialization
a regularization term which keeps weights small as the size of the network grows
All these elements are basic and crucial parts of working with neural networks. I recommend having a look at Neural Networks and Learning Machines by Haykin. | 1 | 5 | 1 | I'm using a sigmoid function for my artificial neural network. The values that I'm passing to the function range from 10,000 to 300,000. I need a high-precision answer because it will serve as the weight of the connection between the nodes in my artificial neural network. I've tried looking in numpy but had no luck. Is there a way to compute e^(-x)? | How to calculate exp(x) for really big integers in Python? | 0.26052 | 0 | 0 | 3,244 |
22,729,357 | 2014-03-29T09:36:00.000 | 0 | 0 | 0 | 1 | python,celery | 22,898,676 | 1 | true | 0 | 0 | Solved by flagging the ATask-related model with an aborted status flag and adding a check at the start of BTask. | 1 | 1 | 0 | I need to realise the following scenario:
Execute task A
Execute multiple task B in parallel with different arguments
Wait for all tasks to finish
Execute multiple task B in parallel with different arguments
Wait for all tasks to finish
Execute task C
I have achieved this by implementing a chain of chords; here is simplified code:
# inside run() method of ATask
chord_chain = []
for taskB_group in taskB.groups.all():
    tasks = [BTask().si(id=taskB_model.id) for taskB_model in taskB_group.children.all()]
    if len(tasks):
        chord_chain.append(chord(tasks, _dummy_callback.s()))
chord_chain.append(CTask().si(execution_id))
chain(chord_chain)()
The problem is that I need to have the ability to call revoke(terminate=True) on all BTasks at any point in time. The lower-level problem is that I can't get the BTask celery ids.
I tried to get the BTask ids via the chain result = chain(chord_chain)(), but I didn't find that information in the returned AsyncResult object. Is it possible to get the chain children ids from this object? (result.children is None)
I tried to get the BTask ids via the ATask AsyncResult, but it seems the children property only contains the results of the first chord and not the rest of the tasks.
>>> r=AsyncResult(#ATask.id#)
>>> r.children
[<GroupResult: 5599ae69-4de0-45c0-afbe-b0e573631abc [#BTask.id#, #BTask.id#]>,
<AsyncResult: #chord_unlock.id#>] | Get celery task ids in advanced workflow | 1.2 | 0 | 0 | 998 |
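A hypothetical sketch of the accepted flag-based approach from the answer above (all names are assumptions, and the in-memory set stands in for the "aborted" flag that really lives on the ATask-related Django model):

from celery import Celery

app = Celery('tasks')
ABORTED = set()  # stand-in for a DB-backed "aborted" flag on the execution model

@app.task
def b_task(execution_id, item_id):
    # Check the abort flag before doing any work, so a whole execution
    # can be cancelled without revoking each BTask individually.
    if execution_id in ABORTED:
        return None
    # ... actual per-item work here ...
    return item_id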
22,732,410 | 2014-03-29T14:34:00.000 | -3 | 0 | 1 | 0 | python,list,data-structures,dictionary | 22,732,452 | 2 | false | 0 | 0 | It depends on what you mean by efficiency.
When it comes to speed, a list will be faster, but a dictionary is easier to organize and access with keys. | 1 | 0 | 0 | I need to add an item to, or edit an item in, a list or a dictionary.
The list is something like
[10, 15, 42, 78]
The dictionary is something like
{0: 10, 1: 15, 2: 42, 3: 78}
Which one is more efficient? | python: data structure efficiency between list and dictionary | -0.291313 | 0 | 0 | 227 |
22,734,871 | 2014-03-29T18:12:00.000 | 1 | 0 | 1 | 0 | python,visual-studio-2012 | 29,839,934 | 1 | false | 0 | 0 | As far as I know, PIL has not been released for Python 3.3.4, which is itself yet to be released. You can use PIL with Python 2.7. | 1 | 0 | 0 | I want to import some images into a Python application in my Visual Studio, but PIL must be installed first and I don't see any installer for PIL for Visual Studio.
Can somebody help me?
Thanks | How to install PIL for python in Visual Studio (2012) | 0.197375 | 0 | 0 | 1,027 |
22,736,754 | 2014-03-29T20:52:00.000 | 0 | 0 | 0 | 0 | python,django,django-forms,django-validation | 22,736,818 | 2 | false | 1 | 0 | As far as I remember, a field can have several validators (like min_length, max_length), which will be called by the default clean_field method. | 1 | 5 | 0 | In a form in Django, what is the difference between a validator for a field and a clean_<field> method for that field? | django difference between validator and clean_field method | 0 | 0 | 0 | 1,315 |
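A hedged sketch showing both mechanisms from the answer above side by side (the form and field names are made up): a validator is a reusable callable attached to the field, while clean_<field> is a per-form hook that runs after the field's own cleaning:

from django import forms
from django.core.exceptions import ValidationError

def no_spaces(value):                       # reusable validator
    if ' ' in value:
        raise ValidationError('Spaces are not allowed.')

class SignupForm(forms.Form):
    username = forms.CharField(max_length=30, validators=[no_spaces])

    def clean_username(self):               # per-form, per-field hook
        name = self.cleaned_data['username']
        if name.lower() == 'admin':
            raise ValidationError('This name is reserved.')
        return name                         # clean_<field> must return the value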
22,737,000 | 2014-03-29T21:15:00.000 | 6 | 0 | 0 | 0 | python,arrays,math,numpy | 22,737,241 | 4 | false | 0 | 0 | In Python, (length,) is a tuple with one item. (length) is just parentheses around a number.
In numpy, an array can have any number of dimensions: 0, 1, 2, etc. You are asking about the difference between 1- and 2-dimensional objects. (length,1) is a 2-item tuple, giving you the dimensions of a 2-D array.
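A quick illustration:

import numpy as np

a = np.zeros(4)         # 1-D: shape (4,), a single axis
b = np.zeros((4, 1))    # 2-D: shape (4, 1), a column in a two-axis array
print(a.shape, a.ndim)  # (4,) 1
print(b.shape, b.ndim)  # (4, 1) 2
print(b.ravel().shape)  # (4,) -- flattening drops the second axis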
If you are used to working with MATLAB, you might be confused by the fact that there, all arrays are 2-dimensional or larger. | 2 | 15 | 1 | When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. a row vector... but it doesn't seem like that changes anything about the array itself [except that some functions complain when I pass an array with shape (length,1)].
What is the difference between these two?
Why isn't the shape just (length)? | one-dimensional array shapes (length,) vs. (length,1) vs. (length) | 1 | 0 | 0 | 18,706 |
22,737,000 | 2014-03-29T21:15:00.000 | 0 | 0 | 0 | 0 | python,arrays,math,numpy | 61,132,626 | 4 | false | 0 | 0 | A vector in Python is actually a two-dimensional array. It's just a coincidence that the number of rows is 1 (for row vectors), or the number of columns is 1 (for column vectors).
By contrast, a one-dimensional array is not a vector (neither a row vector nor a column vector). To understand this, think of a concept from geometry: the scalar. A scalar only has one attribute, which is numerical. By contrast, a vector has two attributes: magnitude and direction. Fortunately, in linear algebra, vectors also have "directions", although only two possible directions - either horizontal or vertical (unlike the infinitely many possible directions in geometry). A one-dimensional array only has numerical meaning - it doesn't show which direction the array is pointing in. This is why we need two-dimensional arrays to describe vectors. | 2 | 15 | 1 | When I check the shape of an array using numpy.shape(), I sometimes get (length,1) and sometimes (length,). It looks like the difference is a column vs. a row vector... but it doesn't seem like that changes anything about the array itself [except that some functions complain when I pass an array with shape (length,1)].
What is the difference between these two?
Why isn't the shape just (length)? | one-dimensional array shapes (length,) vs. (length,1) vs. (length) | 0 | 0 | 0 | 18,706 |
22,737,982 | 2014-03-29T22:55:00.000 | 0 | 0 | 0 | 0 | python,mongodb,queue,mongodb-query,worker | 22,738,408 | 1 | true | 0 | 0 | Since reads in MongoDB are concurrent, I completely understand what you're saying. Yes, it is possible for two workers to pick the same row, amend it, and then re-save it, overwriting each other (not to mention wasting resources on crawling).
I believe you must accept that one way or another you will lose some performance; that is an unfortunate part of ensuring consistency.
You could use findAndModify to pick exclusively; since findAndModify has isolation, it can ensure that you only pick a URL that has not been picked before. The problem is that findAndModify, due to being isolated, will slow down the rate of your crawling.
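With pymongo (2.x style), the exclusive pick might look roughly like this (the database, collection and field names are assumptions):

from pymongo import MongoClient

db = MongoClient().crawler                     # assumed database name
url_doc = db.queue.find_and_modify(            # atomic check-and-claim
    query={'status': 'pending'},
    update={'$set': {'status': 'in_progress'}},
    new=True,                                  # return the updated document
)
if url_doc is not None:
    print(url_doc['url'])                      # hand this URL to the worker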
Another way could be to do an optimistic lock, whereby you write a lock to the database rows very quickly after picking them. This will mean there is some wastage when it comes to crawling duplicate URLs, but it does mean you will get the maximum performance and concurrency out of your workers.
Which one you go for requires you to test and discover which best suits you. | 1 | 0 | 0 | I am building a web crawler in Python using MongoDB to store a queue with all URLs to crawl. I will have several independent workers that will crawl URLs. Whenever a worker completes crawling a URL, it will make a request in the MongoDB collection "queue" to get a new URL to crawl.
My issue is that since there will be multiple crawlers, how can I ensure that two crawlers won't query the database at the same time and get the same URL to crawl?
Thanks a lot for your help | Multiple workers getting information from a single MongoDB queue | 1.2 | 1 | 1 | 510 |
22,738,455 | 2014-03-29T23:52:00.000 | 109 | 0 | 1 | 0 | python,pycharm | 23,043,824 | 5 | false | 0 | 0 | I have had the same problem as you, even though I configured Python 3.4.0 as the project's interpreter and all prints in the code were Python 3-compliant function calls.
I got it sorted out by doing this in PyCharm:
File -> Invalidate Caches / Restart... -> Invalidate and Restart | 3 | 54 | 0 | I started to learn the Python language and decided to try out the PyCharm IDE, which looks really nice. But whenever I write print, it says "Unresolved reference 'print'". I can run the program, but the red underline is really annoying. How can I fix this? | PyCharm Unresolved reference 'print' | 1 | 0 | 0 | 30,545 |
22,738,455 | 2014-03-29T23:52:00.000 | 3 | 0 | 1 | 0 | python,pycharm | 55,994,206 | 5 | false | 0 | 0 | Same problem; I deleted the .idea and __pycache__ directories in the project directory and everything was fine :) | 3 | 54 | 0 | I started to learn the Python language and decided to try out the PyCharm IDE, which looks really nice. But whenever I write print, it says "Unresolved reference 'print'". I can run the program, but the red underline is really annoying. How can I fix this? | PyCharm Unresolved reference 'print' | 0.119427 | 0 | 0 | 30,545 |
22,738,455 | 2014-03-29T23:52:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 64,562,772 | 5 | false | 0 | 0 | Just delete the .idea folder from your project directory. | 3 | 54 | 0 | I started to learn the Python language and decided to try out the PyCharm IDE, which looks really nice. But whenever I write print, it says "Unresolved reference 'print'". I can run the program, but the red underline is really annoying. How can I fix this? | PyCharm Unresolved reference 'print' | 0.039979 | 0 | 0 | 30,545 |
22,740,033 | 2014-03-30T03:48:00.000 | 1 | 0 | 0 | 1 | python,django,websocket,twisted,publish-subscribe | 22,750,492 | 2 | false | 1 | 0 | There are dozens or hundreds of ways to do inter-process communication. For example, you could use HTTP by running an HTTP server in one process and using an HTTP client in the other.
The specific choice of protocol probably doesn't matter a whole lot. The particular details of the kind of communication you need might suggest one protocol over the others. If the extent of your requirements is just to provide notification that "something has happened", then a very simple protocol will probably do the job just fine. | 1 | 3 | 0 | I'm implementing a WebSocket server using Python and Autobahn (something that builds off of Twisted). What's the best way to let my Autobahn/Twisted server know that something has happened from within my Django application?
More specifically, I'm implementing a notifications service and instant update service that automatically lets my client-side application know when things have changed and what it needs to update.
Is there any way to allow Django to "publish" to my Twisted server and then update the client side? I'm not really sure how this should all look.
Thanks | How to communicate between Django and Twisted when implementing a publish-subscribe pattern? | 0.099668 | 0 | 0 | 578 |
22,740,059 | 2014-03-30T03:53:00.000 | 0 | 0 | 0 | 0 | python,praw | 23,044,806 | 2 | false | 1 | 0 | The sorting types available in PRAW are equivalent to those available on the web interface, such as 'new', 'top' or 'controversial'. There isn't a special sort to retrieve worst comments. It may be silly to loop through all of them, but that's the only way to do what you want. | 2 | 0 | 0 | Is there a way to get a redditor's worst comment using PRAW?
I have tried redditor.get_comments(sort="worst").next().body with different sorts, but nothing produces the desired result. I suppose I could get all their comments and then loop through them, but that seems silly. | How to get a redditor's most downvoted comment using PRAW? | 0 | 0 | 0 | 585 |
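If you do take the loop route suggested in the answer above, it is only a couple of lines (PRAW 2.x style; the username is made up):

import praw

r = praw.Reddit(user_agent='worst-comment-demo')
redditor = r.get_redditor('some_username')   # assumed username
worst = min(redditor.get_comments(limit=None), key=lambda c: c.score)
print(worst.score, worst.body)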
22,740,059 | 2014-03-30T03:53:00.000 | 1 | 0 | 0 | 0 | python,praw | 40,520,320 | 2 | false | 1 | 0 | This is a little late, but probably the best way to do this is to sort by top, then use after=t1_d9pvq54 (for example) and a high count to quickly page through the comments until you get to the last one, which will be the worst comment. | 1 | 0 | 0 | Is there a way to get a redditor's worst comment using PRAW?
I have tried redditor.get_comments(sort="worst").next().body with different sorts, but nothing produces the desired result. I suppose I could get all their comments and then loop through them, but that seems silly. | How to get a redditor's most downvoted comment using PRAW? | 0.099668 | 0 | 0 | 585 |
22,741,838 | 2014-03-30T08:22:00.000 | 1 | 0 | 0 | 0 | python | 22,741,877 | 3 | false | 0 | 1 | It's impossible.
A Python/Tkinter app is a desktop application, which requires a desktop manager, has access to the file system, etc.
A web application is a different stack of technologies (HTTP, HTML, JavaScript, etc.); it is not possible to mix them. | 1 | 0 | 0 | So I was wondering if there is any Python package that allows a pure Python application with a graphical interface to be embedded in a website. I have an application with a Tkinter interface that I want to make available on a website. Any way to do this without converting too much code?
Thanks! | Web Server/Site Python | 0.066568 | 0 | 0 | 74 |
22,750,497 | 2014-03-30T22:06:00.000 | 1 | 0 | 0 | 0 | python,sql,database,image | 22,750,529 | 1 | true | 0 | 0 | Store the image as a file, and store the path of the file in the database.
The fact that the file is an image is irrelevant. If you want a more specific answer, you will need to ask a more specific question. Also, please edit your title so that it corresponds to the question. | 1 | 0 | 0 | Hi everyone, I've hit a roadblock in SQL. It's the dreaded storing of images in an SQL database. Apparently the solution to this is to store the image in a file system. Does anyone know any book or video tutorial that teaches this? I can't seem to find any on the web. I'm using MySQL and Python to learn how to work with images. I can't find any examples on the web. | Sql Filesystem programming Python | 1.2 | 0 | 0 | 67 |
22,755,394 | 2014-03-31T07:04:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-2.7,tkinter | 22,755,431 | 1 | true | 0 | 0 | You should call thread1.join() from within thread2's run method, so that it's thread2 that waits on thread1 instead of the main thread. | 1 | 0 | 0 | The main thread in my program creates the UI. thread1 communicates with the server and thread2 writes the results to an Excel sheet. I want thread2 to start only after thread1 has finished executing. However, when I use thread1.join(), the UI becomes unresponsive. How do I fix this? (Both thread1 and thread2 are created in the main thread.) | Waiting for a thread to finish in python | 1.2 | 0 | 0 | 177 |
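A minimal sketch of the arrangement from the answer above (the worker bodies are stand-ins):

import threading

def server_work():                 # stand-in for thread1's job
    print('talking to server')

thread1 = threading.Thread(target=server_work)

def write_results():               # thread2 does the waiting itself,
    thread1.join()                 # so the GUI thread is never blocked
    print('writing Excel sheet')

thread2 = threading.Thread(target=write_results)
thread1.start()
thread2.start()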
22,757,755 | 2014-03-31T09:14:00.000 | 2 | 0 | 0 | 0 | python,web-scraping,scrapy | 22,757,917 | 1 | true | 1 | 0 | You can't do this, as scrapy will not execute the JavaScript code.
What you can do:
Rely on a real browser driven by Selenium, which will execute the JavaScript. Afterwards, use XPath (or simple DOM access) as before to query the web page once the scripts have run.
Understand where the contents come from, and load and parse the source directly instead. Chrome Dev Tools / Firebug might help you with that; have a look at the "Network" panel, which shows fetched data.
Especially look for JSON, and sometimes also XML. | 1 | 0 | 0 | I am trying to scrape a website where the targeted items are populated using the document.write method. How can I get the full browser-rendered HTML version of the website in Scrapy? | Scrapy: scraping website where targeted items are populated using document.write | 1.2 | 0 | 1 | 428 |
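A minimal sketch of the Selenium route mentioned above (the URL is a placeholder):

from selenium import webdriver

driver = webdriver.Firefox()        # any WebDriver-backed browser works
driver.get('http://example.com/page-with-document-write')
html = driver.page_source           # the DOM after the JavaScript has run
driver.quit()
# 'html' can now be fed to your usual parsing code (e.g. Scrapy selectors).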
22,757,997 | 2014-03-31T09:26:00.000 | 0 | 0 | 0 | 0 | python,django,email,openshift,mezzanine | 24,405,097 | 1 | false | 1 | 0 | I myself am looking for a free SMTP library just to send emails. So far, not much luck.
I tried the embedded Java SMTP library Aspirin. I am able to send mails, but I am not very comfortable working with it, as I keep getting some unknown exceptions.
Apache James is another Java-based SMTP server, but I don't think we can embed it in the code yet. | 1 | 0 | 0 | I have Django 1.6 and Python 2.7 deployed on OpenShift. I notice that when the application sends emails out, OpenShift overrides the email header by changing the 'From' field to '[email protected]' and ignoring any 'Reply-to' field that has been set in the application.
I have searched around, and it seems that OpenShift overrides the email header, and the recommendation is to use their email service partner, which is NOT FREE.
Is there any other way to avoid this, i.e. deploy a Django application on OpenShift while still having the application send email as dictated in the program? This exact program runs with no issues in the test environment and on localhost.
Any pointers are much appreciated. Thank you. | OpenShift overrides the email header 'From' and 'Reply-to' fields. How to send email without having to use SendGrid or another paid email service? | 0 | 0 | 0 | 403 |
22,758,813 | 2014-03-31T10:05:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi | 22,759,029 | 1 | false | 0 | 0 | For learning Python there are so many very good web resources easily found on the net that I won't mention them here. For your nice little project on the Raspi you should get familiar with the Python module called RPi.GPIO. With it you can easily turn your LEDs on and off depending on the ping response of your company servers. | 1 | 1 | 0 | --First month in programming; be gentle with me--
I'm looking to build a short application using Python to run on an RPi; the idea is to ping our company-owned servers individually and eventually have the results show up as LED status lights. For now, though, I would like it to broadcast a desktop notification to specific Macs on the same network.
I have zero experience with Python or programming in general. Where should I start? | Python, RPi server status notifier | 0 | 0 | 0 | 90 |
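Once the basics are in place, the RPi.GPIO idea from the answer above can be as small as this hedged sketch (the pin number and host address are assumptions):

import subprocess
import RPi.GPIO as GPIO

LED_PIN = 18               # assumed wiring
HOST = '192.168.0.10'      # assumed server address

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

# One ping; exit code 0 means the host answered.
up = subprocess.call(['ping', '-c', '1', HOST]) == 0
GPIO.output(LED_PIN, GPIO.HIGH if up else GPIO.LOW)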
22,760,837 | 2014-03-31T11:50:00.000 | 6 | 0 | 0 | 0 | javascript,python,web-applications,numpy | 22,761,056 | 1 | true | 1 | 0 | Consider both situations: If the computation is client-side, then your client gets loaded, the computational power of the client computer (which may be just a mobile phone or whatever) comes into play, and it won't matter much whether other users of the site are doing computations at the same time.
On the other hand, if the computation is done server-side, then your server gets loaded; the computation time in a single-user situation is probably smaller (because your server is probably more powerful than the average client computer), but performance will drop dramatically if you have lots of users accessing your server at the same time.
Other aspects come into play:
If you do it server-side, you should ensure that no private data gets leaked in the process of transmitting the parameters or the results (so use https or similar).
Doing it server-side allows for later upgrading of the computational power (maybe split the task onto several nodes in order to have smaller computation time for higher server costs).
Doing it client-side might allow you to do it even offline, given a proper caching mechanism.
So, all in all, your question is too broad and underspecified to give a clear answer. | 1 | 2 | 0 | I have a desktop application, made in Python, with PyQT and scipy / numpy.
The aim of the program is to find the optimal set of parameters for a differential equation, given some data.
Thus, we use a numerical solver and an optimization routine from numpy. The computation is quite heavy but also quick (30 sec max), though it can become longer (several hours) if we use custom parameter-space exploration.
The next step is to "put it on the cloud", so the user doesn't have to bother with installing the application.
Thus, we want to create a Flask application, with display using d3.js or something like that.
I have never done any JS, so I wanted to know what the best architecture is:
the user uploads his data, it is sent to the server, which performs the computations and sends the results back => we can use scipy / numpy on the server, but too many simultaneous connections could shut everything down.
the user uploads his data and it is processed in JavaScript on the client side => no more load on the server, but I have to learn a new language and implement the scientific computations myself (and I think it will be slower than the Fortran routines behind numpy).
Using / learning JS is not the real problem; being efficient with it is more problematic.
Which is the best option for future modifications (longer computations, providing clustering of the results...) and for development time?
What would you do?
Thanks. | Where should I make heavy computations? Client or server side? | 1.2 | 0 | 0 | 1,700 |
22,764,021 | 2014-03-31T14:17:00.000 | 0 | 0 | 1 | 0 | python,numpy,scipy | 22,766,449 | 1 | false | 0 | 0 | What you should do is: install Python 2.6.2 separately onto your system (it looks like you are using Windows, right?), then install the scipy build corresponding to Python 2.6.2, and then copy the site-packages over to the Abaqus folder.
Note that 1) you can't use matplotlib due to the Tkinter problem; 2) numpy already comes with Abaqus, so you don't need to install it by hand. | 1 | 1 | 1 | Can anyone give input/clues/direction on installing compatible versions of numpy and scipy in Abaqus Python 2.6.2?
I tried installing numpy-1.6.2, numpy-1.7.1 and numpy-1.8.1, but all give an "unable to find vcvarsall.bat" error because there is no module named msvccompiler. Based on some of the answers, I verified the Visual Studio version, and it is 2008.
Could anyone please give direction on this? | Installation of Compatible version of Numpy and Scipy on Abaqus 6.13-2 with python 2.6.2 | 0 | 0 | 0 | 2,294 |
22,764,927 | 2014-03-31T14:55:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 46,719,809 | 4 | false | 0 | 1 | To get rid of a GUI window I used the following in my code.
window.destroy()
and the following to bring it up again.
nameoffunction()
window.lift() | 1 | 0 | 0 | Does anyone know how to hide a Python Tkinter GUI?
I've created a keylogger; for the GUI I used the Python module Tkinter. I want to add a button called HIDE, so that when the user clicks it, it hides the GUI, and when the user presses a key combination like CTRL+E, it should unhide the GUI. | How To Hide Tkinter python Gui | 0 | 0 | 0 | 11,986 |
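For reference, a common alternative to destroying the window is to hide and re-show it with withdraw()/deiconify(). Note that a withdrawn Tk window no longer receives key events, so a true system-wide CTRL+E hotkey needs an OS-level hook; in this minimal sketch a 3-second timer stands in for that hotkey:

import Tkinter as tk    # 'tkinter' on Python 3

root = tk.Tk()

def hide():
    root.withdraw()                   # hide the window; the app keeps running
    root.after(3000, root.deiconify)  # re-show after 3 s, standing in for a hotkey

tk.Button(root, text='HIDE', command=hide).pack()
root.mainloop()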
22,765,563 | 2014-03-31T15:22:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 22,772,893 | 2 | true | 1 | 0 | TL;DR: We do not support having the dev_appserver use the real app-engine datastore. Even with the suggested use of "remote_api", AFAIK, the dev_appserver does not know how to use it.
If you really want to make this work, you could write your own low-level API and have your own datastore abstraction that uses your API instead of the actual datastore; however, this is a non-trivial amount of work.
Another option is to have a servlet that can pre-populate your dev datastore with the data you need from checked-in files. The checked-in raw data could be non-real data or obfuscated real data. At dev_appserver startup, you hit this URL and your database becomes pre-populated with data. If you take this route, you get the bonus of not operating on your live data with dev code.
HTH! | 1 | 0 | 0 | Is it possible to set up the App Engine SDK on my local machine to use the live datastore while developing? Sometimes it's just easier for my workflow to work live.
If not, is there an easy way to download or sync the live data to my development machine?
Thanks! | Use production App Engine datastore on development machine? | 1.2 | 0 | 0 | 334 |
22,766,066 | 2014-03-31T15:43:00.000 | 0 | 0 | 1 | 1 | python,macos,python-2.7,path | 22,774,906 | 3 | false | 0 | 0 | Another way is to set the PYTHONPATH environment variable to /Users/username in your shell. Since you know about your shell, I expect that you already know how to edit your shell resource script. You could also add it to your .profile file, in which case it should be available even if you change which shell you happen to be using. | 1 | 1 | 0 | I'm using Python 2.7.2 and I want to open and use a dictionary I created in my shell. My problem is that when I try to import this dictionary into my shell, it can't find the file, because Python is only looking in the 'my documents' folder.
My question is: how can I navigate to the correct folder (just one folder deeper inside the 'my documents' folder)?
I am using a Macintosh. | How can I specify a path on a Mac | 0 | 0 | 0 | 186 |
22,768,253 | 2014-03-31T17:29:00.000 | 2 | 0 | 1 | 0 | python,pip,m2crypto | 22,768,369 | 1 | false | 0 | 0 | I have wondered the same thing in the past. However, I've had no problem installing packages "manually" by downloading them to my desktop, expanding them, then copying the appropriate objects to a sub-folder in my /extras directory (mine is a Django system). Be sure there is an __init__.py file in there, and I always make sure it is added to my svn source control. | 1 | 5 | 0 | I'm having trouble installing M2Crypto on my shared webhost account with pip. Can I just copy the module's source into site-packages, or does pip do something extra? | What happens during a python module install? Can I just copy the module source to site-packages? | 0.379949 | 0 | 0 | 953 |
22,772,554 | 2014-03-31T21:29:00.000 | 4 | 1 | 1 | 1 | python,shell | 22,772,608 | 1 | true | 0 | 0 | Yes, add it to your PYTHONPATH as you are doing, but you cannot invoke it with python foo.py; instead, use python -m foo. | 1 | 2 | 0 | Say I have a script script.py located in a specific folder on my system. This folder is not available on PATH.
Assuming that I will always run script.py using python script.py, is there any way to run my script from anywhere on the system without having to modify PATH?
I thought modifying PYTHONPATH would do it, but it doesn't. PYTHONPATH seems to only affect the module search path, and not the script search path. Is my understanding correct? | Python-specific PATH environment variable? | 1.2 | 0 | 0 | 47 |
22,775,423 | 2014-04-01T01:54:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 22,775,536 | 2 | true | 0 | 0 | "Data driven programming": store your questions in a data file, and your program just needs the required logic to load and present them. | 1 | 0 | 0 | I am making a program that asks a user a lot of questions, and I have each question defined at the top of my file. Unfortunately because of the ridiculous number of questions I need to have, the file has become extremely packed and difficult to navigate. The questions are organized by different sections, so I thought it would be great if I could fold all of the variables by section and label them with a comment.
I am using Pydev for Eclipse. I have done some searching but haven't found anything promising. Any suggestions for how to do this or how to better organize my variables? | How to manage many variables | 1.2 | 0 | 0 | 115 |
22,775,681 | 2014-04-01T02:24:00.000 | 1 | 0 | 0 | 0 | python-2.7,opencv | 22,776,862 | 1 | true | 0 | 0 | You can use the filename of the image for that purpose. All you need to do is keep the filenames stored somewhere in your application, alongside the Mat objects. | 1 | 0 | 1 | I implemented a face recognition algorithm on a Raspberry Pi (Python 2.7 was used). I have many sets of faces; if the captured face is one in the database, then the face is detected (I am using the eigenfaces algorithm). My question is: can I know whose face (which person's name) is detected? (Can we have some sort of tag attached to each image and display the corresponding name when the face is detected?) Note: OpenCV was used. | How to know name of the person in the image? | 1.2 | 0 | 0 | 599 |
22,778,197 | 2014-04-01T06:15:00.000 | 1 | 0 | 0 | 0 | python,user-interface,pyqt,pyside | 22,778,301 | 1 | false | 0 | 1 | Qt supports a thing called an SVG font. It is a font where every letter is a colored vector image. You can use these in text fields and WebKit. | 1 | 0 | 0 | I have created a small chat app using PySide. I want to add an 'Emoticons' (smileys) option for users to chat with. I haven't found any material on the internet. I will be really grateful if someone helps me solve this problem.
Thanks in advance. | Adding Emoticons to app | 0.197375 | 0 | 0 | 114 |
22,782,726 | 2014-04-01T10:09:00.000 | 1 | 0 | 0 | 0 | python,gtk | 37,508,137 | 2 | false | 0 | 1 | As far as I know, you cannot disable them. They will receive any keystroke that is not consumed by the focused window. | 1 | 12 | 0 | I'm using some predefined accelerators connected with certain hot keys. Is it possible to temporarily disable them? I don't want to change the hot keys, in order not to confuse users. The accelerators are activated when typing into a combo box, which really is unacceptable. | How to disable accelerators when typing text in GTK+ | 0.099668 | 0 | 0 | 462 |
22,785,010 | 2014-04-01T11:49:00.000 | 6 | 0 | 0 | 1 | python,apache-spark | 23,485,718 | 2 | false | 0 | 0 | I started a new Python project in PyDev, then went into Project -> Properties -> PyDev - PYTHONPATH -> External libraries. I added a "source path" entry for
/path/to/spark/spark-0.9.1/python
This allowed PyDev to see all Spark-related code and provide autocompletion, etc.
Hope this helps. | 1 | 1 | 0 | How do I use Python for a Spark program in Eclipse?
I've installed the PyDev plugin in Eclipse and installed Python on the system, but how do I use PySpark? | Starting up PySpark for using python with Spark in eclipse | 1 | 0 | 0 | 6,907 |
22,796,476 | 2014-04-01T20:43:00.000 | 3 | 0 | 1 | 1 | python,shell,terminal | 22,799,374 | 2 | false | 0 | 0 | I like @ebarr's answer, but a quick and dirty way to do it is to write to several files. You can then open multiple terminals and tail the files. | 1 | 3 | 0 | (I am using Python and ArchLinux)
I am writing a simple AI in Python as a school project. Because it is a school project, and I would like to visibly demonstrate what it is doing, my intention is to have a different terminal window displaying printed output from each subprocess - one terminal showing how sentences are being parsed, one showing what pyDatalog is doing, one for the actual input-output chat, etc., possibly on two monitors.
From what I know, which is not much, a couple of feasible ways to go about this are threading each subprocess and figuring out display from there, or writing/using a library which allows me to make and configure my own windows.
My question, then, is: are those the best ways, or is there an easy way to output to multiple terminals simultaneously? Also, if making my own windows (and I'm sorry if my terminology is wrong when I say 'making my own windows' - I mean building my own output areas in Python) is the best option, which library should I use for that? | Outputting text to multiple terminals in Python | 0.291313 | 0 | 0 | 5,633 |
22,798,998 | 2014-04-01T23:43:00.000 | 0 | 0 | 1 | 0 | python,interpreter,pycharm,arcpy | 22,799,204 | 1 | false | 0 | 0 | You can upgrade your ArcGis Python 2.6 release to 2.7.
If you need Python 3, it won't work, because arcpy only works in Python 2. | 1 | 0 | 0 | Platform: Windows. IDE: PyCharm CE.
I have a script that uses the ArcPy module from ESRI. This module ships with its own Python 2.6. I have to import a module that uses dictionary comprehensions, which are not supported by Python 2.6.
How do I work around this without rewriting the code to avoid the comprehensions?
Other questions:
What should be the correct pattern for project creation and interpreter maintenance? Should I always use dedicated virtual envs for each project, importing any needed modules, and keep each one isolated?
Is it correct to install the extra packages my projects need into the ArcPy Python installation, Python 2.6.5 (C:/Python26/ArcGIS10.0/python.exe)? Can this cause problems later for ArcMap? | Python interpreter, virtual environments and pycharm | 0 | 0 | 0 | 466 |
22,799,208 | 2014-04-02T00:05:00.000 | 3 | 0 | 0 | 0 | python,pandas | 22,799,245 | 3 | false | 0 | 0 | You can use .dropna() after DF[DF == np.inf] = np.nan (unless you still want to keep the NaNs and only drop the infs). | 1 | 20 | 1 | In Pandas, I can use df.dropna() to drop any NaN entries. Is there anything similar in Pandas to drop non-finite (e.g. Inf) entries? | Keep finite entries only in Pandas | 0.197375 | 0 | 0 | 18,662 |
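Spelled out on a small frame, the same idea (a replace-then-dropna variant) looks like this:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.inf, 3.0], 'b': [4.0, 5.0, -np.inf]})
finite = df.replace([np.inf, -np.inf], np.nan).dropna()
print(finite)   # only the first row survives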
22,800,258 | 2014-04-02T02:12:00.000 | 0 | 0 | 1 | 0 | python,multithreading,python-3.x | 22,852,404 | 1 | true | 0 | 0 | In my case, the best way to do this seems to be to maintain a running worker process, and send the code to it on an as-needed basis. If the process acts up, I kill it and then start a new one immediately to avoid any delay the next time. | 1 | 0 | 0 | I'm writing a program in which I want to evaluate a piece of code asynchronously. I want it to be isolated from the main thread so that it can raise an error, enter an infinite loop, or just about anything else without disrupting the main program. I was hoping to use threading.Thread, but this has a major problem; I can't figure out how to stop it. I have tried Thread._stop(), but that frequently doesn't work. I end up with a thread that I can't control hogging both interpreter time and CPU power. The code in the thread doesn't open any files or do anything else that would cause problems if I hard-killed it.
Python's multiprocessing.Process.terminate() does this really well; unfortunately, initiating a process on Windows takes nearly a second, which is long enough to cause annoying delays in my GUI.
Does anyone know either a: how to kill a Python thread (I don't think I care how dirty the exit is), or b: how to speed up starting a process?
A third possibility would be a third-party library that provides an alternative method for asynchronous execution, but I've never heard of any such thing. | Isolating code with a Python thread | 1.2 | 0 | 0 | 301 |
22,805,118 | 2014-04-02T08:04:00.000 | 2 | 0 | 1 | 1 | python,linux,python-idle | 22,805,475 | 2 | false | 0 | 0 | It depends on the script. Unless you use anything OS-specific, you are golden.
In the standard library, most of the modules are totally OS-agnostic, and for the rest the rule of thumb is: "if it is possible to provide the same functionality across *nix and Windows, it has probably been done".
Python actually makes it pretty easy to write portable programs. Even file path manipulation is pretty portable if you do it right - os.path.sep instead of '/', os.path.join instead of string concatenation, etc.
Notable exceptions are:
sockets - Windows sockets are a bit different
multiprocessing - Windows does not have fork(), which may or may not be a problem.
Needless to say, things related to username, hostname and such.
os and sys are a mixed bag - you should read the compatibility notes in the docs.
Everything packaging and distribution-related. | 1 | 1 | 0 | I am using Python 2.7.5+ on Linux Mint to write simple programs as .py files and run them in Konsole Terminal. These work fine on my computer, but I need to share them with a friend using Windows (IDLE, I suppose) and wonder if they will work as they are, without modification.
The programs start with the usual #!/usr/bin/python, you know. | .py file from Linux to Windows: is it going to just work? | 0.099668 | 0 | 0 | 220 |
22,805,650 | 2014-04-02T08:30:00.000 | 0 | 1 | 0 | 0 | selenium,python-unittest | 22,939,972 | 1 | true | 0 | 0 | The solution I finally came up with is:
Have a module for the tests which fixes the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check for the HTTP status code).
This module does commandline processing as well:
It checks for its own options
and removes them from the argument vector (sys.argv).
When it finds an "unknown" option, it stops processing the options and leaves the rest to the testrunner.
The commandline processing happens on import, before initializing my TestCase class.
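Stripped to its core, that argument handling looks roughly like this (the option name and default host are made up):

# testconfig.py -- imported by every test module before unittest takes over.
import sys

HOSTNAME = 'www.example.com'        # default target host (an assumption)

def _consume_own_options():
    global HOSTNAME
    rest = list(sys.argv[1:])
    while rest:
        if rest[0] == '--host':     # our option: consume it and its value
            HOSTNAME = rest[1]
            del rest[:2]
        else:                       # unknown option: stop and leave the
            break                   # remainder to the testrunner
    sys.argv[1:] = rest

_consume_own_options()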
It works well for me ... | 1 | 0 | 0 | I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable by several hostnames, e.g. implying different languages; but of course I want to be able to test e.g. my development instances as well without changing the code (and without fiddling with the hosts file which doesn't work for me anymore, because of network security considerations, I suppose).
Thus, I'd like to be able to specify the hostname by commandline arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation? | How to pass an argument (e.g. the hostname) to the testrunner | 1.2 | 0 | 1 | 94 |
22,806,639 | 2014-04-02T09:12:00.000 | 1 | 1 | 0 | 0 | python,sockets,ssh,raspberry-pi,ethernet | 22,872,068 | 1 | true | 0 | 0 | Samba, FTP/SFTP, or also (if doable on Windows) SSHFS. If you want your own implementation then, for example, you could use a REST API (web app) running on the Pi and allowing file operations in some folders (create, modify, delete, get, list...). You could also think about using Git and pulling/pushing between the two machines :) | 1 | 0 | 0 | I am developing an application in which I have to establish an Ethernet connection between a Raspberry Pi and a Windows PC. On my PC I want to develop a Python program (GUI) that can not only import files from the Raspberry Pi, but also read those files and modify them. I don't want to use any already-existing software or program. So what is the best solution: sockets, or SSH? Or is there another choice? | How to access the Raspberry Pi files using a Python GUI running on Windows? | 1.2 | 0 | 0 | 412 |
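For the SFTP route mentioned in the answer above, a hedged sketch with paramiko (the host, credentials and paths are assumptions):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.1.50', username='pi', password='raspberry')  # assumed

sftp = client.open_sftp()
sftp.get('/home/pi/data/results.txt', 'results.txt')  # download to the PC
# ... let the GUI read and modify the local copy here ...
sftp.put('results.txt', '/home/pi/data/results.txt')  # push the changes back
sftp.close()
client.close()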
22,807,281 | 2014-04-02T09:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,tcl | 22,811,026 | 2 | false | 0 | 0 | Surya,
you should have a look at the ncgi and htmlparse packages in tcllib to extract the information you need.
Joachim | 1 | 0 | 0 | I want to take input from a webpage, parse the submitted data using Python or Tcl, and start script execution based on the inputs given.
Please suggest a solution for how this can be done.
I am not sure whether a web server needs to be started for this.
Thanks in advance.
Regards,
Surya | Read input from the webpage, parse the data submitted using python or tcl and start the script execution based on the inputs given | 0 | 0 | 1 | 514 |
22,807,853 | 2014-04-02T09:59:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,file-upload | 22,808,378 | 1 | true | 1 | 0 | I've noticed that using the showMessage function when initializing the plugin can override the plugin's default behaviour. Problem solved. | 1 | 0 | 0 | I'm using this plugin: https://github.com/zmathew/django-ajax-upload-widget and I'm wondering if there is any way of disabling alerts/notifications when an upload fails, without changing the plugin code?
I want to use Bootstrap notifications instead of the ugly default alert popups, but I also have to use Django eggs, so I can't change the plugin code/files.
In the documentation I've seen that I can set the plugin's behaviour on upload success, but I can't see anything about upload failure. Please help. | Disabling alerts and errors in Django File Uploader | 1.2 | 0 | 0 | 17 |
22,811,050 | 2014-04-02T12:11:00.000 | 9 | 1 | 1 | 0 | python,c++,haskell,floating-point | 22,811,510 | 6 | false | 0 | 0 | The format C and C++ use for representing float and double is standardized (IEEE 754), and the problems you describe are inherent in that representation. Since Python is implemented in C, its floating point types are prone to the same rounding problems.
Haskell's Float and Double are a somewhat higher-level abstraction, but since most (all?) modern CPUs use IEEE754 for floating point calculations, you will most probably have those kinds of rounding errors there as well.
In other words: Only languages/libraries which choose not to base their floating point types on the underlying architecture might be able to circumvent the IEEE754 rounding problems to a certain degree, but since the underlying hardware does not support other representations directly, there has to be a performance penalty. Therefore, probably most languages will stick to the standard, not least because its limitations are well known. | 1 | 2 | 0 | First of all, I did not study math in English, so I may use the wrong words in my text.
Float numbers can be finite (42.36) and infinite (42.363636...).
In C/C++, numbers are stored in base 2. Our minds work with floats in base 10.
The problem is -
many (a lot, actually) of the float numbers that are finite in base 10 have no exact finite representation in base 2, and vice versa.
This doesn't matter most of the time. The last digit of a double may be off by 1 bit - not a problem.
A problem arises when we compute with two floats that are actually integers. 99.0/3.0 in C++ can result in 33.0 as well as 32.9999...99. And if you then convert it to an integer - you are in for a surprise. I always add a special value (2 * the smallest value for the given type and architecture) before rounding in C for this reason. Should I do the same in Python or not?
I have run some tests in Python, and it seems float division always gives the expected result. But a few tests are not enough, because the problem is architecture-dependent. Does somebody know for sure whether it is taken care of, and at what level - in the float type itself, or only in the rounding and shortening functions?
P.S. And if somebody can clarify the same thing for Haskell, which I am only starting with - it would be great.
UPDATE
Folks pointed to an official document stating there is uncertainty in floating point arithmetic. The remaining question is: do math functions like ceil take care of it, or should I do it on my own? This must be pointed out to beginner users every time we speak of these functions, because otherwise they will all stumble over this problem. | Do Python and Haskell have the float uncertainty issue of C/C++? | 1 | 0 | 0 | 466 |
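A short interactive demonstration of both the representation issue and the usual fix:

>>> 0.1 + 0.2 == 0.3        # 0.1 and 0.2 have no finite base-2 representation
False
>>> int(round(99.0 / 3.0))  # round before truncating, instead of adding epsilons
33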
22,811,844 | 2014-04-02T12:41:00.000 | 1 | 1 | 0 | 1 | python,c++,linux,input,keyboard | 22,812,228 | 2 | false | 0 | 0 | The most generic solution is to use pseudo-terminals: you connect the slave (tty) end to the standard input and output of the program you want to monitor, and use the master (pty) end to read and write to it.
Alternatively, you can create two pipes, which you connect to the standard input and output of the program to be monitored before doing the exec. This is much simpler, but the pipes look more like a file than a terminal to the program being monitored. | 1 | 2 | 0 | I am working on a project to control my PC with a remote and an infrared receiver on an Arduino.
I need to simulate keyboard input with a process on Linux that listens to the Arduino output and simulates the keystrokes. I can develop it in Python or C++, but I think Python is easier.
After much searching, I found many results for... Windows u_u
Does anyone have a library for this?
Thanks
EDIT: I found that /dev/input/event3 is my keyboard. I think writing to it can simulate the keyboard; I'm searching for how to do that. | Simulate keyboard input linux | 0.099668 | 0 | 0 | 2,879 |
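A hedged sketch of the pseudo-terminal idea from the first answer, using the standard pty/os modules ('cat' is just a stand-in for the program being driven):

import os
import pty

pid, master_fd = pty.fork()        # child gets the slave side as its terminal
if pid == 0:                       # child: run the program to be driven
    os.execvp('cat', ['cat'])      # 'cat' simply echoes what it receives
else:                              # parent: send "keystrokes" via the master side
    os.write(master_fd, b'hello\n')
    print(os.read(master_fd, 1024))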
22,817,533 | 2014-04-02T16:27:00.000 | 3 | 0 | 1 | 0 | python,python-2.7,numpy,pip,python-packaging | 22,817,669 | 2 | false | 0 | 0 | Maybe run deactivate if you are running a virtualenv? | 1 | 4 | 1 | I am trying to uninstall numpy. I tried pip uninstall numpy. It tells me that it isn't installed. However, numpy is still installed at /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy.
How can I make sure pip finds the numpy package? | Pip doesn’t know where numpy is installed | 0.291313 | 0 | 0 | 7,578 |
22,820,723 | 2014-04-02T19:05:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,angularjs,data-binding | 22,821,198 | 2 | false | 1 | 0 | JSON is the way to go. I would look at libraries like Tastypie and Django REST framework to reduce the amount of code you have to write. | 1 | 4 | 0 | One of the features hawked by AngularJS aficionados is the two-way data binding between DOM contents and JavaScript data that the framework offers.
I'm presently working on a couple of learning projects integrating AngularJS and Django, and one of the pain points is that the problem AngularJS solves between data in JavaScript and DOM representation is not immediately solved for the pairing of AngularJS and Django. Ergo, coordinating AngularJS and Django (AFAICT as an AngularJS novice) involves the kind of programming that is common in jQuery DOM manipulation and that Angular seems to be written to obviate the need for. This is great for learning, but leads me to ask, "Has anyone tried to do for AngularJS + Django what AngularJS and Django individually offer to developers, namely obviating the need for this kind of stitching-up code?" AngularJS is more explicit about "Let two-way binding do the work," but Django as "the web framework for perfectionists with deadlines" seems intended to decrease manual labor.
At present I am building JSON to send to the client, but I was wondering if there were any projects to reconcile AngularJS to Django. | Are there any three-way data binding frameworks between the DOM, JavaScript, and server-side database for AngularJS and Django? | 0 | 0 | 0 | 1,890 |
22,826,006 | 2014-04-03T01:08:00.000 | 4 | 0 | 1 | 0 | python,jenkins | 22,840,336 | 3 | false | 0 | 0 | Any output to stdout from a process spawned by Jenkins should be captured by Console Output. One caveat is that it won't be displayed until a newline character is printed, so make sure your lines are terminated.
If you are launching Python in some weird way that dissociates it from the Jenkins parent process, then I can't help you. | 1 | 11 | 0 | I have a Python script that prints strings. When I run it in Jenkins, I don't see the printed strings in the Jenkins build's Console Output.
Is there any way to achieve that? | how to get python print result in jenkins console output | 0.26052 | 0 | 0 | 15,615 |
22,826,006 | 2014-04-03T01:08:00.000 | 16 | 0 | 1 | 0 | python,jenkins | 51,525,067 | 3 | true | 0 | 0 | Try using the -u (unbuffered) option when running the Python script:
python -u my_script.py | 2 | 11 | 0 | I have a Python script that prints strings. When I run it in Jenkins, I don't see the printed strings in the Jenkins build's Console Output.
Is there any way to achieve that? | how to get python print result in jenkins console output | 1.2 | 0 | 0 | 15,615 |
22,830,060 | 2014-04-03T06:57:00.000 | 1 | 0 | 1 | 0 | python,syntax-error,runtime-error,message | 22,830,091 | 2 | false | 0 | 1 | You should run it from a command window/terminal instead of double-clicking on the file. | 1 | 1 | 0 | I am pretty new to Python and have been pretty annoyed with this problem. I am not sure if this matters, but I run my .py file with Python 2.7.6 installed on my computer, not through any online tool or other program. Every time I come across an error, my program works fine until it reaches the error, but the window disappears before I can read what the error was... Anyway, I haven't been able to find out what is wrong with my program, and I am tired of guessing and guessing what is wrong. How can I extend the time so I can read the error message? Or something like that? Thanks | Python - I can't see what my error is because the window disappears immediately | 0.099668 | 0 | 0 | 2,813 |