Column                              Dtype          Min     Max
Q_Id                                int64          2.93k   49.7M
CreationDate                        stringlengths  23      23
Users Score                         int64          -10     437
Other                               int64          0       1
Python Basics and Environment       int64          0       1
System Administration and DevOps    int64          0       1
DISCREPANCY                         int64          0       1
Tags                                stringlengths  6       90
ERRORS                              int64          0       1
A_Id                                int64          2.98k   72.5M
API_CHANGE                          int64          0       1
AnswerCount                         int64          1       42
REVIEW                              int64          0       1
is_accepted                         bool           2 classes
Web Development                     int64          0       1
GUI and Desktop Applications        int64          0       1
Answer                              stringlengths  15      5.1k
Available Count                     int64          1       17
Q_Score                             int64          0       3.67k
Data Science and Machine Learning   int64          0       1
DOCUMENTATION                       int64          0       1
Question                            stringlengths  25      6.53k
Title                               stringlengths  11      148
CONCEPTUAL                          int64          0       1
Score                               float64        -1      1.2
API_USAGE                           int64          1       1
Database and SQL                    int64          0       1
Networking and APIs                 int64          0       1
ViewCount                           int64          15      3.72M
14,278,009
2013-01-11T12:14:00.000
0
0
1
0
0
python,virtualenv,pycharm,pylint
0
71,898,652
0
4
0
false
0
0
In Tool Settings, set Program: to $PyInterpreterDirectory$/pylint
3
9
0
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tool with a path relative to the current virtual env defaults? An example being pylint, where I'd want the virtual env version, not the system one, to run.
Pycharm External tools relative to Virtual Environment
0
0
1
0
0
3,302
14,278,009
2013-01-11T12:14:00.000
0
0
1
0
0
python,virtualenv,pycharm,pylint
0
14,508,728
0
4
0
false
0
0
Just found your post while looking for documentation about the "variables" that can be used when setting parameters for external tools. No documentation, but you can see a list of all the available stuff after pressing the "Insert Macro" button in the Edit Tool dialog. I don't see any reference to the interpreter path there, but I usually use the virtualenv as my project path. If you are doing that too, you could infer the python interpreter path from there.
3
9
0
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tool with a path relative to the current virtual env defaults? An example being pylint, where I'd want the virtual env version, not the system one, to run.
Pycharm External tools relative to Virtual Environment
0
0
1
0
0
3,302
14,278,009
2013-01-11T12:14:00.000
16
0
1
0
0
python,virtualenv,pycharm,pylint
0
33,673,270
0
4
0
false
0
0
Not sure about older versions, but in PyCharm 5 one can use the $PyInterpreterDirectory$ macro. It's exactly what we want.
3
9
0
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tool with a path relative to the current virtual env defaults? An example being pylint, where I'd want the virtual env version, not the system one, to run.
Pycharm External tools relative to Virtual Environment
0
1
1
0
0
3,302
14,288,177
2013-01-11T23:13:00.000
1
0
1
0
0
python,automation,interop,concept
0
14,288,210
0
5
0
false
0
0
You should look into a package called selenium for interacting with web browsers
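A minimal sketch of the selenium suggestion above, assuming the Python bindings and a chromedriver are installed; the URL and the `q` field name are hypothetical stand-ins for a real lyrics site, and the method names follow the older Selenium API current at the time.

```python
from selenium import webdriver

driver = webdriver.Chrome()                   # drives a real Chrome window
driver.get('http://www.example.com/lyrics')   # hypothetical lyrics site
box = driver.find_element_by_name('q')        # hypothetical search box name
box.send_keys('song title')
box.submit()
html = driver.page_source                     # lyrics page, ready to parse and save
driver.quit()
```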
1
27
0
1
I'm having the idea of writing a program using Python which shall find the lyrics of a song whose name I provided. I think the whole process should boil down to the couple of things below. These are what I want the program to do when I run it: prompt me to enter the name of a song; copy that name; open a web browser (Google Chrome for example); paste that name in the address bar and find information about the song; open a page that contains the lyrics; copy those lyrics; run a text editor (like Microsoft Word for instance); paste the lyrics; save the new text file with the name of the song. I am not asking for code, of course. I just want to know the concepts or ideas about how to use Python to interact with other programs. To be more specific, I think I want to know, for example, just how we point out where the address bar is in Google Chrome and tell Python to paste the name there. Or how we tell Python to copy the lyrics as well as paste them into the Microsoft Word sheet and then save it. I've been reading (I'm still reading) several books on Python: A Byte of Python, Learn Python the Hard Way, Python for Dummies, Beginning Game Development with Python and Pygame. However, I found out that it seems like I only (or almost only) learn to create programs that work on their own (I can't tell my program to do things I want with other programs that are already installed on my computer). I know that my question somehow sounds rather silly, but I really want to know how it works, the way we tell Python to recognize that this part of the Google Chrome browser is the address bar and that it should paste the name of the song in it. The whole idea of making Python interact with another program is really, really vague to me and I just extremely want to grasp it. Thank you everyone, whoever spends their time reading my so-long question. ttriet204
Interact with other programs using Python
0
0.039979
1
0
0
89,546
14,289,656
2013-01-12T02:41:00.000
0
0
0
0
0
c++,python
0
14,289,688
0
2
0
false
0
1
Due to complexities of the C++ ABI (such as name mangling), it's generally difficult and platform-specific to load a C++ library directly from Python using ctypes. I'd recommend you either create a simple C API which can be easily wrapped with ctypes, or use SWIG to generate wrapper types and a proper extension module for Python.
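A sketch of the simple-C-API-plus-ctypes route this answer describes; `libmylib.so` and `mylib_add` are hypothetical names for the wrapped library and its exported C function.

```python
# C side (compiled with: g++ -shared -fPIC -o libmylib.so mylib.cpp):
#   extern "C" int mylib_add(int a, int b) { return a + b; }
import ctypes

lib = ctypes.CDLL('./libmylib.so')            # hypothetical shared object
lib.mylib_add.argtypes = [ctypes.c_int, ctypes.c_int]
lib.mylib_add.restype = ctypes.c_int
print(lib.mylib_add(2, 3))                    # -> 5
```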
1
0
0
0
I'm a Python guy building a Linux-based web service for a client who wants me to interface with a small C++ library that they're currently using with a bunch of Windows based VB applications. They have assured me that the library is fairly simple (as far as they go I guess), and that they just need to know how best to compile and deliver it to me so that I can use it in Python under Linux. I've read a bit about the ctypes library and other options (SWIG, etc), but for some reason I haven't really been able to wrap my head around the concept and still don't know how to tell them what I need. I'm pretty sure having them re-write it with Python.h, etc is out, so I'm hoping there's a way I can simply have them compile it on Linux as a .so and just import it into Python. Is such a thing possible? How does one accomplish this?
How does one get a C++ library loaded into Python as a shared object file (.so)?
0
0
1
0
0
201
14,323,390
2013-01-14T17:21:00.000
0
0
0
0
0
python,qt,video,pyside,phonon
0
14,509,771
0
3
0
true
0
1
OK - for others out there looking for the same info, I found Hachoir-metadata and Hachoir-parser (https://bitbucket.org/haypo/hachoir/wiki/Home). They provide the correct info but there is a serious lack of docs for it and not that many examples that I can find. Therefore, while I have parsed a video file and returned the metadata for it, I'm now struggling to 'get' that information in a usable format. However, I will not be defeated!
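For reference, classic hachoir 1.x (Python 2) usage looks roughly like the sketch below; `video.avi` is a placeholder file name, and the plaintext export includes the video dimensions when the parser can recover them.

```python
from hachoir_parser import createParser
from hachoir_metadata import extractMetadata

parser = createParser(u'video.avi')       # placeholder path (unicode expected)
metadata = extractMetadata(parser)
for line in metadata.exportPlaintext():   # human-readable dump of all fields
    print(line)
```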
1
0
0
0
Can anybody tell me how I can return the dimensions of a video (pixel height/width) using Qt (or any other Python route to that information). I have googled the hell out of it and cannot find a straight answer. I assumed it would either be mediaobject.metadata() or os.stat() but neither appear to return the required info.
qt phonon - returning video dimensions
0
1.2
1
0
0
305
14,355,747
2013-01-16T10:05:00.000
0
0
1
0
0
c++,python,c,list
0
14,355,997
0
4
0
false
0
0
As I understand it, you don't want to, or aren't able to, use the standard library (BTW, why?). You can look into the code of the vector template in one of the STL implementations and see how it's implemented. The basic algorithm is simple. If you're deleting something from the middle of the array, you must shrink it afterwards. If you're inserting into the middle, you must expand it before. Sometimes this involves memory reallocation and moving your array, by memcpy for example. Or you can implement a doubly-linked list as mentioned. But it won't behave like an array.
3
0
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element in the array by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0
0
1
0
0
2,120
14,355,747
2013-01-16T10:05:00.000
0
0
1
0
0
c++,python,c,list
0
14,355,936
0
4
0
false
0
0
In C++ you can use either std::list or std::vector, based on your needs. If you need to implement it in C, you need to write your own doubly-linked list implementation, or, if you want to emulate std::vector, you can do so using memcpy and array indexing.
3
0
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element in the array by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0
0
1
0
0
2,120
14,355,747
2013-01-16T10:05:00.000
5
0
1
0
0
c++,python,c,list
0
14,355,769
0
4
0
false
0
0
I guess you're looking for std::vector (or other containers in the standard library).
3
0
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element in the array by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0
0.244919
1
0
0
2,120
14,356,218
2013-01-16T10:27:00.000
0
0
0
0
0
python,xml-rpc,openerp
0
14,356,856
0
3
0
false
1
0
Add an integer field in the res.partner table for storing the external id on both databases. When data is retrieved from the external server and added to your OpenERP database, store the external id in the res.partner record on the local server, and also save the id of the newly created partner record in the external server's partner record. So the next time the external partner record is updated, you can search for the external id in your local server and update that record. Please check the OpenERP module base_synchronization and read the code, which will be helpful for you.
1
6
0
0
My question is a bit complex and I am new to OpenERP. I have an external database and an OpenERP one; the external one isn't PostgreSQL. My job is to synchronize the partners in the two databases, the external one being the more important. This means that if the external one's data change, so does OpenERP's, but if OpenERP's data changes, nothing changes on the external one. I can access the external database, and using XML-RPC I have access to OpenERP's as well. I can import data from the external database simply with XML-RPC, but the problem is the sync. I can't just INSERT the modified partner and delete the old one because I have no way to identify the old one. I need to UPDATE it. But then I need an id that says which is which: an external ID. To my knowledge OpenERP can handle external IDs. How does this work, and how can I add an external ID to my res.partner using this? I was told that I can't create a new module for this alone; I need to make the internal ID work.
Adding external Ids to Partners in OpenERP without a new module
0
0
1
1
0
4,853
14,364,214
2013-01-16T17:29:00.000
2
0
0
0
0
python,database,migration,sqlalchemy
0
14,364,804
0
2
0
true
1
0
Some thoughts for managing databases for a production application: Make backups nightly. This is crucial because if you try to do an update (to the data or the schema), and you mess up, you'll need to be able to revert to something more stable. Create environments. You should have something like a local copy of the database for development, a staging database for other people to see and test before going live and of course a production database that your live system points to. Make sure all three environments are in sync before you start development locally. This way you can track changes over time. Start writing scripts and version them for releases. Make sure you store these in a source control system (SVN, Git, etc.) You just want a historical record of what has changed and also a small set of scripts that need to be run with a given release. Just helps you stay organized. Do your changes to your local database and test it. Make sure you have scripts that do two things, 1) Scripts that modify the data, or the schema, 2) Scripts that undo what you've done in case things go wrong. Test these over and over locally. Run the scripts, test and then rollback. Are things still ok? Run the scripts on staging and see if everything is still ok. Just another chance to prove your work is good and that if needed you can undo your changes. Once staging is good and you feel confident, run your scripts on the production database. Remember you have scripts to change data (update, delete statements) and scripts to change schema (add fields, rename fields, add tables). In general take your time and be very deliberate in your actions. The more disciplined you are the more confident you'll be. Updating the database can be scary, so don't rush things, write out your plan of action, and test, test, test!
1
1
0
0
I'm working on a web-app that's very heavily database driven. I'm nearing the initial release and so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concerned about the complexity of migrating the database on each release. What I'd like to know is how much should I concern myself with locking down a solid database design now so that I can release quickly, against trying to anticipate certain features now so that I can build it into the database before release? I'm also anticipating finding flaws with my current model and would probably then want to make changes to it, but if I release the app and then data starts coming in, migrating the data would be a difficult task I imagine. Are there conventional methods to tackle this type of problem? A point in the right direction would be very useful. For a bit of background I'm developing an asset management system for a CG production pipeline. So lots of pieces of data with lots of connections between them. It's web-based, written entirely in Python and it uses SQLAlchemy with a SQLite engine.
How to approach updating a database-driven application after release?
0
1.2
1
1
0
120
14,367,670
2013-01-16T20:55:00.000
0
0
0
0
0
python,django,django-models,django-forms,django-views
0
14,367,745
0
2
0
false
1
0
What other fields are there in the model db? You could add an upload_id, which is set automatically after uploading, send it to the next view, and query for that in the db.
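A sketch of the upload_id idea this answer suggests, assuming Django 1.x conventions; the model, field, view, template, and URL names are all hypothetical.

```python
import uuid
from django.db import models
from django.shortcuts import redirect, render

class UploadedImage(models.Model):
    image = models.ImageField(upload_to='uploads/')
    upload_id = models.CharField(max_length=32, db_index=True)

def upload_view(request):
    batch = uuid.uuid4().hex                            # one id for the whole batch
    for f in request.FILES.getlist('images'):
        UploadedImage.objects.create(image=f, upload_id=batch)
    return redirect('show_uploaded', upload_id=batch)   # hypothetical URL name

def show_uploaded(request, upload_id):
    images = UploadedImage.objects.filter(upload_id=upload_id)
    return render(request, 'show.html', {'images': images})
```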
1
0
0
0
Suppose I want to query and display all the images that the user just uploaded through a form on the previous page (multiple images are uploaded at once, each is made into a separate object in the db). What's the best way to do this? Since the view for uploading the images is different from the view for displaying the images, how does the second view know which images were part of that upload? I thought about creating and saving the image objects in the first view, gathering the pks, and passing them to the second view, but I understand that it is bad practice. So how should I make sure the second view knows which primary keys to query for?
Django: How to query objects immediately after they have been saved from a separate view?
0
0
1
0
0
100
14,371,156
2013-01-17T02:11:00.000
5
1
0
1
0
python,python-3.x,pytest
0
59,968,198
0
6
0
false
0
0
Install it with pip3: pip3 install -U pytest
3
61
0
1
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
1
0.16514
1
0
0
65,185
14,371,156
2013-01-17T02:11:00.000
27
1
0
1
0
python,python-3.x,pytest
0
14,371,623
0
6
0
false
0
0
python3 doesn't have the module py.test installed. If you can, install the python3-pytest package. If you can't do that, try this: (1) install virtualenv; (2) create a virtualenv for python3: virtualenv --python=python3 env_name; (3) activate the virtualenv: source ./env_name/bin/activate; (4) install py.test: pip install py.test. Now, using this virtualenv, try to run your tests.
3
61
0
1
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
1
1
1
0
0
65,185
14,371,156
2013-01-17T02:11:00.000
69
1
0
1
0
python,python-3.x,pytest
0
14,371,849
0
6
0
true
0
0
I found a workaround: Installed python3-pip using aptitude, which created /usr/bin/pip-3.2. Next pip-3.2 install pytest which re-installed pytest, but under a python3.2 path. Then I was able to use python3 -m pytest somedir/sometest.py. Not as convenient as running py.test directly, but workable.
3
61
0
1
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
1
1.2
1
0
0
65,185
14,375,397
2013-01-17T09:02:00.000
1
1
0
1
0
c++,python,gcc,g++,redhat
0
14,390,969
0
1
0
true
0
0
You want to link to the python static library, which should get created by default and will be called libpython2.7.a. If I recall correctly, as long as you don't build Python with --enable-shared it doesn't install the dynamic library, so you'll only get the static lib, and simply linking your C++ application with -lpython2.7 -L/path/where/you/installed/python/lib should link against the static library.
1
0
0
0
I have a C++ application from Windows that I wish to port across to run on a Red Hat Linux system. This application embeds a slightly modified version of Python 2.7.3 (I added the Py_SetPath command as it is essential for my use case) so I definitely need to compile the Python source. My problem is that despite looking, I can't actually find any guidance on how to get Python to emit the right files for me to link against, and how to then get g++ to link my C++ code against it in such a way that I don't need to have an installed copy of Python on every system I distribute this to. So my questions are: how do I compile Python so that it can be embedded into the C++ app on Linux? What am I linking against for the C++ app to work? Sorry for these basic questions, but having convinced my employer to let me try and move our systems over to Linux, I'm keen to make it go off as smoothly as possible and I'm worried about not making too much progress!
Compile Python 2.7.3 on Linux for Embedding into a C++ app
1
1.2
1
0
0
667
14,377,002
2013-01-17T10:30:00.000
2
0
0
0
0
python,substring,jinja2
0
14,386,224
0
2
0
false
1
0
Or you can create a filter for phone numbers, like {{ phone_number|phone }}
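A minimal sketch of such a filter; it assumes the stored value always reduces to ten digits once dashes are stripped.

```python
from jinja2 import Environment

def phone(value):
    """Render 907333-5000 (or 9073335000) as (907) 333-5000."""
    digits = str(value).replace('-', '')
    return '(%s) %s-%s' % (digits[:3], digits[3:6], digits[6:])

env = Environment()
env.filters['phone'] = phone
print(env.from_string('{{ phone_number|phone }}').render(phone_number='907333-5000'))
```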
1
0
0
0
I have a question that I tried to solve different ways with Jinja2. I have a number that is saved in the database. When I print it, the original number is, for example: 907333-5000. I want that number to be printed in this format: (907) 333-5000, but I don't know exactly how to do it with Jinja2. Thank you
Jinja2 substring integer
0
0.197375
1
0
0
6,415
14,383,025
2013-01-17T15:58:00.000
0
0
0
1
1
python,google-app-engine,pydev
0
14,387,118
0
3
0
false
1
0
If you want to use Eclipse's Import feature, go with General -> File system.
2
1
0
0
Created a gae project with the GoogleAppEngine launcher and have been building it with TextMate. Now, I'd like to import it into an Eclipse PyDev GAE project. Tried to import it, but it doesn't work. Anyone know how to do that? Thanks in advance.
Pydev: How to import a gae project to eclipse Pydev gae project?
0
0
1
0
0
712
14,383,025
2013-01-17T15:58:00.000
2
0
0
1
1
python,google-app-engine,pydev
0
14,383,720
0
3
0
true
1
0
You could try not using the eclipse import feature. Within Eclipse, create a new PyDev GAE project, and then you can copy in your existing files.
2
1
0
0
Created a gae project with the GoogleAppEngine launcher and have been building it with TextMate. Now, I'd like to import it into an Eclipse PyDev GAE project. Tried to import it, but it doesn't work. Anyone know how to do that? Thanks in advance.
Pydev: How to import a gae project to eclipse Pydev gae project?
0
1.2
1
0
0
712
14,388,251
2013-01-17T21:14:00.000
7
0
0
1
0
python,google-app-engine,full-text-search
1
14,390,379
0
1
0
true
1
0
If you empty out your index and call index.delete_schema() (index.deleteSchema() in Java) it will clear the mappings that we have from field name to type, and you can index your new documents as expected. Thanks!
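A hedged sketch of the call named in this answer, using the Python GAE search API of that era; the index name is a placeholder.

```python
from google.appengine.api import search

index = search.Index(name='songs')   # placeholder index name
# After all documents have been removed from the index:
index.delete_schema()                # drops the stored field-name -> type mappings
```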
1
5
0
0
The Situation Alright, so we have our app in appengine with full text search activated. We had an index set on a document with a field named 'date'. This field is a DateField and now we changed the model of the document so the field 'date' is now a NumericField. The problem is, on the production server, even if I cleared all the document from the index, the server responds with this type of error: Failed to parse search request ""; SortSpec numeric default value does not match expression type 'TEXT' in 'date' The Solution The problem is, "I think", the fact that the model on the server doesn't fit the model of the search query. So basically, one way to do it, would be to delete the whole index, but I don't know how to do it on the production server. The dev server works flawlessly
How to delete or reset a search index in Appengine
0
1.2
1
0
0
2,747
14,389,892
2013-01-17T23:15:00.000
2
0
1
0
0
python,matplotlib,latex,ipython,ipython-notebook
0
57,860,793
0
2
0
false
0
0
I ran into the problem posted in the comments: ! LaTeX Error: File 'type1cm.sty' not found. The issue was that my default tex command was not pointing to my up-to-date MacTex distribution, but rather to an old distribution of tex which I had installed using macports a few years back and which wasn't being updated since I had switched to using MacTex. I diagnosed this by typing which tex on the command line and getting /opt/local/bin/tex which is not the default install location for MacTex. The solution was that I had to edit my $PATH variable so that the right version of tex would get called by matplotlib. I added export PATH="/usr/local/texlive/2019/bin/x86_64-darwin:$PATH" on the last line of my ~/.bash_profile. Now when I write echo $PATH on the command line I get: /usr/local/texlive/2019/bin/x86_64-darwin:blah:blah:blah... Don't forget to restart both your terminal and your jupyter server afterwards for the changes to take effect.
1
1
0
0
Displaying lines of LaTeX in IPython Notebook has been answered previously, but how do you, for example, label the axis of a plot with a LaTeX string when plotting in IPython Notebook?
IPython Notebook: Plotting with LaTeX?
0
0.197375
1
0
0
10,151
14,396,178
2013-01-18T09:46:00.000
0
0
1
0
0
python,binary,hex
0
14,396,302
0
3
0
false
0
0
use binascii.hexlify() - that should do it
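A round-trip sketch (Python 2, where long exists): hexlify is the forward direction the question shows, and unhexlify is the reverse; note that any leading zero bytes of the original string are lost in the long.

```python
import binascii

binarystring = b'\x01\x02\xff'
foo = long(binascii.hexlify(binarystring), 16)   # forward: bytes -> long
hex_digits = '%x' % foo
if len(hex_digits) % 2:                          # unhexlify needs an even digit count
    hex_digits = '0' + hex_digits
print(binascii.unhexlify(hex_digits) == binarystring)   # True
```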
1
0
0
0
I'd like to be able to do the reverse of: foo = long(binarystring.encode('hex'), 16)
In Python how do you reverse this call: long("1234", 16)?
0
0
1
0
0
123
14,398,980
2013-01-18T12:28:00.000
1
0
1
1
0
python,windows
0
14,399,306
0
2
0
false
0
0
What you can do is apply some sort of shell history functionality: every command issued by the user would be placed in a list, and then you'd implement a special call (a command of your console), let's say 'history', that would print out the list for the user in the order it was filled in, with an increasing number next to every command. Then another call (again a special command of your console), let's say '!!' (but it could really be anything, like 'repeat'), followed by the number of the command you want to repeat, would fetch the command from the list and execute it without retyping: typing '!! 34' would execute command number 34 again, which could be 'something -a very -b long -c with -d very -e large -f number -g of -h arguments -666'. I am aware it's not exactly the same thing that you wanted, but it is very easy to implement quickly, provides the command-repetition functionality you're after, and should be a decent replacement until you figure out how to do it the way you want ;)
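A toy sketch of that history mechanism (worth knowing: on Unix, simply doing import readline gives input() up-arrow history for free); execute() is a placeholder for the program's real command dispatcher.

```python
history = []

def execute(command):
    print('running: %s' % command)   # placeholder for the real dispatcher

while True:
    line = input('> ').strip()
    if line == 'history':
        for i, cmd in enumerate(history, 1):
            print('%d  %s' % (i, cmd))
        continue
    if line.startswith('!! '):
        line = history[int(line.split()[1]) - 1]   # re-fetch command by number
    execute(line)
    history.append(line)
```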
1
0
0
0
So I am working on a console-based Python (Python 3, actually) program where I use input(">") to get the command from the user. Now I want to implement the "last command" function in my program: when users press the up arrow on the keyboard, they can see their last command. After some research I found I can use the curses lib to implement this, but there are two problems. curses is not available on Windows. The other parts of my program use print() for output, and I don't want to rewrite them with curses. So are there any other ways to implement the "last command" function? Thanks.
How to implement "last command" function in a console based python program
1
0.099668
1
0
0
195
14,399,223
2013-01-18T12:41:00.000
0
0
0
1
1
python,mysql,django,pip,mysql-python
1
14,399,388
0
1
0
false
0
0
At first glance it looks like a damaged pip package. Have you tried easy_install instead with the same package?
1
1
0
0
When I try installing mysql-python using the command below: macbook-user$ sudo pip install MYSQL-python I get these messages: /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:891:1: warning: this is the location of the previous definition /usr/bin/lipo: /tmp/_mysql-LtlmLe.o and /tmp/_mysql-thwkfu.o have the same architectures (i386) and can't be in the same fat output file clang: error: lipo command failed with exit code 1 (use -v to see invocation) error: command 'clang' failed with exit status 1 Does anyone know how to solve this problem? Help me please!
clang error when installing MYSQL-python on Lion-mountain (Mac OS X 10.8)
0
0
1
1
0
506
14,400,767
2013-01-18T14:15:00.000
0
0
0
0
0
python,download,popupwindow
0
14,401,005
0
1
0
false
0
0
There are Python bindings for Selenium that help script simulated browser behavior, letting you do really complex stuff with it. Take a look at it; it should be enough for what you need.
1
0
0
0
I need to log in to a website and navigate to a report page. After entering the required information and clicking on the "Go" button (this is a multipart/form-data form that I am submitting), there's a pop-up window asking me to save the file. I want to do it automatically with Python. I searched the internet for a couple of days, but can't find a way in Python. By using urllib2, I can get as far as submitting the multipart form, but how can I get the name and location of the file and download it? Please note: there is no href associated with the "Go" button. After submitting the form, a file-save dialog pops up asking me where to save the file. Thanks in advance
Using python to download a file after clicking the submit button
0
0
1
0
1
346
14,409,454
2013-01-18T23:44:00.000
1
0
1
0
0
python,python-3.x,python-idle
0
14,410,812
0
2
0
true
0
0
As others have noted, IDLE will not display the three dots that you are looking for; however, if you must have them, you could always run python from the terminal by typing 'python' into the terminal and pressing ENTER. To exit python from the terminal you could type either 'exit()' or 'quit()' followed by ENTER.
1
1
0
0
I'm using the IDLE command prompt with Python 3.3 and when I enter a multi-line command, I see a blank line rather than the multi-line prompt that has three dots. I know that this is a little thing, but does anyone know how to enable the multi-line prompt?
Python IDLE Multi-line Prompt
0
1.2
1
0
0
2,005
14,410,771
2013-01-19T03:21:00.000
0
1
0
1
0
python,applescript,itunes
0
14,410,871
0
2
0
false
0
0
You can set up a simple Automator workflow to retrieve the current iTunes song. Try these two actions for starters: iTunes: Get the Current Song Utilities: Run Shell Script Change the shell script to cat > ~/itunes_track.txt and you should have a text file containing the path of the current track. Once you get your data out of Automator you should be all set :)
1
0
0
0
I have been doing a little bit of research and haven't found anything that is quite going to work. I want to have Python know what the current song playing in iTunes is so I can serially send it to my Arduino. I have seen appscript, but it is no longer supported and, from what I have read, is full of bugs now that it isn't being updated. I am using Mac OS X 10.8.2 & iTunes 10.0.1. Anyone got any ideas on how to make this work? Any information is greatly appreciated. FYI: My project is a little 1.8" colour display screen on which I am going to have several pieces of information: RAM HDD CPU Song etc.
Python getting Itunes song
0
0
1
0
0
2,056
14,423,868
2013-01-20T11:10:00.000
2
0
1
0
0
python
0
14,423,880
0
5
0
false
0
0
You can use vars(info) or info.__dict__. It will return the object's namespace as a dictionary in the attribute_name:value format.
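For example (Python 2, matching the question's urllib2):

```python
import urllib2

info = urllib2.urlopen('http://www.python.org/')
print(vars(info))   # the instance's attribute_name: value dictionary
print(dir(info))    # attribute and method names, including inherited ones
```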
1
0
0
0
Is there any method to let me know the values of an object's attributes? For example, info = urllib2.urlopen('http://www.python.org/') I want to know all the attribute values of info. Maybe I don't even know what attributes info has, and str() or list() cannot give me the answer.
how to reveal an object's attributes in Python?
0
0.07983
1
0
1
306
14,430,915
2013-01-21T00:23:00.000
0
0
0
0
0
wxpython
0
14,460,784
0
1
0
false
0
1
I don't believe the default TreeCtrl has database interaction builtin. You would have to add that. If you want it to check for updates periodically, you could use a wx.Timer. If you'll be updating the database with your wxPython GUI, then there shouldn't be a problem as you'll have to update the display anyway. You might also want to look at the DVC_DataViewModel in the wxPython demo. I think it may make this sort of thing easier as it has the concept of data objects, which I think implies that you could create a database object to feed your GUI.
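A minimal sketch of the wx.Timer suggestion; refresh_tree_from_db() is a placeholder for the program's own re-query-and-rebuild logic.

```python
import wx

class EmployeePanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
        self.timer.Start(5000)          # poll the database every 5 seconds

    def on_timer(self, event):
        self.refresh_tree_from_db()     # placeholder: re-query and rebuild the tree
```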
1
0
0
0
I'm a newbie to wxPython programming and I have a general question on how to make my application more dynamic. Assume I have a tree listing employees. When I click an employee, the information is retrieved from the db and displayed on the right side of the panel. Now, when I edit the information of one of the employees and save it, the current row in the table needs to be end-dated and a new row created in the db, and the tree refreshed. So, basically, if something is saved, the tree should be refreshed automatically. How do I accomplish this?
Wxpython dynamic programming
0
0
1
0
0
115
14,431,639
2013-01-21T02:18:00.000
1
0
0
0
0
python,django,selenium,gunicorn
0
20,411,022
0
2
0
false
1
0
Off the top of my head, you can try to override LiveServerTestCase.setUpClass and spin up gunicorn instead of LiveServerThread
2
11
0
0
I am running an app in Django with gunicorn. I am trying to use Selenium to test my app but have run into a problem. I need to create a test server, like is done with Django's LiveServerTestCase, that will work with gunicorn. Does anyone have any ideas of how I could do this? Note: could someone also confirm that LiveServerTestCase is executed as a thread, not a process?
How to set up a django test server when using gunicorn?
0
0.099668
1
0
0
1,308
14,431,639
2013-01-21T02:18:00.000
2
0
0
0
0
python,django,selenium,gunicorn
0
20,448,450
0
2
0
false
1
0
I've read the code. Looking at LiveServerTestCase for inspiration makes sense, but trying to cook up something by extending or somehow calling LiveServerTestCase is asking for trouble and increased maintenance costs. A robust way to get something that looks like what LiveServerTestCase does is to create, from unittest.TestCase, a test case class with custom setUpClass and tearDownClass methods. The setUpClass method: Sets up an instance of the Django application with settings appropriate for testing: a database in a location that won't interfere with anything else, logs recorded to the appropriate place and, if emails are sent during normal operations, with email settings that won't make your sysadmins want to strangle you, etc. [In effect, this is a deployment procedure. Since we want to eventually deploy our application, the process above is one which we should develop anyway.] Loads whatever fixtures are necessary into the database. Starts a Gunicorn instance to run this instance of the Django application, using the usual OS commands for this. The tearDownClass: Shuts down the Gunicorn instance, again using normal OS commands. Deletes the database that was created for testing, deletes whatever log files may have been created, etc. And between the setup and teardown our tests contact the application on the port assigned to Gunicorn, and they load more fixtures if needed, etc. Why not try to use a modified LiveServerTestCase? LiveServerTestCase includes the entire test setup in one process: the tests, the WSGI server and the Django application. Gunicorn is not designed for operating like this. For one thing, it uses a master process and worker processes. If LiveServerTestCase is modified to somehow start the Django app in an external process, then a good deal of the benefits of this class go out the window. LiveServerTestCase relies on the fact that it can just modify settings or database connections in its process space and that these modifications will carry over to the Django app, because it lives in the same process. If the app is in a different process, these tricks can't work. Once LiveServerTestCase is modified to take care of this, the end result is close to what I've outlined above. Additional: Could someone get Gunicorn and Django to run in the same process? I'm sure someone could glue them together, but consider the following. This would certainly mean changing the core code of Gunicorn, since Gunicorn is designed to use master and worker processes. Then, this person who created the glue would be responsible for keeping it up to date when Gunicorn or Django's internals change in such a way that makes the glue break. At the end of the day, doing this requires more work than using the method outlined at the start of this answer.
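A bare-bones sketch of that setUpClass/tearDownClass shape; 'myproject.wsgi:application' and the port are placeholders, and a real version would poll the port rather than sleep.

```python
import subprocess
import time
import unittest
import urllib2

class GunicornLiveTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.server = subprocess.Popen(
            ['gunicorn', '--bind', '127.0.0.1:8081',
             'myproject.wsgi:application'])   # placeholder WSGI entry point
        time.sleep(2)                         # crude wait for workers to come up

    @classmethod
    def tearDownClass(cls):
        cls.server.terminate()
        cls.server.wait()

    def test_homepage_responds(self):
        self.assertEqual(
            urllib2.urlopen('http://127.0.0.1:8081/').getcode(), 200)
```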
2
11
0
0
I am running an app in Django with gunicorn. I am trying to use Selenium to test my app but have run into a problem. I need to create a test server, like is done with Django's LiveServerTestCase, that will work with gunicorn. Does anyone have any ideas of how I could do this? Note: could someone also confirm that LiveServerTestCase is executed as a thread, not a process?
How to set up a django test server when using gunicorn?
0
0.197375
1
0
0
1,308
14,441,086
2013-01-21T14:47:00.000
0
0
1
0
0
python,api
0
14,441,199
0
2
0
false
0
0
The process is mostly like this: 1) You write a new library (the wrapper). 2) This library depends on the existing library (the one you are going to wrap). 3) The wrapper calls the underlying library, offering a different API than the original library. Usually you want to do this because the original library doesn't have developer-friendly APIs in the first place. However, you do not say why you are supposed to undertake such a task. Whoever gave you the task should also be able to give you the rationale for the work. The person giving you the task can tell you exactly what is wanted and how to do it. Because there are no details in your question, it is impossible to give a better answer.
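A toy illustration of the idea: the wrapper depends on an existing library (urllib2 here) and exposes a friendlier call than the original; fetch_text is a hypothetical helper name.

```python
import urllib2

def fetch_text(url, timeout=10):
    """Hide the urllib2 plumbing behind one friendly function."""
    response = urllib2.urlopen(url, timeout=timeout)
    try:
        return response.read()
    finally:
        response.close()
```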
1
0
0
0
How do I write a Python helper API to wrap an existing Python library? I have never written anything like this, or maybe I have without realizing it. Can someone tell me what exactly it is and how to do it?
Understanding what is API wrapper around existing library
0
0
1
0
0
159
14,441,670
2013-01-21T15:17:00.000
2
0
0
0
0
python,python-3.x,gtk,gtk3,pygobject
0
14,460,308
0
1
0
true
0
0
I have found the way to do this: return boolean True from the method which handles the key-press-event. Any value that doesn't evaluate to true passes control back to Gtk. In the particular way I implement this editor, the key-press-event signal of the toplevel window is connected to the method __key_event_handler, which basically filters all the keystrokes modified with either the Ctrl or Alt keys and returns True after processing the input, or just passes control back to Gtk otherwise. This way, I can manage all the modified keystrokes, which are meant to be editor commands, without needing to handle insertion of normal characters.
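A sketch of that handler shape with PyGObject/Gtk 3; the command processing itself is elided.

```python
from gi.repository import Gtk, Gdk

def on_key_press(widget, event):
    mods = Gdk.ModifierType.CONTROL_MASK | Gdk.ModifierType.MOD1_MASK  # Ctrl or Alt
    if event.state & mods:
        # ...dispatch the keystroke as an editor command here...
        return True    # swallow the event: Gtk's default bindings never see it
    return False       # unmodified keys fall through to normal text insertion

win = Gtk.Window()
win.connect('key-press-event', on_key_press)
```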
1
2
0
0
I'm writing a source code editor, and I want to disable any of the pre-defined keystrokes, e.g Ctrl-V for paste, how can I do that?
How to disable all default keystrokes on Gtk
0
1.2
1
0
0
383
14,455,871
2013-01-22T10:02:00.000
1
0
0
0
0
python,qt,pyqt,pyqt4,pyside
0
14,540,595
0
2
0
false
0
1
You could use a proxy model that gets its data from two models, one for your default values, the other for your database, and use it to populate your QComboBox.
1
1
0
0
I have populated a combobox with an QSqlQueryModel. It's all working fine as it is, but I would like to add an extra item to the combobox that could say "ALL_RECORDS". This way I could use the combobox as a filtering device. I obviously don't want to add this extra item in the database, how can I add it to the combobox after it's been populated by a model?
Adding an item to an already populated combobox
0
0.099668
1
1
0
243
14,459,821
2013-01-22T13:34:00.000
1
0
1
0
0
python,numbers,digits
0
14,460,639
0
5
0
false
0
0
Any solution that has you trying all permutations of digits will be horribly inefficient, running in O(n!). Just 14 digits (and the multiply operator) would give around 1 trillion combinations! An O(n lg n) solution would be: sort the digits from high to low, concatenate them into one string, and print the string. If you must multiply at least one digit, then: sort; take the highest digit and multiply it by the concatenation of the remaining digits; print the result. If you must multiply at more than one position, you might need to try all permutations (see @Mike's answer).
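The concatenation step in the sorted solution, sketched in Python, also answers the question's "put two digits next to each other and treat the outcome as a number":

```python
digits = [5, 6, 7, 2]
ordered = sorted(digits, reverse=True)            # high to low: [7, 6, 5, 2]
number = int(''.join(str(d) for d in ordered))    # digits side by side -> 7652
print(number)
```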
1
0
0
0
I want to know what is the biggest number you can make from multiplying digits entered by the user like this: 5*6*7*2 OR 567*2 OR 67*25 ...etc so 5/6/7/2 should be entered by the user as variables, but how do I tell python to form a number from two variables (putting the two digits next to each other and treating the outcome as a number by itself).
A number out of digits
0
0.039979
1
0
0
221
14,478,392
2013-01-23T11:16:00.000
0
0
0
0
0
python,google-app-engine,oauth,imap,two-legged
0
14,495,175
0
2
0
false
0
0
try/except logic can be used to handle such exceptions. As Google has retired OAuth 1.0, it's recommended to use OAuth 2.0 instead of OAuth 1.0.
1
1
0
0
I am a Google Apps administrator using xoauth.py and IMAP to download users' mail without their passwords. But this process stops after 1 hour. I searched many blogs and learned that this token expires after 1 hour. Can anyone tell me how to extend that expiration time to never, or how to refresh this token?
xoauth.py IMAP token expires after 1 hour
0
0
1
0
0
326
14,478,615
2013-01-23T11:28:00.000
2
0
1
0
0
python,vim
0
14,479,445
0
4
0
false
0
0
It's not the best solution for your problem but for one file you can reindent the whole file (if you configured the indentation rules to match your taste): Shift+V Shift+G =
1
2
0
0
I want to change the indentation in all my existing(!) Python files from 2-space to 4-space shift width. Any suggestions how to do this in Vim?
How to a posteriori double the indentation levels on all lines of a file in Vim?
0
0.099668
1
0
0
207
14,486,696
2013-01-23T18:25:00.000
2
0
0
1
0
python,daemon,celery
1
14,514,760
0
2
0
false
0
0
Run the Celery daemons under a supervisor, such as supervisord. When the celeryd process dies, the supervisor will spin it back up.
1
2
0
0
In Celery, the only way to conclusively reload modules is to restart all of your celery processes. I know how to remotely shutdown workers (broadcast('shutdown', destination = [<workers>])), but not how to bring them back up. I have a piece of Python code that works to create a daemon process with a new worker in it, but when I try to run it inside Celery as a celery task, I get AssertionError: daemonic processes are not allowed to have children, which I'm guessing is related to the way that the worker pool is set up. Is there a way to override this error in Celery somehow? If not, is there another way to get Celery to start up another worker to replace itself with? Maybe wrap this whole thing in a bash script (though the point of doing this in Python was to avoid using Python to call bash to call Python). If not, is there another way to convince Celery to reload the newest version of the code? the --autoreload flag and broadcast('pool_restart') both do nothing. Example Behavior: Create: @task def add(x,y): return x+y Load up celery, run add.delay(4,4), get back 8. Change add to: @task def add(x,y): return x*y Do magic (currently "Restart Celery") Run add.delay(4,4) again You should get back 16. And I always get back 8, unless I shut down celery and reload it, which I can't do remotely without having a way to bring up worker processes from a remote machine, preferably using a script.
Using a celery task to create a new process
1
0.197375
1
0
0
2,089
14,490,753
2013-01-23T22:42:00.000
0
0
0
0
0
python,popup,screen,draw
0
15,838,631
0
2
0
false
0
1
I am using wxPython and looking for a way to have a popup message. Now I am using a popupmenu in which I append each item of the menu with one line of the message.
1
0
0
0
I'm writing a program and I need to inform the user about some changes with a popup message, but not a popup window. Something like the rectangle informing about new message in Kadu - no window, just a bitmap drawn directly on the screen for a few seconds. I wonder if there is a simple way to do that with win32 package or Tkinter, and handle the event when the user clicks on the rectangle. Actually the message would be constant, so the bitmap might be loaded from a file, but I still don't know how to start. Any ideas, please? Regards, mopsiok
Python - popup message drawn on the screen
0
0
1
0
0
1,969
14,500,185
2013-01-24T11:24:00.000
0
0
1
0
0
python,macos,installation,py2app,dmg
0
14,504,112
0
1
1
true
1
0
I believe what you are looking for is to add #!/usr/bin/python as the first line of your code; this will allow your friend to just double-click on the file and it should open. Just as a warning: OSX does not tell us what version of Python and what standard libraries are going to be present. Also, if they have played around with their settings too much and double-clicking does not work, they will have to choose to open the file in terminal.app in the Utilities folder (/Applications/Utilities/terminal.app). The other idea is to borrow a Mac and compile it with the py2app program that you already mentioned. Otherwise there is no generic binary file that you will be able to compile on Windows and have run on a Mac.
1
1
0
0
I have developed an application for a friend. The application is not that complex, involving only two .py files, main.py and main_web.py, main being the application code and _web being the web interface for it. As the web part was done later, it's kept in this format; I know it can be done as one app, but in order not to complicate it too much, I kept it that way. The two communicate through some files, and the web part uses Flask, so there's a "templates" directory too. Now, I want to make a package or somehow make this easier for distribution on an OSX system. I see that there is a nice py2app thingy, but I am running Windows and I can't really use it since it won't work on Win. I also don't know whether py2app will cause problems, since some configs are in text files in the directory and they change during runtime. So, I am wondering, is there any other way to make a package of this, some sort of setup-like program, or maybe some script or something? A simple "way" of doing this would be to just copy the files into "Documents" and add some shortcuts to the desktop to run those two apps, and that would be it, no need for anything else. A DMG would be fine, but not mandatory.
Make a Python app package/install for Mac
1
1.2
1
0
0
1,071
14,503,078
2013-01-24T13:58:00.000
1
0
0
0
0
python,web-crawler
0
14,503,228
0
6
0
false
1
0
If you are using Google Chrome, you can check the URL that is dynamically called in Network -> Headers of the developer tools; based on that you can identify whether it is a GET or POST request. If it is a GET request, you can find the parameters straight away from the URL. If it is a POST request, you can find the parameters in the Form Data section of Network -> Headers in the developer tools.
1
3
0
0
I want to crawl a website having multiple pages, where when a page number is clicked the page is dynamically loaded. How do I screen scrape it? That is, as the URL is not present as an href, how do I crawl to other pages? Would be grateful if someone helped me on this. PS: The URL remains the same when a different page is clicked.
How to crawl a web site where page navigation involves dynamic loading
1
0.033321
1
0
1
4,658
14,503,387
2013-01-24T14:15:00.000
2
0
0
0
0
python,flask
0
14,527,032
0
1
0
false
1
0
It's hard to answer a question like this, because it is so general. First, your Blueprint needs to be implemented in a way that makes no assumptions about the state of the app object it will be registered with. Second, you'll want to use a configurable url scheme to prevent route conflicts. There are far more nuanced components of this, but without seeing your specific code and problem it's about as specific as I feel I can get.
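A sketch of double-mounting under the Flask of that era (later Flask versions require a distinct name= per registration); the blueprint's routes must not hard-code one mount point.

```python
from flask import Flask, Blueprint

bp = Blueprint('widgets', __name__)

@bp.route('/')
def index():
    return 'widget index'

app = Flask(__name__)
app.register_blueprint(bp, url_prefix='/a')   # same blueprint, two URL schemes
app.register_blueprint(bp, url_prefix='/b')
```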
1
3
0
0
The Flask documentation says that you can register blueprints multiple times, though not every blueprint might respond properly to that; in fact, whether a blueprint can be mounted more than once depends on how it is implemented. But I can't seem to find out what must be done to mount a blueprint safely more than once.
Implementing a Flask blueprint so that it can be safely mounted more than once?
0
0.379949
1
0
0
636
14,509,986
2013-01-24T20:19:00.000
0
0
1
0
0
python
0
60,312,770
0
8
0
false
0
0
I'm using has_leading_zero = re.match(r'0\d+', str(data)) as a solution that accepts any number and treats 0 as a valid number without a leading zero
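A different loop-free technique, using str.lstrip rather than a regex, counts the leading zeros directly:

```python
s = '01110000'
count = len(s) - len(s.lstrip('0'))   # 1 here; 0 when s starts with '1'
```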
1
9
0
0
I have a binary string, say '01110000', and I want to return the number of leading zeros in front without writing a for loop. Does anyone have any idea how to do that? Preferably a way that also returns 0 if the string immediately starts with a '1'
Python trick in finding leading zeros in string
0
0
1
0
0
10,560
14,511,669
2013-01-24T22:06:00.000
1
0
0
0
0
python,google-drive-api
0
14,556,472
0
2
0
true
0
0
As you have correctly stated, you will need to keep (or work out) the file hierarchy for a changed file to know whether a file has changed within a folder tree. There is no way of knowing directly from the changes feed whether a deeply nested file within a folder has been changed. Sorry.
2
3
0
0
I am currently working on an app that syncs one specific folder in a user's Google Drive. I need to find out when any of the files/folders in that specific folder have changed. The actual syncing process is easy, but I don't want to do a full sync every few seconds. I am considering one of these methods: 1) Monitor the changes feed and look for any file changes. This method is easy, but it will cause a sync if ANY file in the drive changes. 2) Frequently request all files in the whole drive, e.g. service.files().list().execute(), and look for changes within the specific tree. This is a brute-force approach. It will be too slow if the user has thousands of files in their drive. 3) Start at the specific folder and move down the folder tree looking for changes. This method will be fast if there are only a few directories in the specific tree, but it will still lead to numerous API requests. Are there any better ways to find whether a specific folder and its contents have changed? Are there any optimisations I could apply to methods 1, 2 or 3?
How can I find if the contents in a Google Drive folder have changed
0
1.2
1
0
1
547
14,511,669
2013-01-24T22:06:00.000
0
0
0
0
0
python,google-drive-api
0
20,685,131
0
2
0
false
0
0
There are a couple of tricks that might help. Firstly, if your app is using the drive.file scope, then it will only see its own files. Depending on your specific situation, this may equate to your folder hierarchy. Secondly, files can have multiple parents. So when creating a file in folder-top/folder-1/folder-1a/folder-1ai, you could declare both folder-1ai and folder-top as parents. Then you simply need to check for folder-top.
2
3
0
0
I am currently working on an app that syncs one specific folder in a user's Google Drive. I need to find out when any of the files/folders in that specific folder have changed. The actual syncing process is easy, but I don't want to do a full sync every few seconds. I am considering one of these methods: 1) Monitor the changes feed and look for any file changes. This method is easy, but it will cause a sync if ANY file in the drive changes. 2) Frequently request all files in the whole drive, e.g. service.files().list().execute(), and look for changes within the specific tree. This is a brute-force approach. It will be too slow if the user has thousands of files in their drive. 3) Start at the specific folder and move down the folder tree looking for changes. This method will be fast if there are only a few directories in the specific tree, but it will still lead to numerous API requests. Are there any better ways to find whether a specific folder and its contents have changed? Are there any optimisations I could apply to methods 1, 2 or 3?
How can I find if the contents in a Google Drive folder have changed
0
0
1
0
1
547
14,512,022
2013-01-24T22:30:00.000
2
0
1
0
0
python,amazon-web-services,boto
0
14,512,942
0
1
0
true
0
0
The installer does not create a config file for you. You have to manually create one. You can create a system-wide config file in /etc/boto.cfg or a personal config file in ~/.boto. Or you can create one somewhere else and point the BOTO_CONFIG environment variable at it.
1
0
0
0
I am trying to set my AWS id and secret in boto.cfg for the AWS python library "boto". I've installed it on my Mac with pip install and I don't see it in /etc. How do I locate and access it, or do I have to create it? And if so, how?
how to access and edit python library config file
0
1.2
1
0
0
150
14,516,737
2013-01-25T06:45:00.000
1
0
0
0
0
python,django,django-views,django-database
0
14,516,949
0
2
0
false
1
0
There are some cheats for this. The general solution is to include the initialization code in some special place, so that when the server starts up, it will run those files and also run the code. Have you ever tried putting print 'haha' in the settings.py file? :) Note: be aware that settings.py runs twice during start-up
1
6
0
0
I want to initialize some variables (from the database) when Django starts. I am able to get the data from the database, but the problem is how I should call the initialize method, and it should only be called once. I tried looking in other pages, but couldn't find an answer. The code currently looks something like this: def get_latest_dbx(request, ....): #get the data from database def get_latest_x(request): get_latest_dbx(request,x,...) def startup(request): get_latest_x(request)
Django : Call a method only once when the django starts up
0
0.099668
1
0
0
8,360
14,527,010
2013-01-25T17:18:00.000
0
0
1
0
0
python,ftp,zip
0
14,527,258
0
4
0
false
0
0
The zipfile module can be used to extract things from a zip file; the ftplib would be used to access the zipfile. Unfortunately, ftplib doesn't provide a file-like object for zipfile to use to access the contents of the file. I suppose you could read the zip & store it in memory, for example in a string, which could then be wrapped up in a file-like object (StringIO), although you're still getting the whole zip, just not saving it to disk. If you don't need to save the individual files, but just access (i.e. read) them, zipfile will allow you to do this.
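A sketch of that in-memory approach (Python 2 names; the host, credentials, and file name are placeholders):

```python
import ftplib
import zipfile
from StringIO import StringIO        # io.BytesIO on Python 3

ftp = ftplib.FTP('ftp.example.com')  # placeholder host
ftp.login('user', 'password')
buf = StringIO()
ftp.retrbinary('RETR archive.zip', buf.write)   # whole zip lands in memory
ftp.quit()

archive = zipfile.ZipFile(buf)       # zipfile happily reads a file-like object
for name in archive.namelist():
    data = archive.read(name)        # extracted contents, never written to disk
    print('%s: %d bytes' % (name, len(data)))
```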
1
1
0
0
I'm trying to download a zip file via ftp, but then extract the files inside without ever actually saving the zip. Any idea how I do this?
Download zip file via FTP and extract files in memory in Python
0
0
1
0
0
7,256
14,535,650
2013-01-26T09:36:00.000
1
0
0
0
0
python,math,matrix,permutation,itertools
0
14,535,721
0
2
0
false
0
0
Just pull the placed numbers out of the permutation set. Then insert them into their proper position in the generated permutations. For your example you'd take out 1, 16, 4, 13. Permute on (2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15), for each permutation, insert 1, 16, 4, 13 where you have pre-selected to place them.
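A sketch of that pull-out-and-reinsert scheme for the 4x4 case; the corner placement follows the question's example (1, 16 on one diagonal's corners, 4, 13 on the other's).

```python
from itertools import permutations

fixed = {0: 1, 15: 16, 3: 4, 12: 13}     # flattened 4x4 corner indices
rest = [n for n in range(1, 17) if n not in fixed.values()]

for perm in permutations(rest):           # 12! instead of 16! candidates
    square = list(perm)
    for pos in sorted(fixed):             # ascending inserts keep indices correct
        square.insert(pos, fixed[pos])
    # ...test `square` (rows/columns/diagonals) for the magic property here...
```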
1
0
1
0
So far I have been using Python to generate permutations of matrices for finding magic squares. What I have been doing so far (for 3x3 matrices) is finding all the possible permutations of the set {1,2,3,4,5,6,7,8,9} using itertools.permutations, storing them as a list, doing my calculations, and printing my results. Now I want to find magic squares of order 4. Since finding all permutations means 16! possibilities, I want to increase efficiency by placing likely elements in the corners, for example 1, 16 on diagonal one's corners and 4, 13 on diagonal two's corners. So my question is: how would I find permutations of the set {1,2,...,16} where some elements are not moved?
How would I go about finding all possible permutations of a 4x4 matrix with static corner elements?
1
0.099668
1
0
0
2,502
14,553,762
2013-01-28T00:05:00.000
2
0
1
0
0
python,oop,tree,class-method,instance-method
1
14,553,768
0
1
0
true
0
0
Make a Tree subclass of Node and add the tree-only methods on that class instead. Then make your root an instance of Tree, the rest of your graph uses Node instances.
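A minimal sketch of that split; the actual tree logic is elided.

```python
class Node(object):
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []

    def get_word(self):
        pass   # per-node behavior: walk back toward the root

class Tree(Node):
    """Only the root is a Tree, so tree-wide operations live here."""
    def load_word_into_tree(self, word):
        pass   # safe by construction: self is always the root

root = Tree()
child = Node(parent=root)
root.load_word_into_tree('cat')       # fine
# child.load_word_into_tree('cat')    # AttributeError: plain Nodes lack it
```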
1
2
0
0
Need a way to indicate/enforce that certain methods can only be done on the root Node of my tree data structure. I'm working in Python 2.x. I have a class, Node, which I use in conjunction with another class, Edge, to build a tree data structure (edges are letters and nodes are words, in this case). Some methods in Node are needed for every instance of the Node, like get_word, which runs backwards through the tree to determine the word being represented by that Node. But other Node operations, like load_word_into_tree, seem more like class methods -- they operate on the entire tree. Furthermore, the way I have structured that call, it requires the root node and the root node only as its input. If it is called on any other node, it'll totally mess up the tree. I see two options: Make load_word_into_tree an instance method, but raise an error if it is called on any Node that isn't the root. I'm leaning towards this, but something just seems not right about it. In my mind, instance methods are methods that every instance should need, and to have this method tacked on to every Node when it can only ever be used for the root seems like a waste. Make load_word_into_tree a class method, but pass it the root node as an arg. This gets around the problem of a 'wasteful' instance method, but also seems like a misuse of the concept of a class method, since it takes a single node as its input. Furthermore, I'm not sure what use I'd have for the required cls variable available to every class method. Any help on where and how to implement this function would be greatly appreciated.
Root node operations in tree data structures
1
1.2
1
0
0
487
14,586,171
2013-01-29T15:18:00.000
1
0
0
1
0
python,matlab,simulink
0
14,650,472
0
1
0
false
0
0
Yes, it is quite possible. You should take a look at "Real-Time Testing" document which you can find in your dSPACE installation directory.
1
1
0
0
I hope someone can help us. We are using a dSPACE 1103 with Simulink/Matlab and ControlDesk. What I would like to know is: is it possible to use Python in ControlDesk to transfer data into the dSPACE system from the network? I mean, write a UDP listener in Python and use that script to update variables inside the Simulink/Matlab model? Or is there any other good way to transfer data from a program into ControlDesk such that the changes are sent to dSPACE? Another question: how long does it normally take for a change to a variable in ControlDesk to take effect inside dSPACE (1-2 ms)? Is this completely stochastic or more or less constant? Thanks a lot.
Python and ControlDesk interaction
1
0.197375
1
0
0
3,445
14,592,390
2013-01-29T21:12:00.000
4
1
0
1
0
python,linux,bash,user-interface,command-line-interface
1
14,592,451
0
5
0
false
0
0
It can check the value of $DISPLAY to see whether or not it's running under X11, and tty to see whether it's attached to an interactive terminal. With something like if [[ $DISPLAY ]] && ! tty -s; then (the -s flag keeps tty from printing), chances are good you'd want to display a GUI popup.
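A Python version of the same check might look like this (show_gui_popup is a hypothetical stand-in for whatever GUI toolkit call you use):

    import os
    import sys

    if os.environ.get('DISPLAY') and not sys.stdin.isatty():
        show_gui_popup('Something went wrong')   # hypothetical GUI path
    else:
        sys.stderr.write('Something went wrong\n')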
1
4
0
0
I want to do the following: If the bash/python script is launched from a terminal, it shall do something such as printing an error message. If the script is launched from a GUI session, e.g. by double-clicking in a file browser, it shall do something else, e.g. display a GUI message box.
How can a Linux program, e.g. a bash or python script, know how it was started: from the command line or an interactive GUI?
0
0.158649
1
0
0
788
14,592,874
2013-01-29T21:44:00.000
2
1
0
0
0
python,twitter
0
14,593,070
0
1
1
true
0
0
I think you should use the "max_id" parameter in your URL. since_id only returns tweets newer than the given id, so to page backwards through older results you should set max_id to just below the id of the oldest tweet on your current page.
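A sketch of that paging loop (fetch is a hypothetical stand-in for your authenticated call to the search/tweets.json endpoint, and wanted/query are assumed to be defined):

    tweets, max_id = [], None
    while len(tweets) < wanted:
        params = {'q': query, 'count': 100}
        if max_id is not None:
            params['max_id'] = max_id
        page = fetch(params)     # hypothetical: returns a list of tweet dicts
        if not page:
            break
        tweets.extend(page)
        max_id = min(t['id'] for t in page) - 1   # resume just below the oldest id seen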
1
0
0
0
So I'm trying to run a search query through the Twitter API in Python. I can get it to return up to 100 results using the "count" parameter. Unfortunately, version 1.1 doesn't seem to have the "page" parameter that was present in 1.0. Is there some sort of alternative for 1.1? Or, if not, does anyone have any suggestions for alternative ways to get a decent number of tweets returned for a subject? Thanks. Update with solution: Thanks to Ersin below. I queried as I normally would for a page, and when it returned I would check the id of the oldest tweet. I'd then use this as the max_id in the next URL.
Returning more than one page in Python Twitter search
1
1.2
1
0
1
297
14,614,196
2013-01-30T21:32:00.000
2
0
0
1
0
python,unix,cron
0
14,614,330
0
2
0
false
0
0
Have the program run every 5 hours -- on *nix, cron is the default solution for this. Have the program efficiently run in the background -- with cron, the program will run in the background on your server and the user shouldn't be adversely affected by it. (If the user loads a page viewing MP3s you have scraped and hits refresh while your script is in the middle of saving data to the database, the new MP3s might show up; I don't know if that is what you had in mind by "without the user knowing".) Have the program activate on startup -- cron entries persist across reboots, so just make sure the cron daemon is started on boot.
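For reference, the crontab entry (added with crontab -e) for "every 5 hours" might look like this; the interpreter and script paths are illustrative:

    0 */5 * * * /usr/bin/python /path/to/updater.py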
1
2
0
0
I have a fairly light script that I want to run periodically in the background every 5 hours or so. The script runs through a few different websites, scans them for new material, and either grabs .mp3 files from them or likes songs on youtube based on their content. There are a few things I want to achieve with this program that I am unsure of how to attain: Have the program run every 5 hours -- I'm not too familiar with system-level timing operations. Have the program efficiently run in the background -- I want these 'updates' to occur without the user knowing. Have the program activate on startup -- I know how I would set this up as a user, but I'm not sure how to add such a configuration to the Python file, if that's even possible. Keep in mind that this is going to be a simple .py script -- I'm not compiling it into an executable. The program is designed mainly with OSX and other Unix-based systems in mind. Any advice on achieving some of these goals?
How to run a python background process periodically
0
0.197375
1
0
0
2,035
14,633,329
2013-01-31T19:11:00.000
1
0
1
0
0
python,gis,census
0
14,633,770
0
3
0
false
0
0
Well, I got it: ex = 'Block 2022, Block Group 2, Census Tract 1, Shelby County, Tennessee'; new_id = '47157' + ex[40:len(ex)-26].zfill(4) + '0' + ex[24] + ex[6:10]. The state and county values are constant, and block groups only go to one digit (afaik).
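The same mapping with a regex instead of fixed string offsets might look like this (it assumes the label format is exactly as shown and that the state+county prefix is the constant '47157'):

    import re

    def block_geoid(label, state_county='47157'):
        m = re.match(r'Block (\d+), Block Group (\d), Census Tract (\d+)', label)
        block, group, tract = m.groups()
        return state_county + tract.zfill(4) + '0' + group + block.zfill(4)

    print block_geoid('Block 2022, Block Group 2, Census Tract 1, Shelby County, Tennessee')
    # -> '471570001022022'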
1
2
0
0
I've been tasked to dig through census data for things at the block level. After learning how to navigate and find what I'm looking for, I hit a snag. tabblock polygons (block-level polygons) have an id consisting of a 15-character string, e.g. '471570001022022', but the format from the census data is labelled: 'Block 2022, Block Group 2, Census Tract 1, Shelby County, Tennessee'. The block id is formatted state-county-tract-group-block, with some leading zeros to make 15 characters: sscccttttggbbbb. Does anyone know a quick way to get this into a usable format? I thought I would ask before I spend my time trying to cook up a Python script. Thanks, gm
Reformat census title
0
0.066568
1
0
0
88
14,635,693
2013-01-31T21:42:00.000
2
0
0
0
0
python,google-app-engine,mapreduce
0
20,688,782
0
1
0
true
0
0
I don't think such functionality exists (yet?) in the GAE Mapreduce library. Depending on the size of your dataset and the type of output required, you can hack your way around it with a small time investment by co-opting the reducer as another output writer. For example, if one of the reducer outputs should go straight back to the datastore, and another output should go to a file, you could open a file yourself and write the outputs to it. Alternatively, you could serialize and explicitly store the intermediate map results to a temporary datastore using operation.db.Put, and perform separate Map or Reduce jobs on that datastore. Of course, that will end up being more expensive than the first workaround. In your specific key-value example, I'd suggest writing to a Google Cloud Storage file and post-processing it to split it into three files as required. That'll also give you more control over the final file names.
1
0
1
0
I have a data set on which I do multiple mappings. Assuming that I have 3 key-value pairs for the reduce function, how do I modify the output such that I have 3 blobfiles - one for each of the key-value pairs? Do let me know if I can clarify further.
GAE MapReduce, How to write Multiple Outputs
0
1.2
1
0
0
139
14,647,317
2013-02-01T13:26:00.000
2
0
1
0
0
python,python-2.7,virtualenv,pycharm,pythonpath
0
14,661,799
0
3
0
true
0
0
Even when a package is not using setuptools, pip monkeypatches setup.py to force it to use setuptools. Maybe you can remove that PYTHONPATH hack and pip install -e /path/to/package instead.
1
6
0
0
Yesterday, I edited the bin/activate script of my virtualenv so that it sets the PYTHONPATH environment variable to include a development version of some external package. I had to do this because the setup.py of the package uses distutils and does not support the develop command à la setuptools. Setting PYTHONPATH works fine as far as using the Python interpreter in the terminal is concerned. However, just now I opened the project settings in PyCharm and discovered that PyCharm is unaware of the external package in question - PyCharm lists neither the external package nor its path. Naturally, that's because PyCharm does not (and cannot reliably) parse or source the bin/activate script. I could manually add the path in the PyCharm project settings, but that means I have to repeat myself (once in bin/activate, and again in the PyCharm project settings). That's not DRY and that's bad. Creating, in site-packages, a symlink that points to the external package is almost perfect. This way, at least the source editor of PyCharm can find the package and so does the Python interpreter in the terminal. However, somehow PyCharm still does not list the package in the project settings and I'm not sure if it's ok to leave it like that. So how can I add the external package to my virtualenv/project in such a way that… I don't have to repeat myself; and… both the Python interpreter and PyCharm would be aware of it?
PYTHONPATH vs symbolic link
0
1.2
1
0
0
2,662
14,660,277
2013-02-02T08:58:00.000
0
0
1
0
0
python
0
14,660,409
0
2
0
false
0
0
Often, packages are split into the parts needed for use and the parts required for development. You probably have the use parts installed (interpreter, modules), but the libraries for developing modules are missing. Look for a python-dev package.
1
1
0
0
I'm building a package with Python 2.7.3 and gcc 4.7.2. The build is scons-based, and it terminates complaining that it can't find the 'C library for python2.7'. What is this C library and how do I build it?
How to build a C library for python
0
0
1
0
0
109
14,667,218
2013-02-02T22:31:00.000
0
0
1
0
0
python,string
0
14,667,497
0
5
0
false
0
0
You don't want to assign each letter to a separate variable. Then you'd be writing the rest of your program without even being able to know how many variables you have defined! That's an even worse problem than dealing with the whole string at once. What you instead want to do is have just one variable holding the string, and refer to individual characters in it with indexing. Say the string is in s; then s[0] is the first character, s[1] is the second character, etc. And you can find out how far up the indexes go by checking len(s) - 1 (because indexes start at 0, a length-1 string has maximum index 0, a length-2 string has maximum index 1, etc.). That's much more manageable than figuring out how to generate len(s) variable names, assign them all to a piece of the string, and then know which variables you need to reference. Strings are immutable though, so you can't assign to s[1] to change the 2nd character. If you need to do that, you can instead create a list with e.g. l = list(s). Then l[1] is the second character, and you can assign l[1] = something to change that element of the list. When you're done, you can get a new string out with s_new = ''.join(l) (join concatenates a sequence of strings, using the string it was invoked on as the separator between elements; here we join a list of single-character strings with the empty string as the separator, which simply glues them back into one string).
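A quick interactive illustration of these points:

    s = 'test'
    print s[0], s[3]      # 't' 't' -- index 0 is the first character
    print len(s)          # 4
    l = list(s)           # strings are immutable, so copy into a list to edit
    l[1] = 'E'
    print ''.join(l)      # 'tEst'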
1
0
0
0
I need to make a program in which the user inputs a word and I need to do something with each individual letter in that word. They cannot enter it one letter at a time, just one word. E.g. someone enters "test": how can I make my program know that it is a four-letter word and how to break it up, say by making my program create four variables, each set to a different letter? It should also be able to work with bigger and smaller words. Could I use a for statement? Something like: for each letter, set that letter to a variable. But what if it was, say, a 20-character word -- how would the program come up with all the variable names?
Python, breaking up Strings
0
0
1
0
0
3,180
14,667,578
2013-02-02T23:15:00.000
1
0
1
0
0
python,list,data-structures,set,append
0
14,667,595
0
5
0
false
0
0
You could probably use a set object instead. Just add numbers to the set; it inherently does not hold duplicates.
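For example, with a set for membership, or an explicit check if you need to keep a list and its insertion order:

    numbers = set()
    for n in [3, 1, 3, 2]:
        numbers.add(n)            # duplicates are silently ignored
    print sorted(numbers)         # [1, 2, 3]

    ordered = []
    for n in [3, 1, 3, 2]:
        if n not in ordered:      # the check the asker was after
            ordered.append(n)
    print ordered                 # [3, 1, 2]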
1
37
0
0
I am writing a Python program where I will be appending numbers to a list, but I don't want the numbers in the list to repeat. So how do I check whether a number is already in the list before I do list.append()?
check if a number already exists in a list in python
0
0.039979
1
0
0
180,790
14,669,819
2013-02-03T05:38:00.000
0
0
0
1
0
python,google-app-engine
0
14,670,069
0
2
0
false
1
0
Yes and no. App Engine is great in terms of reliability, server speed, features, etc. However, it has two main drawbacks: You are in a sandboxed environment (no filesystem access, must use the datastore), and you are paying by instance hour. Normally, if you're just hosting a small server accessed once in a while, you can get free hosting; if you are running a cron job all day every day, you must use at least one instance at all times, thus costing you money. Your concerns about speed and propagation across Google's servers are moot; they have a global time server pulsating through their datacenters ensuring your operations are atomic; if you request data with consistency=STRONG, so long as your get begins after the put, you will see the updated data.
1
0
0
0
I have a python script that creates a few text files, which are then uploaded to my current web host. This is done every 5 minutes. The text files are used in a software program which fetches the latest version every 5 min. Right now I have it running on my web host, but I'd like to move to GAE to improve reliability. (Also because my current web host does not allow for just plain file hosting, per their TOS.) Is google app engine right for me? I have some experience with python, but none related to web technologies. I went through the basic hello world tutorial and it seems pretty straightforward for a website, but I don't know how I would implement my project. I also worry about any caching which could cause the latest files not to propagate fast enough across google's servers.
Is google app engine right for me (hosting a few rapidly updating text files created w/ python)
0
0
1
0
0
108
14,672,039
2013-02-03T11:36:00.000
1
0
0
0
0
python,keypress,joystick,analog-digital-converter
0
14,709,657
0
1
0
true
0
1
Use PostMessage with the WM_CHAR message (if you're using Windows - you didn't say).
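A minimal ctypes sketch of that idea, assuming Windows; the window title is illustrative and read_joystick is a hypothetical stand-in for your input code:

    import ctypes

    WM_CHAR = 0x0102
    user32 = ctypes.windll.user32
    hwnd = user32.FindWindowW(None, u'Untitled - Notepad')  # illustrative target window

    value = read_joystick()             # hypothetical: returns -63..+63
    if value != 0:
        key = u'w' if value > 0 else u's'
        user32.PostMessageW(hwnd, WM_CHAR, ord(key), 0)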
1
0
0
0
In Python I want to simulate a joystick that, when used, gives values between -63 and +63, let's say. When the value is positive I want to press the "w" key, and the "s" key when negative. I am not having problems receiving the values, but I am stuck on transforming these analog values into digital key presses. Does anyone have any idea how to do it (code can be in any language, I just need a general idea)?
Transforming analog keypresses to digital
0
1.2
1
0
0
118
14,677,548
2013-02-03T21:34:00.000
0
0
0
0
0
python,django,django-forms
0
14,678,289
0
1
0
true
1
0
I would suggest you use separate view logic for upvoting and downvoting. Something like this: /upvote/{{comment.pk|urlize}}, and then write a view that handles this URL. Using the pk, find the comment that the user is trying to up/down vote, then write the necessary condition to check whether the user is authorized to perform that kind of action, and finally execute the action. I hope this helps
1
3
0
0
I want to make upvote and downvote buttons for comments, but I want all the form inputs that django.contrib.comments.forms.CommentSecurityForm gives me, to make sure the form is secure. Is that necessary? And if so, how do I make a form class with upvote and downvote buttons? Custom checkbox styles?
Upvote and Downvote buttons with Django forms
1
1.2
1
0
0
829
14,681,697
2013-02-04T06:52:00.000
0
0
1
0
0
python-2.7,wxpython
0
14,692,494
0
1
0
true
0
1
You probably won't be able to show accurate progress unless your build scripts generate some progress information in their output that you can parse and represent in your GUI. Either way, you will probably want to use wx.ProgressDialog or a wx.Gauge in your main panel or something like that. Both wx.ProgressDialog and wx.Gauge can be used in a mode that shows actual values (a percentage complete) or in an 'indeterminate' mode that shows that something is happening without telling how far along it is.
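A minimal indeterminate-mode sketch (in real use you would call Pulse periodically from the code driving the checkout/build, e.g. via wx.CallAfter from a worker thread):

    import wx

    app = wx.App(False)
    dlg = wx.ProgressDialog('Build', 'Checking out and building...',
                            style=wx.PD_APP_MODAL | wx.PD_ELAPSED_TIME)
    dlg.Pulse()     # call this repeatedly while the work runs
    # ... checkout and build happen here ...
    dlg.Destroy()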
1
0
0
0
I have developed a Python script for checking out source code from a repository and building it using Visual Studio. When I run the script, a GUI opens (developed using wxPython) which shows a button; clicking it does the checkout and build. I would like to show a progress bar indicating the running process when I click on the button, and a success message after the script finishes its work. Please help me out. Thanks in advance.
Show progress bar when running a script
0
1.2
1
0
0
400
14,687,281
2013-02-04T12:58:00.000
0
1
0
0
0
python,django,django-testing
0
14,689,563
0
1
0
false
0
0
Try to test every way your custom field could be used. For example, try passing it different kinds of data (strings, integers, blanks, different image formats, etc.) and check whether it works according to your expectations.
1
0
0
0
I have developed a custom field that extends ImageField, and this custom field dynamically creates 2 more normal fields. Now, I need to write tests for this custom field. What tests are needed for this custom field? Can you name them so that I can code those test cases? I am not asking technically how to write a test - I don't know how, but I will learn. What I want to know is: what are the things I need to test here?
What tests do I need to write for the customfield that I have developed?
0
0
1
0
0
32
14,692,822
2013-02-04T18:11:00.000
1
0
1
0
0
python,py2exe
0
14,693,751
0
1
0
false
0
1
I think it is possible. I'm not sure how py2exe works, but I know how pyinstaller does, and since both do the same thing it should work similarly. Namely, the one-file flag doesn't really create one file. It looks like that to the end user, but when the user runs the app, it unpacks itself and the files are stored somewhere physically. You could try to edit some source file (e.g. numbers.py or data.py) and pack it again with the changed data. I know it's not the best explanation - you will have to think further on your own. I'm just showing you a possible way.
1
3
0
0
I have seen a similar question on this site, but it was not answered correctly for my requirements. I am reasonably familiar with py2exe. I'd like to create a program (in python and py2exe) that I can distribute to my customers which would enable them to add their own data (not code, just numbers) and redistribute it as a new/amended exe for further distribution (as a single file, so my code + data). I understand this can be done with more than one file. Is this conceptually possible without my customers installing python? I guess I'm asking how to perform the 'bundlefiles' option? Many thanks
compile py2exe from executable
1
0.197375
1
0
0
167
14,720,476
2013-02-06T02:11:00.000
1
0
0
1
1
python,google-app-engine
1
14,723,922
0
3
0
false
1
0
I assume you are using Linux (Ubuntu/Mint); if not, that would be a good start. Debug as much as you can locally using dev_appserver.py - this will display errors on start-up (in the console). Add your own debug logs when needed. Run code snippets in the interactive console - this is really useful for testing snippets of code: if you are on GAE >= 1.7.6, use http://localhost:8000/console; if you are on GAE < 1.7.6, use http://localhost:8080/_ah/admin/interactive/interactive
1
1
0
0
I'm starting to use Google App Engine and being a newcomer to much of the stuff going on here, I broke my webpage (all I see is "server error" in my web browser). I'd like to be able to see a console of some sort which is telling me what's going wrong (python syntax? file not found? something else?). Searching around a bit didn't lead me to a quick solution to this, so I came here. Any advice? Ideally, there would be some sort of tutorial/guide that would show how to do this.
How to monitor google app engine from command line?
0
0.066568
1
0
0
104
14,723,707
2013-02-06T07:25:00.000
1
0
0
0
0
python,bioinformatics,biopython,chemistry
0
14,726,014
0
3
0
false
0
0
A pdb file can contain pretty much anything. A lot of projects allow you to parse them, some specific to biology and pdb files, others less specific but letting you do more (set up calculations, measure distances, angles, etc.). I think you got downvoted because these projects are numerous: you are not the only one wanting to do this, so the chances that something perfectly fitting your needs already exists are really high. That said, if you just want to parse pdb files for this specific need, do it yourself: Open the files with a text editor. Identify where the relevant data are (keywords, etc.). Write a Python function that opens the file and looks for the keywords. Extract the figures from the file. Done. This can be done with a short script written in less than 10 minutes (another reason for the downvotes).
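As a taste of the do-it-yourself route, a crude molecular-mass estimate can be read straight off the ATOM/HETATM records; the element symbol sits in columns 77-78 of a standard PDB line, and the mass table here is deliberately tiny and illustrative:

    MASS = {'H': 1.008, 'C': 12.011, 'N': 14.007, 'O': 15.999, 'S': 32.06}

    def molecular_mass(path):
        total = 0.0
        for line in open(path):
            if line.startswith(('ATOM', 'HETATM')):
                element = line[76:78].strip()   # columns 77-78, as a 0-indexed slice
                total += MASS.get(element, 0.0)
        return total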
1
2
0
0
I am working on a bio project. I have a .pdb (Protein Data Bank) file which contains information about the molecule. I want to find the following for the molecule in the .pdb file: Molecular mass. H-bond donors. H-bond acceptors. LogP. Refractivity. Is there any module in Python which can deal with .pdb files and find these? If not, can anyone please let me know how I can do the same? I found some modules like sequtils and protienparam, but they don't do such things. I have researched first and then posted, so please don't down-vote. Please comment if you still down-vote, as to why you did so. Thanks in advance.
python with .pdb files
0
0.066568
1
0
0
1,814
14,727,517
2013-02-06T11:06:00.000
2
0
1
0
0
python,self
0
14,727,642
0
3
0
true
0
0
No, there isn't. You could, though, use another word instead of self, although the convention is to use "self".
2
5
0
0
Is there any way of making Python methods have access to the class fields/methods without using the self parameter? It's really annoying having to write self. self. self. self. The code gets so ugly, to the point that I'm thinking of not using classes anymore purely for code aesthetics. I don't care about the risks or best practices. I just want not to see self anymore. P.S. I know about the renaming possibility, but that's not the point.
Any ideas about how to get rid of self?
0
1.2
1
0
0
1,285
14,727,517
2013-02-06T11:06:00.000
3
0
1
0
0
python,self
0
14,727,662
0
3
0
false
0
0
The only possible solution (short of making your own no-self version of Python from the sources) is to try another language.
2
5
0
0
Is there any way of making Python methods have access to the class fields/methods without using the self parameter? It's really annoying having to write self. self. self. self. The code gets so ugly, to the point that I'm thinking of not using classes anymore purely for code aesthetics. I don't care about the risks or best practices. I just want not to see self anymore. P.S. I know about the renaming possibility, but that's not the point.
Any ideas about how to get rid of self?
0
0.197375
1
0
0
1,285
14,739,044
2013-02-06T21:22:00.000
6
0
0
1
0
python,google-app-engine,app-engine-ndb
0
14,749,034
0
2
0
false
0
0
One thing that most GAE users will come to realize (sooner or later) is that the datastore does not encourage design according to the formal normalization principles that would be considered a good idea in relational databases. Instead it often seems to encourage design that is unintuitive and anathema to established norms. Although relational database design principles have their place, they just don't work here. I think the basis for the datastore design instead falls into two questions: How am I going to read this data and how do I read it with the minimum number of read operations? Is storing it that way going to lead to an explosion in the number of write and indexing operations? If you answer these two questions with as much foresight and actual tests as you can, I think you're doing pretty well. You could formalize other rules and specific cases, but these questions will work most of the time.
2
11
0
0
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many)) In my understanding, there are three ways to implement it. Use 'parent' argument Use 'repeated' Structured property Use 'repeated' Key property I choose a way based on the logic below usually, but does it make sense to you? If you have better logic, please teach me. Use 'parent' argument Transactional operation is required between these entities Bidirectional reference is required between these entities Strongly intend 'Parent-Child' relationship Use 'repeated' Structured property Don't need to use 'many' entity individually (Always, used with 'one' entity) 'many' entity is only referred by 'one' entity Number of 'repeated' is less than 100 Use 'repeated' Key property Need to use 'many' entity individually 'many' entity can be referred by other entities Number of 'repeated' is more than 100 No.2 increases the size of entity, but we can save the datastore operations. (We need to use projection query to reduce CPU time for the deserialization though). Therefore, I use this way as much as I can. I really appreciate your opinion.
Effective implementation of one-to-many relationship with Python NDB
1
1
1
1
0
1,389
14,739,044
2013-02-06T21:22:00.000
7
0
0
1
0
python,google-app-engine,app-engine-ndb
0
14,740,062
0
2
0
false
0
0
A key thing you are missing: How are you reading the data? If you are displaying all the tasks for a given person on a request, 2 makes sense: you can query the person and show all his tasks. However, if you need to query say a list of all tasks say due at a certain time, querying for repeated structured properties is terrible. You will want individual entities for your Tasks. There's a fourth option, which is to use a KeyProperty in your Task that points to your Person. When you need a list of Tasks for a person you can issue a query. If you need to search for individual Tasks, then you probably want to go with #4. You can use it in combination with #3 as well. Also, the number of repeated properties has nothing to do with 100. It has everything to do with the size of your Person and Task entities, and how much will fit into 1MB. This is potentially dangerous, because if your Task entity can potentially be large, you might run out of space in your Person entity faster than you expect.
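A sketch of that fourth option in NDB (model and field names are illustrative, and person_key is assumed to be the ndb.Key of an existing Person):

    from google.appengine.ext import ndb

    class Person(ndb.Model):
        name = ndb.StringProperty()

    class Task(ndb.Model):
        person = ndb.KeyProperty(kind=Person)   # points back at the owner
        title = ndb.StringProperty()

    # All tasks for one person:
    tasks = Task.query(Task.person == person_key).fetch()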
2
11
0
0
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many)) In my understanding, there are three ways to implement it. Use 'parent' argument Use 'repeated' Structured property Use 'repeated' Key property I choose a way based on the logic below usually, but does it make sense to you? If you have better logic, please teach me. Use 'parent' argument Transactional operation is required between these entities Bidirectional reference is required between these entities Strongly intend 'Parent-Child' relationship Use 'repeated' Structured property Don't need to use 'many' entity individually (Always, used with 'one' entity) 'many' entity is only referred by 'one' entity Number of 'repeated' is less than 100 Use 'repeated' Key property Need to use 'many' entity individually 'many' entity can be referred by other entities Number of 'repeated' is more than 100 No.2 increases the size of entity, but we can save the datastore operations. (We need to use projection query to reduce CPU time for the deserialization though). Therefore, I use this way as much as I can. I really appreciate your opinion.
Effective implementation of one-to-many relationship with Python NDB
1
1
1
1
0
1,389
14,742,893
2013-02-07T03:13:00.000
2
0
0
0
0
python,matlab,pandas,gps
0
14,754,539
0
5
0
false
0
0
What you can do is use the interp1 function. This function interpolates your y values onto a new x series. For example, if you have x=[1 3 5 6 10 12] and y=[15 20 17 33 56 89], and you want to fill in values for x1=[1 2 3 4 5 6 7 ... 12], you would type y1=interp1(x,y,x1).
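The numpy equivalent, shown with the same numbers (np.interp performs linear interpolation and expects the original x values to be increasing):

    import numpy as np

    x = np.array([1, 3, 5, 6, 10, 12])
    y = np.array([15, 20, 17, 33, 56, 89])
    x1 = np.arange(1, 13)        # the regular grid 1..12
    y1 = np.interp(x1, x, y)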
1
2
1
0
This is probably a very easy question, but all the sources I have found on interpolation in Matlab are trying to correlate two values. All I wanted to know is: if I have data which is collected over an 8-hour period, but the time between data points varies, how do I adjust it such that the time periods are equal and the data remains consistent? Or to rephrase, from the approach I have been trying: I have GPS lat, lon and Unix time for these points; what I want to do is take the lat and lon at time 1 and time 3 and, for the case where I don't know time 2, simply fill it with data from time 1. Is there a functional way to do this? (I know in something like Python pandas you can use fill.) But I'm unsure of how to do this in Matlab.
Interpolation Function
0
0.07983
1
0
0
705
14,780,533
2013-02-08T20:09:00.000
3
0
1
0
0
python
0
14,780,579
0
4
0
false
0
0
You have to escape the \n somehow. Either escape just the backslash, print "\\n", or mark the whole string as raw: print r"\n".
1
0
0
0
I want to print the actual following string: \n , but everything I tried just reads it in and treats it as a newline operator. How do I just print the following: \n ?
How to print the string "\n"
0
0.148885
1
0
0
213
14,786,072
2013-02-09T07:49:00.000
2
1
0
0
0
python,django,settings
0
53,798,521
0
7
0
false
1
0
Here's one way to do it that is compatible with deployment on Heroku: Create a gitignored file named .env containing: export DJANGO_SECRET_KEY='replace-this-with-the-secret-key' Then edit settings.py to remove the actual SECRET_KEY and add this instead: SECRET_KEY = os.environ['DJANGO_SECRET_KEY'] Then when you want to run the development server locally, use: source .env and then python manage.py runserver When you finally deploy to Heroku, go to your app's Settings tab and add DJANGO_SECRET_KEY to the Config Vars.
4
29
0
0
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS keys, etc. into settings files has a problem: secrets often should be just that: secret! Keeping them in version control means that everyone with repository access has access to them. My question is how to keep all keys secret?
Keep Secret Keys Out
0
0.057081
1
0
0
26,432
14,786,072
2013-02-09T07:49:00.000
6
1
0
0
0
python,django,settings
0
14,786,575
0
7
0
false
1
0
Store your local_settings.py data in a file encrypted with GPG - preferably as strictly key=value lines which you parse and assign to a dict (the other attractive approach would be to have it as executable python, but executable code in config files makes me shiver). There's a python gpg module, so that's not a problem. Get your keys from your keyring, and use the GPG keyring management tools so you don't have to keep typing in your keychain password. Make sure you are reading the data straight from the encrypted file, and not just creating a decrypted temporary file which you read in; that's a recipe for failure. That's just an outline - you'll have to build it yourself. This way the secret data remains solely in the process memory space, and not in a file or in environment variables.
4
29
0
0
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS keys, etc. into settings files has a problem: secrets often should be just that: secret! Keeping them in version control means that everyone with repository access has access to them. My question is how to keep all keys secret?
Keep Secret Keys Out
0
1
1
0
0
26,432
14,786,072
2013-02-09T07:49:00.000
5
1
0
0
0
python,django,settings
0
14,786,114
0
7
0
false
1
0
Ideally, local_settings.py should not be checked in for the production/deployed server. You can keep a backup copy somewhere else, but not in source control. local_settings.py can be checked in with a development configuration just for convenience, so that each developer need not change it. Does that solve your problem?
4
29
0
0
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS keys, etc. into settings files has a problem: secrets often should be just that: secret! Keeping them in version control means that everyone with repository access has access to them. My question is how to keep all keys secret?
Keep Secret Keys Out
0
0.141893
1
0
0
26,432
14,786,072
2013-02-09T07:49:00.000
0
1
0
0
0
python,django,settings
0
46,735,039
0
7
0
false
1
0
You may need to use os.environ.get("SOME_SECRET_KEY")
4
29
0
0
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS keys, etc. into settings files has a problem: secrets often should be just that: secret! Keeping them in version control means that everyone with repository access has access to them. My question is how to keep all keys secret?
Keep Secret Keys Out
0
0
1
0
0
26,432
14,804,291
2013-02-11T00:27:00.000
0
0
1
0
0
c#,python,.net,sorting,hash
0
14,804,387
0
2
1
false
0
0
Hash codes aren't meant to be unique for unequal objects - there typically will be some collisions. They definitely can't be used to test for equality. Hash codes are used to place objects (hopefully) evenly across a data structure. If you want to test for equality, test whether the coordinates are equal.
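One way to hand out shared ids keyed on the coordinates themselves (the rounding tolerance is an assumption you would tune to your data's precision):

    import itertools

    _ids = {}
    _counter = itertools.count()

    def shared_id(point, digits=6):
        # Round so float noise doesn't split points that should be equal.
        key = tuple(round(c, digits) for c in point)
        if key not in _ids:
            _ids[key] = next(_counter)
        return _ids[key]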
1
0
0
0
I have some points in space where each point has an id. I also have a subset of these points in another group with different id values. How can I create a new type of id for both groups of points so that points with the same coordinates end up using the same id values? I assume I need to generate hash codes from their coordinates, which should give me the same id value for points that have the same coordinates, right? I am confused about how to use them, because the set of hash codes is much smaller than float[3]. So I am not sure if I am on the right track.
How to match arbitrary data using hash codes?
0
0
1
0
0
124
14,815,856
2013-02-11T15:59:00.000
1
0
0
0
0
python,html,compare,match
0
14,815,973
0
2
0
true
1
0
Import urllib and use urllib.urlopen to get the contents of the HTML page. Import re to search for the hash code using a regex. You could also use the find method on the string instead of a regex. If you encounter problems, you can then ask more specific questions; your question is too general.
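Putting those pieces together (Python 2 style, matching the question's era; the file name and URL are placeholders):

    import hashlib
    import urllib

    file_md5 = hashlib.md5(open('download.bin', 'rb').read()).hexdigest()
    page = urllib.urlopen('http://example.com/download-page').read()
    print file_md5 in page   # True if the hash appears anywhere in the HTML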
1
0
0
0
I am making a download manager, and I want the download manager to check the md5 hash of a url after downloading the file. The hash is found on the page. It needs to compute the md5 of the file (this is done), and then search the whole contents of the html page for a matching hash. My question is: how do I make python return the whole contents of the html and find a match for my "md5 string"?
Matching contents of an html file with keyword python
0
1.2
1
0
1
168
14,817,290
2013-02-11T17:17:00.000
0
1
1
0
0
python,cgi
0
14,817,753
0
3
0
false
0
0
The simple (and slow) way is to acquire a lock on the file (in C you'd use flock), write to it and close it. If you think this could become a bottleneck, then use a database or something like that.
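In Python the locking part might look like this on a POSIX system (fcntl is Unix-only, and the file name is illustrative):

    import fcntl

    with open('report.txt', 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
        f.write('one form submission\n')
        fcntl.flock(f, fcntl.LOCK_UN)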
2
0
0
0
I have a simple cgi script in Python collecting a value from form fields submitted through POST. After collecting this, I am dumping these values to a single text file. Now, when multiple users submit at the same time, how do we go about it? In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in Python? Also, opening and closing the file multiple times doesn't seem to be a good idea for every user request. We have our code base for our product in C/C++. I was asked to write a simple cgi script for some reporting purpose and was googling with Python and cgi. Please let me know. Thanks! Santhosh
multiple users doing form submission with python CGI
0
0
1
0
0
300
14,817,290
2013-02-11T17:17:00.000
0
1
1
0
0
python,cgi
0
14,817,362
0
3
0
false
0
0
If you're concerned about multiple users, and considering complex solutions like mutexes or semaphores, you should ask yourself why you're planning on using an unsuitable solution like CGI and text files in the first place. Any complexity you're saving by doing this will be more than outweighed by whatever you put in place to allow multiple users. The right way to do this is to write a simple WSGI app - maybe using something like Flask - which writes to a database, rather than a text file.
2
0
0
0
I have a simple cgi script in Python collecting a value from form fields submitted through POST. After collecting this, I am dumping these values to a single text file. Now, when multiple users submit at the same time, how do we go about it? In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in Python? Also, opening and closing the file multiple times doesn't seem to be a good idea for every user request. We have our code base for our product in C/C++. I was asked to write a simple cgi script for some reporting purpose and was googling with Python and cgi. Please let me know. Thanks! Santhosh
multiple users doing form submission with python CGI
0
0
1
0
0
300
14,824,538
2013-02-12T02:33:00.000
0
0
1
1
0
python,bash,shell,replace
0
14,824,582
0
3
0
false
0
0
Yes, of course. You can simply make an executable Python script, call it /usr/bin/pysh, add this filename to /etc/shells and then set it as your user's default login shell with chsh.
1
2
0
0
I wonder if it is possible to create a bash replacement in Python. I have done REPLs before, and I know about subprocess and that kind of stuff, but I wonder how to use my Python bash-replacement in the OSX terminal as if it were a native shell environment (with limitations). Or simply run ipython as is... P.S. The majority of the Google answers are related to creating shell scripts. I'm interested in creating a shell.
Is it possible to create a shell like bash in python, i.e. a Bash replacement?
1
0
1
0
0
1,426
14,829,562
2013-02-12T09:44:00.000
1
0
0
0
0
python,linux,amazon-web-services,boto,amazon-swf
0
14,829,925
0
4
0
false
1
0
You can use SNS. When script A is completed, it should publish to SNS, and that will trigger a notification to server B.
2
9
0
0
Use Amazon SWF to communicate messages between servers? On server A I want to run a script A. When that is finished I want to send a message to server B to run a script B. If it completes successfully I want it to clear the job from the workflow queue. I'm having a really hard time working out how I can use Boto and SWF in combination to do this. I am not after complete code, but what I am after is for someone to explain a little more about what is involved. How do I actually tell server B to check for the completion of script A? How do I make sure server A won't pick up the completion of script A and try to run script B (since server B should run this)? How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what? I'm pretty confused about all of this. What design should I use?
Using Amazon SWF To communicate between servers
0
0.049958
1
0
1
3,687
14,829,562
2013-02-12T09:44:00.000
5
0
0
0
0
python,linux,amazon-web-services,boto,amazon-swf
0
14,881,688
0
4
0
false
1
0
I don't have any example code to share, but you can definitely use SWF to coordinate the execution of scripts across two servers. The main idea with this is to create three pieces of code that talk to SWF: A component that knows which script to execute first and what to do once that first script is done executing. This is called the "decider" in SWF terms. Two components that each understand how to execute the specific script you want to run on each machine. These are called "activity workers" in SWF terms. The first component, the decider, calls two SWF APIs: PollForDecisionTask and RespondDecisionTaskCompleted. The poll request will give the decider component the current history of an executing workflow, basically the "where am i" state information for your script runner. You write code that looks at these events and figure out which script should execute. These "commands" to execute a script would be in the form of a scheduling of an activity task, which is returned as part of the call to RespondDecisionTaskCompleted. The second components you write, the activity workers, each call two SWF APIs: PollForActivityTask and RespondActivityTaskCompleted. The poll request will give the activity worker an indication that it should execute the script it knows about, what SWF calls an activity task. The information returned from the poll request to SWF can include single execution-specific data that was sent to SWF as part of the scheduling of the activity task. Each of your servers would be independently polling SWF for activity tasks to indicate the execution of the local script on that host. Once the worker is done executing the script, it calls back to SWF through the RespondActivityTaskCompleted API. The callback from your activity worker to SWF results in a new history being handed out to the decider component that I already mentioned. It will look at the history, see that the first script is done, and schedule the second one to execute. Once it sees that the second one is done, it can "close" the workflow using another type of decision. You kick off the whole process of executing the scripts on each host by calling the StartWorkflowExecution API. This creates the record of the overall process in SWF and kicks out the first history to the decider process to schedule the execution of the first script on the first host. Hopefully this gives a bit more context on how to accomplish this type of workflow using SWF. If you haven't already, I would take a look at the dev guide on the SWF page for additional info.
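To make the activity-worker half a little more concrete, here is a heavily simplified boto sketch (the domain and task list names are placeholders, run_script_a is hypothetical, and all error handling is omitted):

    import boto.swf.layer1

    swf = boto.swf.layer1.Layer1()
    task = swf.poll_for_activity_task('my-domain', 'script-a-tasklist')  # long-poll
    if task.get('taskToken'):
        run_script_a()                                  # hypothetical local work
        swf.respond_activity_task_completed(task['taskToken'], result='ok')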
2
9
0
0
Use Amazon SWF to communicate messages between servers? On server A I want to run a script A. When that is finished I want to send a message to server B to run a script B. If it completes successfully I want it to clear the job from the workflow queue. I'm having a really hard time working out how I can use Boto and SWF in combination to do this. I am not after complete code, but what I am after is for someone to explain a little more about what is involved. How do I actually tell server B to check for the completion of script A? How do I make sure server A won't pick up the completion of script A and try to run script B (since server B should run this)? How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what? I'm pretty confused about all of this. What design should I use?
Using Amazon SWF To communicate between servers
0
0.244919
1
0
1
3,687
14,835,315
2013-02-12T14:54:00.000
1
0
0
0
0
python,amazon-web-services,boto
0
14,849,632
0
1
0
true
0
0
It should be a dictionary with these 4 keys (I'm going to push a change that updates the structure type to the dict type). So if you don't want notifications, just specify empty values for the keys: {'Progressing': '', 'Completed': '', 'Warning': '', 'Error': ''}
1
1
0
0
I am trying to create my own pipeline in Elastic Transcoder. I am using the standard boto function create_pipeline(name, input_bucket, output_bucket, role, notifications). Can you please tell me what the notifications structure should look like? So far I have something like this: create_pipeline('test', 'test_start', 'test_end', 'arn:aws:iam::789823056103:role/Elastic_Transcoder_Default_Role', ... ) Thank you!
Understanding the boto documentation for Elastic Transcoder
0
1.2
1
0
0
268
14,845,942
2013-02-13T03:30:00.000
0
0
0
0
0
python,django,api,shopify
0
56,934,353
0
3
0
false
1
0
The documentation is again not promising, but one thing to bear in mind is that there should in fact be an existing collection already created. Find it by using this code: collection_id = shopify.CustomCollection.find(handle=<your_handle>)[0].id. Then add the collection_id and product_id to a Collect object and save. Remember to first save your product (or use an existing one which you can find) and only then save your collect, or else the collect won't know what product it's posting to (via the API), like so: new_product = shopify.Product(); new_product.save(); add_collection = shopify.Collect({'product_id': new_product.id, 'collection_id': collection_id}); add_collection.save(). Also important to note: there is a 1-to-1 relationship between Product and Collect.
1
1
0
0
I am using the Shopify Python API in a django app to interact with my Shopify Store. I have a collection called best sellers. I am looking to create a batch update to this collection - that is, add/remove products to this collection. However, the python API docs do not seem to say much about how to do so. How do I fetch a collection by name? How do I add a product to it? Thank you for your help. This is what I found: x=shopify.CustomCollection.find(handle="best-sellers") y=shopify.Collect() #creates a new collect p = shopify.Product.find(118751076) # gets me the product So the question is: how do I add the product "p" above to the Custom Collection "x"?
Shopify Python API: How do I add a product to a collection?
0
0
1
0
0
4,023
14,863,125
2013-02-13T21:06:00.000
22
0
0
0
1
python,scikit-learn,classification
0
14,864,547
0
2
0
false
0
0
Have you tried passing class_weight="auto" to your classifier? Not all classifiers in sklearn support this, but some do; check the docstrings. Also, you can rebalance your dataset by randomly dropping negative examples and/or over-sampling positive examples (+ potentially adding some slight gaussian feature noise).
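For instance (X_train and y_train stand for your existing training arrays; note that newer sklearn versions spell the option class_weight='balanced'):

    from sklearn.linear_model import LogisticRegression

    clf = LogisticRegression(class_weight='auto')
    clf.fit(X_train, y_train)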
1
22
1
0
I'm solving a classification problem with sklearn's logistic regression in python. My problem is a general/generic one. I have a dataset with two classes/results (positive/negative or 1/0), but the set is highly unbalanced. There are ~5% positives and ~95% negatives. I know there are a number of ways to deal with an unbalanced problem like this, but have not found a good explanation of how to implement it properly using the sklearn package. What I've done thus far is to build a balanced training set by selecting entries with a positive outcome and an equal number of randomly selected negative entries. I can then train the model on this set, but I'm stuck with how to modify the model to then work on the original unbalanced population/set. What are the specific steps to do this? I've pored over the sklearn documentation and examples and haven't found a good explanation.
sklearn logistic regression with unbalanced classes
0
1
1
0
0
18,285
14,868,003
2013-02-14T04:49:00.000
1
0
0
0
0
python,python-2.7,lazy-loading,beautifulsoup
0
16,251,642
0
1
0
true
1
0
It turns out that the problem wasn't BeautifulSoup itself but the dynamics of the page, at least for this specific scenario. The server returns only part of the page, so the request headers need to be analysed and sent accordingly. This isn't a BeautifulSoup problem as such. It is therefore important to take a look at how the data is loaded on a specific site. It's not always a "load the whole page, process the whole page" paradigm; in some cases, you need to load part of the page and send a specific parameter to the server in order to keep loading the rest of the page.
1
3
0
0
I am toying around with BeautifulSoup and I like it so far. The problem is the site I am trying to scrape has a lazy loader... and my scraper only gets one part of the site. Can I have a hint as to how to proceed? Must I look at how the lazy loader is implemented and parametrize anything else?
Crawling a page using LazyLoader with Python BeautifulSoup
0
1.2
1
0
1
1,587
14,869,861
2013-02-14T07:33:00.000
3
0
1
1
1
python,linux
0
14,869,972
1
1
0
true
0
0
Do not try to uninstall the pre-installed Python. Install other Python interpreters side by side (in different directories). You may come across an option to choose the default Python interpreter for your system. Don't change that from the pre-installed one, as that may break some important scripts used by the system. Customize the default Python interpreter for your user only, not for the entire system. (I don't have a Fedora at hand so don't know how that works exactly.) Also have a look at virtualenv for having multiple isolated Python environments with their independent collection of Python modules, and pythonbrew for installing multiple Python interpreters.
1
0
0
0
I have a Fedora virtual machine. It comes with Python pre-installed. I've read that it's not a good idea to uninstall it. I want to install a different version of Python, Enthought Python. Should I try to uninstall the existing Python installation and how would I do that? Should I instead install Enthought Python to a new directory? Will that be a problem with the existing Python installation?
Installing a new distribution of Python on Fedora
0
1.2
1
0
0
91
14,875,450
2013-02-14T13:01:00.000
4
0
0
0
1
python,language-agnostic,machine-learning,object-recognition,pybrain
0
14,877,671
0
2
0
false
0
0
First, a note regarding the classification method to use. If you intend to use the image pixels themselves as features, a neural network might be a fitting classification method. In that case, I think it might be a better idea to train the same network to distinguish between the various objects, rather than using a separate network for each, because it would allow the network to focus on the most discriminative features. However, if you intend to extract synthetic features from the image and base the classification on them, I would suggest considering other classification methods, e.g. SVM. The reason is that neural networks generally have many parameters to set (e.g. network size and architecture), making the process of building a classifier longer and more complicated. Specifically regarding your NN-related questions, I would suggest using a feedforward network, which is relatively easy to build and train, with a softmax output layer, which allows assigning probabilities to the various classes. In case you're using a single network for classification, the question regarding negative examples is irrelevant; for each class, other classes would be its negative examples. If you decide to use different networks, you can use the same counter-examples (i.e. other classes), but as a rule of thumb, I'd suggest showing no more than 2-10 negative examples per positive example. EDIT: based on the comments below, it seems the problem is to decide how well a given image (drawing) fits a given concept, e.g. how similar to a tree the user-supplied tree drawing is. In this case, I'd suggest a radically different approach: extract visual features from each drawing, and perform knn classification, based on all past user-supplied drawings and their classifications (possibly, plus a predefined set generated by you). You can score the similarity either by the nominal distance to same-class examples, or by the class distribution of the closest matches. I know that this is not necessarily what you're asking, but this seems to me an easier and more direct approach, especially given the fact that the number of examples and classes is expected to constantly grow.
1
0
1
0
I was thinking of doing a little project that involves recognizing simple two-dimensional objects using some kind of machine learning. I think it's better that I have each network devoted to recognizing only one type of object. So here are my two questions: What kind of network should I use? The two I can think of that could work are simple feed-forward networks and Hopfield networks. Since I also want to know how much the input looks like the target, Hopfield nets are probably not suitable. If I use something that requires supervised learning and I only want one output unit that indicates how much the input looks like the target, what counter-examples should I show it during the training process? Just giving it positive examples I'm pretty sure won't work (the network will just learn to always say 'yes'). The images are going to be low resolution and black and white.
How to Train Single-Object Recognition?
0
0.379949
1
0
0
2,529
14,891,492
2013-02-15T09:22:00.000
3
1
0
0
0
php,python,aes,pycrypto,phpseclib
0
14,892,493
0
1
0
true
0
0
I strongly recommend you adjust your PHP code to use (at least) a sixteen byte key, otherwise your crypto system is considerably weaker than it might otherwise be. I would also recommend you switch to CBC-mode, as ECB-mode may reveal patterns in your input data. Ensure you use a random IV each time you encrypt and store this with the ciphertext. Finally, to address your original question: According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working. The space character 0x20 is not the same as the null character 0x00.
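If you must decrypt the legacy data anyway, the null-padding is easy to reproduce on the Python side (the key value and the ciphertext variable are placeholders):

    from Crypto.Cipher import AES

    key = '0123456789'                  # the 10-byte key used by phpseclib
    padded = key.ljust(16, '\x00')      # null-pad to a valid AES key size
    cipher = AES.new(padded, AES.MODE_ECB)
    plaintext = cipher.decrypt(ciphertext)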
1
1
0
0
I am working on a data intensive project where I have been using PHP for fetching data and encrypting it using phpseclib. A chunk of the data has been encrypted in AES with the ECB mode -- however the key length is only 10. I am able to decrypt the data successfully. However, I need to use Python in the later stages of the project and consequently need to decrypt my data using it. I tried employing PyCrypto but it tells me the key length must be 16, 24 or 32 bytes long, which is not the case. According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working. What should I do?
Key length issue: AES encryption on phpseclib and decryption on PyCrypto
0
1.2
1
0
0
975
14,899,139
2013-02-15T16:35:00.000
2
0
1
0
0
python,matrix,numpy,integration,exponential
0
14,900,251
0
1
0
false
0
0
Provided A has the right properties, you could transform it to the diagonal form A0 by calculating its eigenvectors and eigenvalues. In the diagonal form, the solution is sol = [exp(A0*b) - exp(A0*a)] * inv(A0), where A0 is the diagonal matrix of the eigenvalues and inv(A0) just contains the inverses of the eigenvalues on its diagonal. Finally, you transform the solution back by multiplying it with the transpose of the eigenvectors from the left and the eigenvectors from the right: transpose(eigvecs) * sol * eigvecs.
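In numpy that recipe might read as follows (it assumes A is diagonalizable with no zero eigenvalues; for a non-symmetric A the back-transform uses inv(V) rather than a transpose):

    import numpy as np

    def integrate_expm(A, a, b):
        # A = V diag(w) inv(V), so the integral of exp(A*x) over [a, b]
        # is V diag((exp(w*b) - exp(w*a)) / w) inv(V).
        w, V = np.linalg.eig(A)
        d = (np.exp(w * b) - np.exp(w * a)) / w
        return np.dot(V, np.dot(np.diag(d), np.linalg.inv(V)))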
1
1
1
0
I have a matrix of the form, say, e^(Ax), where A is a square matrix. How can I integrate it from a given value a to another value b so that the output is a corresponding array?
how to find the integral of a matrix exponential in python
0
0.379949
1
0
0
1,283
14,902,023
2013-02-15T19:34:00.000
1
0
0
0
0
python,django
0
14,902,498
0
2
0
false
1
0
I believe this is not supported out of the box. Off the top of my head, one way to do it would be with a special 404 handler that, having failed to match against any of the defined URLs, treats the request as a request for a static resource. This would be reasonably easy to do in the development environment but significantly more difficult when nginx, Apache, and/or gunicorn get involved. In other words, don't do this. Nest your statics (or put them on a different subdomain) but don't mix the URL hierarchy in this way.
1
2
0
0
Yep, I want it to work like in the Flask framework - there I could set parameters like this: static_folder=os.getcwd()+"/static/", static_url_path="", and all the files that lie in ./static/files/blabla.bla would be accessible at the mysite.com/files/blabla.bla address. I really don't want to add static after mysite.com. But if I set STATIC_URL = '/' in Django, then I can get my static files at this address, yet suddenly I cannot fetch the pages described in urls.py.
Django: how to make STATIC_URL empty?
0
0.099668
1
0
0
294
14,915,048
2013-02-16T21:00:00.000
0
0
0
1
1
python,process,terminate
0
14,915,445
0
2
0
false
0
0
Create a thread when your process starts. Make that thread sleep for the required duration. When that sleep is over, kill the process.
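A minimal sketch of that watchdog (the 5-minute duration is from the question; os._exit is a hard exit that deliberately skips cleanup handlers):

    import os
    import threading

    watchdog = threading.Timer(5 * 60, lambda: os._exit(0))
    watchdog.daemon = True   # don't let the timer itself keep the process alive
    watchdog.start()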
1
0
0
0
How is it possible to get a compiled .exe program written in Python to kill itself after a period of time after it is launched? If I have some code and I compile it into an .exe, then launch it and it stays in a 'running' or 'waiting' state, how can I get it to terminate after a few mins regardless of what the program is doing? The reason why is that the exe that is launched invokes a URL using PAMIE and automates some clicks. What I have noticed is that if the browser is closed the process remains in memory and does not clean itself up. I wanted to find a way to auto-clean up the process after say 5 mins, which is more than enough time. I've tried using psutils to detect the process but that does not work in my case. Any suggestion is greatly appreciated.
Python built exe process to kill itself after a period of time
1
0
1
0
0
1,455
14,923,583
2013-02-17T16:59:00.000
0
0
0
1
0
python,celery
0
71,510,516
0
2
0
false
0
0
I think you can, at least in a chord. When you declare your task with bind=True, you can access self.request. In self.request.chord you can find a detailed dict; in its kwargs or options['chord'] you will find what you're looking for, but it's not an elegant solution. Also, if the parent has been replaced, you will only be able to see the final state.
1
2
0
0
Is it possible to access the arguments with which a parent task A was called, from its child task Z? Put differently, when Task Z gets called in a chain, can it somehow access an argument V that was passed when Task A was fired, but that was not passed through any intermediary nodes between tasks A and Z? And if so, how? Using Celery 3.0 with RabbitMQ as the results backend.
Access Arguments to Parent Task from Subtask in Celery
1
0
1
0
0
960
14,931,793
2013-02-18T08:04:00.000
0
0
0
0
0
python,django,ubuntu,django-templates
0
14,931,827
0
6
0
false
1
0
Should be here: /usr/lib/python2.7/site-packages/django/contrib/admin/templates
3
14
0
0
I have trouble seeing the django/contrib/admin/templates folder. It seems like it is hidden in the /usr/lib/python2.7/dist-packages/ folder; ctrl+h won't help (apparently all django files are hidden). "locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10. Thanks in advance
find django/contrib/admin/templates
0
0
1
0
0
15,440
14,931,793
2013-02-18T08:04:00.000
0
0
0
0
0
python,django,ubuntu,django-templates
0
14,931,847
0
6
0
false
1
0
Since everyone is posting my comment's suggestion, I might as well post it myself. Try looking at: /usr/lib/python2.6/site-packages/django/
3
14
0
0
I have trouble seeing the django/contrib/admin/templates folder. It seems like it is hidden in the /usr/lib/python2.7/dist-packages/ folder; ctrl+h won't help (apparently all django files are hidden). "locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10. Thanks in advance
find django/contrib/admin/templates
0
0
1
0
0
15,440
14,931,793
2013-02-18T08:04:00.000
0
0
0
0
0
python,django,ubuntu,django-templates
0
53,451,127
0
6
0
false
1
0
If you are using Python 3, Django is located in your venv. In my case templates are located at <project_root>/venv/lib/python3.5/site-packages/django/contrib/admin/templates/.
3
14
0
0
I have trouble seeing the django/contrib/admin/templates folder. It seems like it is hidden in the /usr/lib/python2.7/dist-packages/ folder; ctrl+h won't help (apparently all django files are hidden). "locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10. Thanks in advance
find django/contrib/admin/templates
0
0
1
0
0
15,440
14,940,443
2013-02-18T16:06:00.000
0
1
0
0
0
python,aptana,aptana3
0
56,855,673
0
1
0
false
0
0
I accidentally somehow set the project as a PyDev project. To disable this, right-click on the project > PyDev > Remove PyDev Project Config
1
3
0
0
Aptana Studio 3 keeps adding a .pydevproject file; how can I disable Python, or whatever it is that's doing this?
Aptana Studio 3 keeps adding .pydevproject file, how can I disable Python?
0
0
1
0
0
459