Column schema (name: dtype, min to max):
Q_Id: int64, 337 to 49.3M
CreationDate: stringlengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: stringlengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: stringlengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: stringlengths 15 to 29k
Title: stringlengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
14,201,551
2013-01-07T18:08:00.000
3
0
0
1
python,linux,alsa,udev,soundcard
14,203,491
2
false
0
0
The sound card limit is defined as the symbol SNDRV_CARDS in include/sound/core.h. When I increased this seven years ago, I did not go beyond 32 because the card index is used as a bit index for the variable snd_cards_lock in sound/core/init.c, and I did not want to change more than necessary. If you make snd_cards_lock a 64-bit variable, change all accesses to use a 64-bit type, and adjust any other side effect that I might have forgotten about, you should be able to get the kernel to have more ALSA cards. This limit also exists in the alsa-lib package; you will have to change at least the check in snd_ctl_hw_open in src/control/control_hw.c.
1
6
0
I'm working on an educative multiseat project where we need to connect 36 keyboards and 36 USB sound cards to a single computer. We're running Ubuntu Linux 12.04 with the 3.6.3-030603-generic kernel. So far we've managed to get the input from the 36 keyboards, and recognized the 36 sound cards without getting a kernel panic (which happened before updating the kernel). We know the 36 sound cards have been recognized because $ lsusb | grep "Audio" -c outputs 36. However, $ aplay -l lists 32 playback devices in total (including the "internal" sound card). Also, $ alsamixer -c 32 says "invalid card index: 32" (works just from 0 through 31 ; 32 in total too). So my question is, how can I access the other sound cards if they're not even listed with these commands? I'm writing an application in python and there are some libraries to choose from, but I'm afraid they'll also be limited to 32 devices in total because of this. Any guidance will be useful. Thanks.
Need more than 32 USB sound cards on my system
0.291313
0
0
1,112
14,202,844
2013-01-07T19:38:00.000
2
0
0
0
python,flask
14,202,887
2
false
1
0
I'm fairly certain that there is no guarantee of that; it depends on how you're running the application. If you're using Heroku+gunicorn, for example, files changed on Heroku during a request are not kept, i.e. the filesystem is ephemeral, so if you were to change the text file, the changes would not persist through to the next request. Another provider, PythonAnywhere, is not so strict about its filesystem, but again there is no guarantee of one request finishing before the next one starts. Moreover, a server that only handled one request at a time would be useless for a modern web application. If you want a small database, just use SQLite: Python ships with the sqlite3 module for interacting with it.
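A minimal sketch of the SQLite suggestion in a Flask app; the route, table and field names are assumptions:

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB_PATH = "app.db"  # hypothetical on-disk file

@app.route("/notes", methods=["POST"])
def add_note():
    # sqlite3 ships with Python; one short-lived connection per request.
    con = sqlite3.connect(DB_PATH)
    with con:  # commits on success, rolls back on error
        con.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
        con.execute("INSERT INTO notes (body) VALUES (?)",
                    (request.form["body"],))
    con.close()
    return "saved"

if __name__ == "__main__":
    app.run()
```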
1
3
0
I ask because I'm wondering if I can get away with using a text file as a data store for a simple app. If each handler runs to completion, then it seems like I should be able to modify the text file during that request without worrying about conflicts, assuming I close the file at the end of each request. Is this feasible? Is there anything special I need to do in order to use a text file as a data store in a Flask app?
Does a Flask Request Run to Completion before another Request Starts?
0.197375
0
0
1,304
14,202,982
2013-01-07T19:49:00.000
1
0
1
0
python,command-line
14,205,874
4
false
0
0
Honestly, when I want to do this quickly I don't mess with the code, I just use tee. It's a *nix utility that does exactly what you're describing: it splits the pipe so output is both displayed and piped onward. You can further restrict what is displayed with a grep. It's great for debugging something that uses pipes. If this is part of your production system, though, I probably wouldn't pass information through pipes unless you have to; instead, log your errors/warnings and tail -f your log. I know it's not really a Python answer, but it gets the job done.
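For completeness, a pure-Python variant of the same split (not what this answer uses) is to keep data on stdout for the pipe and send human-readable messages to stderr for the terminal; a minimal sketch:

```python
import sys

def main():
    for n, line in enumerate(sys.stdin, 1):
        sys.stdout.write(line)  # data: goes down the pipe to the next program
        if n % 1000 == 0:
            # Messages: show up on the terminal, never enter the pipe.
            sys.stderr.write("processed %d lines\n" % n)

if __name__ == "__main__":
    main()
```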
1
2
0
I'm doing some python programming, and I would like to "pipe" the output of one program to another. That's easily doable using sys.stdin and sys.stdout. However, I would also like to be able to print info and warning messages to the terminal. Is there any (simple) way to have multiple channels, with messages printed to the terminal, but data being sent to another program?
Piping output in Python: multiple channels for data and messages?
0.049958
0
0
1,273
14,205,113
2013-01-07T22:21:00.000
5
0
0
0
python,directx,screenshot
14,205,237
1
false
0
1
This is a rather complicated topic. Actually taking a screenshot, in the most simplistic way, involves grabbing the backbuffer and writing that to a file. Your Python DirectX bindings should provide support for making a texture and filling it with data, from the back or front buffer; after that the image writing is up to you. Unfortunately, getting to the backbuffer requires hooking DirectX and intercepting the render context before the application gets ahold of it. While simple, this is not terribly well-documented and takes a decent bit of (C++) code to implement. You have to force the application to use an alternate render context which you control, then take the screenshot yourself. The basics of this interception cannot, so far as I know, be done in pure Python. You may be able to find a method using the display codec (grabbing the screen after it's been delivered to the compositor), or you could use an existing DirectX hook and implement minimal IPC to grab the data and feed it into Python for processing and writing to a file. Edit: If you are interested in doing this, I can add more detail and some links to code that may be helpful. I'm still not sure it's possible in just Python, but don't let that stop you from trying.
1
2
0
I'm looking at a way to take screenshot of DirectX games in Python. I already tried to use PIL and other stuff but I only end up with black screenshots. I saw that the project directpython11 provided a Python binding to some DirectX stuff but I didn't find anything related to screenshot of external DirectX applications. I'm kinda lost and any help will be much appreciated ;). PS: I'm coding using Python 2.7.3 32 bits on Windows 7. Thanks
Take a screenshot of a DirectX game in Python
0.761594
0
0
2,931
14,205,744
2013-01-07T23:12:00.000
0
1
0
0
python-2.7,sign,pyusb
14,761,682
1
false
0
0
I had a similar problem on a Mac (Mountain Lion): when I ran the sample app, I got a segmentation fault 11. It was crashing in the alphasign library from the sign.connect() call. I changed it to sign.connect(reset=False), and it worked fine. FYI: the segmentation fault occurs in the low-level USB driver, libusb, not in Python code.
1
0
0
I am trying to program an alpha sign - 215r - using the alphasign python api [Alphasign] (https://alphasign.readthedocs.org/en/latest/index.html). I downloaded python 2.7, pyusb, pyserial, and libusb. I got the vid and pid of the sign using libusb and added that to the devices.py file. However, when I ran the example python code [here] (https://alphasign.readthedocs.org/en/latest/index.html), I still got an error that said it could not find device with vid and pid of 8765:1234 (the example numbers). Now, when I open the file (the code is copied and pasted from the link above) it crashes IDLE (totally shuts down). ...when I run the file from bash, it says core dump. suggestions please!!
Programming an Alpha electronic sign with Alphasign Python
0
0
0
262
14,206,982
2013-01-08T01:35:00.000
1
0
0
0
windows,authentication,wxpython,modal-dialog
14,217,396
1
true
0
1
There is no foolproof way to do this on Windows. You can show a wx.Frame modally using its MakeModal() method, and you can catch EVT_CLOSE and veto it if they try to close the frame. However, if they have access to the Task Manager or even Run, they can probably get around the screen. Most users won't be that smart, though. You can delete the shortcuts to the apps you want to launch with wx, which will force most normal users to use your login screen; only the smart ones who like to troll through the file system will go around it.
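A rough sketch of the MakeModal()/EVT_CLOSE idea in classic wxPython; the logged_in flag stands in for a real credential check:

```python
import wx

class LoginFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Log in")
        self.logged_in = False           # set to True once credentials check out
        self.ShowFullScreen(True)        # cover the whole desktop
        self.MakeModal(True)             # classic wxPython; blocks other frames
        self.Bind(wx.EVT_CLOSE, self.on_close)

    def on_close(self, event):
        if self.logged_in:
            self.MakeModal(False)
            event.Skip()                 # allow the frame to close
        else:
            event.Veto()                 # refuse to close until login succeeds

app = wx.App(False)
LoginFrame().Show()
app.MainLoop()
```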
1
0
0
I manage a number of Windows PCs which are used to control equipment. Each computer has a specific program installed which is what people launch to use that equipment. We want to require people to log in before they can access this program. Currently, I have a wxpython app which just launches that executable when people log in with the correct credentials. However, you can just run the program directly and bypass logging on. I'd like to make a mock logon screen, ie, fullscreen and modal, which only goes away when you log in. Also it should not be able to be bypassed by alt-tab, windows key, etc. How might I accomplish this with wxpython?
Logon-type wxpython app
1.2
0
0
95
14,209,258
2013-01-08T06:17:00.000
0
0
0
0
python,css,django,web
14,210,824
2
false
1
0
Usually the easiest way to do this is to add a folder named static in the root of the project. Then set the settings attributes STATICFILES_DIRS = ('static',) (if you want, you can make this an absolute path, which Django recommends) and STATIC_URL = '/static/'. Since you're using the dev server, this should work. To link to the files in your templates, use <link href="{{ STATIC_URL }}css/css.css"> for the CSS, assuming there is a file at project_root/static/css/css.css; you do basically the same for JavaScript.
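A sketch of what that looks like in settings.py, with the template link shown as a comment; the paths are assumptions for a Django 1.4/1.5-era project:

```python
# settings.py -- sketch; adjust paths to your own layout
import os

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(PROJECT_ROOT, 'static'),   # holds project_root/static/css/css.css
)

# In the template:
#   <link rel="stylesheet" href="{{ STATIC_URL }}css/css.css">
```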
2
0
0
My CSS will not work when I run my site; only the HTML displays. I have the right link. I'm confused as to what to put in MEDIA_ROOT, MEDIA_URL, STATIC_ROOT, and STATIC_URL. Every site tells me something different. I'm not using the (file) directory. I think the above-mentioned settings refer to where the files are placed and where they are hosted. I'm not hosting my files anywhere as of right now; I'm in dev mode. I know Django has something to serve static files in dev mode, but it won't work! My questions: 1. Should I host my files? 2. What should I put in the above-mentioned settings? Keep in mind I'm in dev mode! Thanks
CSS won't work with Django powered site
0
0
0
126
14,209,258
2013-01-08T06:17:00.000
0
0
0
0
python,css,django,web
14,209,324
2
false
1
0
Put your CSS under STATIC_ROOT. In the HTML, use {{ STATIC_URL }}/css in the link tag.
2
0
0
My CSS will not work when I run my site; only the HTML displays. I have the right link. I'm confused as to what to put in MEDIA_ROOT, MEDIA_URL, STATIC_ROOT, and STATIC_URL. Every site tells me something different. I'm not using the (file) directory. I think the above-mentioned settings refer to where the files are placed and where they are hosted. I'm not hosting my files anywhere as of right now; I'm in dev mode. I know Django has something to serve static files in dev mode, but it won't work! My questions: 1. Should I host my files? 2. What should I put in the above-mentioned settings? Keep in mind I'm in dev mode! Thanks
CSS won't work with Django powered site
0
0
0
126
14,210,568
2013-01-08T07:59:00.000
0
0
0
1
python,multithreading,parallel-processing,serial-port
26,588,050
1
false
0
0
Instead of using threads you could also implement your data sources as generators and just loop over them to consume the incoming data and do something with it. You could perhaps use two different generators and zip them together; that would actually be a nice experiment, though I'm not entirely sure it can be done.
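A sketch of that generator idea with the serial reads stubbed out; it assumes both sources yield at a comparable rate, which is exactly the uncertain part:

```python
from itertools import izip   # plain zip() on Python 3

def scale_readings(port):
    while True:
        yield port.readline()        # stub: one weight reading from the scale

def probe_readings(port):
    while True:
        yield port.readline()        # stub: blocks until the probe reports

def collect(scale_port, probe_port):
    for weight, conductivity in izip(scale_readings(scale_port),
                                     probe_readings(probe_port)):
        # timestamp, average and write to .csv here
        print("%s %s" % (weight, conductivity))
```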
1
1
0
I am collecting data from two pieces of equipment using serial ports (a scale and a conductivity probe). I need to continuously collect data from the scale, which I average between collection points of the conductivity probe (roughly a minute). Thus I need to run two processes at the same time: one that collects data from the scale, and another that waits for data from the conductivity probe; once it gets the data, it would send a command to the other process to get the collected scale data, which is then time-stamped and saved into a .csv file. I looked into subprocess but I can't figure out how to reset a running script. Any suggestions on what to look into?
Running process in parallel for data collection
0
0
0
186
14,214,331
2013-01-08T11:51:00.000
0
1
0
0
maven,python-3.x,intellij-idea
14,501,896
1
false
0
0
Your file types are not configured correctly: .py is most likely assigned to Text files instead of Python files. You can fix it in File | Settings | File Types. There is no support for running tests via Maven, but you can create your own Run/Debug configuration for Python unit tests in IDEA.
1
1
0
I am using intellij IDEA version 11.1.5 on windows and python plugin version is 2.9.2 I am using grinder maven plugin to run the performance tests using grinder. It only supports python(Jython) to run tests. I am not getting any auto suggestions for the python development even though I have installed the python plugin. Python files are also getting displayed as a text files. Is there any other configuration to enable the auto suggestions for python development?
Python support for maven module in intellij
0
0
0
711
14,217,858
2013-01-08T15:03:00.000
3
0
0
1
python,google-app-engine,opencv,python-2.7
43,981,159
3
false
1
0
Now it is possible. The app should be deployed using a custom runtime in the GAE flexible environment. OpenCV library can be installed by adding the instruction RUN apt-get update && apt-get install -y python-opencv in the Dockerfile.
1
4
0
HI actually I was working on a project which i intended to deploy on the google appengine. However I found that google app engine is supported by python. Can I run openCV with python scripts on Google app engine?
Can I use open cv with python on Google app engine?
0.197375
0
0
2,599
14,218,896
2013-01-08T15:53:00.000
1
0
1
0
python,cygwin
14,219,758
2
false
0
0
I think I have figured out what the problem is. In the package selection window there are three options above the package list, namely Keep, Curr and Exp. The default is Curr, which means that Cygwin will select the version it considers most stable for each selected package. For some reason python 2.6.8-2 is considered more stable than 2.7.3-1, so the 2.6 version gets selected every time. The only workaround is to switch to the Keep option, but then other packages will not be updated either, which is quite annoying.
1
2
0
So I have installed python 2.7 in cygwin and it runs without any problem. However, when I install new packages using cygwin's setup.exe, it will always select new version 2.6.8 for the python package by default, and if I don't switch back to 2.7.2, it will uninstall python 2.7 and install python 2.6. What's wrong with my cygwin? Is there any method to fix this problem?
Cygwin always reverts Python 2.7 to Python 2.6 when updating
0.099668
0
0
942
14,222,670
2013-01-08T19:34:00.000
2
0
0
0
listbox,python-2.7,tkinter
14,251,259
1
true
0
1
Assuming from the silence that there's nothing I missed, I went with option 2 -- the acrobatics weren't quite as complex as I'd thought. I just created a behind-the-scenes list wrapped up in a class; every time I update the list, the class syncs up the content of the listbox by doing a ' '.join on the list and then setting the listbox's listvariable to the resulting string.
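A minimal sketch of such a wrapper class (single-word items keep the ' '.join trick simple):

```python
import Tkinter as tk     # "tkinter" on Python 3

class SyncedListbox(object):
    """A plain Python list that mirrors itself into a Listbox via listvariable."""

    def __init__(self, master):
        self.items = []
        self.var = tk.StringVar()
        self.listbox = tk.Listbox(master, listvariable=self.var)
        self.listbox.pack()

    def _sync(self):
        # Push the whole list back into the widget in one go.
        self.var.set(' '.join(self.items))

    def append(self, text):
        self.items.append(text)
        self._sync()

    def set_item(self, index, text):
        self.items[index] = text
        self._sync()

root = tk.Tk()
lb = SyncedListbox(root)
lb.append("alpha")
lb.append("beta")
lb.set_item(1, "gamma")   # directly "modify" item 1
root.mainloop()
```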
1
3
0
This is one of those just-making-sure-I-didn't-miss-anything posts. I have a TKinter GUI in Python 2.7.3 that includes a listbox, and there are circumstances where I'd like to directly modify the text of a specific item at a known index. I've scoured the documents and there's no lb.itemset() method or anything like it. As best I can tell I have two options, either of which would work but just seem kind of klunky to me: lb.delete() the old item and lb.insert() the new value for it at the same index (including a step to re-select the new value if the old deleted one happened to be selected). Create a listvariable for the listbox, then use get() and set() on it -- with a pile of replace/split/join acrobatics in between to handle the differing string formats involved. Is there some simpler, more direct way to do it that I'm missing? Or have I turned up all the available options?
Directly modify a specific item in a TKinter listbox?
1.2
0
0
1,274
14,225,865
2013-01-08T23:30:00.000
1
0
0
1
python,celery
14,246,789
1
false
1
0
This is a bug in celery 3.0.12, reverting to celery 3.0.11 did the job. Hope this helps someone
1
0
0
When I run my task: my_task.apply_async([283], countdown=5) It runs immediately when it should be running 5 seconds later as the ETA says [2013-01-08 15:15:21,600: INFO/MainProcess] Got task from broker: web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] eta:[2013-01-08 15:20:51.580994] [2013-01-08 15:15:22,095: INFO/MainProcess] Task web.my_task[4635f997-6232-4722-9a99-d1b42ccd5ab6] succeeded in 0.494245052338s: None here is my installation: software -> celery:3.0.12 (Chiastic Slide) kombu:2.5.4 py:2.7.3 billiard:2.7.3.19 py-amqp: N/A platform -> system:Darwin arch:64bit imp:CPython loader -> djcelery.loaders.DjangoLoader settings -> transport:amqp results:mongodb Is this a celery bug? or I am missing something?
Celery 3.0.12 countdown not working
0.197375
0
0
497
14,228,044
2013-01-09T03:55:00.000
1
0
1
0
python,visual-studio-2008
14,249,523
2
false
0
0
Thank you for your help Luis; from there I found a solution to the link.exe 1120 error:
1. Get the Windows SDK from Luis's post above.
2. Go to the Visual Studio 9 bin folder (mine was C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin).
3. Open the Visual Studio 2008 Command Prompt.
4. Change directory to mrjbq7's folder.
5. In the command prompt, enter 'python setup.py install'. At this point, you should get an error: "cannot find vcvarsall.bat".
6. Copy and paste vcvars32.bat.
7. Rename the copy to vcvarsall.bat. At this point, it still won't work; the remaining problem is in a distutils Python file.
8. Navigate to the distutils folder (mine: C:\Python27\Lib\distutils).
9. Open msvc9compiler.py.
10. Look for the function "def find_vcvarsall(version):" (mine: line 219).
11. Towards the end of the function, look for this line: vcvarsall = os.path.join(productdir, "vcvarsall.bat") (mine: line 257).
12. Replace it with: vcvarsall = os.path.join(productdir, r"bin\vcvarsall.bat")
The problem I found was that msvc9compiler.py was looking one folder up from where the vcvarsall.bat file is; it should have looked in the \bin folder.
1
1
0
I'm trying to build mrjbq7's wrapper for TA-Lib for Python. After several attempts, I'm fairly sure the wrapper won't build because I have Visual Studio 2010, but my Python is looking for the compiler from Visual Studio 2008. Is it possible to get only the compiler? I have a feeling I may need to install Visual Studio 2008 side by side (saw another thread said this works with no problem), but would prefer to avoid it if possible. Is it possible? More info: Visual Studio 2010 Express installed Windows 7 Python 2.7.3 pythonxy27 also installed Reason to suspect it is compiler version: a - last error I get is reference to unknown _ftol2_sse_, which other threads have said is due to using the wrong compiler b - sys.version for MSC v. is 1500, which is for MSVC 9.0 (used a couple of lines from distutils\cygwincompiler.py in an interpreter to find this) I hope that's all required info, I'll add more if needed.
how to compile with Visual Studio 2008 when Visual Studio 2010 is installed?
0.099668
0
0
6,062
14,229,547
2013-01-09T06:33:00.000
0
0
1
0
python
14,230,307
4
true
0
0
In Python, every number other than 0 evaluates as true in a boolean context; only 0 evaluates as false. So in this code, 1 and 2 and 3 is effectively truthy-and-truthy-and-truthy, and the whole expression must therefore be truthy. But 3 is merely truthy, not the object True, so why does the interpreter return 3 when it could make life easy and return True? That's just how Python's and works: it returns the last operand it evaluated. Hope that answers your question!
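A few interpreter lines make the rule concrete:

```python
print(1 and 2 and 3)    # prints 3: all operands are truthy, so the last one is returned
print(1 and 0 and 3)    # prints 0: evaluation stops at the first falsy operand
print(0 or '' or 'x')   # prints x: `or` returns the first truthy operand
print(bool(3))          # prints True: 3 is "truthy", but it is not the object True
```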
2
0
0
I was looking through some lab material from a computer course offered at UC Berkeley and came across some examples in the form of questions on a test about Python: 1 and 2 and 3, answer 3. I've googled it till I was red in the fingers, but to no avail. Could someone be kind enough to direct me to some docs that explain this? I've no clue what the hell is going on here.
How does "and" and "or" work comparing numbers in python?
1.2
0
0
75
14,229,547
2013-01-09T06:33:00.000
0
0
1
0
python
14,229,601
4
false
0
0
x and y is true if both x and y are true, while x or y is true if either one is true; similarly, chaining multiple ands is true only if all of the operands are true. (Note that and/or actually return one of their operands rather than a bare True/False, which is why 1 and 2 and 3 gives 3.)
2
0
0
I was looking through some lab material from a computer course offered at UC Berkeley and came across some examples in the form of questions on a test about Python: 1 and 2 and 3, answer 3. I've googled it till I was red in the fingers, but to no avail. Could someone be kind enough to direct me to some docs that explain this? I've no clue what the hell is going on here.
How does "and" and "or" work comparing numbers in python?
0
0
0
75
14,232,451
2013-01-09T09:53:00.000
2
0
0
0
python,image-processing,3d,2d
14,233,016
1
false
0
1
There are several things you can mean, I think none of which currently exists in free software (but I may be wrong about that), and they differ in how hard they are to implement: First of all, "a 3D volume" is not a clear definition of what you want. There is not one way to store this information. A usual way (for computer games and animations) is to store it as a mesh with textures. Getting the textures is easy: you have the photographs. Creating the mesh can be really hard, depending on what exactly you want. You say your object looks like a cylinder. If you want to just stitch your images together and paste them as a texture over a cylindrical mesh, that should be possible. If you know the angles at which the images are taken, the stitching will be even easier. However, the really cool thing that most people would want is to create any mesh, not just a cylinder, based on the stitching "errors" (which originate from the parallax effect, and therefore contain information about the depth of the pictures). I know Autodesk (the makers of AutoCAD) have a web-based tool for this (named 123-something), but they don't let you put it into your own program; you have to use their interface. So it's fine for getting a result, but not as a basis for a program of your own. Once you have the mesh, you'll need a viewer (not view first, save later; it's the other way around). You should be able to use any 3D drawing program, for example Blender can view (and edit) many file types.
1
0
1
I am looking for a library, example or similar that allows me to loads a set of 2D projections of an object and then converts it into a 3D volume. For example, I could have 6 pictures of a small toy and the program should allow me to view it as a 3D volume and eventually save it. The object I need to convert is very similar to a cylinder (so the program doesn't have to 'understand' what type of object it is).
2D image projections to 3D Volume
0.379949
0
0
1,288
14,233,867
2013-01-09T11:07:00.000
3
0
1
0
python,pylint
14,235,996
3
true
0
0
What you are asking for is not supported in the current version of Pylint. You may want to get in touch with the maintainers and propose them a feature request and an implementation.
1
7
0
I have a problem with pylint, i.e. sometimes it repeats the same message for some variable/class/module etc. and I can't find a workaround for that. What I want is to say pylint "don't check [message XXX|any message] for variable YYY in [this module|module "ZZZ"]" with some option or rcfile directive.
Pylint ignore specific names
1.2
0
0
9,544
14,236,130
2013-01-09T13:20:00.000
0
1
0
1
python
14,236,492
2
false
0
0
If you're happy to stay specific to Unix, you can get the parent PID of the process with os.getppid(). If you want to translate that back to a program name, you can run a subprocess that uses the relevant OS-specific PID-to-useful-data tool (most likely ps).
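A short sketch of that, assuming a Unix-like system with ps available:

```python
import os
import subprocess

ppid = os.getppid()   # PID of whatever launched this interpreter
# Ask ps (Unix/OS X) for the command name behind that PID.
parent = subprocess.check_output(["ps", "-p", str(ppid), "-o", "comm="]).strip()
print(parent)
```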
1
0
0
Is there a way to know which application I'm running a python script from? I can run python from multiple sources, like Textmate, Sublime Text 2 or Terminal (I'm on Mac OSX). How can I know, exactly which tool launched the current python app. I've tried looking into the os and inspect modules, but couldn't find the solution.
How to know which application ran a Python script
0
0
0
99
14,236,371
2013-01-09T13:31:00.000
2
0
0
0
python,montecarlo
14,236,501
2
false
0
0
When the search space becomes larger, it can become infeasible to do an exhaustive search. So we turn to Monte Carlo methods out of necessity.
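By way of illustration, a bare-bones random-walk Metropolis sketch over the three bounded parameters; the model function, noise level and step size are all stand-ins, not tuned code:

```python
import numpy as np

def model(params, x):
    a, b, c = params
    return a * x ** 2 + b * x + c            # placeholder for the real function

def log_likelihood(params, x, y_obs, sigma=1.0):
    resid = y_obs - model(params, x)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(x, y_obs, bounds, n_steps=10000, step=0.05):
    lo, hi = np.array(bounds).T               # bounds = [(lo, hi), (lo, hi), (lo, hi)]
    current = lo + np.random.rand(3) * (hi - lo)
    current_ll = log_likelihood(current, x, y_obs)
    samples = []
    for _ in range(n_steps):
        proposal = current + np.random.randn(3) * step * (hi - lo)
        if np.any(proposal < lo) or np.any(proposal > hi):
            samples.append(current)           # reject out-of-bounds proposals
            continue
        prop_ll = log_likelihood(proposal, x, y_obs)
        # Accept with probability min(1, exp(prop_ll - current_ll)).
        if np.log(np.random.rand()) < prop_ll - current_ll:
            current, current_ll = proposal, prop_ll
        samples.append(current)
    return np.array(samples)
```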
1
1
1
I have a relatively simple function with three unknown input parameters for which I only know the upper and lower bounds. I also know what the output Y should be for all of my data. So far I have done a simple grid search in Python, looping through all of the possible parameter combinations and returning those results where the error between Y predicted and Y observed is within a set limit. I then look at the results to see which set of parameters performs best for each group of samples, look at the trade-off between parameters, see how outliers affect the data, etc. So really my question is: whilst the grid search method I'm using is a bit cumbersome, what advantages would there be in using Monte Carlo methods such as Metropolis-Hastings instead? I am currently researching MCMC methods, but don't have any practical experience in using them and, in this instance, can't quite see what might be gained. I'd greatly appreciate any comments or suggestions. Many thanks
Advantage of metropolis hastings or MonteCarlo methods over a simple grid search?
0.197375
0
0
983
14,236,644
2013-01-09T13:47:00.000
0
0
1
0
python,installation,virtualenv,macports
14,242,163
1
true
0
0
MacPorts only installs Python modules in the site-packages directory corresponding directly to the Python interpreter it was built for; it does this to allow multiple different interpreters to be installed. MacPorts is also installed under the root user and not your own account, so it can't know about virtualenv setups, which are controlled by settings in your user environment. What you have to do is install the complex modules (e.g. PyQt4) and virtualenv with MacPorts, and then create your virtualenv from that interpreter using --system-site-packages.
1
0
0
I can't seem to find the exact situation I have, please point me to a duplicate if there is one. I am using virtualenv and python and trying to install a module but no matter which version of python 'which python' comes up with MacPorts seems to install the modules in the default macports python location (/opt/local/share) for the default macports python (/opt/local/bin). When the virtualenv is activated, 'which python' gives a python version in ~/Documents/.../bin/python (It is a python version 2.7.3), which is correct. If virtualenv not activated, I have tried either switching to either the system python version (Apple default installed version) or the default macports one which is /opt/local/bin (which is also a 2.7.3 version). After installation, in the python interpreter I can successfully import my module when the virtualenv is not activated, but python can't find the module when virtualenv is activated. I can't use pip or easy_install to install this module (PyQt4) b/c there is known bug where they error. How can I get macports to install in the proper location for my virtualenv ?
MacPorts doesn't install Python module in right place for virtualenv
1.2
0
0
414
14,236,918
2013-01-09T14:00:00.000
0
0
0
0
python,gtk
14,327,100
2
false
0
1
You likely have widgets on your pages that are stopping them from shrinking. You may need to put the content of your pages in a ViewPort or ScrolledWindow to get the effect you are looking for.
1
0
0
I have a notebook window and I want to change its size dynamically. I managed to expand it using win.set_size_request(w, h), but I couldn't shrink it. How can I shrink it dynamically?
Change Gtk Notebook size dynamically
0
0
0
329
14,241,239
2013-01-09T16:01:00.000
1
0
0
0
python,html,selenium-webdriver,highlighting
14,241,317
2
false
0
0
[NOTE: I'm leaving this answer for historical purposes but readers should note that the original question has changed from concerning itself with Python to concerning itself with Selenium] Assuming you're talking about a browser based application being served from a Python back-end server (and it's just a guess since there's no information in your post): If you are constructing a response in your Python back-end, wrap the stuff that you want to highlight in a <span> tag and set a class on the span tag. Then, in your CSS define that class with whatever highlighting properties you want to use. However, if you want to accomplish this highlighting in an already-loaded browser page without generating new HTML on the back end and returning that to the browser, then Python (on the server) has no knowledge of or ability to affect the web page in browser. You must accomplish this using Javascript or a Javascript library or framework in the browser.
2
1
0
Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking the submit button, feeding data into a text field or selecting values from a drop-down field, I want to highlight that element to show the script runner that the script is doing what he/she wants. EDIT: I am using selenium-webdriver with Python to automate some web-based work on a third-party application. Thanks
Can selenium be used to highlight sections of a web page?
0.099668
0
1
909
14,241,239
2013-01-09T16:01:00.000
3
0
0
0
python,html,selenium-webdriver,highlighting
14,241,261
2
false
0
0
This is something you need to do with javascript, not python.
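That said, Selenium can inject the JavaScript for you from Python; a rough sketch, where the page and the element locator are assumptions:

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")
element = driver.find_element_by_id("submit")      # hypothetical locator

# Add a red border around the element, keeping whatever style it already had.
original_style = element.get_attribute("style") or ""
driver.execute_script(
    "arguments[0].setAttribute('style', arguments[1]);",
    element,
    original_style + "; border: 3px solid red;")
```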
2
1
0
Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking the submit button, feeding data into a text field or selecting values from a drop-down field, I want to highlight that element to show the script runner that the script is doing what he/she wants. EDIT: I am using selenium-webdriver with Python to automate some web-based work on a third-party application. Thanks
Can selenium be used to highlight sections of a web page?
0.291313
0
1
909
14,241,729
2013-01-09T16:27:00.000
-2
0
0
1
python,hadoop,mapreduce,hbase
14,675,698
2
false
1
0
You can very easily do map-reduce programming with Python that interacts with the Thrift server; an HBase client in Python is a Thrift client.
1
3
0
We have a HBase implementation over Hadoop. As of now all our Map-Reduce jobs are written as Java classes. I am wondering if there is a good way to use Python scripts to pass to HBase for Map-Reduce.
Pass Python scripts for mapreduce to HBase
-0.197375
0
0
3,776
14,242,764
2013-01-09T17:18:00.000
0
0
1
0
python,numpy,scipy,python-module
14,242,912
1
false
0
0
Use the --user option to easy_install or setup.py to indicate where the installation is to take place. It should point to a directory where you have write access. Once the module has been built and installed, you then need to set the environment variable PYTHONPATH to point to that location. The next time you run the python command, you should be able to import the module.
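If setting PYTHONPATH in the shell is inconvenient, a small runtime fallback is possible; a sketch using the standard site module (the numpy import at the end is just a check):

```python
import site
import sys

# Where "python setup.py install --user" places packages for this interpreter:
print(site.USER_SITE)          # e.g. ~/.local/lib/python2.7/site-packages

# The same path can be added at runtime before importing, instead of via PYTHONPATH:
sys.path.insert(0, site.USER_SITE)
import numpy                   # should now resolve
```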
1
1
1
I am totally new to Python, and I have to use some modules in my code, like numpy and scipy, but I have no permission on my hosting to install new modules using easy_install or pip (and of course I don't know how to install new modules in a directory where I have permission [I have SSH access]). I have downloaded numpy and used from numpy import * but it doesn't work. I also tried the same thing with scipy: from scipy import *, but it also doesn't work. How can I load/use new modules like numpy and scipy in Python without installing them?
use / load new python module without installation
0
0
0
1,476
14,243,196
2013-01-09T17:41:00.000
1
0
0
0
python,sockets
14,243,915
1
true
0
0
Unfortunately, it's not possible to bind to a subset of interfaces using the socket module. This module provides access to the BSD socket interface, which allows specifying only a single address when binding. For that single address a special value, INADDR_ANY in C, exists to allow binding to all interfaces (Python translates the empty string to this value). If you want to bind to more than one, but not all, interfaces using the socket module, you'll need to create multiple sockets.
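A sketch of the multiple-socket approach, serving both listeners from one loop with select; the addresses are placeholders:

```python
import select
import socket

ADDRESSES = [("192.168.1.5", 8000), ("10.0.0.7", 8000)]   # the two interfaces you want

listeners = []
for addr in ADDRESSES:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(addr)
    s.listen(5)
    listeners.append(s)

while True:
    # Wait until any of the bound sockets has a pending connection.
    readable, _, _ = select.select(listeners, [], [])
    for s in readable:
        conn, peer = s.accept()
        conn.sendall(b"hello\n")
        conn.close()
```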
1
1
0
When binding to a socket in Python, the value for host can be '', which means all interfaces, or it can be a string containing a real IP address, e.g. '192.168.1.5'. So it's possible to bind to all interfaces or to one. What if I have 3 interfaces and I want to bind to only 2 of them? Is this possible? What value do I give host? I have tried a list, a tuple, and a comma-separated string.
Python Sockets Bind to 2 out of 3 network interfaces
1.2
0
1
885
14,244,195
2013-01-09T18:41:00.000
1
0
0
0
python,slider,wxpython,color-mapping
14,260,324
1
true
0
1
I think you're looking for one of the following widgets: ColourDialog, ColourSelect, PyColourChooser or CubeColourDialog They all let you choose colors in different ways and they have a slider to help adjust the colours too. You can see each of them in action in the wxPython demo (downloadable from the wxPython web page)
1
0
0
I haven't seen an example of this but I wanted to know if any knows how to implement a colorbar with an adjustable slider using wxpython. Basically the slider should change the levels of the colorbar and as such adjust the colormap. If anyone has an idea of how to do and possible some example code it would be much appreciated.
colorbar with a slider using wxpython
1.2
0
0
386
14,247,373
2013-01-09T22:06:00.000
14
0
1
0
python,comparison,nonetype
14,247,424
6
false
0
0
PEP 8 recommends using the is operator when comparing against singletons such as None.
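Beyond style, is also sidesteps operator overloading; a tiny illustration:

```python
class AlwaysEqual(object):
    def __eq__(self, other):
        return True              # pathological __eq__, purely for illustration

obj = AlwaysEqual()
print(obj == None)               # True  -- __eq__ can lie
print(obj is None)               # False -- identity cannot be overridden
```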
1
268
0
My editor warns me when I compare my_var == None, but no warning when I use my_var is None. I did a test in the Python shell and determined both are valid syntax, but my editor seems to be saying that my_var is None is preferred. Is this the case, and if so, why?
Python None comparison: should I use "is" or ==?
1
0
0
168,994
14,250,658
2013-01-10T04:05:00.000
0
0
1
1
python,python-3.x
14,250,723
3
false
0
0
All arguments that are passed when running your script will be placed in sys.argv. You have to import sys first. And then go through the arguments as you would like to. You might consider counting how many arguments you have to decide what to do. And note that the first argument is always the name of your script.
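Applied to the script described in the question, a minimal Python 3 sketch might look like this; the main() body is a placeholder:

```python
import sys

def main(directory, old, new):
    print("working on", directory, old, new)       # placeholder for the real work

args = sys.argv[1:]                                 # sys.argv[0] is the script name
if len(args) == 0:
    print("Usage: myscript [Dir] [Old] [New]")
elif len(args) == 1:
    print("Please enter Old and New")
elif len(args) == 2:
    print("Please enter New")
else:
    main(args[0], args[1], args[2])
```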
1
0
0
I'm newbie to Python. I'd like to code a script running on Linux. To test if user enter all the script arguments: If user type: myscript => print "Usage: myscript [Dir] [Old] [New]" If user type: myscript Dir => print "Please enter Old and New" If user type: myscript Dir Old => print "Please enter New" If user type all the required arguments, then execute the main(). How to code myscript?
Python 3: test command line arguments
0
0
0
8,127
14,250,664
2013-01-10T04:06:00.000
0
0
0
1
java,python,cassandra,cql
25,874,976
5
false
0
0
What was the replication factor that you used for the keyspace? How many rows of data does the "users" column family have? I found myself in a similar situation (yesterday) with the replication factor set to 1 and the "users" column family having only one row.

Cluster information: 3 nodes on AWS, same datacenter name. Keyspace name: rf1, SimpleStrategy, replication factor 1. Column family name: users. Querying using cqlsh, default consistency.

Scenario 1: one or two nodes in the cluster were down. I found that the query "select * from users" would return "Unable to complete request: one or more nodes were unavailable" if any of the 3 nodes was down.

Scenario 2: node 1 was down, node 2 was down, node 3 was up. The query "select * from users where user_name='abhishek'" would return me the row. I figured this was the case because the row seemed to be on node 3.

My understanding of the scenario: when you make the query "select * from users", you are asking Cassandra to return all the rows from the column family. It would not be able to do so since one or more nodes are down, and it cannot give you the whole column family because there might be some rows on the nodes that were down. But the query with the where clause returns the row because it is available on node 3 and node 3 is up. Does that make sense? One flaw with this explanation is that I would expect Cassandra to return all the rows that are AVAILABLE in the cluster for "select * from users". I am going to do some more digging now and will update if I find anything useful.
3
2
0
Cassandra runs as a cluster of 3 nodes. When all nodes are "UP", I use the CQL query "select * from User" in cqlsh and Cassandra returns the right result. But after a node dies, when I use the same "select" again, no result is returned; instead it reports "Unable to complete request: one or more nodes were unavailable". I turned to the cassandra-cli command "get Users", and it returns the right data without any error. Any ideas?
Cassandra reports:"Unable to complete request: one or more nodes were unavailable" when I use CQL:"select * from User"
0
0
0
3,429
14,250,664
2013-01-10T04:06:00.000
0
0
0
1
java,python,cassandra,cql
14,264,889
5
false
0
0
cqlsh and cli both default to CL.ONE. I suspect the difference is actually that your cqlsh query says "select all the users" while a "get" in the cli is "select exactly one user."
3
2
0
Cassandra runs as a cluster of 3 nodes. When all nodes are "UP", I use the CQL query "select * from User" in cqlsh and Cassandra returns the right result. But after a node dies, when I use the same "select" again, no result is returned; instead it reports "Unable to complete request: one or more nodes were unavailable". I turned to the cassandra-cli command "get Users", and it returns the right data without any error. Any ideas?
Cassandra reports:"Unable to complete request: one or more nodes were unavailable" when I use CQL:"select * from User"
0
0
0
3,429
14,250,664
2013-01-10T04:06:00.000
2
0
0
1
java,python,cassandra,cql
14,258,686
5
false
0
0
I expect that when you use CQL the request has a consistency level of "ALL". In that case it will wait for a reply from all the servers that host a replica of that data before returning, and since one node is down it fails because it cannot contact the down node. When you go through cassandra-cli, I expect the consistency level defaults to "QUORUM", "ONE" or "ANY", and so it will happily return you data even if one replica is down.
3
2
0
Cassandra runs as a cluster of 3 nodes. When all nodes are "UP", I use the CQL query "select * from User" in cqlsh and Cassandra returns the right result. But after a node dies, when I use the same "select" again, no result is returned; instead it reports "Unable to complete request: one or more nodes were unavailable". I turned to the cassandra-cli command "get Users", and it returns the right data without any error. Any ideas?
Cassandra reports:"Unable to complete request: one or more nodes were unavailable" when I use CQL:"select * from User"
0.07983
0
0
3,429
14,254,053
2013-01-10T08:59:00.000
0
0
0
0
python,openerp,erp
14,257,259
2
false
1
0
I found another solution. When I click the Create button for res.users, an action called Set Default appears in the Customize section of the right sidebar. There you can choose the default value that is applied when the Create button is pressed. UPDATE: all these values can be seen in Settings --> Customization --> Low Level Objects --> Actions --> User-defined Defaults, and of course you can create new default values there as well.
2
0
0
I need to set up the default Home Action for res.user. Currently it is Home Page, but I want to set my custom action. So I tried to create a new record under Settings --> Configuration --> Configuration Parameters, but when I select Home Action in the Field field and set the type Many2One in the Type field, the Value field remains an empty list. I can't choose my custom action for new users! Please correct me if I'm doing something wrong. Is this a bug or normal behavior? Any other solution is welcome. Cheers
OpenERP 6.1 web client - set default value (Configuration parameters)
0
0
0
762
14,254,053
2013-01-10T08:59:00.000
0
0
0
0
python,openerp,erp
14,424,152
2
false
1
0
Just an additional note: you can also apply these user-defined defaults to many2many fields, like the taxes_id field in the product model. However, there is a small bug: if you set a default value for a many2many field, the field is shown empty when you create a new record, but when you save the record you will see it is correctly recorded with your default value. So if you want a record different from the default, you have to save it first and then edit it again.
2
0
0
I need to set up the default Home Action for res.user. Currently it is Home Page, but I want to set my custom action. So I tried to create a new record under Settings --> Configuration --> Configuration Parameters, but when I select Home Action in the Field field and set the type Many2One in the Type field, the Value field remains an empty list. I can't choose my custom action for new users! Please correct me if I'm doing something wrong. Is this a bug or normal behavior? Any other solution is welcome. Cheers
OpenERP 6.1 web client - set default value (Configuration parameters)
0
0
0
762
14,254,203
2013-01-10T09:08:00.000
15
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
34,036,255
6
false
0
0
The simple answer: multiply the results; it's the same thing. Naive Bayes is based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features, meaning you calculate the Bayes probability for a specific feature without conditioning on the others. That means the algorithm multiplies the probability from one feature with the probability from the next (and we ignore the denominator entirely, since it is just a normaliser). So the right answer is: calculate the probability from the categorical variables, calculate the probability from the continuous variables, and multiply the two.
3
76
1
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorical and continuous data in Naive Bayes classifier using scikit-learn
1
0
0
29,596
14,254,203
2013-01-10T09:08:00.000
0
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
69,929,209
6
false
0
0
You will need the following steps:
1. Calculate the probability from the categorical variables (using the predict_proba method from BernoulliNB).
2. Calculate the probability from the continuous variables (using the predict_proba method from GaussianNB).
3. Multiply 1. and 2., AND
4. Divide by the prior (either from BernoulliNB or from GaussianNB, since they are the same), AND THEN
5. Divide 4. by the sum (over the classes) of 4. This is the normalisation step.
It should be easy enough to see how you can add your own prior instead of using those learned from the data.
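A sketch of those steps with scikit-learn; the arrays are random stand-ins for your own categorical/continuous feature split:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB

rng = np.random.RandomState(0)
X_cat = rng.randint(0, 2, size=(100, 3))      # stand-in categorical features
X_cont = rng.randn(100, 2)                    # stand-in continuous features
y = rng.randint(0, 2, size=100)               # stand-in labels

bnb = BernoulliNB().fit(X_cat, y)
gnb = GaussianNB().fit(X_cont, y)

p_cat = bnb.predict_proba(X_cat)              # step 1
p_cont = gnb.predict_proba(X_cont)            # step 2
prior = np.exp(bnb.class_log_prior_)          # the shared class prior

joint = p_cat * p_cont / prior                # steps 3 and 4
joint /= joint.sum(axis=1, keepdims=True)     # step 5: renormalise per row
pred = joint.argmax(axis=1)
```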
3
76
1
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorical and continuous data in Naive Bayes classifier using scikit-learn
0
0
0
29,596
14,254,203
2013-01-10T09:08:00.000
74
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
14,255,284
6
true
0
0
You have at least two options:
1. Transform all your data into a categorical representation by computing percentiles for each continuous variable and then binning the continuous variables using the percentiles as bin boundaries. For instance, for the height of a person create the following bins: "very small", "small", "regular", "big", "very big", ensuring that each bin contains approximately 20% of the population of your training set. We don't have any utility to perform this automatically in scikit-learn, but it should not be too complicated to do it yourself. Then fit a unique multinomial NB on this categorical representation of your data.
2. Independently fit a gaussian NB model on the continuous part of the data and a multinomial NB model on the categorical part. Then transform the whole dataset by taking the class assignment probabilities (with the predict_proba method) as new features: np.hstack((multinomial_probas, gaussian_probas)) and then refit a new model (e.g. a new gaussian NB) on the new features.
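A sketch of the second option with random stand-in data (MultinomialNB for the categorical part, GaussianNB for the continuous part, then a new GaussianNB on the stacked probabilities):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB

rng = np.random.RandomState(0)
X_cat = rng.randint(0, 5, size=(200, 4))      # stand-in categorical features (non-negative)
X_cont = rng.randn(200, 3)                    # stand-in continuous features
y = rng.randint(0, 2, size=200)               # stand-in labels

mnb = MultinomialNB().fit(X_cat, y)
gnb = GaussianNB().fit(X_cont, y)

# Class-assignment probabilities from each model become the new feature matrix ...
new_features = np.hstack((mnb.predict_proba(X_cat),
                          gnb.predict_proba(X_cont)))

# ... and a new model is refit on those features.
final = GaussianNB().fit(new_features, y)
pred = final.predict(new_features)
```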
3
76
1
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorical and continuous data in Naive Bayes classifier using scikit-learn
1.2
0
0
29,596
14,255,289
2013-01-10T10:08:00.000
2
0
0
1
python,python-2.7,twisted,failover
14,266,178
2
false
0
0
ReconnectingClientFactory doesn't have this capability. You can build your own factory which implements this kind of reconnection logic, mostly by hooking into the clientConnectionFailed factory method. When this is called and the reason seems to you like that justifies switching servers (eg, twisted.internet.error.ConnectionRefused), pick the next address on your list and use the appropriate reactor.connectXYZ method to try connecting to it. You could also try constructing this as an endpoint (which is the newer high-level connection setup API that is preferred by some), but handling reconnection with endpoints is not yet a well documented topic.
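A rough sketch of that clientConnectionFailed hook, assuming a placeholder protocol and server list (not a drop-in implementation):

```python
from twisted.internet import protocol, reactor
from twisted.internet.error import ConnectionRefusedError

SERVERS = [("primary.example.com", 8000), ("backup.example.com", 8000)]   # placeholders

class FailoverFactory(protocol.ReconnectingClientFactory):
    protocol = protocol.Protocol        # replace with your own protocol class
    server_index = 0

    def clientConnectionFailed(self, connector, reason):
        if reason.check(ConnectionRefusedError):
            # Rotate to the next server in the list and try there instead.
            self.server_index = (self.server_index + 1) % len(SERVERS)
            host, port = SERVERS[self.server_index]
            reactor.connectTCP(host, port, self)
        else:
            protocol.ReconnectingClientFactory.clientConnectionFailed(
                self, connector, reason)

factory = FailoverFactory()
reactor.connectTCP(SERVERS[0][0], SERVERS[0][1], factory)
reactor.run()
```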
1
4
0
I have a twisted ReconnectingClientFactory and i can successfully connect to given ip and port couple with this factory. And it works well. reactor.connectTCP(ip, port, myHandsomeReconnectingClientFactory) In this situation, when the server is gone, myHandsomeReconnectingClientFactory tries to connect same ip and port (as expected). My goal is, when the server which serves on given ip and port couple is gone, connecting to a backup server (which have different ip and port). Any ideas/comments on how to achieve this goal will be appreciated.
Twisted: ReconnectingClientFactory connection to different servers
0.197375
0
0
1,504
14,260,447
2013-01-10T14:43:00.000
1
0
0
0
python,flask,http-post
14,270,654
1
true
1
0
Access them from request.data just like any other form data.
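For a multipart form POST, a minimal sketch of such an endpoint could look like the following; the field names are assumptions, and note that uploaded files are exposed via request.files and the accompanying text fields via request.form, with request.data holding the raw body:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    image = request.files["image"]              # field names are assumptions
    caption = request.form.get("caption", "")
    image.save("/tmp/" + image.filename)        # save the image
    print(caption)                              # print the text
    return "ok"

if __name__ == "__main__":
    app.run()
```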
1
0
0
I want to send a image together with some text parameters to a Flask server (HTTP POST). How can I use Flask to receive both (e.g. save an image and print the text)?
How can I upload images with text parameters using Flask
1.2
0
0
925
14,260,923
2013-01-10T15:06:00.000
18
0
0
0
python,random,python-2.7,numbers
14,260,955
8
false
0
0
I would generate a list of n random numbers then sort them highest to lowest.
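A minimal sketch of that suggestion, using random.sample so the values are distinct (and therefore strictly decreasing):

```python
import random

def decreasing_numbers(n=5, low=0, high=100):
    # n distinct draws from the range, sorted highest to lowest, ending at low.
    draws = random.sample(range(low + 1, high + 1), n)
    return sorted(draws, reverse=True) + [low]

print(decreasing_numbers())   # e.g. [96, 57, 43, 23, 9, 0]
```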
1
4
1
I'm wondering if there's a way to generate decreasing numbers within a certain range? I want the program to keep outputting until it reaches 0, and the highest number in the range must be positive. For example, if the range is (0, 100), this could be a possible output: 96 57 43 23 9 0. Sorry for the confusion from my original post.
How to randomly generate decreasing numbers in Python?
1
0
0
5,706
14,261,512
2013-01-10T15:36:00.000
0
0
0
0
python,intuit-partner-platform
27,932,175
1
false
1
0
I received this error as well and am posting this as pointer for other who stumble upon this. Error Code 22 (Authentication required) for me meant that the OAuth signature was wrong. This was confusing because I couldn't find this error listed in the Quickbooks documents for reconnect. I was signing the request as a "POST" request instead of a "GET" request which is what Quickbooks requires for calls to the reconnect endpoint.
1
2
0
After completing the oAuth handshake with Intuit Anywhere (AI), I use the API to get the HTML for the blue dot menu. Sometimes, the expected HTML is returned. Other times, I get this message This API requires Authorization. 22 2013-01-10T15:32:33.43741Z Typically, this message is returned when the oAuth token is expired. However, on the occasions when I get it, I can click around in my website for a bit or do a refresh, and the expected HTML is returned. I checked the headers being sent and, in both cases (i.e., when the expected HTML is returned, and an error is returned), the request is exactly the same. I wouldn't be surprised if this was a bug in Intuit's API, but I'm trying to rule out any other possibilities first. Please let me know if you have any thoughts on how to fix this. Thanks. Update: It seems the problem is occurring only when I do a refresh. This seems to be the case both in Firefox and Safari on OSX. It sounds to be like a Javascript caching issue.
Sometimes getting "API requires authorization" from intuit anywhere api after a fresh oAuth handshake
0
0
1
377
14,262,433
2013-01-10T16:20:00.000
-2
0
0
0
python,mongodb,pandas,hdf5,large-data
59,647,574
16
false
0
0
At the moment I am working "like" you, just on a smaller scale, which is why I don't have a proof of concept for my suggestion. However, I have had success using pickle as a caching system and outsourcing the execution of various functions into files, which I execute from my command/main file; for example, I use a prepare_use.py to convert object types and split a data set into test, validation and prediction sets. How does my caching with pickle work? I use strings to access pickle files that are created dynamically, depending on which parameters and data sets were passed (with that I try to determine whether the program was already run, using .shape for the data set and a dict for the passed parameters). With these measures I get a string I can use to find and read a .pickle file and, if it is found, skip the processing time and jump straight to the step I am working on right now. Using databases I encountered similar problems, which is why I found joy in this solution; however, there are certainly constraints, for example storing huge pickle sets due to redundancy. Updating a table from before to after a transformation can be done with proper indexing; validating information opens up a whole other book (I tried consolidating crawled rent data and stopped using a database after about 2 hours, as I would have liked to be able to jump back after every transformation step). I hope my 2 cents help you in some way. Greetings.
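A generic sketch of that parameters-to-pickle-file cache; the hashing scheme and the commented usage line are my own invention, not the answerer's exact code:

```python
import hashlib
import os
import pickle

def cached(key_parts, compute, cache_dir="cache"):
    """Return compute()'s result, re-using a pickle keyed by key_parts if present."""
    key = hashlib.md5(repr(key_parts).encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, key + ".pickle")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)          # already computed on a previous run
    result = compute()
    if not os.path.isdir(cache_dir):
        os.makedirs(cache_dir)
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

# Example (hypothetical): key on the data's shape plus the parameter dict.
# df_clean = cached((df.shape, params), lambda: expensive_prepare(df, params))
```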
4
1,156
1
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
"Large data" workflows using pandas
-0.024995
0
0
341,120
14,262,433
2013-01-10T16:20:00.000
21
0
0
0
python,mongodb,pandas,hdf5,large-data
29,910,919
16
false
0
0
One more variation: many of the operations done in pandas can also be done as a DB query (SQL, Mongo). Using an RDBMS or MongoDB allows you to perform some of the aggregations in the DB query, which is optimised for large data and uses caches and indexes efficiently; later, you can perform post-processing using pandas. The advantage of this method is that you gain the DB optimisations for working with large data while still defining the logic in a high-level declarative syntax, and you don't have to deal with the details of deciding what to do in memory and what to do out of core. And although the query language and pandas are different, it's usually not complicated to translate part of the logic from one to the other.
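A sketch of that division of labour using sqlite3 and pandas.read_sql_query; the database, table and column names are invented:

```python
import sqlite3
import pandas as pd

con = sqlite3.connect("consumers.db")     # hypothetical on-disk database

# Let the database do the heavy, indexed aggregation ...
query = """
    SELECT line_of_business,
           AVG(property_value) AS avg_property_value,
           COUNT(*)            AS n_records
    FROM   records
    GROUP  BY line_of_business
"""
summary = pd.read_sql_query(query, con)

# ... then post-process the much smaller result in pandas.
summary["share"] = summary["n_records"] / summary["n_records"].sum()
```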
4
1,156
1
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
"Large data" workflows using pandas
1
0
0
341,120
14,262,433
2013-01-10T16:20:00.000
167
0
0
0
python,mongodb,pandas,hdf5,large-data
20,690,383
16
false
0
0
I think the answers above are missing a simple approach that I've found very useful. When I have a file that is too large to load in memory, I break up the file into multiple smaller files (either by rows or columns). Example: in the case of 30 days' worth of trading data of ~30GB size, I break it into a file per day of ~1GB size. I subsequently process each file separately and aggregate the results at the end. One of the biggest advantages is that it allows parallel processing of the files (either multiple threads or processes). The other advantage is that file manipulation (like adding/removing dates in the example) can be accomplished by regular shell commands, which is not possible in more advanced/complicated file formats. This approach doesn't cover all scenarios, but it is very useful in a lot of them.
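A minimal sketch of this split-then-aggregate idea, assuming hypothetical per-day CSV files and column names, might look like this:

```python
import glob
from multiprocessing import Pool

import pandas as pd

def process_one(path):
    # Each daily file is small enough to fit in memory on its own.
    df = pd.read_csv(path)
    return df.groupby("symbol")["volume"].sum()   # hypothetical columns

if __name__ == "__main__":
    files = sorted(glob.glob("trades_2013-01-*.csv"))        # hypothetical file naming
    with Pool(processes=4) as pool:
        partial_sums = pool.map(process_one, files)          # process the files in parallel
    total = pd.concat(partial_sums).groupby(level=0).sum()   # aggregate the per-file results
    print(total.head())
```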
4
1,156
1
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
"Large data" workflows using pandas
1
0
0
341,120
14,262,433
2013-01-10T16:20:00.000
72
0
0
0
python,mongodb,pandas,hdf5,large-data
19,739,768
16
false
0
0
If your datasets are between 1 and 20GB, you should get a workstation with 48GB of RAM. Then Pandas can hold the entire dataset in RAM. I know it's not the answer you're looking for here, but doing scientific computing on a notebook with 4GB of RAM isn't reasonable.
4
1,156
1
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for it's out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons. One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive. My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this: What are some best-practice workflows for accomplishing the following: Loading flat files into a permanent, on-disk database structure Querying that database to retrieve data to feed into a pandas data structure Updating the database after manipulating pieces in pandas Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data". Edit -- an example of how I would like this to work: Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory. I would create new columns by performing various operations on the selected columns. I would then have to append these new columns into the database structure. I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem. Edit -- Responding to Jeff's questions specifically: I am building consumer credit risk models. The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. 
The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
"Large data" workflows using pandas
1
0
0
341,120
14,266,235
2013-01-10T19:58:00.000
2
0
1
0
python
14,266,285
6
false
0
0
One algorithm is this: read from the file until you encounter the text "City"; open a second file in write mode; stream from the first file into the second; close both files; move the second file into the location previously occupied by the first. Although files can be truncated to remove contents after a certain point, they cannot be resized-in-place with contents before a certain point. You could do this using a single file by repeatedly seeking back and forth, but it's probably not worthwhile. If the files are small enough, you can just read the whole of the first file into memory and then write the portion of it you want back to the same on-disk file.
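A minimal sketch of the read-into-memory variant described at the end (the marker string and the decision to keep the "City" line itself are assumptions):

```python
import glob

def strip_before(path, marker="City"):
    """Rewrite the file keeping only the marker line and everything after it."""
    with open(path) as src:
        lines = src.readlines()
    for i, line in enumerate(lines):
        if marker in line:
            break
    else:
        return  # marker not found; leave the file untouched
    with open(path, "w") as dst:
        dst.writelines(lines[i:])

for csv_path in glob.glob("*.csv"):   # apply it to every CSV in the folder
    strip_before(csv_path)
```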
1
2
0
I have over 5000 text files (also in CSV format) with quite a few hundred lines in each. Everything above a particular phrase, "City", is unnecessary and I need everything beneath it. Is there a way (Python or batch) to delete everything above it?
How to delete everything before a key phrase in text files?
0.066568
0
0
2,371
14,267,427
2013-01-10T21:18:00.000
0
0
1
0
python,emacs
18,646,178
6
false
0
0
This is a bit of a hack, but it worked for me as a quick work-around: do a "M-x replace-string", " " -> " ". Then you have to close and re-open if your Emacs does an automatic indent-detection on the file. Then you have to go through and fix multi-line code (with tab), and strings that have lots of spaces.
3
6
0
I changed python-indent from 3 to 4. I then mark-whole-buffer and indent-for-tab-command. It gave me garbage.
How to re-indent Python code after changing indent width in Emacs?
0
0
0
2,389
14,267,427
2013-01-10T21:18:00.000
0
0
1
0
python,emacs
14,267,505
6
false
0
0
Try indent-region on the buffer instead. It is initially bound to C-M-\
3
6
0
I changed python-indent from 3 to 4. I then mark-whole-buffer and indent-for-tab-command. It gave me garbage.
How to re-indent Python code after changing indent width in Emacs?
0
0
0
2,389
14,267,427
2013-01-10T21:18:00.000
6
0
1
0
python,emacs
14,267,477
6
true
0
0
There is the indent-region function. So I'd try mark the whole buffer, then M-x and type indent-region. It's usually bound to C-M-\, as far as I know. Edit Re-indentation does not work for a tab-width change. As I wrote in the comments changing spaces to tabs and then altering the tab-width is a solution: "Guessing you are indenting with space and not tabs, you'd first do tabify on the buffer content with your tab-width set to 3. Then change tab-width to 4 and run untabify."
3
6
0
I changed python-indent from 3 to 4. I then mark-whole-buffer and indent-for-tab-command. It gave me garbage.
How to re-indent Python code after changing indent width in Emacs?
1.2
0
0
2,389
14,268,123
2013-01-10T22:05:00.000
3
0
1
0
python
14,268,275
3
false
0
0
The import statement causes a module to be executed, with all variables being kept in the namespace of the module executed. That is to say, if you import a, then all of a's variables will be under a.[variable]. The from keyword gives slightly different behavior: it puts the variable in the current namespace. For instance, from a import foo puts the variable foo in the current namespace. from a import * imports all variables from a into the current namespace. The as keyword allows you to rename variables when you import them; thus from a import foo as bar allows you to access a.foo, but you must call it bar; import a.foo as foo is equivalent to from a import foo.
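To make the namespacing concrete, here is a small sketch using the module names from the question (it assumes b.py defines a variable value and that a.py does not define __all__):

```python
# b.py:  value = 42
# a.py:  import b
# c.py (the code below):
import a
print(a.b.value)   # works: after "import a", b is reachable only as an attribute of a

from a import *    # copies a's module-level names, including the module object b,
print(b.value)     # into the current namespace (unless a's __all__ excludes it)
```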
1
1
0
This is something that I can't quite figure out in Python's imports. Let's say I have a module 'a' that imports module 'b' with 'import b'. Then, there is a module 'c' that imports module 'a'. Will the names from the module 'b' be available in 'c'? I've checked that it actually depends on how you import the module 'a' in module 'c'. If you do 'import a' then names from 'b' will not be available in 'c'. However, if you do 'from a import *' then they will be available. Can someone please clarify the difference?
How enclosed import works in Python
0.197375
0
0
225
14,270,163
2013-01-11T01:18:00.000
1
0
1
0
python,object,pandas,dataframe,storage
14,271,696
2
false
0
0
Redis with redis-py is one solution. Redis is really fast and there are nice Python bindings. Pytables, as mentioned above, is a good choice as well. PyTables is HDF5, and is really really fast.
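As a rough illustration of the PyTables/HDF5 route, pandas' own HDFStore wrapper can key many frames in one file (the file name and keys below are made up, and this sketch does not add the locking/check-out semantics described in the question):

```python
import numpy as np
import pandas as pd

store = pd.HDFStore("spc_frames.h5")                  # hypothetical file name
for i in range(3):
    frame = pd.DataFrame(np.random.randn(100, 4), columns=list("abcd"))
    store.put("frame_%d" % i, frame)                  # "check in" a frame under a key
checked_out = store.get("frame_1")                    # "check out" a frame by key
print(store.keys())
store.close()
```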
1
1
1
I am working on a large project that does SPC analysis and have 1000's of different unrelated dataframe objects. Does anyone know of a module for storing objects in memory? I could use a python dictionary but would like it more elaborate and functional mechanisms like locking, thread safe, who has it and a waiting list etc? I was thinking of creating something that behaves like my local public library system. The way it checks in and out books to one owner ...etc.
Pandas storing 1000's of dataframe objects
0.099668
0
0
1,701
14,271,489
2013-01-11T04:03:00.000
2
1
0
1
python,git,api,version
14,271,577
1
false
0
0
I think you want to use tags in your git repository. For each version of your API, use git tag vn and you don't need to maintain earlier versions of your files. You can access all files at a certain version just by using git checkout vn. If you use a remote repository, you need to use the --tags flag to send the tags to the remote repository, i.e., git push --tags.
1
1
0
I currently have a v1 API and have updated and created new scripts for v2. The API is consumed by other developers and consists of a bunch of scripts. Before migrating and adding v2 I want to make sure I have a successful versioning strategy to go ahead with. Currently, there is a bash script called before using the API, with which you can supply the version # or by default gives you the most recent version. Originally, I intended to have different subfolders for each different version, but for scripts that do not change between revisions and scripts that get content added to them, the git history will not be preserved correctly as the original file will still reside in the v1 subdir and will not be 'git mv'ed. This is obviously not the best way but I can't think of a better way currently. Any recommendations will be helpful but one restriction is that we cannot have a git submodule with different branches. There are no other restrictions (e.g. the bash file used for setup can be deleted) as long as the scripts are accessible. Thanks! EDIT: We also have scripts above the "API" directory that are part of the same repo that call into the API (we are consumers of our own API). The changes to these files need to be visible when using any version of the API and cannot just be seen in the latest version (related to tags in the repo)
API Versioning while maintaining git history
0.379949
0
0
605
14,271,653
2013-01-11T04:25:00.000
2
0
0
1
python,linux,perl,sed,awk
14,271,786
2
false
0
0
This thread is going to start a war on which is best :) As you know Python, you should definitely go with that. I myself have done a lot of text manipulation using Python where everything else tends to become complex. Even though awk can do what you need, you won't like what you see in the code.
1
0
0
I know a bit of sed, awk and Python, but not Perl. I need to parse hundreds of different files, find patterns, match multiple columns with each other and put the results in new files, and I have to do that on a regular basis. I just want to know which tool will be best for that scenario; based on that I will buy the books and get more advanced knowledge on that subject.
Which linux tool is best for parsing multiple files simultaneously
0.197375
0
0
217
14,271,697
2013-01-11T04:30:00.000
1
0
1
0
python,c,signals,shared-libraries,ctypes
14,271,698
3
false
0
0
You will have to declare a signal handler for SIGINT within the C code, which is, hopefully, your project.
1
9
0
When calling a loop being performed in a C shared-library (dynamic library), Python will not receive a KeyboardInterrupt, and nothing will respond (or handle) CTRL+C. What do I do?
CTRL+C doesn't interrupt call to shared-library using CTYPES in Python
0.066568
0
0
1,606
14,273,593
2013-01-11T07:29:00.000
-1
0
0
0
python,django
14,273,720
5
false
1
0
If you use Firefox you can install Firebug on it, and when you, for example, submit an AJAX form you can see which URL the request is sent to, after which you can easily find the controller that works with this form data. In Chrome this utility is embedded by default and is called up with the F12 key.
3
3
0
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
Huge Django project
-0.039979
0
0
228
14,273,593
2013-01-11T07:29:00.000
4
0
0
0
python,django
14,273,986
5
true
1
0
When I come to this kind of problem I open up a notebook and answer the following. 1. Infrastructure: server configuration, OS, etc.; check out the database type (MySQL, Postgres, NoSQL); external APIs (e.g. Facebook Connect). 2. Backend: write a simple description; write its input/output from the user (try to be thorough; which fields are required and which aren't); write its FKs and its relation to any other apps (and why); list each plugin the app is using, and for what purpose - for example, in Rails I'd write: 'gem will_paginate - to display guestbook app results on several pages'. 3. Frontend: check out the JS framework; check the main stylesheet files (for the template); check the main html/haml (etc.) files for creating a new template-based page. When you are done doing that, I think you are much more prepared and able to go deeper developing/debugging the app. Good luck.
3
3
0
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
Huge Django project
1.2
0
0
228
14,273,593
2013-01-11T07:29:00.000
2
0
0
0
python,django
14,274,066
5
false
1
0
1) Try to install the site from scratch. You will find what external apps are needed for the site to run. 2) Reverse engineer. Browse through the site and try to find out what you have to do to change something to that page. Start with the url, look up in urls.py, read the view, check the model. Are there any hints to other processes? 3) Try to write down everything you don't understand, and document the answers for future reference.
3
3
0
I have a new job and a huge Django project (15 apps, more than 30 loc). It's pretty hard to understand its architecture from scratch. Are there any techniques to simplify my work in the beginning? Sometimes it's even hard to understand where to find a form or a view that I need... Thanks in advance.
Huge Django project
0.07983
0
0
228
14,277,088
2013-01-11T11:19:00.000
13
0
0
0
python,pip,firewall
14,277,298
2
true
0
0
You need to open up your firewall to the download locations of any package you need to install, or connect to a proxy server that has been given access. Note that the download location is not necessarily on PyPI. The Python package index is a metadata service, one that happens to also provide storage for the indexed packages. As such, not all packages indexed on PyPI are actually downloaded from PyPI; the download location could be anywhere on the internet. I'd say you start with opening pypi.python.org, then as individual package installations fail, check their PyPI page and add the download location listed for those.
2
28
0
I have a server, onto which I want to use Python, that is behind a company firewall. I do not want to mess with it and the only thing I can do is to make a firewall exception for specific URL/domains. I also want to access packages located on PYPI, using pip or easy_install. Therefore, do you know which URL should I ask to be listed in the exception rules for the firewall, except *.pypi.python.org?
what url should I authorize to use pip behind a firewall?
1.2
0
1
29,142
14,277,088
2013-01-11T11:19:00.000
5
0
0
0
python,pip,firewall
67,416,056
2
false
0
0
I've solved it by adding these domains to the firewall whitelist: pypi.python.org, pypi.org, pythonhosted.org
2
28
0
I have a server, onto which I want to use Python, that is behind a company firewall. I do not want to mess with it and the only thing I can do is to make a firewall exception for specific URL/domains. I also want to access packages located on PYPI, using pip or easy_install. Therefore, do you know which URL should I ask to be listed in the exception rules for the firewall, except *.pypi.python.org?
what url should I authorize to use pip behind a firewall?
0.462117
0
1
29,142
14,277,172
2013-01-11T11:24:00.000
0
0
0
0
python,wsgi
14,285,052
2
false
1
0
In uWSGI (if using the uwsgi protocol) you can pass additional variables with uwsgi_param key value in nginx, with SetEnv in Apache (both mod_uwsgi and mod_proxy_uwsgi), with CGI vars in Cherokee, and with --http-var in the uwsgi HTTP router. For the HTTP protocol (in gunicorn or a uWSGI http-socket) the only solution popping into my mind is adding special headers in the proxy configuration that you then parse in your WSGI app (HTTP headers are rewritten as CGI vars prefixed with HTTP_).
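A small sketch of the WSGI side of this, with made-up variable and header names - the app simply looks in environ for whichever of the two mechanisms the server used:

```python
def application(environ, start_response):
    # Either set via uwsgi_param / SetEnv, or injected by the proxy as a header
    # named X-Myapp-Db-Url (headers show up in environ with an HTTP_ prefix).
    db_url = (environ.get("MYAPP_DB_URL")
              or environ.get("HTTP_X_MYAPP_DB_URL")
              or "sqlite:///default.db")
    body = ("configured database: %s" % db_url).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```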
1
0
0
Is there a way to distribute a WSGI application that will work out of the box with any server and that will be configurable using the server's built-in features only? This means that the only configuration file the administrator would have to touch would be the server's configuration file. It wouldn't be necessary to write a custom WSGI script in Python. mod_wsgi adds configuration variables set with SetEnv to the WSGI environ dictionary that gets passed to the app, but I didn't find a way to do something similar with Gunicorn or uWSGI. Using os.environ works with Gunicorn and uWSGI but not with mod_wsgi because SetEnv doesn't affect os.environ.
How to make a WSGI app configurable by any server?
0
0
0
253
14,278,009
2013-01-11T12:14:00.000
0
0
1
0
python,virtualenv,pycharm,pylint
71,898,652
4
false
0
0
In Tool Settings, set Program: to $PyInterpreterDirectory$/pylint
3
9
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tools with a path relative to use the current virtual env defaults.? An example being pylint - where I'd want the virtual env version and not the system one to run.
Pycharm External tools relative to Virtual Environment
0
0
0
3,302
14,278,009
2013-01-11T12:14:00.000
0
0
1
0
python,virtualenv,pycharm,pylint
14,508,728
4
false
0
0
Just found your post while looking for documentation about the "variables" that could be used when setting parameters for external tools. There is no documentation, but you can see a list of all the available stuff after pressing the "Insert Macro" button in the Edit Tool dialog. I don't see any reference to the interpreter path there, but I usually use the virtualenv as my project path. If you are doing that too you could infer the Python interpreter path from there.
3
9
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tools with a path relative to use the current virtual env defaults.? An example being pylint - where I'd want the virtual env version and not the system one to run.
Pycharm External tools relative to Virtual Environment
0
0
0
3,302
14,278,009
2013-01-11T12:14:00.000
16
0
1
0
python,virtualenv,pycharm,pylint
33,673,270
4
false
0
0
Not sure about older versions, but in PyCharm 5 one can use the $PyInterpreterDirectory$ macro. It's exactly what we want.
3
9
0
Using the PyCharm IDE, when setting up an external tool, how can you set up the external tools with a path relative to use the current virtual env defaults.? An example being pylint - where I'd want the virtual env version and not the system one to run.
Pycharm External tools relative to Virtual Environment
1
0
0
3,302
14,281,308
2013-01-11T15:27:00.000
0
1
1
0
python,performance
14,281,528
1
false
0
0
Most newer Python versions bring new features. Existing code parts are probably updated as well, either for performance or for extended functionality. The former kind of change brings a performance benefit, but extended functionality might lead to poorer performance. I don't know what the balance between these kinds of changes is. You will probably have to do some profiling yourself.
1
2
0
I'm working on an open source project (Master of Mana, a mod for Civilization 4) which uses Python 2.4.1 for several game mechanics. Is there a chance for a performance improvement if I try to upgrade to Python 2.7.3 or even 3.3.0? Related to this, has anyone done a performance analysis on different Python versions?
Upgrading to a newer Python version - performance improvements?
0
0
0
379
14,284,492
2013-01-11T18:32:00.000
30
0
0
0
python,tkinter,colors,text-cursor
14,284,594
3
true
0
1
You can change the insertbackground option of the text widget to whatever you want.
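For example, a minimal sketch (Python 3's tkinter; the option name is the same in Python 2's Tkinter):

```python
import tkinter as tk

root = tk.Tk()
text = tk.Text(root, bg="black", fg="white",
               insertbackground="white",   # colour of the blinking text cursor
               insertwidth=2)              # optional: make the cursor a bit wider
text.pack()
root.mainloop()
```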
1
21
0
I have a text widget with dark background and I can't see the cursor's position. Is there any way to change the (blinking) text cursor's color?
How to change text cursor color in Tkinter?
1.2
0
0
12,336
14,287,141
2013-01-11T21:43:00.000
5
1
1
0
python,algorithm,structure
14,287,198
3
false
0
0
Learning the syntax of a programming language to express an algorithm is like learning the syntax of English to express a thought. Sure, there are nuances in English that allow you to express some thoughts better than others or in other languages. However, a command of English does not automatically enable you to be able to think some thoughts. Similarly, if you want to pick up an algorithms book, go for it! Your understanding of python is only very loosely connected with your ability to develop an algorithm to solve a problem. Once you learn how to solve problems, you will be able to develop an algorithm to solve the specific problem at hand, and then choose the language best suited to express that algorithm … And as you design more and more algorithms, you'll get better at developing better algorithms; and as you write more python code, you'll get better at writing python code. I don't know what book you're currently reading, but beginner books tend to orient themselves at teaching the language (its syntax, semantics, etc.) using simple algorithmic examples. If you're having a tough time understanding the algorithms that govern the solutions to these examples, you should probably do some beginner reading on algorithms. It's somewhat of a cycle, really - in order to learn algorithms, you need to be able to express them (and algorithms are most easily expressed in code). Thus to understand algorithms, you need to understand code. This is not entirely true - pseudocode solves this problem quite well. But you'll need to understand at least the pseudocode. Hope this helps
1
0
0
I'm a complete Noob, having studied Python 2.7 for less than four days using eclipse on a mac, and I have managed to write a "FizzBang" from scratch in about 20 minutes, but....I'm having one heck of a time with basic algorithms. I'm wondering if this is something I'll speed up at in time, or if there is some sort of "logical thinking" practice that is above me without instruction. Memorizing syntax has been no problem so far and I really enjoy the feeling when it all works out. My question is, should I detour from my current beginner book and read something about basic algorithms (maybe something specific to Python algorithms)? If so, what beginner text would 'yall recommend? I searched for this topic and didn't find anything that matched, so if this is a duplicative post, or whatever you call it, my bad. I'd appreciate any help I get from you Pro's. Thanks
Beginning Python trouble with basic algorithms
0.321513
0
0
590
14,288,177
2013-01-11T23:13:00.000
1
0
1
0
python,automation,interop,concept
14,288,210
5
false
0
0
You should look into a package called Selenium for interacting with web browsers.
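A minimal sketch of that route (it assumes selenium and a matching browser driver are installed; the search URL and output file name are illustrative):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                       # drives a real Chrome window
driver.get("https://www.google.com/search?q=yesterday+beatles+lyrics")
page_text = driver.find_element(By.TAG_NAME, "body").text   # grab the visible page text
driver.quit()

with open("yesterday.txt", "w", encoding="utf-8") as f:     # save instead of pasting into Word
    f.write(page_text)
```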
1
27
0
I'm having the idea of writing a program using Python which will find the lyrics of a song whose name I provide. I think the whole process should boil down to a couple of things below. These are what I want the program to do when I run it: prompt me to enter the name of a song; copy that name; open a web browser (Google Chrome for example); paste that name in the address bar and find information about the song; open a page that contains the lyrics; copy those lyrics; run a text editor (like Microsoft Word for instance); paste the lyrics; save the new text file with the name of the song. I am not asking for code, of course. I just want to know the concepts or ideas about how to use Python to interact with other programs. To be more specific, I think I want to know, for example, just how we point out where the address bar is in Google Chrome and tell Python to paste the name there. Or how we tell Python how to copy the lyrics as well as paste them into the Microsoft Word sheet and then save it. I've been reading (I'm still reading) several books on Python: Byte of Python, Learn Python the Hard Way, Python for Dummies, Beginning Game Development with Python and Pygame. However, I found out that it seems like I only (or almost only) learn to create programs that work on their own (I can't tell my program to do things I want with other programs that are already installed on my computer). I know that my question somehow sounds rather silly, but I really want to know how it works, the way we tell Python to recognize that this part of the Google Chrome browser is the address bar and that it should paste the name of the song in it. The whole idea of making Python interact with another program is really, really vague to me and I just extremely want to grasp that. Thank you everyone, whoever spends their time reading my so-long question. ttriet204
Interact with other programs using Python
0.039979
0
0
89,546
14,289,656
2013-01-12T02:41:00.000
0
0
0
0
c++,python
14,289,688
2
false
0
1
Due to complexities of the C++ ABI (such as name mangling), it's generally difficult and platform-specific to load a C++ library directly from Python using ctypes. I'd recommend you either create a simple C API which can be easily wrapped with ctypes, or use SWIG to generate wrapper types and a proper extension module for Python.
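On the Python side, the ctypes route then looks roughly like this (the library and function names are hypothetical, and it assumes the .so exposes plain extern "C" functions):

```python
import ctypes

lib = ctypes.CDLL("./libmylib.so")                 # load the shared object

# Declare the signature of one wrapped function so ctypes converts the types correctly.
lib.compute_score.argtypes = [ctypes.c_int, ctypes.c_double]
lib.compute_score.restype = ctypes.c_double

print(lib.compute_score(3, 1.5))
```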
1
0
0
I'm a Python guy building a Linux-based web service for a client who wants me to interface with a small C++ library that they're currently using with a bunch of Windows based VB applications. They have assured me that the library is fairly simple (as far as they go I guess), and that they just need to know how best to compile and deliver it to me so that I can use it in Python under Linux. I've read a bit about the ctypes library and other options (SWIG, etc), but for some reason I haven't really been able to wrap my head around the concept and still don't know how to tell them what I need. I'm pretty sure having them re-write it with Python.h, etc is out, so I'm hoping there's a way I can simply have them compile it on Linux as a .so and just import it into Python. Is such a thing possible? How does one accomplish this?
How does one get a C++ library loaded into Python as a shared object file (.so)?
0
0
0
201
14,289,657
2013-01-12T02:41:00.000
3
1
1
0
python,c,python-imaging-library
14,289,791
2
false
0
1
Yes, coding the same algorithm in Python and in C, the C implementation will be faster. This is definitely true for the usual Python interpreter, known as CPython. Another implementation, PyPy, uses a JIT, and so can achieve impressive speeds, sometimes as fast as a C implementation. But running under CPython, the Python will be slower.
2
0
0
I was using PIL to do image processing, and I tried to convert a color image into a grayscale one, so I wrote a Python function to do that, even though I know PIL already provides a convert function for this. But the version I wrote in Python takes about 2 seconds to finish the grayscaling, while PIL's convert finishes almost instantly. So I read the PIL code and figured out that the algorithm I wrote is pretty much the same, but PIL's convert is written in C or C++. So is this what makes the performance different?
performance concern, Python vs C
0.291313
0
0
318
14,289,657
2013-01-12T02:41:00.000
2
1
1
0
python,c,python-imaging-library
14,290,456
2
false
0
1
If you want to do image processing, you can use OpenCV (cv2), SimpleCV, NumPy, SciPy, Cython, Numba ... OpenCV, SimpleCV and SciPy have many image processing routines already. NumPy can do operations on arrays at C speed. If you want loops in Python, you can use Cython to compile your Python code with static declarations into an external module. Or you can use Numba to do JIT compilation; it can convert your Python code into machine code and will give you near-C speed.
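As an illustration of the NumPy approach, a vectorised grayscale conversion avoids the per-pixel Python loop entirely (the file names are made up; the weights are the usual ITU-R 601 luma coefficients):

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64)
weights = np.array([0.299, 0.587, 0.114])           # ITU-R 601 luma coefficients
gray = rgb @ weights                                 # (H, W, 3) @ (3,) -> (H, W), all done in C
Image.fromarray(gray.astype(np.uint8)).save("photo_gray.jpg")
```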
2
0
0
I was using PIL to do image processing, and I tried to convert a color image into a grayscale one, so I wrote a Python function to do that, even though I know PIL already provides a convert function for this. But the version I wrote in Python takes about 2 seconds to finish the grayscaling, while PIL's convert finishes almost instantly. So I read the PIL code and figured out that the algorithm I wrote is pretty much the same, but PIL's convert is written in C or C++. So is this what makes the performance different?
performance concern, Python vs C
0.197375
0
0
318
14,294,643
2013-01-12T15:10:00.000
1
1
0
0
python,performance,gevent,pypy
14,294,862
3
false
1
0
The builtin Flask server is a BaseHTTPServer or so - never use it. The best scenario is very likely Tornado + PyPy or something like that. Benchmark before using it, though. It also depends quite drastically on what you're doing. The web server + web framework benchmarks are typically hello-world kinds of benchmarks. Is your application really like that? Cheers, fijal
1
25
0
Both 'pypy' and 'gevent' are supposed to provide high performance. Pypy is supposedly faster than CPython, while gevent is based on co-routines and greenlets, which supposedly makes for a faster web server. However, they're not compatible with each other. I'm wondering which setup is more efficient (in terms of speed/performance): The builtin Flask server running on pypy or: The gevent server, running on CPython
Which setup is more efficient? Flask with pypy, or Flask with gevent?
0.066568
0
0
15,866
14,296,401
2013-01-12T18:12:00.000
0
0
0
0
php,python,pygame
14,296,430
1
false
0
1
If I understood you right, then the answer is: no, it's impossible. I think Flash is the most acceptable choice for you, or Unity3D with a browser plugin.
1
0
0
Is there a method to add python pygame with images to PHP and display it in a local browser? What are the necessary requirements to do something like this?
Is there a way to add Pygame into a local PHP file?
0
0
0
71
14,296,403
2013-01-12T18:12:00.000
3
0
1
0
python,list,loops,python-3.x
14,296,587
2
false
0
0
Try using filter: newlist = list(filter(lambda a: a > 0, [1, 2, 3])) (the list() call is needed on Python 3, where filter returns an iterator), or use a list comprehension: [i for i in original_list if i > 0] (as mentioned in the comments above).
1
2
0
I am trying to create a new list inside a loop (without changing the name) which will cut all the negative or zero elements, eventually changing its length. What is the fastest way to do that? I have lost the last few days trying to do this...
Create new list with positive elements
0.291313
0
0
4,133
14,296,531
2013-01-12T18:24:00.000
1
0
1
0
python,pip,distribute
63,071,271
7
false
0
0
I tried the above solutions. However, I could only resolve the problem once I did: sudo pip3 install -U pip (for Python 3)
2
116
0
I seem to have suddenly started to encounter the error error: option --single-version-externally-managed not recognized when using pip install with various packages (including PyObjC and astropy). I've never seen this error before, but it's now also showing up on travis-ci builds for which nothing has changed. Does this error indicate an out of date distribution? Some incorrectly specified option in setup.py? Something else entirely?
What does "error: option --single-version-externally-managed not recognized" indicate?
0.028564
0
0
86,694
14,296,531
2013-01-12T18:24:00.000
9
0
1
0
python,pip,distribute
26,632,384
7
false
0
0
Try upgrading setuptools like this: pip install --upgrade setuptools
2
116
0
I seem to have suddenly started to encounter the error error: option --single-version-externally-managed not recognized when using pip install with various packages (including PyObjC and astropy). I've never seen this error before, but it's now also showing up on travis-ci builds for which nothing has changed. Does this error indicate an out of date distribution? Some incorrectly specified option in setup.py? Something else entirely?
What does "error: option --single-version-externally-managed not recognized" indicate?
1
0
0
86,694
14,297,741
2013-01-12T20:39:00.000
3
0
1
1
python,ipython
32,446,382
3
false
0
0
I assume you don't want to run the program as root. So this is my modified version that runs as <username> (put in /etc/rc.local before the exit 0 line): su <username> -c "/usr/bin/ipython notebook --no-browser --profile <profilename> &" You can check to make sure your ipython is at that path with which ipython. Though you may just be able to get away with not putting the full path.
1
20
0
I love ipython, especially the notebook feature. I currently keep a screen session running with the notebook process in it. How would I add ipython's notebook engine/webserver to my system's (CentOS5) startup procedures?
How to start ipython notebook server at boot as daemon
0.197375
0
0
20,905
14,304,328
2013-01-13T14:16:00.000
8
0
0
0
python
60,393,428
2
false
0
0
I had the exact same problem. winsound.Beep used to work just fine and then it suddenly stopped working (or that's what I thought). The problem was that someone (or some update) had turned off the System sounds, which prevented Windows from playing the Beep sound, either manually or through my program. Try right-clicking on the speaker symbol, opening the volume mixer, and checking whether System Sounds is off or at minimum volume. I hope that helps!
1
5
0
I am trying to make some Beep noises with the winsound.Beep command. However when I am using winsound.Beep(500, 500) I do not hear anything. With winsound.MessageBeep and winsound.PlaySound however, I DO get to play sounds. Any ideas what I should do? What I am trying to do: I want to write a little practicing program for training intervals: the computer sounds a first tone, then a second tone and you will have to guess what the tone interval is. For this I need pitched tones, tones for which I can set the frequency. I want to keep it as simple as possible, any pitched sound will do. I do not want to have to collect a set of .wav files or whatever. I want to make use of a tone generator which I think is available on most soundcards. winsound.Beep seems like something that can do this trick, but any other suggestions are welcome.
Can't get winsound.Beep to work
1
0
0
8,764
14,304,420
2013-01-13T14:27:00.000
0
0
0
0
python-2.7,scikit-learn
14,309,807
1
true
0
0
Maybe you could extract the top n important features and then compute pairwise Spearman's or Pearson's correlations for those in order to detect redundancy only for the top informative features as it might not be feasible to compute all pairwise feature correlations (quadratic with the number of features). There might be more clever ways to do the same by exploiting the statistics of the relative occurrences of the features as nodes in the decision trees though.
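A rough sketch of that idea with scikit-learn and pandas (synthetic data with one deliberately duplicated feature; the top-20 cut-off and the 0.95 threshold are arbitrary choices):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

X_arr, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                               shuffle=False, random_state=0)
X = pd.DataFrame(X_arr, columns=["f%d" % i for i in range(30)])
X["f0_copy"] = X["f0"]                              # a deliberately redundant feature

model = ExtraTreesClassifier(n_estimators=250, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
top = importances.nlargest(20).index                # only the most important features

corr = X[top].corr(method="spearman")               # pairwise Spearman correlations
redundant = [(a, b) for a in top for b in top
             if a < b and abs(corr.loc[a, b]) > 0.95]
print(redundant)                                    # e.g. [('f0', 'f0_copy')]
```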
1
1
1
I am using the Scikit-learn Extremely Randomized Trees algorithm to get info about the relative feature importances and I have a question about how "redundant features" are ranked. If I have two features that are identical (redundant) and important to the classification, the extremely randomized trees cannot detect the redundancy of the features. That is, both features get a high ranking. Is there any other way to detect that two features are actually redundant?
Feature importance based on extremely randomize trees and feature redundancy
1.2
0
0
203
14,307,581
2013-01-13T20:03:00.000
1
0
0
1
python,google-app-engine,django-nonrel
14,368,275
2
false
1
0
Did you update djangoappengine without updating django-nonrel and djangotoolbox? While I haven't upgraded to GAE 1.7.4 yet, I'm running 1.7.2 with no problems. I suspect your problem is not related to the GAE SDK but rather your django-nonrel installation has mismatching pieces.
2
0
0
I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I have done following things: I removed old version of GAE and installed GAE 1.7. Along with that I have also changed my djangoappengine folder with latest version. I have copied new version of GAE to /usr/local directory since my ~/bashrc file PATH variable pointing to GAE to this directory. Now, I am getting error django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: No module named utils I don't think there is any problem of directory structure since earlier it was running fine. Does anyone has any idea ? Your help will be highly appreciated. -Sunil .
Django-nonrel broke after installing new version of Google App Engine SDK
0.099668
1
0
191
14,307,581
2013-01-13T20:03:00.000
0
0
0
1
python,google-app-engine,django-nonrel
14,382,654
2
true
1
0
Actually I changed the Google App Engine path in the /.bashrc file and restarted the system. It solved the issue. I think it was creating the problem because I was not restarting the system after the .bashrc changes.
2
0
0
I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I have done following things: I removed old version of GAE and installed GAE 1.7. Along with that I have also changed my djangoappengine folder with latest version. I have copied new version of GAE to /usr/local directory since my ~/bashrc file PATH variable pointing to GAE to this directory. Now, I am getting error django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: No module named utils I don't think there is any problem of directory structure since earlier it was running fine. Does anyone has any idea ? Your help will be highly appreciated. -Sunil .
Django-nonrel broke after installing new version of Google App Engine SDK
1.2
1
0
191
14,308,889
2013-01-13T22:23:00.000
2
0
0
0
python,arrays
14,309,992
1
true
0
0
Given your description, a sparse representation may not be very useful to you. There are many other options, though: Make sure your values are represented using the smallest data type possible. The example you show above is best represented as single-byte integers. Reading into a numpy array or python array will give you good control over data type. You can trade memory for performance by only reading a part of the data at a time. If you re-write the entire dataset as binary instead of CSV, then you can use mmap to access the file as if it were already in memory (this would also make it faster to read and write). If you really need the entire dataset in memory (and it really doesn't fit), then some sort of compression may be necessary. Sparse matrices are an option (as larsmans mentioned in the comments, both scipy and pandas have sparse matrix implementations), but these will only help if the fraction of zero-value entries is large. Better compression options will depend on the nature of your data. Consider breaking up the array into chunks and compressing those with a fast compression algorithm like RLE, SZIP, etc.
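A small sketch of the first two points (the file names are made up; int8 matches the single-digit example values):

```python
import numpy as np

# 1) Parse the CSV straight into the smallest dtype that fits the values.
arr = np.loadtxt("big.csv", delimiter=",", dtype=np.int8)   # 1 byte per cell instead of 8

# 2) Optionally persist it as raw binary and reopen it memory-mapped,
#    so only the parts you actually touch are pulled into RAM.
arr.tofile("big.bin")
view = np.memmap("big.bin", dtype=np.int8, mode="r", shape=arr.shape)
nonzero = view[view != 0]    # work only with the entries that have values
```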
1
0
1
I read in a large Python array from a CSV file (20332 * 17009) using a Windows 7 64-bit OS machine with 12 GB of RAM. The array has values in about half of the places, like the example below. I only need the parts of the array that have values for analysis, rather than the whole array. [0 0 0 0 0 0 0 0 0 3 8 0 0 4 2 7 0 0 0 0 5 2 0 0 0 0 1 0 0 0] I am wondering: is it possible to ignore the 0 values for analysis and save more memory? Thanks in advance!
How to save memory for a large python array?
1.2
0
0
733
14,309,149
2013-01-13T22:50:00.000
0
0
1
0
c++,python,algorithm,dynamic-programming
14,326,063
2
false
0
0
I think there is a dynamic programming solution that is just about tractable if path length is just the number of links in the paths (so links don't have weights). Work from the leaves up. At each node you need to keep track of the best pair of solutions confined to the subtree with root at that node, and, for each k, the best solution with a path of length k terminating in that node and a second path of maximum length somewhere below that node and not touching the path. Given this info for all descendants of a node, you can produce similar info for that node, and so work your way up to the root. You can see that this amount of information is required if you consider a tree that is in fact just a line of nodes. The best solution for a line of nodes is to split it in two, so if you have only worked out the best solution when the line is of length 2n + 1 you won't have the building blocks you need for a line of length 2n + 3.
1
0
0
I have a simple, undirected tree T. I need to find a path named A and another named B such that A and B have no common vertex. The purpose is to maximize Len(A)*Len(B). I figured this problem is similar to the Partition Problem, except in the Partition Problem you have a set, but here you have an equivalence set. The solution is to find two non-crossing paths such that Len(A) ~ Len(B) ~ [n-1/2]. Is this correct? How should I implement such an algorithm?
Partition in Equivalence sets
0
0
0
163
14,314,019
2013-01-14T06:12:00.000
1
0
0
1
python,linux,share,cifs
14,489,863
2
false
0
0
What about this?: Change the windows share to point to an actual Linux directory reserved for the purpose. Then, with simple Linux scripts, you can readily determine if any files there have any writers. Once there is a file not being written to, copy it to the windows folder—if that is where it needs to be.
1
9
0
I'm trying to write a script to take video files (ranging from several MB to several GB) written to a shared folder on a Windows server. Ideally, the script will run on a Linux machine watching the Windows shared folder at an interval of something like every 15-120 seconds, and upload any files that have fully finished writing to the shared folder to an FTP site. I haven't been able to determine any criteria that allows me to know for certain whether a file has been fully written to the share. It seems like Windows reserves a spot on the share for the entire size of the file (so the file size does not grow incrementally), and the modified date seems to be the time the file started writing, but it is not incremented as the file continues to grow. LSOF and fuser do not seem to be aware of the file, and even Samba tools don't seem to indicate it's locked, but I'm not sure if that's because I haven't mounted with the correct options. I've tried things like trying to open the file or rename it, and the best I've been able to come up with is a "Text File Busy" error code, but this seems to cause major delays in file copying. Naively uploading the file without checking to see if it has finished copying not only does not throw any kind of error, but actually seems to upload nul or random bytes from the allocated space to the FTP resulting in a totally corrupt file (if the network writing process is slower than the FTP) . I have zero control over the writing process. It will take place on dozens of machines and consist pretty much exclusively of Windows OS file copies to a network share. I can control the share options on the Windows server, and I have full control over the Linux box. Is there some method of checking locks on a Windows CIFS share that would allow me to be sure that the file has completely finished writing before I try to upload it via FTP? Or is the only possible solution to have the Linux server locally own the share? Edit The tldr, I'm really looking for the equivalent of something like 'lsof' that works for a cifs mounted share. I don't care how low level, though it would be ideal if it was something I could call from Python. I can't move the share or rename the files before they arrive.
How to tell if a file is being written to a Windows CIFS share from Linux
0.099668
0
0
3,260
14,323,390
2013-01-14T17:21:00.000
0
0
0
0
python,qt,video,pyside,phonon
14,509,771
3
true
0
1
OK - for others out there looking for the same info, I found Hachoir-metadata and Hachoir-parser (https://bitbucket.org/haypo/hachoir/wiki/Home). They provide the correct info but there is a serious lack of docs for it and not that many examples that I can find. Therefore, while I have parsed a video file and returned the metadata for it, I'm now struggling to 'get' that information in a usable format. However, I will not be defeated!
1
0
0
Can anybody tell me how I can return the dimensions of a video (pixel height/width) using Qt (or any other Python route to that information). I have googled the hell out of it and cannot find a straight answer. I assumed it would either be mediaobject.metadata() or os.stat() but neither appear to return the required info.
qt phonon - returning video dimensions
1.2
0
0
305
14,325,773
2013-01-14T20:03:00.000
7
0
0
0
python,matplotlib
53,153,007
2
false
0
0
This is several years after you asked the question, but the only way I've found to do it is to change the matplotlib.rc. You can do this either in the actual .rc file or within your python script, e.g. import matplotlib as mpl mpl.rc('hatch', color='k', linewidth=1.5) This will make all of the hatch lines in your script black and thickness 1.5 rather than the default 1.0.
1
52
0
In this example of a marker from my scatter plot, I have set the color to green, the edge color to black, and the hatch to "|". For the hatch pattern to show up at all I must set the edgecolor; however, when I do, I get a very thick border around the marker. Two questions: 1) How can I set the size of this border (preferably to 0)? 2) How can I increase the thickness of the hatch lines?
How to change marker border width and hatch width?
1
0
0
99,634
14,327,036
2013-01-14T21:29:00.000
29
0
0
0
python,django,performance
14,327,315
5
false
1
0
I think using len(qs) makes more sense here, since you need to iterate over the results. qs.count() is a better option if all you want to do is print the count and not iterate over the results. len(qs) will hit the database with SELECT * FROM table, whereas qs.count() will hit the db with SELECT COUNT(*) FROM table. Also, qs.count() returns an integer, and you cannot iterate over it.
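To illustrate the difference, here is a small sketch; the Entry model and its fields are hypothetical, not from the question:

    from myapp.models import Entry  # hypothetical app/model

    qs = Entry.objects.filter(published=True)

    # Issues SELECT COUNT(*) ...; nothing is fetched or cached:
    n = qs.count()

    # Issues SELECT * ...; the results are fetched and cached on the
    # queryset, so the loop below reuses them without another query:
    n = len(qs)
    for entry in qs:
        print(entry.title)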
1
113
0
In Django, given that I have a QuerySet that I am going to iterate over and print the results of, what is the best option for counting the objects? len(qs) or qs.count()? (Also given that counting the objects in the same iteration is not an option.)
Count vs len on a Django QuerySet
1
0
0
92,515
14,327,219
2013-01-14T21:39:00.000
1
0
1
0
emacs,ipython,python-mode
14,430,802
1
false
0
0
Well, it seems that the latest build of IPython (2927) in conjunction with python-mode 6.0.1 solves this issue.
1
0
0
I am using python-mode 6.0.10 with the ipython console. When I try to bring up a previous history item it is shifted to the left, preceding the console input prompt, e.g. In [51]: (hit M-p to get previous item) plt.plot(u_class)In[51]: (hitting M-p after this yields a "not a command line" error). What I have to do now is effectively kill the badly formatted previous-item text and yank it in front of the "In[51]:". After that, repeated use of M-p works as desired. Is there a way to fix this? This behaviour happens both on OS X and Windows.
previous item in ipython console emacs shifts text to the left before console prompt
0.197375
0
0
92
14,328,369
2013-01-14T22:59:00.000
1
0
1
0
python,python-docx
24,855,520
3
false
0
0
As of v0.7.2, python-docx translates '\n' and '\r' characters in a string to <w:br/> elements, which provides the behavior you describe. It also translates '\t' characters into <w:tab/> elements. This behavior is available for strings provided to Document.add_paragraph() and Paragraph.add_run(), and for strings assigned to Paragraph.text and Run.text.
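A minimal sketch of that behaviour with python-docx 0.7.2 or later; the output filename is arbitrary:

    from docx import Document

    doc = Document()
    # Each '\n' becomes a <w:br/> element: a new line in the same paragraph,
    # so there is no extra inter-paragraph spacing.
    doc.add_paragraph('first line\nsecond line\nthird line')
    doc.save('multiline.docx')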
1
9
0
Python Docx is a pretty good library for generating Microsoft Word documents for something that doesn't directly deal with all of the COM stuff. Nonetheless, I'm running into some limitations. Does anyone have any idea how one would put a carriage return in a string of text? I want a paragraph to have multiple lines without there being extra space between them. However, writing out a string that separates the lines with the usual \n is not working. Nor is using &#10 or &#13. Any other thoughts, or is this framework too limited for something like that?
Python Docx Carriage Return
0.066568
0
0
13,703
14,334,222
2013-01-15T09:03:00.000
0
0
0
1
python,webserver,subdomain,tornado,saas
14,342,790
2
false
1
0
Tornado itself does not handle subdomains. You will need to use something like NGINX to control subdomain access.
2
2
0
I have a web app which runs at www.mywebsite.com. I am asking users to register and enter a subdomain name for their login, e.g. if a user enters the subdomain "demo", then his login URL should be "www.demo.mywebsite.com". How can this be done in a Tornado web app, given that Tornado itself is a web server? Or is serving the app with nginx or another web server the only way? Thanks for your help in advance.
sub domains in tornado web app for SAAS
0
0
0
791
14,334,222
2013-01-15T09:03:00.000
3
0
0
1
python,webserver,subdomain,tornado,saas
14,419,302
2
false
1
0
self.request.host under tornado.web.RequestHandler will contain the subdomain, so you can change application logic according to the subdomain, e.g. load current_user based on cookie + subdomain.
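A rough sketch of what that looks like inside a handler; the handler itself is hypothetical:

    import tornado.web

    class DashboardHandler(tornado.web.RequestHandler):  # hypothetical handler
        def get(self):
            # self.request.host is e.g. 'demo.mywebsite.com' (it may include ':port').
            host = self.request.host.split(':')[0]
            subdomain = host.split('.')[0]
            self.write('Tenant: %s' % subdomain)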
2
2
0
I have a web app which runs at www.mywebsite.com. I am asking users to register and enter a subdomain name for their login, e.g. if a user enters the subdomain "demo", then his login URL should be "www.demo.mywebsite.com". How can this be done in a Tornado web app, given that Tornado itself is a web server? Or is serving the app with nginx or another web server the only way? Thanks for your help in advance.
sub domains in tornado web app for SAAS
0.291313
0
0
791
14,334,667
2013-01-15T09:31:00.000
0
0
0
0
python,django,django-forms,django-widget
14,334,871
1
false
1
0
This seems overly complicated. Apart from anything else, tying up an entire process waiting for someone to fill in a form is a bad idea. Although I can't really understand exactly what you want to do, it seems likely that there are better solutions. Here are a few possibilities: Page A redirects to Page B before initializing the form, and B redirects back to A on submit; Page A loads the popup and then loads the form via Ajax; Page B dynamically fills in the form fields in A on submit via client-side JavaScript; and so on.
1
0
0
I have a page "A" with some CharField to fill programmatically. The value to fill come from another page "B", opened by javascript code executed only when the page is showed (after the init). This is the situation: page A __init__ during the init, start a thread listening on the port 8080 page A initialized and showed --> javascript in the template is executed the javascript tag opens a new webpage, that sends data to the 8080 the thread reads data sent by page B, and try to fill CharFields Is there a way to do this? I don't know...a refresh method.. If it is not possible... I need a way to call the javascript function before the init of the form OR A way to modify the HTML code of the page created
Change CharField value after init
0
0
0
232
14,335,832
2013-01-15T10:34:00.000
1
0
0
0
python,django,user-interface,django-views
14,335,895
3
false
1
0
This is a very wide-ranging question. One solution would be to store a trial flag on each user. On an authenticated request, check for User.trial in your controller (and probably view) and selectively allow/deny access to the endpoint or selectively render parts of the page. If you wish to use built-in capabilities of Django, you could view 'trial' as a permission, or a user group.
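As one possible sketch of the group-based route mentioned above; the group name and view are made up for illustration:

    from django.contrib.auth.decorators import user_passes_test
    from django.http import HttpResponse

    def in_trial_group(user):
        # Membership in a 'trial' group acts as the flag.
        return user.groups.filter(name='trial').exists()

    @user_passes_test(in_trial_group)
    def trial_only_view(request):  # hypothetical view
        return HttpResponse('Trial-only content')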
1
1
0
I am using Django, and need certain 'trial' users to be able to activate only a certain part of the website. Any ideas on an efficient way to do this? I was thinking about giving a paying customer a certain ID and linking this to the URL of the site for permission. Thanks, Tom
Restrict so users only view a certain part of website-Django
0.066568
0
0
1,610
14,340,366
2013-01-15T14:55:00.000
69
0
1
0
python
14,340,509
3
true
0
0
If you don't want value interpolation (the %(...)s substitution that ConfigParser performs), then use RawConfigParser, not ConfigParser.
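For illustration, a small sketch; the module name is as it was in Python 2, which this question dates from (on Python 3 it is configparser), and the file, section, and option names are made up:

    from ConfigParser import RawConfigParser  # Python 3: from configparser import RawConfigParser

    parser = RawConfigParser()
    parser.read('settings.ini')            # hypothetical file containing e.g.  password = ab$%&cd
    print(parser.get('auth', 'password'))  # the '%' comes back literally, no interpolation

    # If you need to stay with ConfigParser/SafeConfigParser instead,
    # escape literal percent signs in the file as '%%'.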
1
41
0
Stupid question with (for sure) simple answer... I am using configparser to read some strings from a file. When the string has the '%' symbol ($%& for example) it complains: ConfigParser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: "%&'" Anybody familiar with this? Thanks!
Configparser and string with %
1.2
0
0
26,866
14,341,737
2013-01-15T16:03:00.000
0
0
0
1
python,unix
14,341,933
2
false
0
0
If your program is *nix-specific, I suppose your best bet is parsing the output of the mount command. It gives you mount points, mount options, and FS names. From those you could filter the mount points that are mounted by, or at least writable by, the current user, and that carry the right FS (possibly vfat?).
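Purely as a sketch of that idea, here is one way to parse the usual 'device on mountpoint type fstype (options)' layout of mount's output; the field splitting is a heuristic, not a guarantee:

    import subprocess

    def mounted_filesystems():
        """Yield (device, mountpoint, fstype, options) parsed from `mount`."""
        out = subprocess.check_output(['mount']).decode('utf-8', 'replace')
        for line in out.splitlines():
            # Typical line: '/dev/sdb1 on /media/usb type vfat (rw,uid=1000)'
            try:
                device, rest = line.split(' on ', 1)
                mountpoint, rest = rest.split(' type ', 1)
                fstype, options = rest.split(' ', 1)
            except ValueError:
                continue
            yield device, mountpoint, fstype, options.strip('()')

    for dev, mnt, fs, opts in mounted_filesystems():
        if fs == 'vfat':
            print('%s mounted at %s (%s)' % (dev, mnt, opts))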
1
0
0
I'm writing a Python program that uses dd to write an OS image to a USB flash drive. Drives /dev/sda and /dev/sdb are mounted, in my case, with sdb being the flash drive I want to write to. However, on someone else's system, the drive they want to write to might be /dev/sdc. How do I let the user choose what drive to write to? Preferably letting them choose by disk label, for user friendliness. EDIT: Let me rephrase this: I've got the USB flash drives /dev/sdb and /dev/sdc inserted. I want to basically tell the user; "Which flash drive do you want to write to, sdb or sdc?", then write to the disk that the user chose. So far, I've found no way to do this.
Choose from disks in Python?
0
0
0
170
14,343,871
2013-01-15T17:59:00.000
0
0
0
1
python,google-app-engine,python-2.7
14,365,980
3
false
1
0
I would say precompute those values and output them as hardcoded Python structures that you save in a generated Python file. Just read those structures into memory as part of your instance startup. From your description, there's no reason to compute these values at runtime, and there's no reason to store them in the datastore, since that has a cost associated with it, as well as some latency for the RPC.
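A minimal sketch of the 'generate a hardcoded Python file' idea; the actual sunrise/sunset values would come from PyEphem, so they are just stubbed here and the names are made up:

    import pprint

    # location id -> list of (sunrise, sunset) ISO strings, stubbed instead of PyEphem output.
    precomputed = {
        'loc-0001': [('2013-01-15T07:42:00', '2013-01-15T17:03:00')],
        'loc-0002': [('2013-01-15T07:40:30', '2013-01-15T17:05:10')],
    }

    with open('sun_tables.py', 'w') as fh:
        fh.write('# Auto-generated; do not edit by hand.\n')
        fh.write('SUN_TABLES = %s\n' % pprint.pformat(precomputed))

    # At instance startup the app then just does:
    #     from sun_tables import SUN_TABLES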
2
1
0
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrises/sunsets ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are: (1) precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list, and store it in a PickleProperty; (2) the same, but put the list into a JsonProperty; (3) go with DateTimeProperty and set repeated=True. Now, I'd like the very next sunrise/sunset to be indexed, but that value can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. Does anyone know the relative effort (in terms of indexing and CPU load) for these three approaches? Does repeated=True have an effect on the indexing? Thanks, Dave
Best strategy for storing precomputed sunrise/sunset data?
0
1
0
537
14,343,871
2013-01-15T17:59:00.000
1
0
0
1
python,google-app-engine,python-2.7
14,345,283
3
false
1
0
For 2000 immutable data points - just calculate them when the instance starts or on first use, then keep them in memory. This will be the cheapest and fastest.
2
1
0
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrises/sunsets ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are: (1) precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list, and store it in a PickleProperty; (2) the same, but put the list into a JsonProperty; (3) go with DateTimeProperty and set repeated=True. Now, I'd like the very next sunrise/sunset to be indexed, but that value can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. Does anyone know the relative effort (in terms of indexing and CPU load) for these three approaches? Does repeated=True have an effect on the indexing? Thanks, Dave
Best strategy for storing precomputed sunrise/sunset data?
0.066568
1
0
537
14,344,473
2013-01-15T18:33:00.000
0
0
0
0
python,mysql,django
27,122,957
2
false
1
0
I used the django-celery package and created a job in it to update the data periodically.
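For reference, a sketch of what such a job looked like with the django-celery stack of that era; the model, field values, and task body are hypothetical:

    from datetime import timedelta
    from celery.task import periodic_task

    @periodic_task(run_every=timedelta(days=1))
    def add_recurring_income():
        from myapp.models import Income    # hypothetical model
        for income in Income.objects.filter(recurrence_type='Daily'):
            income.pk = None                # saving with a blank pk inserts a copy
            income.save()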
2
1
0
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table "daily" or "monthly". Is there any way in MySQL to add data periodically into a table? I am using the Django framework for developing the web application.
add data to table periodically in mysql
0
1
0
214
14,344,473
2013-01-15T18:33:00.000
1
0
0
0
python,mysql,django
14,344,610
2
true
1
0
As far as I know there is no such function in MySQL. Even if MySQL could do it, this should not be its job; such functions should be part of the business logic in your application. The normal way is to set up a cron job on the server. The cron job will wake up at the time you set, and then call your Python script or SQL to do the data-insertion work. And scripts are much better than direct SQL.
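As a sketch of that approach, a small standalone script plus one crontab line; the table columns, credentials, and paths are all made up:

    #!/usr/bin/env python
    # add_income_row.py -- called from cron, not from the web app.
    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='appuser',
                           passwd='secret', db='appdb')
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO income (user_id, amount, recurrence_type) VALUES (%s, %s, %s)",
        (1, 100.00, 'Daily'))   # made-up columns and values
    conn.commit()
    conn.close()

    # crontab entry to run it once a day at 00:05:
    # 5 0 * * *  /usr/bin/python /path/to/add_income_row.py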
2
1
0
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table "daily" or "monthly". Is there any way in MySQL to add data periodically into a table? I am using the Django framework for developing the web application.
add data to table periodically in mysql
1.2
1
0
214