Dataset columns (name, dtype, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
13,134,353
2012-10-30T07:19:00.000
4
0
0
0
python,encoding,openerp
13,135,341
1
false
1
0
The comment # -*- coding: utf-8 -*- tells the python parser the encoding of the source file. It affects how the bytecode compiler converts unicode literals in the source code. It has no effect on the runtime environment. You should explicitly define the encoding when converting strings to unicode. If you are getting UnicodeDecodeError, post your problem scenario and I'll try to help.
1
0
0
Do you guys know how to change the default encoding of an OpenERP file? I've tried adding # -*- coding: utf-8 -*- but it doesn't work (is there a setting that ignores this directive? just a wild guess). When I execute sys.getdefaultencoding() it still reports ASCII. Regards
Setting default encoding Openerp/Python
0.664037
0
0
803
13,136,828
2012-10-30T10:15:00.000
0
0
1
0
python,3d,geometry
13,242,515
2
false
0
0
I'll assume the "geometry library for python" already answered in the comments on the question. So once you have a transformation that takes 'a' parallel to 'b', you'll just apply it to 'c' The vectors 'a' and 'b' uniquely define a plane. Each vector has a canonical representation as a point difference from the origin, so you have three points: the head of 'a', the head of 'b', and the origin. First compute this plane. It will have an equation in the form Ax + By + Cz = 0. A normal vector to this plane defines both the axis of rotation and the sign convention for the direction of rotation. All you need is one normal vector to the plane, since they're all collinear. You can solve for such a vector by picking at two non-collinear vectors in the plane and taking the dot product with the normal vector. This gives a pair of linear equations in two variables that you can solve with standard methods such as Cramer's rule. In all of these manipulations, if any of A, B, or C are zero, you have a special case to handle. The angle of the rotation is given by the cosine relation for the dot product of 'a' and 'b' and their lengths. The sign of the angle is determined by the triple product of 'a', 'b', and the normal vector. Now you've got all the data to construct a rotation matrix in one of the many canonical forms you can look up.
1
1
1
I have three vectors in 3D a,b,c. Now I want to calculate a rotation r that when applied to a yields a result parallel to b. Then the rotation r needs to be applied to c. How do I do this in python? Is it possible to do this with numpy/scipy?
Rotations in 3D
0
0
0
1,659
13,136,885
2012-10-30T10:18:00.000
1
0
0
1
python,ssh,cloud-foundry
13,139,010
1
false
1
0
I assume you mean get the result from outside of CloudFoundry (i.e. not one app launching another app and getting result, stdout and stderr). You can only access CloudFoundry apps over http(s), so you would have to find a way to wrap your invocation into something that exposes everything you need as http.
1
2
0
We need to run arbitrary commands on cloudfoundry. (The deployed apps are Python/Django, but the language for this solution does not matter). Ideally over ssh, but the protocol does not matter. We need a reliable way to get the exit code of the command that was run, as well as its stderr and stdout. If possible, the command running should be synchronous (as in, blocks the client until the command finished on the cloudfoundry app). Is there a solution out there that allows us to do this, or what would be a good way to approach this issue?
How to run a command on an app and get the exit code on cloudfoundry
0.197375
0
0
127
13,137,341
2012-10-30T10:46:00.000
2
1
0
0
android,python,sl4a,android-scripting
13,215,655
1
true
0
1
No, because the SL4A package provides the facade for making device API calls, acting as a middleman. Without it you might be able to import the module, but you would not be able to make any API calls.
1
2
0
Is there a way to use android.py module without installing SL4A? I mean I have Python running on android successfully from Terminal Emulator. Can I use that module without installing that layer (or if I can't install it anymore)?
Using Python android.py module without SL4A
1.2
0
0
1,256
13,138,077
2012-10-30T11:30:00.000
7
0
1
1
python,pip,package-managers
13,138,198
1
true
0
0
There are two main reasons that Pythonistas generally recommend pip. The first is that it is the one package manager that is pretty much guaranteed to come with a Python installation, and therefore, independent of the OS on which you are using it. This makes it easier to provide instructions that work on Windows and OS X, as well as your favourite Linux. Perhaps more importantly, pip works very nicely with virtualenv, which allows you to easily have multiple conflicting package configurations and test them without breaking the global Python installation. If I remember correctly, this is because pip is itself a Python program and it automatically runs within the current virtualenv sandbox. OS-level package managers obviously do not do this, since it isn't their job. Also, as @Henry points out below, it's easier to just list your package in a single place (PyPI) rather than depending on the Debian/Ubuntu/Fedora maintainers to include it in their package list. Less popular packages will almost never make it into a distribution's package list. That said, I often find that installing global libraries (numpy, for example) is easier and less painful with apt-get or your favourite alternative.
1
5
0
When reading tutorials and readmes I often see people advocating installing Python packages with pip, even when they are using an operating system that has a nice package manager like apt. Yet in real life I have only met people who would only install things with their OS's package manager, reasoning that this package manager will treat all packages the same, whether Python or not.
Why do pythonics prefer pip over their OS's package managers?
1.2
0
0
207
13,139,192
2012-10-30T12:40:00.000
0
0
0
0
python,django,templates,view
13,139,288
2
false
1
0
Map a URL to that view, then redirect to that URL from the template.
1
1
0
I want to call a Django view from a Django template. Is there a tag to do that ? For example, I want to create a view which manage my login form. So basically, this view will be used just to send the Django form to the html template. Thank you for your help.
Call Django view from a template
0
0
0
1,663
13,139,949
2012-10-30T13:22:00.000
1
1
0
0
python,xlrd
13,562,703
2
true
0
0
Apart from the username (the last person to save the worksheet), the Book instance returned by open_workbook does not seem to have any properties. I recursively dumped the Book (dumping its __dict__ if it is an xlrd.BaseObject) and could not find anything that way. The test files for sure had an author, company and some custom metadata. FWIW: LibreOffice does not seem to be able to find the author and company either (or does not display them), but it does show custom metadata in the properties.
1
4
0
Is there a way to read the excel file properties using xlrd? I refer not to cell presentation properties, but general workbook properties. Thanks a lot in advance.
Read workbook properties using python and xlrd
1.2
0
0
1,896
13,140,185
2012-10-30T13:34:00.000
0
0
1
0
python,lxml
13,355,413
2
true
0
0
iterparse() is strictly forward-only, I'm afraid. If you want to read a tree in reverse, you'll have to read it forward, while writing it to some intermediate store (be it in memory or on disc) in some form that's easier for you to parse backwards, and then read that. I'm not aware of any stream parsers that allow XML to be parsed back-to-front. Off the top of my head, you could use two files, one containing the data and the other an index of offsets to the records in the data file. That would make reading backwards relatively easy once it's been written.
1
1
0
I am parsing a large file (>9GB) and am using iterparse of lxml in Python to parse the file while clearing as I go forward. I was wondering, is there a way to parse backwards while clearing? I could see I how would implement this independently of lxml, but it would be nice to use this package. Thank you in advance!
lxml, parsing in reverse
1.2
0
1
1,552
13,140,539
2012-10-30T13:54:00.000
1
0
1
0
python,mercurial,centos
13,142,315
1
false
0
0
You should not have messed with your system Python -- it is incredible you can still log in at all. Python 2.4 is ancient, but it is what is used in a lot of CentOS versions in the wild - it is what is installed by its package management. Maybe you had installed a "meta package" that upgrades the system Python to 2.6, along with everything that depends on it (yum included). Anyway, 2.4 would be sub-optimal for installing Mercurial. Since your system is a mess already, you can simply easy_install mercurial into your system Python instead of trying to use yum for it: "sudo easy_install mercurial" -- if you don't have easy_install, try "yum install setuptools" first. If this does not work, search on pypi.python.org for setuptools and install it manually -- and think seriously about rebuilding this machine; you will have to do it soon.
1
0
0
I try to install mercurial on a centos vps-server with "yum install mercurial" but it says it needs python2.4 in order to install it. I have python2.6 installed. Is there a way to get past this?
Mercurial and python 2.6
0.197375
0
0
591
13,141,796
2012-10-30T15:01:00.000
0
0
1
0
python,app-store,semantics,feedback,google-prediction
13,392,517
1
false
0
0
You can use the Google Prediction API to characterize your comments as important or unimportant. What you'd want to do is manually classify a subset of your comments. Then you upload the manually classified model to Google Cloud Storage and, using the Prediction API, train your model. This step is asynchronous and can take some time. Once the trained model is ready, you can use it to programmatically classify the remaining (and any future) comments. Note that the more comments you classify manually (i.e. the larger your training set), the more accurate your programmatic classifications will be. Also, you can extend this idea as follows: instead of a binary classification (important/unimportant), you could use grades of importance, e.g. on a 1-5 scale. Of course, that entails more manual labor in constructing your model so the best strategy will be a function of your needs and how much time you can spend building the model.
1
1
0
I have received tens of thousands of user reviews on the app. I know that many of the comments have the same meaning, and I cannot read all of them. Therefore, I would like to use a Python program to analyse all comments and identify the most frequent and most important feedback. How can I do that? I can download all of an app's comments, and I have a preliminary understanding of the Google Prediction API.
How to automatic classification of app user reviews?
0
0
0
235
13,142,676
2012-10-30T15:44:00.000
7
0
1
0
python,ipython
13,144,541
1
true
0
0
I believe the flag you're looking for is --notebook-dir=/home/foo/wherever. I found this by running ipython notebook --help. Navigating directories while the server is running is still 'coming', unfortunately. It's one of those things that will need to be done the right way, and the people who can do it (which doesn't include me) are rather busy.
1
4
0
A previous question, "Multiple directories and/or subdirectories in IPython Notebook session," asked about directory traversal. The answer given was that the feature is coming. I have also seen a command line flag that starts the server in a specific directory. I can no longer find this post. The flag is not mentioned in the official documentation. What is this command line flag to set the working directory? When can we expect to see directory functionality in IPython Notebook? Thank you.
IPython notebook directory traversal
1.2
0
0
1,657
13,144,158
2012-10-30T17:05:00.000
2
0
0
0
python,gis,geospatial,shapely
63,239,049
6
false
0
1
I tried the method of @jozef but it failed even after I added the folder to the path. A straightforward solution: add geos_c.dll and geos.dll to the library folder of your Python environment. Then it works.
2
23
0
When trying to install Shapely on my Windows 64bit computer, I cannot get the GEOS library to work. So far, I have run the OSGeo4W installer from which I installed GDAL (I believe the geos library is included in that package). After that, I checked and I have geos_c.dll on my C:\OSGeo4W\bin directory, but either I have missed some configuration steps or the library does not work. I need Shapely to work, so I also ran pip install shapely after installing GDAL, and it apparently worked (although it could not find the C library for GEOS). In my code, I can import Shapely, but when I try to use it, I get an error telling me "geos.dll" is not found. Any help with this will be very appreciated. Thanks!
Python, GEOS and Shapely on Windows 64
0.066568
0
0
49,925
13,144,158
2012-10-30T17:05:00.000
1
0
0
0
python,gis,geospatial,shapely
59,217,091
6
false
0
1
I used the command below and it did work; pip install Shapely==1.3.0
2
23
0
When trying to install Shapely on my Windows 64bit computer, I cannot get the GEOS library to work. So far, I have run the OSGeo4W installer from which I installed GDAL (I believe the geos library is included in that package). After that, I checked and I have geos_c.dll on my C:\OSGeo4W\bin directory, but either I have missed some configuration steps or the library does not work. I need Shapely to work, so I also ran pip install shapely after installing GDAL, and it apparently worked (although it could not find the C library for GEOS). In my code, I can import Shapely, but when I try to use it, I get an error telling me "geos.dll" is not found. Any help with this will be very appreciated. Thanks!
Python, GEOS and Shapely on Windows 64
0.033321
0
0
49,925
13,145,282
2012-10-30T18:18:00.000
2
0
0
0
python,django
13,145,394
1
true
1
0
Short answer: no. The longer version is that it really depends on how your application is deployed. In Java for example, it's not Spring (the equivalent of Django in this analogy) that gives you an onStart hook, it's Tomcat or Jetty. The usual interface for deploying Django, WSGI, doesn't define such hooks. A WSGI process will generally be launched from a standalone process supervisor or service script, or via an external server such as Apache. In that case you might be able to hook into some lifecycle, but that is highly dependent on the server that's wrapping your requests. It sounds like you're trying to do something unorthodox. What exactly are you looking to accomplish?
1
0
0
I am trying to understand if there are standard means to handle a Django application's start (and reload). Currently I would like to use it to start a parallel thread, but the question for me is more general: is this allowed or not allowed for some reason? For example, such handlers are part of the application interface in the case of Java Servlets and .NET web applications. Are they part of the interface of a Django application? UPD In this case I am just trying to implement a small proxy which keeps an open connection. I do understand that the interface I want would ideally be part of WSGI, but it is not, and I thought that Django might provide its own solution, since in most cases (all except plain CGI) the application serves more than a single request and obviously does have a life cycle.
Django app life cycle: standard hooks to handle start and reload?
1.2
0
0
925
13,148,512
2012-10-30T22:29:00.000
3
0
0
1
python,google-app-engine
13,638,216
2
false
1
0
A similar issue happens with appcfg.py in SDK 1.7.3, where it sometimes skips uploading some files. It looks like this only happens if appcfg.py is run under Python 2.7. The workaround is to simply run appcfg.py under Python 2.5; then the upload works reliably. The code uploaded can still be 2.7-specific - it is only necessary to revert to 2.5 for the step of running the uploader function in appcfg.py.
1
3
0
I just updated to SDK 1.7.3 running on Linux. At the same time I switched to the SQLite datastore stub, as suggested by the deprecation message. After this, edits to source files are not always detected, and I have to stop and restart the SDK after updating, perhaps one time in ten. Is anyone else seeing this? Any ideas on how to prevent it? UPDATE: Changes to Python source files are not being detected. I haven't made any modifications to yaml files, and I believe that jinja2 template file modifications are being detected properly. UPDATE: I added some logging to the dev appserver and found that the file I'm editing is not being monitored. Continuing to trace what is happening.
Appengine SDK 1.7.3 not detecting updated files
0.291313
0
0
294
13,149,663
2012-10-31T00:41:00.000
2
0
0
1
python,google-app-engine
13,151,123
2
false
1
0
If comments are threaded, storing them as separate entities might make sense. If comments can be the target of voting, storing them as separate entities makes sense. If comments can be edited, storing them as separate entities reduces contention, and avoids having to either do pessimistic locking on all comments, or risk situations where the last edit overwrites prior edits. If you can page through comments, storing them as separate entities makes sense for multiple reasons, indexing being one.
1
2
0
With Google App Engine, an entity is limited to 1 MB in size. Say I have a blog system, and expect thousands of comments on each article, some paragraphs in lengths. Typically, without a limit, you'd just store all the comments in the same entity as the blog post. But here, there would be concerns about reaching the 1 MB limit. The other possible way, though far less efficient, is to store each comment as a separate entity, but that would require several, several reads to get all comments instead of just 1 read to get the blog post and its comments (if they were in the same entity). What's an efficient way to handle a case like this?
Efficient way to store comments in Google App Engine?
0.197375
0
0
171
13,150,020
2012-10-31T01:28:00.000
2
0
0
0
python,numpy
13,150,059
2
false
0
0
Adding columns to an ndarray (or matrix) requires a full copy of the content, so you should use another method such as a list or the array module, or create a large matrix first and fill the data in.
1
1
1
My objective is to start with an "empty" matrix and repeatedly add columns to it until I have a large matrix.
Is it possible to create a numpy matrix with 10 rows and 0 columns?
0.197375
0
0
255
13,150,497
2012-10-31T02:39:00.000
0
0
0
0
python,user-interface,mouseevent,pyqt4,global
13,151,661
2
true
0
1
I've assumed this isn't possible and am instead using pyHook, letting Qt pump the messages.
1
0
0
A part of a small project I am working on involves 'calibrating' the coordinates of the screen of which to take a screen capture. By the 'screen', I refer to the entire desktop, not my GUI window. The coordinates are calibrated when a QDialog window appears (which I've subclassed). The user is prompted to click several locations on the screen. I need the program to record the locations of all mouse clicks occurring anywhere on the screen - ones that don't natively trigger a QDialog mouseEvent, since they are outside this window. Obviously overriding the mouseEvent method does not work, since the QDialog doesn't receive the clicks. How can I capture global mouse clicks, so that an event is triggered and sent to the QDialog when any part of the screen is clicked? (I'd prefer a Qt based solution, but am open to other libraries if need be). Thanks!
PyQt4 - detect global mouse click
1.2
0
0
2,065
13,150,638
2012-10-31T02:56:00.000
3
0
1
0
python,list
13,150,648
2
false
0
0
You could try x[-1:] which returns a list containing either the last element, or an empty list if x is the empty list. Of course, if you want to use that element for something, you'll still have to check to see whether it is present or not.
1
0
0
Say I have the list x = []. I need to keep track of the last element in that list. I could do this with the variable last = x[-1], but of course this gives an index error if x is empty. How can I account for this so that I don't get an error when x is empty?
In Python, how can I store the last element of a list?
0.291313
0
0
122
13,151,428
2012-10-31T04:47:00.000
2
0
0
0
python,image-processing,opencv,image-segmentation
13,162,109
2
true
0
0
Are you willing to write your own image processing logic? Your best option will likely be to optimize the segmentation/feature extraction for your problem, instead of using previous implementations like OpenCV meant for more general use-cases. An option that I've found to work well in noisy/low-contrast environments is to use a sliding window (i.e. 10x10 pixels) and build a gradient orientation histogram. From this histogram you can recognize the presence of more dominant edges (they accumulate in the histogram) and their orientations (allowing for detection of things like corners), and see the local maxima/minima. (I can give more details if needed.) If you're interested in segmentation as a whole AND user interaction is possible, I would recommend graph cut or grab cut. In graph cut, users are able to fine-tune the segmentation. Grab cut is already in OpenCV, but may result in the same problems, as it takes a single input from the user and then automatically segments the image.
1
5
1
I have a set of butterfly images for training my system to segment a butterfly from a given input image. For this purpose, I want to extract the features such as edges, corners, region boundaries, local maximum/minimum intensity etc. I found many feature extraction methods like Harris corner detection, SIFT but they didn't work well when the image background had the same color as that of the butterfly's body/boundary color. Could anyone please tell whether there is any good feature extraction method which works well for butterfly segmentation? I'm using the Python implementation of OpenCV.
Feature extraction for butterfly images
1.2
0
0
2,622
13,151,907
2012-10-31T05:45:00.000
0
0
0
0
python,graph,matplotlib,tkinter,wxwidgets
13,156,356
2
false
0
0
You can do what you want with Tkinter, though there's no specific widget that does what you ask. There is a general purpose canvas widget that allows you to draw objects (rectangles, circles, images, buttons, etc), and it's pretty easy to add the ability to drag those items around.
1
1
1
I am looking for a GUI python module that is best suited for the following job: I am trying to plot a graph with many columns (perhaps hundreds), each column representing an individual. The user should be able to drag the columns around and drop them onto different columns to switch the two. Also, there are going to be additional dots drawn on the columns and by hovering over those dots, the user should see the values corresponding to those dots. What is the best way to approach this?
Looking for a specific python gui module to perform the following task
0
0
0
244
13,155,509
2012-10-31T10:07:00.000
-1
0
1
0
windows,pdf,python-sphinx
52,086,907
3
false
0
0
As you have figured out: use Sphinx to generate LaTeX source, and then run it through a LaTex compiler to produce your PDF. Instead of troubling yourself with installing LaTeX (which can be daunting) and getting an editor set up, I suggest that you use one of the on-line LaTeX services. You then only have to create a project in ShareLaTeX or Overleaf, for example (which are in the process of merging), upload the contents of the Sphinx build\latex directory, compile on-line, and download the finished PDF. This works reasonably well, but since the output targets are very different (HTML vs a formal document), you may have to fiddle with the reST to get things the way you like it.
1
16
0
I am using Sphinx to create documentation for my Python project in Windows. I need to generate PDF documentation. I found many explanations of how to do this in Linux, but no good explanation of how to do this in Windows. As far as I understand, I need to create LaTeX output with Sphinx, and then use TeXworks to convert the LaTeX to PDF. Can someone provide a step by step explanation of how I can do this, assuming I created the documentation in LaTeX format and installed TeXworks?
How to create PDF documentation with Sphinx in Windows
-0.066568
0
0
8,815
13,156,730
2012-10-31T11:21:00.000
0
0
0
0
python,excel,xlrd,xlwt
13,998,563
2
false
0
0
Working on 2 cells among tens of thousands is quite meager. Normally, one would present an iteration over rows x columns.
1
1
0
I am using the modules xlrd, xlwt and xlutils to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell (X,Y) to cell (A,B) in the same sheet of an Excel file in Python. Could someone let me know how to do that?
Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python
0
1
0
472
13,156,981
2012-10-31T11:35:00.000
2
0
1
0
python,naming-conventions
13,157,156
2
false
0
0
If you want to package your application to allow a ZIP file containing it or its directory to be passed as an argument to the python interpreter to run the application, name your main script __main__.py. If you don't care about being able to do this (and most python applications do not), name it whatever you want.
1
0
0
I started a new Python project and I want to have a good structure from the beginning. I'm reading some Python convention guides but I can't find any info about how the main script must be named. Are there any rules for this? Is there any other kind of convention for folders or text files inside the project (like readme files)? By the way, I'm programming a client-server app so there is no way for this to become a package (at least in the way I think a package is).
Python init file name convention
0.197375
0
0
664
13,159,051
2012-10-31T13:25:00.000
0
0
0
1
python,google-app-engine,asynchronous,google-cloud-datastore
13,181,950
1
false
1
0
You cannot start an async API call in one request and get its result in another. The HTTP serving infrastructure will wait for all API calls started in a request to complete before the HTTP response is sent back; the data structure representing the async API call will be useless in the second request (even if it hits the same instance). You might try Appstats to figure out what API calls your request is making and see if you can avoid some, use memcache for some, or parallelize. You might also use NDB, which integrates memcache into the datastore API.
1
0
0
I'm trying to optimize my GAE webapp for latency. The app has two requests which usually come one after another. Is it safe to start an async db/memcache request during the first request and then use its results inside the following request? (I'm aware that the second request might hit another instance. It would be handled as a cache miss)
async db access between requests on GAE/Python
0
0
0
82
13,160,217
2012-10-31T14:28:00.000
2
1
0
0
python,django,emacs,ide
13,160,450
3
false
1
0
I also switched from Eclipse to Emacs and I must say that after adjusting to more text-focused ways of exploring code, I don't miss this feature at all. In Emacs, you can just open a shell prompt (M-x shell). Then run IPython from within the Emacs shell and you're all set. I typically split my screen in half horizontally and make the bottom window thinner, so that it's like the Eclipse console used to be. I added a feature in my .emacs that lets me "bring to focus" the bottom window and swap it into the top window. So when I am coding, if I come across something where I want to see the source code, I just type C-x c to swap the IPython shell into the top window, and then I type %psource < code thing > and it will display the source. This covers 95%+ of the use cases I ever had for quickly getting the source in Eclipse. I also don't care about the need to type C-x b or C-x C-f to open the code files. In fact, after about 2 or 3 hours of programming, I find that almost every buffer I could possibly need will already be open, and I just type C-x b < start of file name > and then tab-complete it. Since I have become more proficient at typing and not needing to move attention away to the mouse, I think this is now actually faster than the "quick" mouse-over plus F3 tactic in Eclipse. And to boot, having IPython open at the bottom is way better than the non-interactive Eclipse console. And you can use things like M-p and M-n to get the forward-backward behavior of IPython in terms of going back through commands. The one thing I miss is tab completion in IPython. And for this, I think there are some add-ons that will do it but I haven't invested the time yet to install them. Let me know if you want to see any of the elisp code for the options I mentioned above.
1
5
0
I'm looking into emacs as an alternative to Eclipse. One of my favorite features in Eclipse is being able to mouse over almost any python object and get a listing of its source, then clicking on it to go directly to its code in another file. I know this must be possible in emacs, I'm just wondering if it's already implemented in a script somewhere and, if so, how to get it up and running on emacs. Looks like my version is Version 24.2. Also, since I'll be doing Django development, it would be great if there's a plugin that understands Django template syntax.
Link to python modules in emacs
0.132549
0
0
314
13,161,659
2012-10-31T15:41:00.000
4
0
0
1
python,windows-7,command-line,copy,robocopy
26,222,233
5
false
0
0
Like halfs13 said, use subprocess, but you might need to format it like so: from subprocess import call, then call(["robocopy", 'fromdir', 'todir', "/S"]). Otherwise it may read the source as everything.
1
1
0
I am trying to move multiple large folders (> 10 GB, > 100 subfolders, > 2000 files) between network drives. I have tried using the shutil.copytree command in Python, which works fine except that it fails to copy a small percentage (< 1% of files) for different reasons. I believe robocopy is the best option for me as I can create a logfile documenting the transfer process. However, as I need to copy > 1000 folders, manual work is out of the question. So my question is essentially how can I call robocopy (i.e. the command line) from within a Python script, making sure that the logfile is written to an external file? I am working in a Windows 7 environment and Linux/Unix is out of the question due to organizational restrictions. If someone has any other suggestions to bulk copy so many folders with a lot of flexibility, they are welcome.
How can I call robocopy within a Python script to bulk copy multiple folders?
0.158649
0
0
16,661
13,162,409
2012-10-31T16:21:00.000
1
1
1
0
python,nlp,classification,tagging,folksonomy
13,162,597
4
false
0
0
I am not an expert but it seems like you really need to define a notion of "key term", "relevance", etc, and then put a ranking algorithm on top of that. This sounds like doing NLP, and as far as I know there is a python package called NLTK that might be useful in this field. Hope it helps!
1
2
0
Not sure how to phrase this question properly, but this is what I intend to achieve using the hypothetical scenario outlined below - A user's email to me has just the SUBJECT and BODY, the subject being the topic of email, and the body being a description of the topic in just one paragraph of max 1000 words. Now I would like to analyse this paragraph (in the BODY) using some computer language (python, maybe), and then come up with a list of most important words from the paragraph with respect to the topic mentioned in the SUBJECT field. For example, if the topic of email is say iPhone, and the body is something like "the iPhone redefines user-interface design with super resolution and graphics. it is fully touch enabled and allows users to swipe the screen" So the result I am looking for is a sort of list with the key terms from the paragraph as related to iPhone. Example - (user-interface, design, resolution, graphics, touch, swipe, screen). So basically I am looking at picking the most relevant words from the paragraph. I am not sure on what I can use or how to use to achieve this result. Searching on google, I read a little about Natural Language Processing and python and classification etc. I just need a general approach on how to go about this - using what technology/language, which area I have to read on etc.. Thanks! EDIT::: I have been reading up in the meantime. To be precise, I am looking at HOW TO do this, using WHAT TOOL: Generate related tags from a body of text using NLP which are based on synonyms, morphological similarity, spelling errors and contextual analysis.
picking the most relevant words from a paragraph
0.049958
0
0
1,918
13,163,990
2012-10-31T17:59:00.000
2
0
0
1
python,web,webserver,flask,tornado
13,219,183
4
false
1
0
Instead of using Apache as your server, you'll use Tornado (of course as a blocking server, due to the synchronous nature of WSGI).
1
43
0
As far as I can tell Tornado is a server and a framework in one. It seems to me that using Flask and Tornado together is like adding another abstraction layer (more overhead). Why do people use Flask and Tornado together, what are the advantages?
Why use Tornado and Flask together?
0.099668
0
0
46,076
13,168,523
2012-10-31T22:45:00.000
6
0
0
0
python,django,scalability
13,168,612
1
true
1
0
count is fast, and admin pagination for objects is limited to 100 records per page. Sorting on 2 million records could take some time though, but with 2 million users you can afford some RAM to put some indexes on these fields ;) To be serious: this is no problem at all as long as your database can handle it.
1
1
0
I am wondering if Django admin can handle a huge amount of records. Say I have a 2 million users, how the pagination will be handled for example? Will it be dead slow because every time all the records will be counted to be provided to the offset? Note: I plan to use PostgreSQL. Thanks.
Can Django admin handle millions of records?
1.2
0
0
943
13,171,860
2012-11-01T06:11:00.000
2
0
0
0
python,django,apache,concurrency,mod-wsgi
13,172,015
1
true
1
0
This partly depends on your mod_wsgi configuration. If you configure it to use only one thread per process, then global variables are safe--although I wouldn't recommend using them, for a variety of reasons. In a multi-thread configuration, there is nothing guaranteeing that requests won't get mixed up if you use global variables. You should be able to find some more local place to stash the data you need between pre_save and post_save. I'd recommend putting some more thought into your design.
1
7
0
I have a django instance hosted via apache/mod_wsgi. I use pre_save and post_save signals to store the values before and after save for later comparisons. For that I use global variables to store the pre_save values which can be accessed in the post_save signal handler. My question is, if two requests A and B come together simultaneously requesting a same web service, will it be concurrent? The B should not read the global variable which is written by A and vice versa. PS: I don't use any threading Lock on variables.
How django handles simultaneous requests with concurrency over global variables?
1.2
0
0
4,217
13,172,331
2012-11-01T06:54:00.000
3
0
0
0
python,django
13,172,382
1
false
1
0
You need to use the database's table and field names in the raw query--the string you provide will be passed to the database, not interpreted by the Django ORM.
1
0
0
I have a few things to ask about custom queries in Django. Do I need to use the DB table name in the query or just the model name if I need to join the various tables in raw SQL? And do I need to use the DB field name or the model field name, e.g. Person.objects.raw('SELECT id, first_name, last_name, birth_date FROM Person A inner join Address B on A.address = B.id') or B.id = A.address_id?
Using raw sql in django python
0.53705
1
0
408
13,173,029
2012-11-01T07:52:00.000
0
0
0
0
python,c,wxpython,pyqt
13,175,997
3
false
0
1
If you have a choice between C and C#, then go with C# and use Visual Studio; you can build the GUI by dragging and dropping components easily. If you want to do something in Python, look up wxPython. Java has a built-in GUI toolkit known as Swing. You'll need some tutorials, but unless this program needs to be portable, just go with C# and build it in 10 minutes. Also, you can write your code in C and export it as a Python module which you can load from Python. It's not very complicated to set up some C functions and have a Python GUI which calls them. To achieve this you can use SWIG, Pyrex or Boost.
1
6
0
My friend has an application written in C that comes with a GUI made using GTK under Linux. Now we want to rewrite the GUI in python (wxpython or PyQT). I don't have experience with Python and don't know how to make Python communicate with C. I'd like to know if this is possible and if yes, how should I go about implementing it?
Can C programs have Python GUI?
0
0
0
4,672
13,176,027
2012-11-01T11:24:00.000
2
0
1
0
python,client-server,code-organization,project-organization
13,176,078
1
true
0
0
Yes, your project will be a package. A module is a collection of related code. Most non-trivial projects will be a collection of modules in a package (potentially with sub-packages).
1
2
0
Yesterday I started an important Python project and since then I've been searching for documentation on how to organize the code to have a "high-quality" project. There are a lot of articles and official documentation about how to organize packages and modules but, as I'm very new to this language, I think that is not my case. The project is a client-server platform to distribute files on a local network (ok, it is a lot more than this but that's the basic idea). The thing is that it is not going to be a module and I think that it is not a package. At least not as described in the Python documentation: Packages are a way of structuring Python's module namespace by using "dotted module names" I also searched on GitHub to see what popular projects do to organize their code, but most of them are modules and the rest... I don't even know how to run them. So the question is, what is my code (module, package, ...) and what is the best way to organize it? Do you know any good article about this? Thank you.
Python project organization
1.2
0
0
415
13,176,173
2012-11-01T11:34:00.000
19
0
0
0
python,django,google-app-engine,logging,django-nonrel
37,881,645
5
false
1
0
If the use case is that you have a python program that should flush its logs when exiting, use logging.shutdown(). From the python documentation: logging.shutdown() Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit and no further use of the logging system should be made after this call.
1
41
0
I'm working with Django-nonrel on Google App Engine, which forces me to use logging.debug() instead of print(). The "logging" module is provided by Django, but I'm having a rough time using it instead of print(). For example, if I need to verify the content held in the variable x, I will put logging.debug('x is: %s' % x). But if the program crashes soon after (without flushing the stream), then it never gets printed. So for debugging, I need debug() to be flushed before the program exits on error, and this is not happening.
Python- How to flush the log? (django)
1
0
0
53,163
13,177,087
2012-11-01T12:28:00.000
0
0
0
0
python,django,django-views,subprocess
13,177,553
2
false
1
0
The easiest approach, I think, will be to use Ajax to start the simulator. The response to the start request can then be updated on the same page. However, you will still have to think about how to pause, resume and stop the simulator started by earlier requests, i.e. how to manage and manipulate the state of the simulator. Maybe you want to keep that state in the DB.
1
1
0
I have an HTML page which has four buttons Start, Stop, Pause and Resume in it. The functionality of the buttons is: Start Button: Starts the backend Simulator. (The Simulator takes around 3 minutes for execution.) Stop Button: Stops the Simulator. Pause Button: Pauses the Simulator. Resume Button: Resumes the Simulator from the paused stage. Each of the buttons, when clicked, goes on to call a separate view function. The problem I'm facing is that when I click the start button, it starts up the Simulator through a function call in the Python view. But as I mentioned, the Simulator takes around 3 minutes to complete its execution. So, for those 3 minutes my UI is totally unresponsive. I cannot press the Stop, Pause or Resume buttons until the current view of Django is rendered. So what is the best way to approach this problem? Shall I spawn a non-blocking process for the Simulator, and if so, how can I know, after the view has rendered, that the newly spawned process has completed its execution?
Getting Responsive Django View
0
0
0
457
13,178,116
2012-11-01T13:29:00.000
2
1
0
0
c++,python,eclipse,cmake,swig
13,178,922
1
false
0
0
some more fancy features such as the ability to step through the code and set breakpoints, for example using Eclipse how are those features "fancy"? You can already do those in pdb for Python, or gdb for C++. I'd suggest running the python code with pdb (or using pdb.set_trace() to interrupt execution at an interesting point), and attach gdb to the process in a separate terminal. Use pdb to set breakpoints in, and step through, your Python code. Use gdb to set breakpoints in, and step through, your C++ code. When pdb steps over a native call, gdb will take over. When gdb continue allows Python execution to resume, pdb will take over. This should let you jump between C++ and Python breakpoints without needing to trace through the interpreter. Disclaimer: I largely think IDEs are rubbish bloatware, so if Eclipse does have a good way to integrate this, I wouldn't know about it anyway.
1
6
0
I have a C++ project with a SWIG-generated Python front-end, which I build using CMake. I am now trying to find a convenient way to debug my mixed Python/C++ code. I am able to get a stack-trace of errors using gdb, but I would like to have some more fancy features such as the ability to step through the code and set breakpoints, for example using Eclipse. Using the Eclipse generator for CMake I am able to generate a project that I am able to import into Eclipse. This works fine and I am also able to step through the pure C++ executables. But then the problem starts. First of all, I am not able to build the Python front-end from inside Eclipse. From command line I just do "make python", but there is no target "python" in the Eclipse project. Secondly, once I've compiled the Python front-end, I have no clue how to step through a Python script that contains calls to my wrapped C++ classes. Eclipse has debugging both for Python and for C++, but can they be combined?
Debugging mixed Python/C++ code in Eclipse
0.379949
0
0
3,277
13,179,515
2012-11-01T14:46:00.000
2
0
1
1
python,windows,batch-file
13,179,540
4
false
0
0
No, I don't think you can reasonably expect to do this. Batch files are executed by the Windows command interpreter, which is way way more primitive. Python is a full-blown programming language with a rich and powerful library of standard modules for all sorts of tasks. All the Windows command interpreter can do is act like a broken shell. On the other hand, Python is available on Windows, so just tell the user to install it and run your program directly.
1
5
0
I would like a user on Windows to be able to run my Python program, so I want to convert it to a .bat file. Is there a way to convert it? I've tried searching, but didn't find anything.
Python to .bat conversion
0.099668
0
0
19,619
13,183,980
2012-11-01T19:16:00.000
1
1
0
0
python,filter,zeromq,pyzmq
13,200,669
1
false
0
0
A zmq publisher doesn't do any queueing... it drops messages when there isn't a SUB available to receive those messages. The better way in your situation would be to create a generic sub who only will subscribe to certain messages of interest. That way you can spin up all of the different SUBs (even within one thread and using a zmq poller) and they will all process messages as they come from the PUB.... This is what the PUB/SUB pattern is primarily used for. Subs only subscribe to messages of interest, thus eliminating the need to cycle through a queue of messages at every loop looking for messages of interest.
1
2
0
I have a Python messaging application that uses ZMQ. Each object has a PUB and a SUB queue, and they connect to each other. In some particular cases I want to wait for a particular message in the SUB queue, leaving the ones that I am not interested in for later processing. Right now, I am getting all messages and queuing those I am not interested in in a Python Queue, until I find the one I am waiting for. But this means that in each processing routine I need to check the Python Queue for old messages first. Is there a better way?
Get a particular message from ZMQ Sub Queue
0.197375
0
0
249
13,186,494
2012-11-01T22:29:00.000
1
0
0
0
google-app-engine,python-2.7,google-cloud-datastore,blobstore
13,187,373
4
false
1
0
You can create an entity that links blobs to users. When a user uploads a blob, you immediately create a new record with the blob id, user id (or post id), and time created. When a user submits a post, you add a flag to this entity, indicating that the blob is used. Now your cron job needs to fetch all entities of this kind where the flag is not equal to "true" and the time created is more than one hour ago. Moreover, you can fetch keys only, which is a more efficient operation than fetching full entities.
3
2
0
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the resto of the data is sent via a POST This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale. Possible, yet inefficient solutions: Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
Deleting Blobstore orphans
0.049958
1
0
1,014
13,186,494
2012-11-01T22:29:00.000
3
0
0
0
google-app-engine,python-2.7,google-cloud-datastore,blobstore
13,247,039
4
true
1
0
Thanks for the comments. However, although I understood those solutions well, I find them too inefficient. Querying thousands of entries for those that are flagged as "unused" is not ideal. I believe I have come up with a better way and would like to hear your thoughts on it: When a blob is saved, a deferred task is immediately created to delete that same blob in an hour's time. If the post is created and saved, the deferred task is deleted, and thus the blob will not be deleted in an hour's time. I believe this saves you from having to query thousands of entries every single hour. What are your thoughts on this solution?
3
2
0
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the resto of the data is sent via a POST This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale. Possible, yet inefficient solutions: Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
Deleting Blobstore orphans
1.2
1
0
1,014
13,186,494
2012-11-01T22:29:00.000
0
0
0
0
google-app-engine,python-2.7,google-cloud-datastore,blobstore
16,378,785
4
false
1
0
Use drafts! Save as a draft after each upload. Then don't do the cleaning! Let the users choose for themselves what to wipe out. If you're planning on Facebook-style posts, either use drafts or make them private. Why bother deleting users' data?
3
2
0
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the resto of the data is sent via a POST This leaves a good chance of having orphans as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data, however, this issue would still be there on a smaller scale. Possible, yet inefficient solutions: Whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
Deleting Blobstore orphans
0
1
0
1,014
13,187,697
2012-11-02T00:45:00.000
5
0
1
0
python,immutability
13,187,701
3
true
0
0
List, Dictionary: Mutable Tuple: Immutable
2
1
0
Are the following containers mutable or immutable in Python? List Tuple Dictionary
Python mutable/immutable containers
1.2
0
0
1,696
13,187,697
2012-11-02T00:45:00.000
6
0
1
0
python,immutability
33,158,642
3
false
0
0
Mutable means value of a variable can be changed in the same memory block itself where the variable is stored or referenced. Immutable means, when you try to change the value of a variable, it creates a new memory block and stores the new value over there. Immutable -- Strings, Tuples, Numbers etc Mutable -- Lists, Dictionaries, Classes etc Example: Let us consider a List, which is mutable... a=[1,2,3] suppose list 'a' is at a memory block named "A0XXX" now if you want to add 4,5 to the list... b=[4,5] append them both a +=b now a=[1,2,3,4,5] So, now final list 'a' is also stored at same memory block "A0XXX" As list is mutable, it is stored at the same memory block. If it is immutable, final list 'a' will be stored at the some other memory block "B0XXX" So mutable objects can be changed and stored at the same memory block, when you try to do the same with the immutable objects, a new memory block is created to store the changed values.
2
1
0
Are the following containers mutable or immutable in Python? List Tuple Dictionary
Python mutable/immutable containers
1
0
0
1,696
13,188,791
2012-11-02T03:14:00.000
1
0
1
0
python-3.x
13,191,512
1
true
0
0
Either you create a class with these attributes as attributes, or you just use a dictionary. It's a matter of taste, both works. Having a class gives you the possibility to also make methods like "is_old" or "is_modified" etc. The same goes for the structure to hold all the data. It's basically a giant dictionary, but you might want to wrap it in a class with methods like "purge_old" etc.
1
0
0
I want to cache my objects in memory for performance, but I also need to recycle them if they have not been accessed for too long. Every 30 minutes a function would be called to check every cached object. Like a file's attributes in the Windows operating system, each object should have two attributes: a last-modified timestamp (or a flag) and a last-accessed timestamp. If the last-modified time is greater than zero, create a SQL statement to update it in the database and reset the modified time to 0; if the last-accessed time is more than 30 minutes ago, delete the object from the cache system. What is the best way to implement this? And is there already a similar system in Python, so I don't have to reinvent the wheel? PS: no Memcached. The objects should be accessed directly, with no serialization and deserialization.
find a data type suitable for my cache
1.2
0
0
42
13,190,187
2012-11-02T06:08:00.000
0
0
0
0
python,matplotlib-basemap,chaco
16,198,408
1
false
0
0
Chaco and matplotlib are completely different tools. Basemap has been built on top of matplotlib, so it is not possible to add a Basemap map to a Chaco plot. I'm afraid I couldn't find any mapping layer to go with Chaco. Is there a reason you cannot use matplotlib for your plot?
1
1
1
I had developed scatter and lasso selection plots with Chaco. Now, I need to embed a BaseMap [with few markers on a map] onto the plot area side by side. I created a BaseMap and tried to add to the traits_view; but it is failing with errors. Please give me some pointers to achieve the same.
How to use BaseMap with chaco plots
0
0
0
156
13,190,253
2012-11-02T06:13:00.000
0
1
0
0
python,redis
13,205,548
1
false
0
0
I've figured it out. I'm using Win7. The Redis server doesn't load the config file each time it runs. I need to load it manually or change the save frequency in redis-cli using 'save 60 50000'.
1
1
0
I'm using Redis from Python to store and process about 4 million keys and their values. Then I found that Redis writes to disk too often, which really costs time. So I changed "save 60 10000" in the Redis config file to "save 60 50000", but it still writes to disk every 10000 key changes. I've rebooted the Redis server. PS: I want to use dispy and Redis to make my application a distributed program. Is it feasible? I used "redis dispy distributed system" as a keyword and got nothing from Google. Thank you very much.
Redis save time config. It operates hardisk too often
0
0
0
310
13,195,697
2012-11-02T12:58:00.000
0
0
1
0
python,scons
13,196,445
1
true
0
0
I have done exactly what you are explaining. If your SConstruct and/or SConscript scripts simply import your common Python code, then there is nothing special you have to do, except import the appropriate SCons modules in your Python code. If, on the other hand, you have a Python script from which you want to invoke SCons (as opposed to launching scons from the command line), then much more effort will be needed. I originally looked into doing this, but later decided it wasn't worth the effort.
1
0
0
I want to make a "common" script which I will use in all my SConscripts. This script must use some SCons functions like Object() or SharedObject(). Is there any SCons file that I can import, or maybe another useful hack? I'm new to Python and SCons.
How to make python script for scons that use scons functions and variables
1.2
0
0
228
13,197,097
2012-11-02T14:22:00.000
5
0
0
0
python,time-series,forecasting
13,198,574
1
true
0
0
Yes, you could use the [no longer developed or extended, but maintained] package RPy, or you could use the newer package RPy2, which is actively developed. There are other options too, e.g. headless network connections to Rserve.
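A minimal sketch of calling the R forecast package through RPy2 might look like the following; the series values and frequency are made up for illustration, and note that importr translates dots in R names to underscores (auto.arima becomes auto_arima), so check the conventions of your RPy2 version.

    from rpy2.robjects.packages import importr
    from rpy2.robjects import FloatVector, r

    forecast = importr('forecast')     # loads the R 'forecast' package
    stats = importr('stats')

    data = FloatVector([112, 118, 132, 129, 121, 135, 148, 148, 136, 119])  # toy series
    series = stats.ts(data, frequency=4)     # build an R time series (quarterly here)

    fit = forecast.auto_arima(series)        # fit a model with auto.arima
    fc = forecast.forecast(fit, h=8)         # forecast 8 periods ahead
    print(r('as.data.frame')(fc))            # print the forecast table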
1
1
1
I found forecast package from R the best solution for time series analysis and forecasting. I want to use it in Python. Could I use rpy and after take the forecast package in Python?
Forecast Package from R in Python
1.2
0
0
1,673
13,198,198
2012-11-02T15:22:00.000
1
0
0
0
python,django,templates,gis
13,198,418
3
false
1
0
Perhaps you could put it in the view which renders the search page. Assuming you have a view function like search, you could: get the user's radius (request.user.get_radius), search for places based on that radius (relevant_places = Places.get_all_places_in_radius), and render those places to the user, as in the sketch below.
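A rough sketch of such a view; get_radius and get_all_places_in_radius are the hypothetical helper names used in the answer above, not real Django APIs, and the app/template names are placeholders.

    # views.py - minimal sketch, assuming the helpers described above exist
    from django.shortcuts import render
    from myapp.models import Places          # hypothetical app name

    def search(request):
        radius = request.user.get_radius()                        # hypothetical helper on the user model
        relevant_places = Places.get_all_places_in_radius(radius) # hypothetical class method
        return render(request, 'search_results.html', {'places': relevant_places})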
2
0
0
I apologize in advance is this question is too broad, but I need some help conceptualizing. The end result is that I want to enable radius-based searching. I am using Django. To do this, I have two classes: Users and Places. Inside the Users class is a function that defines the radius in which people want to search. Inside the Places class I have a function that defines the midpoint if someone enters a city and state and not a zip (i.e., if someone enters New York, NY a lot of zipcodes are associated with that so I needed to find the midpoint). I have those two parts down. So now I have a radius where people want to search and I know (the estimate) of the places. Now, I am having a tremendous amount of difficulty combining the two, or even thinking about HOW to do this. I attempted doing the searching against each other in the view, but I ran into a lot of trouble when I was looping through one model in the template but trying to display results based on an if statement of the other model. It seemed like a custom template tags would be the solution for that problem, but I wanted to make sure I was conceptualizing the problem correctly in the first place. I.e., Do I want to do the displays based on an if statement in the template? Or should I be creating another class based on the other two in my models file? Or should I create a new column for one of the classes in the models file? I suppose my ultimate question is, based on what it is I want to do (enable radius based searching), where/how should most of the work be done? Again, I apologize if the question is overly broad.
Search Between Two Models in Django
0.066568
0
0
88
13,198,198
2012-11-02T15:22:00.000
0
0
0
0
python,django,templates,gis
13,202,154
3
false
1
0
I just decided to add the function to the view so that the information can be input directly into the model after a user enters it. Thanks for the help. I'll probably wind up looking into geodjango.
2
0
0
I apologize in advance is this question is too broad, but I need some help conceptualizing. The end result is that I want to enable radius-based searching. I am using Django. To do this, I have two classes: Users and Places. Inside the Users class is a function that defines the radius in which people want to search. Inside the Places class I have a function that defines the midpoint if someone enters a city and state and not a zip (i.e., if someone enters New York, NY a lot of zipcodes are associated with that so I needed to find the midpoint). I have those two parts down. So now I have a radius where people want to search and I know (the estimate) of the places. Now, I am having a tremendous amount of difficulty combining the two, or even thinking about HOW to do this. I attempted doing the searching against each other in the view, but I ran into a lot of trouble when I was looping through one model in the template but trying to display results based on an if statement of the other model. It seemed like a custom template tags would be the solution for that problem, but I wanted to make sure I was conceptualizing the problem correctly in the first place. I.e., Do I want to do the displays based on an if statement in the template? Or should I be creating another class based on the other two in my models file? Or should I create a new column for one of the classes in the models file? I suppose my ultimate question is, based on what it is I want to do (enable radius based searching), where/how should most of the work be done? Again, I apologize if the question is overly broad.
Search Between Two Models in Django
0
0
0
88
13,200,900
2012-11-02T18:17:00.000
2
0
1
1
python,windows,cmd
13,200,981
3
false
0
0
The reason why it works from the menu item but not from the command prompt is that the menu item specifies the "Start in" directory where the Python executable can be found. Chances are the Win 7 -> Win 8 upgrade failed to preserve the PATH environment variable, where the path to Python was previously specified, allowing you to invoke Python from any command prompt console.
2
0
0
I can run the command line version of Python but I cannot seem to run it from the command prompt. I have recently upgraded from Windows 7 to Windows 8 and it worked fine with Windows 7. Now Windows 8 will not recognize Python. Thanks, William
Python not running from cmd in Windows 8 after upgrade
0.132549
0
0
3,542
13,200,900
2012-11-02T18:17:00.000
0
0
1
1
python,windows,cmd
15,230,805
3
false
0
0
Go to C:\python33 or wherever you installed it. Right-click on "pythonw" and pin it to the taskbar, then run it from the taskbar.
2
0
0
I can run the command line version of Python but I cannot seem to run it from the command prompt. I have recently upgraded from Windows 7 to Windows 8 and it worked fine with Windows 7. Now Windows 8 will not recognize Python. Thanks, William
Python not running from cmd in Windows 8 after upgrade
0
0
0
3,542
13,202,944
2012-11-02T20:57:00.000
4
0
1
0
python,linux,ms-word,rtf
13,202,972
2
true
0
0
Not without rendering the actual page. The number of pages will depend on many things, such as the size of the fonts being used, the margins in all four directions on the page, and the insertion of any other sized artifacts such as images. So what you would have to do is render the document in an RTF library of some sort, and let that library tell you how many pages there are.
1
4
0
I would like to count the number of pages in a RTF or MS Word document, using python. Is this possible?
Number of pages in a doc, docx or rtf file using python
1.2
0
0
1,633
13,203,601
2012-11-02T21:59:00.000
10
0
1
0
python,list,big-o
13,203,622
3
false
0
0
For a list of size N and a slice of size M, the iteration is actually only O(M), not O(N). Since M is often << N, this makes a big difference. In fact, if you think about your explanation, you can see why: you're only iterating from i_1 to i_2, not from 0 to i_1 and then from i_1 to i_2.
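A quick way to see this is to time slices of the same length M taken from lists of very different sizes N; the absolute numbers below will vary by machine, but the per-slice cost tracks M rather than N.

    import timeit

    for n in (10**5, 10**6, 10**7):
        t = timeit.timeit('lst[100:1100]',                 # slice of fixed length M = 1000
                          setup='lst = list(range(%d))' % n,
                          number=10000)
        print('N = %8d  ->  %.4f s for 10000 slices' % (n, t))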
1
61
1
Say I have some Python list, my_list which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed my_list[i_1:i_2] where a "slice" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation to slice a list of size N? Personally, if I were coding the "slicer" I would iterate from i_1 to i_2, generate a new list and return it, implying O(N), is this how Python does it? Thank you,
Big-O of list slicing
1
0
0
47,961
13,203,745
2012-11-02T22:13:00.000
1
1
0
0
python,crontab,boto
13,203,938
1
false
1
0
First, I would try updating boto; a commit to the development branch mentions logging when a multipart upload fails. Note that doing so will require using s3put instead, as s3multiput is being folded into s3put.
1
3
0
I'm trying to automate a process that collects data on one (or more) AWS instance(s), uploads the data to S3 hourly, to be retrieved by a decoupled process for parsing and further action. As a first step, I whipped up some crontab-initiated shell script (running in Ubuntu 12.04 LTS) that calls the boto utility s3multiput. For the most part, this works fine, but very occasionally (maybe once a week) the file fails to appear in the s3 bucket, and I can't see any error or exception thrown to track down why. I'm using the s3multiput utility included with boto 2.6.0. Python 2.7.3 is the default python on the instance. I have an IAM Role assigned to the instance to provide AWS credentials to boto. I have a crontab calling a script that calls a wrapper that calls s3multiput. I included the -d 1 flag on the s3multiput call, and redirected all output on the crontab job with 2>&1 but the report for the hour that's missing data looks just like the report for the hour before and the hour after, each of which succeeded. So, 99% of the time this works, but when it fails I don't know why and I'm having trouble figuring where to look. I only find out about the failure later when the parser job tries to pull the data from the bucket and it's not there. The data is safe and sound in the directory it should have uploaded from, so I can do it manually, but would rather not have to. I'm happy to post the ~30-40 lines of related code if helpful, but wondered if anybody else had run into this and it sounded familiar. Some grand day I'll come back to this part of the pipeline and rewrite it in python to obviate s3multiput, but we just don't have dev time for that yet. How can I investigate what's going wrong here with the s3multiput upload?
Silent failure of s3multiput (boto) upload to s3 from EC2 instance
0.197375
0
1
515
13,204,907
2012-11-03T00:42:00.000
1
0
0
0
python,html,pyramid
13,205,833
1
true
1
0
You can serve static HTML from Pyramid by using views that return pre-fabricated responses. You'll have an easier time, though, by just having your web server serve the static HTML if it finds it, and otherwise proxying the request to your Pyramid app.
1
1
0
I have a Pyramid app using Mako templates and am wondering if it is possible to serve static HTML pages within the app? For the project I'm working on, we want to have relatively static pages for the public "front-facing" bits, and then the application will dynamically serve the meat of the site. We would like one of our internal users to be able to edit some of the HTML content for these pages to update them. I have my static folder that I'm serving CSS and scripts from, but that doesn't seem to really fit what I'd like to do. I could create views for the pages and basically have static content in the mako templates themselves but I think the application would need to be restarted if someone were to update the template for the changes to appear? Maybe that's not the case? Long term I would probably do something like store the content in a db and have it dynamically served but that's outside of the scope at this time. Is there a reasonable way to accomplish this or should I not even bother and set up the public pages as just a regular static HTML site and just link to my app altogether? Thanks!
Can you serve static HTML pages from Pyramid?
1.2
0
0
1,016
13,205,215
2012-11-03T01:48:00.000
12
0
1
0
python
13,205,230
1
true
0
0
False. O(1) means constant time. This means that no matter what the size of the input is, the function will run in more or less the same amount of time - the runtime does not scale with the input. This means that two O(1) functions will each run in constant time, though their constants may be different. So if you have two O(1) functions f and g, each of which computes the same result, expecting similar inputs (let's say they expect lists, for the sake of discussion), the runtime of f does not depend on the size of the list; nor does the runtime of g. However, if f takes more complicated (or otherwise time-consuming) steps to compute the answer than g does, then f's runtime will be more than g's - the number of seconds required for f to terminate (let's call this value fsec) will be more than the number of seconds required for g to terminate (let's call this value gsec). Still, neither fsec nor gsec depends on the size of the input list - they'll be the same no matter how big or small the input list - but gsec will always be smaller than fsec. It is because the runtime does not depend on the size of the input list that they are classified as O(1) algorithms - NOT because they perform a particular number of operations.
1
3
0
"All O(1) functions take exactly the same amount of time to run." True or false? Can anybody explain the answer to me?
All O(1) functions take exactly the same amount of time to run. True or False?
1.2
0
0
3,107
13,205,245
2012-11-03T01:54:00.000
-1
0
0
0
python,azure,webapp2
13,206,994
1
false
1
0
To my knowledge, Windows Azure Web Sites don't support Python.
1
2
0
I would like to try to develop a website using Python and upload it to an Azure website. I saw there is a tutorial using Django. Is there a tutorial using webapp2?
python in azure website using webapp2
-0.197375
0
0
231
13,206,304
2012-11-03T05:31:00.000
0
0
1
0
python,image,imagemagick
13,206,430
1
false
0
0
You mentioned that you were having problems installing PIL on a Mac. Have you considered using Macports?
1
0
0
I have a group of .png files where most of the image is transparent (alpha channel), but there is an image in the middle (non-transparent pixels) that I need to extract. What I need to do is crop the image down to just the non-transparent pixels, but I need to know how many pixels were cropped off the left and bottom so when it comes time to render the cropped images, their positions can be adjusted back to where they were in the larger image. Is there a way to do the cropping and get the x,y offset using ImageMagick? I know how to crop the .png file, but the location of the non-transparent image within the larger image is lost and I need this information. It seems I can do this using PIL and Python, but getting PIL installed on a Mac is proving to be a hair-pulling experience. I've spent hours trying to get rid of the jpeg_resync_to_restart errors and it seems everyone has a different solution that worked for them, but none of them work for me... so I've given up on PIL. ImageMagick is already installed and working. Is there another set of tools I can call from a bash or Python script that will do what I need? This isn't just a one-time operation I need to perform, so I need a script that can be run over and over when the source .png files change. Thanks.
Cropping away transparent pixels but preserving the offset
0
0
0
247
13,206,438
2012-11-03T05:53:00.000
1
0
0
1
python,google-app-engine
13,236,236
2
true
1
0
After 3 days of endless searching, I have figured out the problem. If you are facing this issue, the first thing you have to check is your system time; mine was incorrect due to daylight saving changes. Thanks
1
0
0
I am working on an application (python based), which is deployed on GAE, it was working fine till last day, but I can't seem to update any code on app engine since this morning, it is complaining about some sort of issue with password, I have double checked and email id and password are correct. here is the stack trace which I receive: 10:47 PM Cloning 706 static files. 2012-11-03 22:47:07,913 WARNING appengine_rpc.py:542 ssl module not found. Without the ssl module, the identity of the remote host cannot be verified, and connections may NOT be secure. To fix this, please install the ssl module from http://pypi.python.org/pypi/ssl . To learn more, see https://developers.google.com/appengine/kb/general#rpcssl Password for [email protected]: 2012-11-03 22:47:07,913 ERROR appcfg.py:2266 An unexpected error occurred. Aborting. Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2208, in DoUpload missing_files = self.Begin() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1934, in Begin CloneFiles('/api/appversion/cloneblobs', blobs_to_clone, 'static') File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1929, in CloneFiles result = self.Send(url, payload=BuildClonePostBody(chunk)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send return self.rpcserver.Send(url, payload=payload, **self.params) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate credentials = self.auth_function() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials password = self.raw_input_fn(password_prompt) EOFError: EOF when reading a line 10:47 PM Rolling back the update. 2012-11-03 22:47:08,818 WARNING appengine_rpc.py:542 ssl module not found. Without the ssl module, the identity of the remote host cannot be verified, and connections may NOT be secure. To fix this, please install the ssl module from http://pypi.python.org/pypi/ssl . 
To learn more, see https://developers.google.com/appengine/kb/general#rpcssl Password for [email protected]: Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 171, in run_file(__file__, globals()) File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 167, in run_file execfile(script_path, globals_) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4322, in main(sys.argv) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4313, in main result = AppCfgApp(argv).Run() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2599, in Run self.action(self) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4048, in __call__ return method() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3065, in Update self.UpdateVersion(rpcserver, self.basepath, appyaml) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3047, in UpdateVersion lambda path: self.opener(os.path.join(basepath, path), 'rb')) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2267, in DoUpload self.Rollback() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2150, in Rollback self.Send('/api/appversion/rollback') File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 1841, in Send return self.rpcserver.Send(url, payload=payload, **self.params) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 403, in Send self._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 543, in _Authenticate super(HttpRpcServer, self)._Authenticate() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 293, in _Authenticate credentials = self.auth_function() File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2758, in GetUserCredentials password = self.raw_input_fn(password_prompt) EOFError: EOF when reading a line 2012-11-03 22:47:09 (Process exited with code 1) You can close this window now. Any Help will be appreciated. P.S, I have tried from command line as well as Google app engine launcher.
Google App Engine Update Issue
1.2
0
0
937
13,206,698
2012-11-03T06:36:00.000
0
0
1
0
python
13,206,777
5
false
0
0
In Python 2, a leading 0 on a numeric literal signifies that it is an octal number. This is why 0100 == 64. This also means that 0800 and 0900 are syntax errors. It's a funny thing that's caught out many people at one time or another. In Python 3, 0100 is a syntax error; you must use the 0o prefix if you need to write an octal literal, e.g. 0o100 == 64.
2
0
0
I have a problem converting a number to a string. I want to get the string "0100" from str(0100), but what I get is "64". Is there any way to get the string "0100" from 0100? UPDATE Thanks to the many people pointing out that the leading "0" is an indicator for octal. I know that; what I want is to convert 0100 to "0100". Any suggestion? Best Regards,
convert number to string in python
0
0
0
8,919
13,206,698
2012-11-03T06:36:00.000
1
0
1
0
python
13,206,733
5
false
0
0
Your first issue is that the literal "0100", because it begins with a digit 0, is interpreted in octal instead of decimal. By contrast, str(100) returns "100" as expected. Secondly, it sounds like you want to zero-fill your numbers to a fixed width, which you can do with the zfill method on strings. For example, str(100).zfill(4) returns "0100".
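For completeness, a couple of equivalent ways to zero-pad a number to a string (all standard Python; the width of 4 is just for this example):

    n = 100
    print(str(n).zfill(4))        # '0100'
    print('%04d' % n)             # '0100'
    print('{:04d}'.format(n))     # '0100'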
2
0
0
I have a problem converting a number to a string. I want to get the string "0100" from str(0100), but what I get is "64". Is there any way to get the string "0100" from 0100? UPDATE Thanks to the many people pointing out that the leading "0" is an indicator for octal. I know that; what I want is to convert 0100 to "0100". Any suggestion? Best Regards,
convert number to string in python
0.039979
0
0
8,919
13,207,710
2012-11-03T09:11:00.000
0
0
0
0
python,django,django-templates
13,207,944
2
true
1
0
The easiest/quickest option would be to update Django's url tag to fail silently. You can change the function definition at def url(parser, token): in <your_django_path>/templatetags/future.py to wrap all of its code in try ... except and not raise an exception when one occurs. However, this is the quickest hack that I could think of; I'm not sure there is any better solution.
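Rather than patching Django itself, a less invasive sketch is a small custom tag that swallows the exception; the module and tag names below are made up, it only covers the simple {% url %} style of usage (not "as var"), and the reverse import path differs between Django versions.

    # myapp/templatetags/safe_url.py  (hypothetical module name)
    from django import template
    from django.core.urlresolvers import reverse, NoReverseMatch  # django.urls on newer Django

    register = template.Library()

    @register.simple_tag
    def silent_url(view_name, *args, **kwargs):
        try:
            return reverse(view_name, args=args, kwargs=kwargs)
        except NoReverseMatch:
            return ''   # fail silently instead of raising

    # In a template:  {% load safe_url %}  then  {% silent_url 'some-view' obj.pk %}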
1
4
0
Is there any way to disable throwing NoReverseMatch exceptions from url tags in Django templates (just make it silently fail, return an empty string or smth... temporarily, for development, of course)? (I'm working on a Django project that is kind of a mess as far as things are organized (bunch of remote-workers, contractors plus the local team with lots of overlapping tasks assigned to different people and even front-end and back-end work tends to get mixed as part of the same task...) and I really need to just ignore/hide/disable the NoReverseMatch thrown by template url tags in order to efficiently do my part of the job and not end up doing other peoples' jobs in order to be able to do mine...)
How can I "ignore" NoReverseMatch in Django template?
1.2
0
0
1,152
13,208,657
2012-11-03T11:19:00.000
1
0
0
0
python,sockets,smalltalk
13,605,902
5
false
0
1
I remember a Ruby -> .NET bridge which, as far as I recall, was ported to some Smalltalk (Ruby/.Net Bridge, 2004, Benjamin Schroeder and John R. Pierce). It covered passing of exceptions, callbacks, etc. It may make a good starting point for your work. I don't know where or how to get it, though.
2
3
0
I am making a project in Pharo that will extend it and make it more visual, also for further extending the 3D application Blender. Blender mainly uses Python for extensions called "Addons", to be precise Python 3.2. So what I want is to make a bridge between Pharo (Smalltalk) and Blender (Python). For now I have focused on sockets and XML-RPC, but I was wondering if there are tools and options out there to further help with my saga. I don't have high demands; for now, simple access to Python class attributes and calls to Python methods should be enough, but if I can add additional power to my bridge later it will be so much better. Ideally the bridge could later also be used for making Pharo use libraries from other languages like Java, C#, etc.
Making a bridge for communication between python and smalltalk
0.039979
0
0
415
13,208,657
2012-11-03T11:19:00.000
2
0
0
0
python,sockets,smalltalk
13,210,746
5
true
0
1
WebSockets sending JSON messages between Smalltalk and Python could be bleeding edge, yet in the long term quite a promising way to go. Smalltalk has quite good WebSockets support, and I suppose Python does as well.
2
3
0
I am making a project in Pharo that will extend it and make it more visual, also for further extending the 3D application Blender. Blender mainly uses Python for extensions called "Addons", to be precise Python 3.2. So what I want is to make a bridge between Pharo (Smalltalk) and Blender (Python). For now I have focused on sockets and XML-RPC, but I was wondering if there are tools and options out there to further help with my saga. I don't have high demands; for now, simple access to Python class attributes and calls to Python methods should be enough, but if I can add additional power to my bridge later it will be so much better. Ideally the bridge could later also be used for making Pharo use libraries from other languages like Java, C#, etc.
Making a bridge for communication between python and smalltalk
1.2
0
0
415
13,208,743
2012-11-03T11:32:00.000
1
0
0
0
python,3d,isometric
13,214,374
2
false
0
1
I found this code: www.pygame.org/wiki/OBJFileLoader It let me load and display an .obj file of a cube from Blender with ease. It runs PyOpenGL so it should let me do everything OpenGL can. Never knew OpenGL was so low-level, didn't realize I'd have to write my own loaders and everything. Anyway, I'm pretty sure I can modify this to project isometrically, rotate the object and grab shots and combine them into sprites. Thanks you guys!
1
0
0
I want to write a tool in Python that will help me create isometric tiles from 3D models. You see, I'm not a very proficient artist, free 3D models are plentiful, and creating something like a table or chair is much easier in 3D than in painting. This script will load a 3D model in orthographic projection and take pictures from four directions so it can be used in a game. I've tried this in Blender, but the results are inconsistent, very difficult to control, and it takes a very long time to create simple sprites. Rolling my own script will probably let me do neat things too, especially batch generation, maybe on texture changes, shading, etc. The game itself will probably be made in Python too, so maybe I could generate on the fly. (Edit: and automatically create cut-out see-through walls for when they face the camera.) Now my question: what Python libraries can do something like this? I've checked both Pyglet and Panda3D, but I haven't even been able to load a model, let alone set it to orthographic projection.
Render 3D model orthographically in Python
0.099668
0
0
1,835
13,212,673
2012-11-03T19:28:00.000
1
1
0
0
python,computer-forensics
13,214,640
1
false
0
0
Your problem can be formulated as "how do I search in a very long file with no line structure." It's no different from what you'd do if you were reading line-oriented text one line at a time: Imagine you're reading a text file block by block, but have a line-oriented regexp to search with; you'd search up to the last complete line in the block you've read, then hold on to the final incomplete line and read another block to extend it with. So, you don't start afresh with each new block read. Think of it as a sliding window; you only advance it to discard the parts you were able to search completely. Do the same here: write your code so that the strings you're matching never hit the edge of the buffer. E.g., if the header you're searching for is 100 bytes long: read a block of text; check if the complete pattern appears in the block; advance your reading window to 100 bytes before the end of the current block, and add a new block's worth of text after it. Now you can search for the header without the risk of missing it. Once you find it, you're extracting text until you see the stop pattern (the footer). It doesn't matter if it's in the same block or five blocks later: your code should know that it's in extracting mode until the stop pattern is seen.
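A minimal sketch of that sliding-window idea, in Python 2 flavour (plain strings as bytes; on Python 3 you would use byte patterns and a bytes tail). The device path, chunk size, overlap, and the ABC/XYZ header/footer patterns are placeholders from the question, and opening \\.\PhysicalDrive0 on Windows requires Administrator rights.

    import re

    DEVICE = r'\\.\PhysicalDrive0'      # placeholder path; adjust for your platform
    CHUNK = 64 * 1024 * 1024            # how much to read per pass
    OVERLAP = 1024 * 1024               # must be at least as large as the biggest record you expect
    pattern = re.compile(r'ABC.*?XYZ', re.DOTALL)   # header ... footer, placeholders

    def scan(device):
        results = []
        seen = set()        # absolute start offsets already reported (avoids double-counting the overlap)
        base = 0            # absolute offset of the start of `data`
        tail = ''
        with open(device, 'rb') as f:
            while True:
                block = f.read(CHUNK)
                if not block:
                    break
                data = tail + block
                for m in pattern.finditer(data):
                    start = base + m.start()
                    if start not in seen:
                        seen.add(start)
                        results.append((start, m.group()))
                tail = data[-OVERLAP:]              # carry the tail into the next window
                base += len(data) - len(tail)       # absolute offset of the kept tail
        return results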
1
1
0
I'm looking to write a script in Python 2.x that will scan a physical drive (physical, not logical) for specific strings of text that will range in size (chat artifacts). I have the headers and footers for the strings, so I am just wondering how best to scan through the drive. My concern is that if I split it into, say, 250MB chunks and read this data into RAM before parsing through it for the header and footer, the header may be in one chunk but the footer in the next 250MB chunk. So in essence, I want to scan PhysicalDevice0 for strings starting with "ABC" for example and ending in "XYZ", and copy all content in between. I'm also unsure whether to scan the data as ASCII or hex. As drives get bigger, I'm looking to do this in the quickest manner possible. Any suggestions?
Extracting strings from a physical drive
0.197375
0
0
197
13,213,496
2012-11-03T21:07:00.000
-1
0
1
0
python,random,floating-point,rounding
13,213,686
2
false
0
0
Rounding is: if I round 3.45 upwards, I will get 4, but if I round it downwards, I will get 3. So in this case, whether the endpoint b can actually be returned, i.e. [a, b) versus [a, b], changes depending on how the result is rounded.
1
7
0
Regarding random.uniform, the docstring says: Get a random number in the range [a, b) or [a, b] depending on rounding. But I do not know what 'depending on rounding' exactly means.
What does 'depending on rounding' exactly mean?
-0.099668
0
0
322
13,214,349
2012-11-03T22:58:00.000
1
0
0
1
python,open-source,driver,libraries
13,214,376
3
true
0
0
The smaller the better. A 10 line function to convert HSV to RGB or find the closest point to a triangle or something like a CAN/GPIB driver is far more likely to be read and used than a massive complicated poorly documented framework
3
2
0
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project? Note that I would like to make them open source because it does not give me, or my company, any real advantage.
How small is *too small* for an opensource project?
1.2
0
0
119
13,214,349
2012-11-03T22:58:00.000
0
0
0
1
python,open-source,driver,libraries
13,214,378
3
false
0
0
500 lines? That's a lot in my opinion. It sounds just fine to publish them as a project. I mean, how many blog posts have you read with just some code that saved you hours? Now imagine that, but with 500 lines of code and a permanent host designed for the purpose.
3
2
0
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project? Note that I would like to make them open source because it does not give me, or my company, any real advantage.
How small is *too small* for an opensource project?
0
0
0
119
13,214,349
2012-11-03T22:58:00.000
1
0
0
1
python,open-source,driver,libraries
13,214,412
3
false
0
0
Do not think about the number of lines of code; think about the utility of your code. If your code is useful for somebody, upload it to a repository or repositories, write a wiki, examples, etc. I have seen a useful Python library that was less than 100 lines.
3
2
0
I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project? Note that I would like to make them open source because it does not give me, or my company, any real advantage.
How small is *too small* for an opensource project?
0.066568
0
0
119
13,215,104
2012-11-04T01:04:00.000
-2
0
1
0
python
13,215,122
6
true
0
0
You can use rstrip. Check the python docs.
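For the example value in the question, that would look like the following; note that rstrip removes any trailing run of the listed characters (here the digits and the dot), and a regular expression does the same thing more explicitly.

    import re

    text = 'A8980BQDFZDD1701209.3'
    print(text.rstrip('0123456789.'))        # 'A8980BQDFZDD'
    print(re.sub(r'[\d.]+$', '', text))      # 'A8980BQDFZDD'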
1
2
0
I have a variable text whose value is like the one below. I need to strip trailing digits; is there a Python built-in function to do it? If not, please suggest how this can be done in Python. e.g. -text=A8980BQDFZDD1701209.3 => -A8980BQDFZDD
How to strip trailing digits from a string
1.2
0
0
10,613
13,215,873
2012-11-04T03:43:00.000
3
0
1
0
python,python-3.x,python-3.3
13,215,904
1
true
0
0
You've been seeing things that somewhat conflict based on Python 2 vs 3. In Python 3, isinstance(foo, str) is almost certainly what you want. bytes is for raw binary data, which you probably can't include in an argument string like that. The python 2 str type stored raw binary data, usually a string in some specific encoding like utf8 or latin-1 or something; the unicode type stored a more "abstract" representation of the characters that could then be encoded into whatever specific encoding. basestring was a common ancestor for both of them so you could easily say "any kind of string". In python 3, str is the more "abstract" type, and bytes is for raw binary data (like a string in a specific encoding, or whatever raw binary data you want to handle). You shouldn't use bytes for anything that would otherwise be a string, so there's not a real reason to check if it's either str or bytes. If you absolutely need to, though, you can do something like isinstance(foo, (str, bytes)).
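One way to put that check to work when building the query parameters; the function and parameter names here are just for illustration, not from any particular library.

    def encode_param(name, value):
        # Strings (and bytes) count as a single value; other iterables get
        # PHP-style "name[]" entries, one per element.
        if isinstance(value, (str, bytes)):
            return [(name, value)]
        try:
            items = iter(value)
        except TypeError:                      # not iterable at all: treat as a scalar
            return [(name, value)]
        return [(name + '[]', item) for item in items]

    # encode_param('param', 'val')            -> [('param', 'val')]
    # encode_param('param', ['v', 'a', 'l'])  -> [('param[]', 'v'), ('param[]', 'a'), ('param[]', 'l')]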
1
0
0
I'm trying to determine whether a function argument is a string, or some other iterable. Specifically, this is used in building URL parameters, in an attempt to emulate PHP's &param[]=val syntax for arrays - so duck typing doesn't really help here, I can iterate through a string and produce things like &param[]=v&param[]=a&param[]=l, but this is clearly not what we want. If the parameter value is a string (or a bytes? I still don't know what the point of a bytes actually is), it should produce &param=val, but if the parameter value is (for example) a list, each element should receive its own &param[]=val. I've seen a lot of explanations about how to do this in 2.* involving isinstance(foo, basestring), but basestring doesn't exist in 3.*, and I've also read that isinstance(foo, str) will miss more complex strings (I think unicode?). So, what is the best way to do this without causing some types to be lost to unnecessary errors?
Separate strings from other iterables in python 3
1.2
0
0
97
13,219,237
2012-11-04T14:06:00.000
1
0
0
0
python,python-3.x,pygame,portable-executable
13,242,922
3
true
0
1
Install Pygame, then take your entire Python folder and put it where you want it to go. If you mean you want to be able to run python (filename).py in the terminal, then you will have to change the PATH variable in the terminal, or add the shebang line #!/usr/bin/python to your programs.
1
0
0
I am looking for a version (custom/beta?) of Portable Python that is Python 3.x and includes Pygame. I know Pygame hasn't been fully converted to 3.x yet, but for what I need it for it works perfectly.
portable python 3.x and pygame
1.2
0
0
2,264
13,219,380
2012-11-04T14:25:00.000
1
0
1
0
python,pygame,py2exe
13,646,270
2
false
0
1
Have you tried compiling with cx_Freeze?
1
4
0
I have created a program that takes pictures with your webcam. I used Pygame and VideoCapture. It all works fine as a .pyw file, but after it is compiled with py2exe it just doesn't work. From my research, the error is coming from trying to compile the VideoCapture module. Please help me!!! (The same error occurs with PyInstaller.)
py2exe not compiling VideoCapture correctly
0.099668
0
0
165
13,220,619
2012-11-04T16:47:00.000
2
0
0
0
python,django,windows,pinax
13,221,181
3
false
1
0
Django allows you to do this super easily without any apps. I'd highly recommend you read up on its basic form processing features.
1
0
0
In my application I want to process some form data and show it later on a different page. My question is: Can you recommend any django apps to quickly set up such a system? I appreciate your answer!!!
What are the best reusable django apps for form processing?
0.132549
0
0
154
13,222,351
2012-11-04T20:03:00.000
2
0
1
0
regex,python-2.7
13,222,389
2
true
0
0
Actually, \b is not just for the beginning of a word, but also for the end of a word. In regex, \b means word boundary. So \b\w+\b is a pattern for a single word.
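A quick illustration that \b anchors both edges of a word:

    import re

    text = 'cat catalog concat cat'
    print(re.findall(r'\bcat\b', text))   # ['cat', 'cat'] - only the standalone words
    print(re.findall(r'\bcat', text))     # ['cat', 'cat', 'cat'] - also matches the start of 'catalog'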
1
1
0
I cannot find, in Python's re module, a special symbol for the end of a word... There is \b for the beginning of a word, and \B, the opposite of \b, which matches inside a word rather than at its edges... But why is there nothing for just the end of a word? Have I missed something?
Regexp's end of a word
1.2
0
0
294
13,222,607
2012-11-04T20:33:00.000
0
0
1
0
python,user-interface,window
13,222,646
3
false
0
1
You can do this; the library is called tkinter.
2
0
0
Hello fellow python users, I bring forth a question that I have been wondering about for a while. I love Python, and have made many programs with it. Now what I want to do is (or know how to do) make a Python program, but run it in a window with buttons that you click on instead of typing in numbers to select things. I would like to know whether or not I can do this, and if I can, please tell me where to go to learn how. Ok, it's not for the iPhone, sorry that I wasn't clear on that, and I didn't realize that iPhone was one of the tags.
Can you run python as a windowed program?
0
0
0
336
13,222,607
2012-11-04T20:33:00.000
0
0
1
0
python,user-interface,window
13,222,783
3
false
0
1
There are many GUI libraries to choose from, but I like Tkinter because it's easy and there's nothing to install (it comes with Python). But some people prefer wxPython or others, such as PyQT. You'll have to decide which you like, or just go with Tkinter if you don't want to go through the trouble of installing libraries just to try them out.
2
0
0
Hello fellow python users, I bring forth a question that I have been wondering about for a while. I love Python, and have made many programs with it. Now what I want to do is (or know how to do) make a Python program, but run it in a window with buttons that you click on instead of typing in numbers to select things. I would like to know whether or not I can do this, and if I can, please tell me where to go to learn how. Ok, it's not for the iPhone, sorry that I wasn't clear on that, and I didn't realize that iPhone was one of the tags.
Can you run python as a windowed program?
0
0
0
336
13,223,839
2012-11-04T22:57:00.000
1
0
0
0
python,turbogears,repoze.who
13,551,567
1
true
1
0
Storing a timestamp of the last time the password was changed inside request.identity['userdata'] should make it possible to check it whenever the user comes back, and to log him out if it differs from the last time the password was actually changed.
1
0
0
I'm interested in improving security of my TurboGears 2.2 application so that when user changes his password, it logs him out from all sessions and he must login again. The goal is when user changes password on browser 1, he must relogin on browser 2, too. Experiments show that this is not the case, especially if browser 2 had "remember me" enabled. It's standard quickstarted app using repoze.who. It seems maybe I need to change AuthTktCookiePlugin, but don't see a way to do it without much rewiring.
Remove all user's cookies/sessions when password is reset
1.2
0
0
282
13,225,540
2012-11-05T03:25:00.000
0
0
0
1
python,google-app-engine
13,228,117
2
false
1
0
XG transactions are limited to a few entity groups for performance reasons. Running an XG transaction across hundreds of entity groups would be incredibly slow. Can you break your function up into many sub-functions, one for each entity group? If so, you should have no trouble running them individually, or on the task queue.
1
1
0
I'm developing on google app engine. The focus of this question is a python function that modifies hundreds of entity groups. The function takes one string argument. I want to execute this function as a transaction because there are instances right now when the same function with the same string argument are simultaneously run, resulting in unexpected results. I want the function to execute in parallel if the string arguments are different, but not if the string arguments are the same, they should be run serially. Is there a way to run a transaction on a function that modifies so many entity groups? So far, the only solution I can think of is flipping a database flag for each unique string parameter, and checking for the flag (deferring execution if the flag is set as True). Is there a more elegant solution?
Is there a way to atomically run a function as a transaction within google app engine when the function modifies more than five entity groups?
0
0
0
66
13,225,890
2012-11-05T04:23:00.000
66
0
0
0
python,django,django-timezone
13,226,368
2
true
1
0
Just ran into this last week for a field that had default=date.today(). If you remove the parentheses (in this case, try default=timezone.now) then you're passing a callable to the model and it will be called each time a new instance is saved. With the parentheses, it's only being called once when models.py loads.
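In model terms, the difference is just the parentheses; the model and field names below are arbitrary.

    from django.db import models
    from django.utils import timezone

    class Post(models.Model):
        # Evaluated once, when models.py is imported - every row gets that stale value:
        #   created = models.DateTimeField(default=timezone.now())
        # Evaluated each time a row is created - what you almost always want:
        created = models.DateTimeField(default=timezone.now)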
1
28
0
This issue has been occurring on and off for a few weeks now, and it's unlike any that has come up with my project. Two of the models that are used have a timestamp field, which is by default set to timezone.now(). This is the sequence that raises error flags: Model one is created at time 7:30 PM. Model two is created at time 10:00 PM, but in the MySQL database it's stored as 7:30 PM! Every model that is created has its timestamp saved as 7:30 PM, not the actual time, until a certain duration passes. Then a new time is set and all the following models have that new time... Bizarre. Some extra details which may help in discovering the issue: I have a bunch of methods that I use to strip my timezones of their tzinfo and replace them with UTC. This is because I'm doing a timezone.now() - creationTime calculation to create a "model was posted this long ago" feature in the project. However, this really should not be the cause of the problem. I don't think using datetime.datetime.now() will make any difference either. Anyway, thanks for the help!
Django default=timezone.now() saves records using "old" time
1.2
1
0
23,029
13,227,115
2012-11-05T06:38:00.000
0
0
1
0
python-2.7,scipy
13,229,534
3
false
0
0
You can make a change of variables s = t_0 - t, and integrate the differential equation with respect to s. odeint doesn't do this for you.
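A small sketch of that substitution with a made-up right-hand side (dy/dt = -0.5*y), integrating backward from t0 = 10 to t = 0; the function names and numbers are only illustrative.

    import numpy as np
    from scipy.integrate import odeint

    def f(y, t):                 # original right-hand side: dy/dt = f(y, t)
        return -0.5 * y

    t0 = 10.0
    y_at_t0 = 1.0                # known value at the final time t0
    s = np.linspace(0.0, 10.0, 101)     # s = t0 - t runs forward from 0

    def g(y, s):                 # chain rule: dy/ds = -f(y, t0 - s)
        return -f(y, t0 - s)

    y = odeint(g, y_at_t0, s)
    t = t0 - s                   # map the solution back onto the original time axis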
1
2
1
Is it possible to integrate any ordinary differential equation backward in time using scipy.integrate.odeint? If it is possible, could someone tell me what the 'time' argument to odeint should be?
Backward integration in time using scipy odeint
0
0
0
4,017
13,228,677
2012-11-05T08:50:00.000
2
1
0
1
python,cruisecontrol.net,nant
13,293,387
1
true
0
0
I don't think there is a built in python task, but you should be able to execute it by crafting an exec task.
1
0
0
Is it possible to run a Python script from CruiseControl.NET? Is there a CCNet task or a NAnt task that can be used?
Run python script from CruiseControl.NET
1.2
0
0
386
13,228,847
2012-11-05T09:02:00.000
1
0
0
0
python,django,facebook-graph-api
13,228,977
3
false
1
0
Make a Graph API call like this and you get the real URL: https://graph.facebook.com/[fbid]?fields=picture Btw, you don't need an access token for this.
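In Python, that could look roughly like the sketch below; the exact JSON layout of the Graph API response has changed between API versions, so treat the key names as assumptions and check what your call actually returns. Another trick is to request the /picture endpoint without following the redirect and read the Location header, so the CDN URL is resolved server-side and no token reaches the end user.

    import requests

    fbid = '1015146931380136'    # example object id from the question

    # Option 1: ask the Graph API for the picture field
    # (the response layout may vary by API version; inspect it before relying on keys)
    info = requests.get('https://graph.facebook.com/%s' % fbid,
                        params={'fields': 'picture'}).json()
    print(info.get('picture'))

    # Option 2: hit the /picture endpoint but do not follow the redirect;
    # the CDN URL is in the Location header
    resp = requests.get('https://graph.facebook.com/%s/picture' % fbid,
                        allow_redirects=False)
    print(resp.headers.get('Location'))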
1
0
0
I'm developing an application that displays a user's/friends' photos. For the most part I can pull photos from the album; however, for user/album cover photos, all that is given is the object ID, for which the following URL provides the image: https://graph.facebook.com/1015146931380136/picture?access_token=ABCDEFG&type=picture Which, when viewed, redirects the user to the image file itself, such as: https://fbcdn-photos-a.akamaihd.net/hphotos-ak-ash4/420455_1015146931380136_78924167_s.jpg My question is: is there a Pythonic or Graph API method to resolve the final image path and avoid sending the access token to the end user?
Resolve FB GraphAPI picture call to the final URL
0.066568
0
1
872
13,230,195
2012-11-05T10:29:00.000
0
0
0
0
python,python-2.7,openerp,accounting
13,824,676
4
false
1
0
IMO it's very difficult; we are currently migrating some data and it's proving to be difficult. I would advise you to pick a date in the future and tell everyone to just use another db with the correct chart of accounts. Your finance dept will be the one to suggest what date is perfect, for example when a new period starts.
4
8
0
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data. Any advice will help me a lot. Thanks
How to convert in OpenERP from one chart of account to another?
0
0
0
963
13,230,195
2012-11-05T10:29:00.000
0
0
0
0
python,python-2.7,openerp,accounting
13,369,353
4
false
1
0
I don't know of any way to install another chart of accounts after you've run the initial configuration wizard on a new database. However, if all you want to do is change the account numbers, names, and parents to match a different chart of accounts, then you should be able to do that with a bunch of database updates. Either manually edit each account if there aren't too many accounts, or write a SQL or Python script to update all the accounts. To do that, you'll need to map each old account to a new account code, name, and parent, then use that map to generate a script.
4
8
0
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data. Any advice will help me a lot. Thanks
How to convert in OpenERP from one chart of account to another?
0
0
0
963
13,230,195
2012-11-05T10:29:00.000
0
0
0
0
python,python-2.7,openerp,accounting
15,182,814
4
false
1
0
I needed to do something similar. It is possible to massage the chart from one form to another, but I found in the end that creating a new database, bringing in modules, assigning the new chart, and then importing all critical elements was the best and safest path. If you have a lot of transactions, the import will be more difficult. If that is the case, then massage your chart from one form to another. I am sure there will be some way to do an active migration sometime in the future. You definitely don't want to live with a bad chart or without your history if you can help it.
4
8
0
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data. Any advice will help me a lot. Thanks
How to convert in OpenERP from one chart of account to another?
0
0
0
963
13,230,195
2012-11-05T10:29:00.000
0
0
0
0
python,python-2.7,openerp,accounting
41,658,981
4
false
1
0
The fastest way to do this is using an ETL like Talend or Pentaho (provided there is a logic as to which account maps to which other account during the process). If not, you will have to do it by hand. If there is a logic, you would export the data to a format you can transform and re-import, uninstall your account chart and install the new one, then import all the data that you formatted using those tools.
4
8
0
I have installed chart of accounts A for company1. This chart was used for a couple of months for accounting. How can I convert to chart of accounts B and keep the old data for the accounts (debit, credit, etc.)? In other words, is it possible to migrate data from one chart of accounts to another? The solution can be programmatic or through the web client interface (not important). Virtual charts of accounts can't be used. Chart of accounts B must become the main chart with the old data. Any advice will help me a lot. Thanks
How to convert in OpenERP from one chart of account to another?
0
0
0
963
13,233,107
2012-11-05T13:27:00.000
5
0
0
0
python,flask,neo4j,py2neo
13,234,558
2
true
1
0
None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control. Often, this approach will make heavy use of unique indexing and this is one reason that I have provided a large number of "get_or_create" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent and you can err on the side of "doing it more than once" safe in the knowledge that you won't end up with duplicate data. Agreed, this approach doesn't give you transactions per se but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary. Hope this helps Nigel
1
2
0
I'm currently building a web service using Python / Flask and would like to build my data layer on top of Neo4j, since my core data structure is inherently a graph. I'm a bit confused by the different technologies offered by Neo4j for that case. Especially: I originally planned on using the REST API through py2neo, but the lack of transactions is a bit of a problem. The "embedded database" Neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server. I've stumbled upon the neo4django project, but I'm not sure it offers transaction support (since there are no native clients to Neo4j for Python), and whether it would be a problem to use it outside Django itself. In fact, after having looked at the project's documentation, I feel like it has exactly the same limitations, aka no transactions (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout?). I don't even understand what the use for that project is. Could anyone recommend anything? I feel completely stuck. Thanks
using neo4J (server) from python with transaction
1.2
1
0
1,075
13,234,094
2012-11-05T14:26:00.000
1
0
0
1
python,google-app-engine,memcached
13,389,191
1
false
1
0
memcache.get should not increase the item count in memcache statistic, and I'm not able to reproduce that behavior in production. The memcache statistic page is global, so if you happen to have other requests (live, or through task queue) going to your application at the same time you're using the remote api, that could increase that count.
1
4
0
I'm looking at the Memcache Viewer in my admin console on my deployed Google App Engine NDB app. For testing, I'm using the remote api. I'm doing something very simple: memcache.get('somekey'). For some reason, everytime I call this line and hit refresh on my statistics, the item count goes up by 2. This happens whether or not the key exists in memcache. Any ideas why this could happen? Is this normal?
Why does calling get() on memcache increase item count in Google App Engine?
0.197375
0
0
195
13,234,196
2012-11-05T14:32:00.000
1
0
0
0
python,oracle,cx-oracle
58,120,873
6
false
0
0
Tip for Ubuntu users: after configuring the .bashrc environment variables as explained in the other answers, don't forget to reload your terminal window by typing $SHELL.
3
12
0
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb) and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0.033321
1
0
25,818
13,234,196
2012-11-05T14:32:00.000
2
0
0
0
python,oracle,cx-oracle
28,741,244
6
false
0
0
I got this message when I was trying to install the 32 bit version while having the 64bit Oracle client installed. What worked for me: reinstalled python with 64 bit (had 32 for some reason), installed cx_Oracle (64bit version) with the Windows installer and it worked perfectly.
3
12
0
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb) and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0.066568
1
0
25,818
13,234,196
2012-11-05T14:32:00.000
2
0
0
0
python,oracle,cx-oracle
13,234,377
6
false
0
0
I installed cx_Oracle, but I also had to install an Oracle client to use it (the cx_Oracle module is just a common and pythonic way to interface with the Oracle client in Python). So you have to set the variable ORACLE_HOME to your Oracle client folder (on Unix: via a shell, for instance; on Windows: create a new variable if it does not exist in the Environment variables of the Configuration Panel). Your folder $ORACLE_HOME/network/admin (%ORACLE_HOME%\network\admin on Windows) is the place where you would place your tnsnames.ora file.
3
12
0
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb) and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0.066568
1
0
25,818
13,236,098
2012-11-05T16:20:00.000
2
0
0
0
python,pandas,csv
13,236,277
2
true
0
0
There's no default way to do this right now. I would suggest chunking the file and iterating over it and discarding the columns you don't want. So something like pd.concat([x.ix[:, cols_to_keep] for x in pd.read_csv(..., chunksize=200)])
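Spelled out a little, with hypothetical file and column names, that chunked approach looks like the sketch below; note that later pandas versions also grew a usecols argument on read_csv, which does the column filtering for you while parsing.

    import pandas as pd

    cols_to_keep = ['name', 'age', 'income']          # hypothetical column names

    # Chunked variant of the answer above (modern pandas: .loc instead of .ix)
    chunks = pd.read_csv('big_file.csv', chunksize=10000)
    df = pd.concat(chunk.loc[:, cols_to_keep] for chunk in chunks)

    # Newer pandas: let the parser skip the unwanted columns entirely
    df = pd.read_csv('big_file.csv', usecols=cols_to_keep)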
1
9
1
Suppose I have a csv file with 400 columns. I cannot load the entire file into a DataFrame (won't fit in memory). However, I only really want 50 columns, and this will fit in memory. I don't see any built in Pandas way to do this. What do you suggest? I'm open to using the PyTables interface, or pandas.io.sql. The best-case scenario would be a function like: pandas.read_csv(...., columns=['name', 'age',...,'income']). I.e. we pass a list of column names (or numbers) that will be loaded.
How to load only specific columns from csv file into a DataFrame
1.2
0
0
7,641
13,236,405
2012-11-05T16:38:00.000
1
0
1
0
python,py2exe
13,237,078
2
true
0
0
There is a feature in the configuration of py2exe that allows you to bundle all the Python files into a single library.zip file. That considerably reduces the number of files in the root directory, but some files will still remain regardless. These files are generally DLL files, at least from what I saw with GUI applications. You cannot remove these, because they are required to launch the application. A workaround to this problem is to create a batch file that will run the actual program, which can be in a child directory. The point is that these files should either be in the same directory as the executable, in the current working directory, or on a path in the PATH environment variable; at least that's the case for most of them. Another approach might be a batch file which modifies the PATH variable or changes to another directory and runs the file afterwards. I never tried to do it, so it might break some things for you. Anyway, IMO the best approach is to create an installer and add shortcuts, and then you won't have to bother with the user messing with these files.
1
1
0
Hi! I made a chess engine in Python which I then compiled to an .exe using py2exe. The problem is that it doesn't look very neat when all the strange files are gathered together in the same folder (dist). I'd like to make a new folder inside the dist folder that contains all the helper files, so that my dist folder only contains the folder holding the helper files and the main launch application. However, I can't simply copy the helper files to a new folder, because the program no longer finds them and raises an error. How can this be solved? Also, I'm using Inno Setup to make an installer, but I can't figure out how to find a solution there either. Thank you very much!
How to put files in folders using py2exe.
1.2
0
0
1,098
13,237,955
2012-11-05T18:16:00.000
-1
0
1
1
python,python-3.x,argparse
13,238,122
2
false
0
0
Try looking into using sys.argv.
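For instance, a bare-bones sketch of the idea; defaulting to the current directory when no argument is given is an assumption about the desired behaviour.

import os
import sys

# Treat the first positional argument as the target directory, if present.
target = sys.argv[1] if len(sys.argv) > 1 else '.'
for name in sorted(os.listdir(target)):
    print(name)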
1
0
0
So, what I am trying to do is mimic ls in Python. I have completed most of the commands, but there is one thing I can't find a solution for. If the ls command gets the command line ls test1, it should find the test1 directory and then run ls inside that directory. However, I can only find ways of creating arguments that need a keyword before the actual value, so I can't find a way of doing this; it can't be something like ls -move_dir test1. It is fine for the program to assume that when there is no option flag, it should treat the argument this way (i.e. find that directory and run ls in it). Please help me!
Python Argparse without command keyword
-0.099668
0
0
813
13,239,004
2012-11-05T19:30:00.000
0
0
0
0
python,sql,postgresql,pgadmin
52,581,750
3
false
0
0
Although this is quite an old question, it doesn't seem to have a satisfying answer and I was struggling with the exact same issue. With the arrival of SQL Server Management Studio 2018 edition - and probably somewhat before that - a pretty good solution was offered by Microsoft. In SSMS, right-click a database node in the object explorer, select 'Tasks' and choose 'Import data'; Choose 'Flat file' as the source and, in the General section, browse to your .csv file. An important note here: make sure there's no table in your target SQL server matching the file's name; In the Advanced section, click on 'Suggest types' and in the next dialog, enter preferably the total number of rows in your file or, if that's too much, a large enough number to cover all possible values (this takes a while); Click next, and in the subsequent step, connect to your SQL server. Now, every brand has its own flavour of data types, but you should get a nice set of relevant pointers for your taste later on. I've tested this using the SQL Server Native Client 11.0. Please leave your comments for other providers as a reply to this solution; Here it comes... click 'Edit Mappings'...; click 'Edit SQL' et voila, a nice SQL statement with all the discovered data types; Click through to the end, selecting 'Run immediately' to see all of your .csv columns created with appropriate types in your SQL server. Extra: If you run the above steps twice, exactly the same way with the same file, the first pass will use the 'CREATE TABLE...' statement, but the second pass will skip table creation. If you save the second run as an SSIS (Integration Services) file, you can later re-run the entire setup without scanning the .csv file.
2
11
0
I've looked at a number of questions on this site and cannot find an answer to the question: how do you create multiple NEW tables in a database (in my case PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE TABLE syntax just fine, and I can read the rows/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? Before I build my own, I wanted to check whether this already exists. If it doesn't exist already, my idea would be to use Python, the csv module, and the psycopg2 module to build a Python script that would: Read the CSV file(s). Based upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type for the data in the CSV. Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine that it should be varchar(5), based upon the combination of data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column. Build the CREATE TABLE query as defined by the column inspection of the CSV. Execute the CREATE TABLE query. Load the data into the new table. Does a tool like this already exist within either SQL, PostgreSQL, or Python, or is there another application I should be using to accomplish this (similar to pgAdmin3)?
Create SQL table with correct column types from CSV
0
1
0
7,124
13,239,004
2012-11-05T19:30:00.000
7
0
0
0
python,sql,postgresql,pgadmin
21,917,162
3
false
0
0
I have been dealing with something similar, and ended up writing my own module to sniff datatypes by inspecting the source file. There is some wisdom among all the naysayers, but there can also be reasons this is worth doing, particularly when you don't have any control over the input data format (e.g. working with government open data), so here are some things I learned in the process: Even though it's very time consuming, it's worth running through the entire file rather than a small sample of rows. More time is wasted by having a column flagged as numeric that turns out to have text every few thousand rows and therefore fails to import. If in doubt, fail over to a text type, because it's easier to cast those to numeric or date/time later than to try to infer data that was lost in a bad import. Check for leading zeroes in what otherwise appear to be integer columns, and import them as text if there are any - this is a common issue with ID / account numbers. Give yourself some way of manually overriding the automatically detected types for some columns, so that you can blend some semantic awareness with the benefits of automatic typing for most of them. Date/time fields are a nightmare, and in my experience generally require manual processing. If you ever add data to this table later, don't attempt to repeat the type detection - get the types from the database to ensure consistency. If you can avoid automatic type detection altogether it's worth doing, but that's not always practical, so I hope these tips are of some help.
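To illustrate the general approach (this is not the module described above, just a minimal sketch): scan every row, treat values with leading zeroes as text, and fall back to TEXT whenever a column is ambiguous. The file name, table name and SQL type names are placeholders.

import csv

def sniff_type(values):
    """Return a rough SQL type for one column's values, falling back to TEXT."""
    def is_int(v):
        # Leading zeroes usually mean an identifier, so keep those as text.
        return v.isdigit() and not (len(v) > 1 and v.startswith('0'))
    def is_float(v):
        try:
            float(v)
            return True
        except ValueError:
            return False
    non_empty = [v.strip() for v in values if v.strip()]
    if not non_empty:
        return 'TEXT'
    if all(is_int(v) for v in non_empty):
        return 'INTEGER'
    if all(is_float(v) for v in non_empty):
        return 'NUMERIC'
    return 'TEXT'

with open('input.csv') as f:
    reader = csv.reader(f)
    header = next(reader)
    columns = list(zip(*reader))    # read and transpose the whole file, not just a sample

types = [sniff_type(col) for col in columns]
ddl = 'CREATE TABLE my_table (%s);' % ', '.join(
    '"%s" %s' % (name, typ) for name, typ in zip(header, types))
print(ddl)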
2
11
0
I've looked at a number of questions on this site and cannot find an answer to the question: how do you create multiple NEW tables in a database (in my case PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE TABLE syntax just fine, and I can read the rows/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? Before I build my own, I wanted to check whether this already exists. If it doesn't exist already, my idea would be to use Python, the csv module, and the psycopg2 module to build a Python script that would: Read the CSV file(s). Based upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type for the data in the CSV. Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine that it should be varchar(5), based upon the combination of data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column. Build the CREATE TABLE query as defined by the column inspection of the CSV. Execute the CREATE TABLE query. Load the data into the new table. Does a tool like this already exist within either SQL, PostgreSQL, or Python, or is there another application I should be using to accomplish this (similar to pgAdmin3)?
Create SQL table with correct column types from CSV
1
1
0
7,124