Dataset schema (column, dtype, min, max):

    Column                             Type      Min        Max
    Q_Id                               int64     337        49.3M
    CreationDate                       string    len 23     len 23
    Users Score                        int64     -42        1.15k
    Other                              int64     0          1
    Python Basics and Environment      int64     0          1
    System Administration and DevOps   int64     0          1
    Tags                               string    len 6      len 105
    A_Id                               int64     518        72.5M
    AnswerCount                        int64     1          64
    is_accepted                        bool      2 classes
    Web Development                    int64     0          1
    GUI and Desktop Applications       int64     0          1
    Answer                             string    len 6      len 11.6k
    Available Count                    int64     1          31
    Q_Score                            int64     0          6.79k
    Data Science and Machine Learning  int64     0          1
    Question                           string    len 15     len 29k
    Title                              string    len 11     len 150
    Score                              float64   -1         1.2
    Database and SQL                   int64     0          1
    Networking and APIs                int64     0          1
    ViewCount                          int64     8          6.81M
10,381,843
2012-04-30T10:07:00.000
0
0
1
0
python,import
10,382,180
6
false
0
0
Another approach is to use the standard csv module to read the file's lines into rows like ['Vincent', '18', '190']. If you want the numbers as integers (i.e. not as strings), the strings must be converted explicitly via int(str_variable).
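A minimal sketch of that approach (the filename "people.txt" is an assumption for illustration):

    import csv

    # Read back "Vincent,18,190"-style lines; csv yields lists of strings.
    with open("people.txt", newline="") as f:
        for name, age, height in csv.reader(f):
            # Convert explicitly when the numbers are needed as integers.
            print("Name: %s Age: %d Height: %d" % (name, int(age), int(height)))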
1
0
0
I just started writing code that should create, compare and load entries of a text file. The program asks you for your name, your age and your height. It then creates a text file entry like: Vincent,18,190 I have gotten this to work, but I cannot figure out how to load this information back into Python once I have closed it. I want to call load and have it load all the text file entries and display them as: Name:"name" Age:"age" Height:"height" How can I do this?
Reading information from a text document
0
0
0
158
10,382,810
2012-04-30T11:18:00.000
0
0
0
0
python,ios,datetime,heroku,scheduled-tasks
10,383,029
2
true
1
0
There are certainly other ways. Whether they're better is a different matter. For instance: suppose there are n minutes left before the end of a given user's day and they haven't had their notification yet. Then send them a notification now with probability 1/n. This way, you don't need the huge list of random datetimes, but every minute you still need to iterate over all your users, see whether they've been notified yet, and compute random numbers for them all. It's a little more computation in total (though I doubt the difference is significant) and means that all your database updates are small. Or: Each time you notify a user, then you generate their next update time. That way, the next-update times get computed incrementally but are still known in advance. (If your number of users is relatively small, so that on most minutes there isn't a notification, you can make the scheduling smarter -- but I won't say more about that, because if you have that few users then the amount of work your software needs to do is going to be negligible anyway and there's no point optimizing for that case.)
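A sketch of the 1/n rule described above (the function and argument names are made up for illustration); evaluated once a minute per user, it produces a notification time uniformly distributed over the user's remaining day:

    import random

    def should_notify_now(minutes_left_today, already_notified):
        # With n minutes left and no notification sent yet, fire with
        # probability 1/n; summed over the day this comes out uniform.
        if already_notified or minutes_left_today <= 0:
            return False
        return random.random() < 1.0 / minutes_left_today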
1
0
0
An interesting conundrum. Here's what I want to do: I have a Pyramid (python 2.7.2) website running on Heroku which pushes notifications to my iPhone app users. Each day, every user needs a push notification sent to them at a randomly generated time between 10:00am and 10:00pm (it obviously needs to know the users timezone as well). My current plan is the following: Use a persistent worker process to trigger a function every 1 minute on the minute. Each minute, it will call a function (on a different thread so as not to interrupt the timer) which will do 2 things: Check if it's 11:00pm for each timezone (which will happen 24 times a day, once for each timezone). If true, it will call a function which loops through every user in that respective timezone and generates their random time for the next day, then stores it in the Mongo database. On each minute, the worker will also loop through the users and check if they have their notification due at that time. If it's due, then send the notification. My question is: Is there a better way of doing this that doesn't require generating a huge list of random datetimes every day beforehand?
Executing a function on datetimes generated randomly each day for each user
1.2
0
0
177
10,386,132
2012-04-30T15:20:00.000
1
0
1
1
python,copy,system-administration
10,386,159
2
false
0
0
You need to run the program with elevated privileges. Under Ubuntu, this is normally done with the sudo command, which will prompt the user for their password.
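One common pattern, sketched below assuming a Unix system: have the setup script re-run itself under sudo when it is not already root.

    import os
    import subprocess
    import sys

    if os.geteuid() != 0:
        # Not root: re-run this same script under sudo (prompts for password).
        subprocess.check_call(["sudo", sys.executable] + sys.argv)
        sys.exit(0)

    # From here on the script runs as root, e.g. to copy a file into /usr/bin.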
1
0
0
I'd like to place a special file in the /usr/bin folder of Ubuntu. Basically, I'm trying to write a setup file in Python which would do the job, but administrative privileges are needed. How can I provide my setup with these privileges (provided that I have the password and can use it in my program)?
Python: how to copy files in /bin/ folder
0.099668
0
0
1,732
10,387,816
2012-04-30T17:21:00.000
1
0
0
0
python,web-crawler,mechanize
10,408,283
1
true
1
0
Look for text on the sibling nodes and in the parent node's text, because that's where the labels frequently are. lxml might be able to help if you actually have to parse the HTML.
1
0
0
I'm writing a crawler, and I keep encountering form controls for which mechanize can give me no information beyond their type. Is there any way that I can get the human-readable text associated with a control? I know this is a bit of a fuzzy area, since there's no perfect way of getting that information, but perhaps something can help?
Can python's mechanize extract the text associated with a control?
1.2
0
0
153
10,388,127
2012-04-30T17:46:00.000
12
0
1
0
python,class,static,instance,class-method
10,388,323
5
true
0
0
In my experience creating a class is a very good solution for a number of reasons. One is that you wind up using the class as a 'normal' class (esp. making more than just one instance) more often than you might think. It's also a reasonable style choice to stick with classes for everything; this can make it easier for others who read/maintain your code, esp. if they are very OO - they will be comfortable with classes. As noted in other replies, it's also reasonable to just use 'bare' functions for the implementation. You may wish to start with a class and make it a singleton/Borg pattern (lots of examples if you google for these); it gives you the flexibility to (re)use the class to meet other needs. I would recommend against the 'static class' approach as being non-conventional and non-Pythonic, which makes it harder to read and maintain.
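A minimal sketch of the Borg pattern mentioned above: every instance shares one state dictionary, which gives singleton-like behaviour while keeping normal instantiation and subclassing available:

    class Config:
        _shared_state = {}

        def __init__(self):
            # All instances point at the same attribute dict.
            self.__dict__ = self._shared_state

    a = Config()
    b = Config()
    a.debug = True
    print(b.debug)  # True -- both instances see the same attributes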
2
29
0
I once read (I think on a page from Microsoft) that it's good practice to use static classes when you don't NEED two or more instances of a class. I'm writing a program in Python. Is it bad style if I use @classmethod for every method of a class?
Static classes in Python
1.2
0
0
34,052
10,388,127
2012-04-30T17:46:00.000
36
0
1
0
python,class,static,instance,class-method
10,388,155
5
false
0
0
Generally, usage like this is better done by just using functions in a module, without a class at all.
2
29
0
I once read (I think on a page from Microsoft) that it's good practice to use static classes when you don't NEED two or more instances of a class. I'm writing a program in Python. Is it bad style if I use @classmethod for every method of a class?
Static classes in Python
1
0
0
34,052
10,389,594
2012-04-30T19:42:00.000
1
0
0
0
python,django,forms
10,389,650
3
false
1
0
Hm. I don't believe any utility like this exists. It would be nice if there were a reverse ModelForm that looked at each field's type and derived the data ranges for a search form. I think right now you are stuck with creating a text box and a date-picker range, and processing that data in a view.
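A hand-rolled sketch of that text box + range form (the Film/Director field names, app module and template name are assumptions about the asker's models):

    from django import forms
    from django.shortcuts import render

    from myapp.models import Film  # hypothetical model with 'director' FK and 'year'

    class FilmSearchForm(forms.Form):
        director = forms.CharField(required=False)
        year_from = forms.IntegerField(required=False)
        year_to = forms.IntegerField(required=False)

    def film_search(request):
        form = FilmSearchForm(request.GET or None)
        films = Film.objects.all()
        if form.is_valid():
            data = form.cleaned_data
            # Narrow the queryset only for the fields the user filled in.
            if data["director"]:
                films = films.filter(director__name__icontains=data["director"])
            if data["year_from"] is not None:
                films = films.filter(year__gte=data["year_from"])
            if data["year_to"] is not None:
                films = films.filter(year__lte=data["year_to"])
        return render(request, "film_search.html", {"form": form, "films": films})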
1
2
0
I have two models: Director and Film. I want to create a web query form so that a user can search for something like "All films from director 'Steven Spielberg' between 1990 and 1998". I'm just curious what the best and simplest way to do this would be. Thanks!
Creating a web query form
0.066568
0
0
2,579
10,389,985
2012-04-30T20:15:00.000
0
0
1
0
python,deployment,compiler-construction,pip,binaries
10,390,096
3
false
0
0
It would put a heavy strain on the server?
1
4
0
Where I currently work we've had a small debate about deploying our Python code to the production servers. I voted to build binary dependencies (like the Python MySQL drivers) on the server itself, just using pip install -r requirements.txt. This was quickly vetoed with no better explanation than "we don't put compilers on the live servers". As a result our deployment process is becoming convoluted and over-engineered simply to avoid this compilation step. My question is this: What's the reason these days to avoid having a compiler on live servers?
What are the reasons for not hosting a compiler on a live server?
0
0
0
1,648
10,390,689
2012-04-30T21:15:00.000
0
0
0
0
python,django
10,393,642
1
false
1
0
You could try returning None or raising NotImplementedError in __new__ in the base class. I don't know if that would affect anything else, but it's worth a shot.
1
1
0
In Django, if I make an abstract model class and then have actual derived classes, only those classes will have an associated table, and the abstract class cannot be instantiated by itself. If I remove the abstract=True meta information, then an actual table is created for the base class, but doing so allows client code to create an object of the base class. Is there a way of forcing client code to always instantiate derived classes, while having a table associated with the base class?
Abstract model class in django, but with table
0
0
0
657
10,393,531
2012-05-01T04:24:00.000
2
1
0
1
python,google-app-engine
10,398,726
2
false
1
0
Don't forget that the remote API executes your code locally and only calls the App Engine servers for datastore/blobstore/etc. operations. So in essence, you're running code that's hitting a database living over the network. It's definitely slower.
1
2
0
I use the remote API for some utility tasks, and I've noticed that it is orders of magnitude slower than code running on App Engine. A simple get_by_id(list) took a couple of minutes using the remote API, and a couple of seconds running on App Engine. The logs show that the remote API fetched the objects separately, each fetch taking a couple of seconds; whereas on App Engine the whole list of objects is retrieved in about the same time. Is there any way to improve this situation?
Remote API is extremely slow
0.197375
0
0
415
10,394,723
2012-05-01T07:20:00.000
0
0
1
0
python
10,394,787
4
false
0
0
I think the Linux commands unix2dos, dos2unix and iconv will be helpful, e.g.: iconv -f latin-1 -t UTF-8 latin.txt > utf8.txt
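The same conversion can be done in Python itself; a sketch, assuming the input really is Latin-1 (the filenames are placeholders):

    # newline="" disables newline translation, so the CRLF terminators
    # pass through unchanged; only the character encoding is converted.
    with open("latin.txt", "r", encoding="latin-1", newline="") as src:
        text = src.read()
    with open("utf8.txt", "w", encoding="utf-8", newline="") as dst:
        dst.write(text)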
1
1
0
How can I convert non-ISO extended-ASCII English text with CRLF line terminators to UTF-8 in Python?
File encoding from English text to UTF-8
0
0
0
2,262
10,395,691
2012-05-01T09:12:00.000
1
0
1
0
python,numpy
10,395,730
1
true
0
0
"then it does not recognize the module zeroes in the program" - Make sure you don't have a file called numpy.py in your subdirectory. If you do, it would shadow the "real" numpy module and cause the symptoms you describe.
1
1
1
I have used import numpy as np in my program, and when I try to execute np.zeroes to create a numpy array, it does not recognize the module zeroes in the program. This happens when I execute in the subdirectory where the Python program is. If I copy it to the root folder and execute it there, it shows the results. Can someone guide me as to why this is happening and what I can do to get the program executed in the subdirectory itself? Thanks
Numpy cannot be accessed in sub directories
1.2
0
0
73
10,396,315
2012-05-01T10:16:00.000
2
0
0
0
python,django,database-design,mongodb,redis
10,396,700
4
false
1
0
There's a huge distinction to be made between Redis and MongoDB for your particular needs, in that Redis, unlike MongoDB, doesn't facilitate value queries. You can use MongoDB to embed the comments within the post document, which means you get the post and the comments in a single query, yet you could also query for post documents based on tags, the author, etc. You'll definitely want to go with MongoDB. Redis is great, but it's not a proper fit for what I believe you'll need from it.
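A pymongo sketch of that embedding (the database, collection and field names are assumptions for illustration):

    from pymongo import MongoClient

    db = MongoClient().social_app

    post_id = db.posts.insert_one({
        "author": "alice",
        "text": "Hello world",
        "tags": ["intro"],
        "comments": [],
    }).inserted_id

    # Append a comment atomically; a single later find() returns the
    # post together with all of its comments.
    db.posts.update_one(
        {"_id": post_id},
        {"$push": {"comments": {"author": "bob", "text": "Welcome!"}}},
    )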
3
0
0
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
mongo db or redis for a facebook like site?
0.099668
1
0
1,158
10,396,315
2012-05-01T10:16:00.000
0
0
0
0
python,django,database-design,mongodb,redis
10,403,789
4
false
1
0
First, loosely couple your app and your persistence so that you can swap them out at a very granular level. For example, you want to be able to move one service from mongo to redis as your needs evolve. Be able to measure your services and appropriately respond to them individually. Second, you are unlikely to find one persistence solution that fits every workflow in your application at scale. Don't be afraid to use more than one. Mongo is a good tool for a set of problems, as is Redis, just not necessarily the same problems.
3
0
0
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
mongo db or redis for a facebook like site?
0
1
0
1,158
10,396,315
2012-05-01T10:16:00.000
1
0
0
0
python,django,database-design,mongodb,redis
10,396,466
4
false
1
0
These things are subjective and can be looked at from different directions. But if you have already decided to go with a NoSQL solution and are trying to choose between MongoDB and Redis, I think it is better to go with MongoDB: you should be able to store a large number of posts, and MongoDB documents are better suited to representing posts. Redis can only store up to its max memory limit, but is super fast. So if you need fast access to some things, you can save posts in MongoDB and then keep the ids of posts in Redis for faster access.
3
0
0
I'm building a social app in Django; the architecture of the site will be very similar to Facebook. There will be posts, and posts will have comments. Both posts and comments will have metadata like date, author, tags and votes. I decided to go with a NoSQL database because of the ease with which we can add new features. I settled on MongoDB, as I can easily store a post and its comments in a single document. I'm having second thoughts now: would Redis be better than Mongo for this kind of app? Update: I have decided to go with MongoDB, and will use Redis for the user home page and home page if necessary.
mongo db or redis for a facebook like site?
0.049958
1
0
1,158
10,397,695
2012-05-01T12:34:00.000
6
0
1
0
python,multithreading
10,397,794
2
false
0
0
A few ideas: web crawler - have a pool of threads getting work from a dispatcher via a queue, downloading web pages and returning the results somewhere; chat server - accept permanent connections from users and dispatch messages from one to another; mp3 file organizer - rebuild a music library's structure from mp3 tag data and reorganize the files into folders, with multiple threads working at once. I'll edit with some more ideas if I think of any. EDIT: Since Python is limited to one CPU per process no matter how many threads you use, threading will get you nowhere if you want to parallelize CPU-consuming work; use the multiprocessing interface instead. It's almost identical to the threading API, but dispatches work to subprocesses that can use more CPU cores.
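A minimal sketch of the crawler idea, with a pool of worker threads fed from a queue (the URLs are placeholders):

    import queue
    import threading
    import urllib.request

    urls = queue.Queue()
    results = queue.Queue()

    def worker():
        while True:
            url = urls.get()
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    results.put((url, len(resp.read())))
            except OSError as exc:
                results.put((url, exc))
            finally:
                urls.task_done()

    for _ in range(4):  # the pool of threads
        threading.Thread(target=worker, daemon=True).start()

    for u in ["https://example.com", "https://example.org"]:
        urls.put(u)
    urls.join()  # block until every queued URL has been processed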
2
2
0
I want to learn threading and multiprocessing in Python. I don't know what kind of project to take up for this. I want to be able to deal with all the related objects like Locks, Mutexes, Conditions, Semaphores, etc. Please suggest a project type that's best for me. P.S. Along with the project, please suggest any tools to debug / profile / load-test my app so that I can gauge how good my threaded implementations are.
What type of project will help me learn thread programming
1
0
0
1,969
10,397,695
2012-05-01T12:34:00.000
0
0
1
0
python,multithreading
10,397,753
2
false
0
0
I propose you attempt to program a very simple database server. Each client can connect to the server and do create, read, update, delete on a set of entities. Implementation-wise, the server should have one thread for each client, all operating on a global set of entities, which are protected using locks. For learning how to use condition variables, the server should also implement a notify method, which allows a client to be notified when an entity changes. Good luck! NOTE: Using threads is not the most efficient way to program a simple database server, but I think it is a good project for self-improvement.
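A compact sketch of just the locking and notification parts of such a server (the class and method names are made up; the networking layer is omitted):

    import threading

    class TinyStore:
        def __init__(self):
            self._data = {}
            self._cond = threading.Condition()  # owns the lock protecting _data

        def update(self, key, value):
            with self._cond:
                self._data[key] = value
                self._cond.notify_all()  # wake clients waiting on changes

        def wait_for_change(self, key, old_value, timeout=None):
            # Block until the entity differs from old_value, then return it.
            with self._cond:
                self._cond.wait_for(
                    lambda: self._data.get(key) != old_value, timeout=timeout)
                return self._data.get(key)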
2
2
0
I want to learn threading and multiprocessing in Python. I don't know what kind of project to take up for this. I want to be able to deal with all the related objects like Locks, Mutexes, Conditions, Semaphores, etc. Please suggest a project type that's best for me. P.S. Along with the project, please suggest any tools to debug / profile / load-test my app so that I can gauge how good my threaded implementations are.
What type of project will help me learn thread programming
0
0
0
1,969
10,398,315
2012-05-01T13:26:00.000
0
1
1
0
javascript,python,google-app-engine
10,406,398
2
false
1
0
To my knowledge, there are no complete and robust JavaScript interpreters implemented in Python. Your best option is probably to deploy an alternate version of your app with the Rhino interpreter in Java, and call it as a web service from the main version of your app.
1
4
0
I am trying to execute simple JavaScript code in a pure Python environment (Google App Engine). I've tried PYJON, but it does not seem mature enough for real use (it does not handle e.g. forward-referenced functions or do-while, and it hangs on array usage). One idea would be to use pynarcissus to convert JavaScript into a syntax tree and then convert this tree into a Python AST which could be compiled into Python bytecode. Has anybody done this before? Any problems with this idea?
Converting JavaScript into Python bytecode
0
0
0
846
10,398,470
2012-05-01T13:37:00.000
2
0
1
0
python
10,398,565
4
false
0
0
While you can use timeit to find out (and I encourage you to do so, if for no other reason than to learn how it works), in the end it almost certainly doesn't matter. frozensets are designed specifically to be hashable, so I would be shocked if their hash method is linear time. This kind of micro-optimisation can only matter if you need to get through a fixed (large) number of look-ups in a very short amount of time in a realtime application. Update: Look at the various updates and comments to Lattyware's answer - it took a lot of collective effort (well, relatively), to strip out the confounding factors, and show that the performance of the two approaches is almost the same. The performance hits were not where they were assumed to be, and it will be the same in your own code. Write your code to work, then profile to find the hotspots, then apply algorithmic optimisations, then apply micro-optimisations.
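A quick way to measure it yourself, sketched with timeit (absolute numbers will vary by machine and Python version):

    import timeit

    setup = """
    t = ('row_label', 'col_label')
    f = frozenset(t)
    d_t = {t: 1}
    d_f = {f: 1}
    """
    print("tuple key:    ", timeit.timeit("d_t[t]", setup=setup))
    print("frozenset key:", timeit.timeit("d_f[f]", setup=setup))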
1
7
0
I have a script that makes many calls to a dictionary using a key consisting of two variables. I know that my program will encounter the two variables again in the reverse order which makes storing the key as a tuple feasible. (Creating a matrix with the same labels for rows and columns) Therefore, I was wondering if there was a performance difference in using a tuple over a frozenset for a dictionary key.
Is there a performance difference in using a tuple over a frozenset as a key for a dictionary?
0.099668
0
0
3,598
10,403,720
2012-05-01T20:14:00.000
2
1
0
0
c++,python,boost,windows-7,boost-python
10,486,821
1
false
0
0
Ah, I got it figured out. The problems were Python 3 and Boost not properly linking the static libraries. I switched to Python 2.7 and defined BOOST_PYTHON_STATIC_LIB before including any headers. Everything works fine now. Thanks for the help.
1
2
0
I've been trying to build Boost.Python for about two days now and am incredibly frustrated. When I build the library, it tells me that it was built successfully. When I try to run anything using the library, I get errors such as: undefined reference to imp__ZN5boost6python6detail11init_moduleEPKcPFvvE In function ZNK5boost6python9type_info4nameEv: undefined reference to imp__ZN5boost6python6detail12gcc_demangleEPKc I have absolutely no idea why this is happening, but I'd appreciate any ideas. BTW, I'm using Boost 1.49.0 with Python 3.0, and the other libraries seem to have been built fine. I've already used the serialization library and it works. Let me know if you need any more info. Thanks.
Building Boost Python with mingw on Windows7 64bit
0.379949
0
0
362
10,407,046
2012-05-02T02:46:00.000
1
0
0
0
python,django,pdf,reportlab
10,422,659
2
false
1
0
If html2pdf doesn't do what you need, you can do everything you want to do with ReportLab. Have a look at the ReportLab manual, in particular the parts on Platypus. This is a part of the ReportLab library that allows you to build PDFs out of objects representing page parts (paragraphs, tables, frames, layouts, etc.).
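A minimal Platypus sketch (the output filename and the rows are placeholders for whatever the filtered admin queryset yields):

    from reportlab.lib.pagesizes import letter
    from reportlab.lib.styles import getSampleStyleSheet
    from reportlab.platypus import Paragraph, SimpleDocTemplate, Table

    styles = getSampleStyleSheet()
    rows = [["Name", "Status"], ["Widget A", "active"], ["Widget B", "retired"]]

    doc = SimpleDocTemplate("admin_export.pdf", pagesize=letter)
    # build() lays the flowables out over as many pages as needed.
    doc.build([Paragraph("Filtered results", styles["Title"]), Table(rows)])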
1
1
0
What I am trying to accomplish is to allow users to view information in the Django admin console and to save and print a PDF of the information in front of them, based upon however they sorted/filtered the data. I have seen a lot of documentation on ReportLab, but mostly for just drawing lines and whatnot. How can I simply output the admin results to a PDF, if that is even possible? I am open to other suggestions if ReportLab is not the ideal way to get this done. Thanks in advance.
Django reportlab - Print current page (admin console) to PDF
0.099668
0
0
976
10,408,208
2012-05-02T05:31:00.000
1
0
0
0
python,django,web,cgi,pyramid
10,414,361
5
false
1
0
If the application needs to talk to a database that already exists, then Django won't buy you much, IMO, since the admin interface won't work unless the db schema adheres to what Django expects (auto-increment int primary keys etc.); the same goes for any other web framework that presumes a particular schema. So SQLAlchemy is your best bet. It has an ORM layer, but you don't have to use it; you can get a lot of bang for the buck just using the query interfaces. As far as web frameworks go, that narrows it down to anything that can use SQLAlchemy - which is anything but Django, Zope and probably web2py, for the reasons mentioned above. For Zope, its value is somewhat derived from the fact that it's backed by ZODB, but ZODB is not going to help you at all with your existing database and data. Out of the remaining web frameworks, the selection criterion I would use is their ability to route requests to views, and how well that matches your URL-generation strategy. IMO, Pyramid is very flexible in this area, but you might not need that; you may be able to get by with Flask or Bottle, or even straight WebOb. Another slightly less important criterion is the template engine/language; most frameworks will support the more popular ones, such as Jinja2. My personal choice is Pyramid because it scales nicely from super easy to super hairy in the request-routing department. But again, depending on how you want your URLs to work, you may not need that.
4
2
0
I'm working on a project converting a 50 MB Python "graduated interval recall" rating system for pictures and text into a website-based application (and then designing a website around it). It needs to connect to a database to store user information frequently, so it needs to run server-side, correct? Assuming I know nothing, what is the best structure to accomplish this? There seem to be many different options and I feel lost. I've been using CGI to create a web UI for the original Python code. Is this even possible to implement? What about Pyramid / uWSGI / Pylons / Flask or Django? (Although I was told to refrain from Django for this project.)
Python program to Website Application
0.039979
0
0
575
10,408,208
2012-05-02T05:31:00.000
0
0
0
0
python,django,web,cgi,pyramid
10,408,572
5
false
1
0
Django has a command (./manage.py inspectdb) that can help you make initial models of your current database. If you decide to redesign the database, this will still make it easier to move the data into the new schema. Personally, I like Django, but the others may be very well suited to your application. To communicate back to the server you could probably use AJAX.
4
2
0
I'm working on a project converting a 50 MB Python "graduated interval recall" rating system for pictures and text into a website-based application (and then designing a website around it). It needs to connect to a database to store user information frequently, so it needs to run server-side, correct? Assuming I know nothing, what is the best structure to accomplish this? There seem to be many different options and I feel lost. I've been using CGI to create a web UI for the original Python code. Is this even possible to implement? What about Pyramid / uWSGI / Pylons / Flask or Django? (Although I was told to refrain from Django for this project.)
Python program to Website Application
0
0
0
575
10,408,208
2012-05-02T05:31:00.000
7
0
0
0
python,django,web,cgi,pyramid
10,409,701
5
false
1
0
Well, it may be difficult to give you good advice because the description of your project is quite vague - what in the world is "a 50mb python Graduated interval recall rating system for pictures and text program"??? :) - but I'll try to outline the difference between the options you're listing:

Django is a sort of integrated solution - it includes a templating system, an ORM, a forms framework and lots more. Because those things are all closely tied together, Django provides some niceties such as a built-in admin interface, pluggable apps etc., which make kick-starting development of a traditional website easier, as you don't need to build those things yourself. For example, to build a blog site with Django, you define a database model, a couple of routes and a couple of views, and that's it - you can add and edit blog entries using the built-in admin interface and authenticate using the pluggable authentication module. But there's a price, of course - to ensure all those bits work together, Django to some extent requires you to use the technologies provided by Django; i.e., you have to define your models using the Django ORM and write templates using Django templates. You can swap different bits for something else, but they understandably would not work well with the rest of the framework - e.g., you can use another ORM, such as SQLAlchemy, to access the database, but such models won't work with Django's admin interface. To some extent, Django also expects a particular structure of database tables (i.e., it expects to be able to create those tables based on models defined in Python code), which makes working with an existing database more difficult. Also, my understanding is that it expects you to have an SQL database. So, in my opinion, Django is a very good choice for building a "typical" Django website (it was built for news websites) which can make use of existing pluggable apps and other Django features.

Pyramid, on the other hand, does not require you to use any particular technology for database access - in fact, it does not require you to have a database at all. You can build an application which works with data stored on the filesystem, in an object database such as ZODB, or in some distributed NoSQL storage. Maybe even an XML file and a bunch of images... your imagination is your limit. When using an SQL database, it does not expect the database to have a certain structure. Also, SQLAlchemy, Pyramid's recommended ORM, is considered to be more flexible and powerful than the Django ORM. It does not require you to use any particular templating or form library, so you can choose whatever suits your needs best. Pyramid does not even require you to use route mapping, which is a cornerstone feature of most web frameworks - in addition to route mapping, Pyramid supports URL traversal, which can be a very powerful way to work with hierarchical data structures. While not requiring you to use any particular technology, Pyramid does provide some sane templates for typical use cases. The cost of this flexibility is that it may be more difficult to find existing "apps" which can be plugged into your very custom Pyramid website without any changes - although Pyramid's excellent WSGI support mitigates that.

Pylons is called Pyramid now, after the project merged with repoze.bfg some time ago. uWSGI is more of an application/protocol to serve a Pyramid application (or another WSGI-conformant application). Flask - never used it; maybe someone else will give you an overview.

So, in short, the choice between Django and Pyramid boils down to the question "How much of Django's built-in features will I be able to use on my site?" - because if you're not going to use Django's automatic admin or make heavy use of third-party pluggable apps, everything else is better in Pyramid :)
4
2
0
I'm working on a project converting a 50 MB Python "graduated interval recall" rating system for pictures and text into a website-based application (and then designing a website around it). It needs to connect to a database to store user information frequently, so it needs to run server-side, correct? Assuming I know nothing, what is the best structure to accomplish this? There seem to be many different options and I feel lost. I've been using CGI to create a web UI for the original Python code. Is this even possible to implement? What about Pyramid / uWSGI / Pylons / Flask or Django? (Although I was told to refrain from Django for this project.)
Python program to Website Application
1
0
0
575
10,408,208
2012-05-02T05:31:00.000
1
0
0
0
python,django,web,cgi,pyramid
10,408,245
5
false
1
0
I have been told that Pylons (now Pyramid) is quite good, but I personally use Django and I'm quite happy with it. Do not even try to use CGI, because that's the same mistake I made - and I figured out later that changing all the HTML was a pain in the arse.
4
2
0
I'm working on a project converting a 50 MB Python "graduated interval recall" rating system for pictures and text into a website-based application (and then designing a website around it). It needs to connect to a database to store user information frequently, so it needs to run server-side, correct? Assuming I know nothing, what is the best structure to accomplish this? There seem to be many different options and I feel lost. I've been using CGI to create a web UI for the original Python code. Is this even possible to implement? What about Pyramid / uWSGI / Pylons / Flask or Django? (Although I was told to refrain from Django for this project.)
Python program to Website Application
0.039979
0
0
575
10,408,826
2012-05-02T06:35:00.000
1
0
0
0
python,django,path,strip
67,345,321
4
false
1
0
You can try: "/get/category".strip("/")
1
88
0
I am using request.path to return the current URL in Django, and it is returning /get/category. I need it as get/category (without leading and trailing slash). How can I do this?
Remove leading and trailing slash / in python
0.049958
0
0
95,279
10,408,927
2012-05-02T06:43:00.000
13
0
0
0
python,xml,elementtree
51,963,017
5
false
0
0
The pydoc mentions using the built-in list() on the node to get its child elements: list(elem)
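A short illustration: list() gives just the direct children, while iter() is the standard way to walk all sub-elements.

    import xml.etree.ElementTree as ET

    root = ET.fromstring("<a><b/><c><d/></c></a>")
    print(list(root))         # direct children only: elements b and c
    print(list(root.iter()))  # the whole subtree, including root itself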
1
33
0
I want to find a way to get all the sub-elements of an element tree, like ElementTree.getchildren() does, since getchildren() has been deprecated since Python 2.7. I don't want to use it anymore, though I can still use it currently.
How to get all sub-elements of an element tree with Python ElementTree?
1
0
1
103,415
10,409,674
2012-05-02T07:46:00.000
1
0
0
0
python,image-processing,numpy,boolean,python-imaging-library
10,409,722
4
false
0
0
Depending on the size of your blob, I would say that dramatically reducing the resolution of your image may achieve what you want. Reduce it to a 1/10 resolution, find the one white pixel, and then you have a precise idea of where to search for the centroid.
2
4
1
I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? Two lines of thought so far: (1) make use of the numpy.where() function; (2) sum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to Python.
Finding a specific index in a binary image in linear time?
0.049958
0
0
1,130
10,409,674
2012-05-02T07:46:00.000
2
0
0
0
python,image-processing,numpy,boolean,python-imaging-library
10,409,877
4
false
0
0
The centroid's coordinates are the arithmetic means of the points' coordinates. If you want the linear solution, just go pixel by pixel and compute the mean of each coordinate over the white pixels; that's the centroid. There is probably no way to do better than linear in the general case; however, if your circular object is much smaller than the image, you can speed it up by searching for it first (sampling a number of random pixels, or a grid of pixels if you know the blob is big enough) and then using BFS or DFS to find all the white points.
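With numpy this mean-of-coordinates is essentially a one-liner; a sketch on a toy image:

    import numpy as np

    img = np.zeros((480, 640), dtype=np.uint8)
    img[200:220, 300:320] = 255  # a small square "blob"

    # argwhere returns the (row, col) of every nonzero pixel;
    # averaging them gives the centroid.
    cy, cx = np.argwhere(img).mean(axis=0)
    print(cy, cx)  # ~ (209.5, 309.5)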
2
4
1
I've got a 640x480 binary image (0s and 255s). There is a single white blob in the image (nearly circular) and I want to find the centroid of the blob (it's always convex). Essentially, what we're dealing with is a 2D boolean matrix. I'd like the runtime to be linear or better if possible - is this possible? Two lines of thought so far: (1) make use of the numpy.where() function; (2) sum the values in each column and row, then find where the max value is based on those numbers... but is there a quick and efficient way to do this? This might just be a case of me being relatively new to Python.
Finding a specific index in a binary image in linear time?
0.099668
0
0
1,130
10,411,605
2012-05-02T10:06:00.000
1
1
0
0
python,usb,driver,gpib
14,773,570
3
false
0
0
Install the necessary drivers, probably NI-488.2 and NI-VISA. Then use PyVISA, a Python wrapper around VISA, to talk to the device.
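A minimal PyVISA sketch using the modern pyvisa API (the GPIB address -- board 0, primary address 14 -- and the "R" command are assumptions; use whatever your instrument is configured for):

    import pyvisa

    rm = pyvisa.ResourceManager()          # picks up the installed VISA backend
    inst = rm.open_resource("GPIB0::14::INSTR")
    inst.write("R")                        # send the instrument's run command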
2
2
0
I need to convert GPIB to USB using NI-488.2 from National Instruments, and I need to create software complete with a GUI using Python. The old machine that my company uses for measuring is the Model 273A potentiostat/galvanostat from Princeton Applied Research. I'm using Windows 7 and Python 2.7 with wxPython, and I need to program in Python. I just need to send a simple command, for example R, to run the machine. Connections: from the measuring machine via GPIB to the NI-488.2 (a card to convert GPIB to USB), and from the NI-488.2 to the PC via USB. The questions are: How can I send any command to the machine? From what I know, I need to send it to the driver of the NI-488.2 - is that correct? (If correct, see question 2; if not, jump to question 3.) How can I send commands from my own Python code to the NI-488.2 driver? How can I see the code of any driver - in my case, the driver for the NI-488.2? (The driver can be downloaded for free from the National Instruments website, but registration is needed.)
converting GPIB to USB using NI-488.2
0.066568
0
0
2,429
10,411,605
2012-05-02T10:06:00.000
2
1
0
0
python,usb,driver,gpib
10,613,554
3
false
0
0
You need to install the driver for the GPIB-USB cable, and the registration process is quite simple: basically, you just need to leave an email address. After you install the driver, you can find much useful information in its "help". Generally you need to read the user manual of your device. The idea is that you should use ctypes to interface with the GPIB-USB's DLL in Python.
2
2
0
I need to convert GPIB to USB using NI-488.2 from National Instruments, and I need to create software complete with a GUI using Python. The old machine that my company uses for measuring is the Model 273A potentiostat/galvanostat from Princeton Applied Research. I'm using Windows 7 and Python 2.7 with wxPython, and I need to program in Python. I just need to send a simple command, for example R, to run the machine. Connections: from the measuring machine via GPIB to the NI-488.2 (a card to convert GPIB to USB), and from the NI-488.2 to the PC via USB. The questions are: How can I send any command to the machine? From what I know, I need to send it to the driver of the NI-488.2 - is that correct? (If correct, see question 2; if not, jump to question 3.) How can I send commands from my own Python code to the NI-488.2 driver? How can I see the code of any driver - in my case, the driver for the NI-488.2? (The driver can be downloaded for free from the National Instruments website, but registration is needed.)
converting GPIB to USB using NI-488.2
0.132549
0
0
2,429
10,412,063
2012-05-02T10:39:00.000
0
1
0
0
python,nginx,web,fastcgi
10,412,251
4
false
1
0
All the same, you must use a WSGI server, as nginx does not fully support this protocol.
2
7
0
I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework. I basically need to be able to get GET and POST params - probably (but not necessarily) in a way similar to PHP - and to get some other SERVER variables like REQUEST_URI, QUERY, etc. I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial, or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host, would be most welcome!
How to run nginx + python (without django)
0
0
0
7,440
10,412,063
2012-05-02T10:39:00.000
4
1
0
0
python,nginx,web,fastcgi
10,417,619
4
true
1
0
You should look into using Flask -- it's an extremely lightweight interface to a WSGI server (werkzeug) which also includes a templating library, should you ever want to use one. But you can totally ignore it if you'd like.
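A minimal Flask sketch of PHP-style parameter access (the route and parameter names are made up for illustration):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/submit", methods=["GET", "POST"])
    def submit():
        name = request.args.get("name", "")        # query string, like $_GET
        comment = request.form.get("comment", "")  # form body, like $_POST
        upload = request.files.get("attachment")   # multipart/form-data
        return "got %r, %r, file=%s" % (name, comment, upload is not None)

    if __name__ == "__main__":
        app.run()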
2
7
0
I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework. I basically need to be able to get GET and POST params - probably (but not necessarily) in a way similar to PHP - and to get some other SERVER variables like REQUEST_URI, QUERY, etc. I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial, or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host, would be most welcome!
How to run nginx + python (without django)
1.2
0
0
7,440
10,415,429
2012-05-02T14:06:00.000
12
0
0
0
python,django,tornado
10,415,532
3
true
1
0
Add the path to the Django project to the Tornado application's PYTHONPATH env-var and set DJANGO_SETTINGS_MODULE appropriately. You should then be able to import your models and use then as normal with Django taking care of initial setup on the first import. You shouldn't require any symlinks.
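A sketch of that bootstrap (the project path, settings module and model are placeholders; on Django >= 1.7 the explicit django.setup() call is also required, while the old versions of 2012 configured themselves on first import):

    import os
    import sys

    sys.path.append("/path/to/django_project")
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    import django
    django.setup()  # no-op idea on old Django; required on modern versions

    from myapp.models import Entry  # hypothetical Django model
    print(Entry.objects.count())    # the ORM now works outside Django views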
1
22
0
I have an existing Django application with a database and corresponding models.py file. I have a new Tornado application that provides a web service to other applications. It needs to read/write from that same database, and there is code in the models file I'd like to use. How can I best use the Django database and models in my Tornado request handlers? Is it as simple as making a symbolic link to the models.py Django project folder, importing Django modules, and using it? I guess I'd have to do settings.configure(), right? Thanks!
How can I use the Django ORM in my Tornado application?
1.2
0
0
6,269
10,419,665
2012-05-02T18:34:00.000
11
1
0
0
python,django,postgresql,connection-pooling,pgbouncer
10,419,731
2
false
1
0
PgBouncer reduces the latency in establishing connections by serving as a proxy which maintains a connection pool. This may help speed up your application if you're opening many short-lived connections to Postgres. If you only have a small number of connections, you won't see much of a win.
2
30
0
I have some management commands that are based on gevent. Since my management command makes thousands of requests, I can turn all socket calls into non-blocking calls using gevent. This really speeds up my application, as I can make requests simultaneously. Currently the bottleneck in my application seems to be Postgres. It seems that this is because the psycopg library used for connecting from Django is written in C and does not support asynchronous connections. I've also read that using PgBouncer can speed up Postgres by 2X. This sounds great, but it would be great if someone could explain how PgBouncer works and helps. Thanks
How does pgBouncer help to speed up Django
1
0
0
41,803
10,419,665
2012-05-02T18:34:00.000
105
1
0
0
python,django,postgresql,connection-pooling,pgbouncer
10,420,469
2
true
1
0
Besides saving the overhead of connect & disconnect where this is otherwise done on each request, a connection pooler can funnel a large number of client connections down to a small number of actual database connections. In PostgreSQL, the optimal number of active database connections is usually somewhere around ((2 * core_count) + effective_spindle_count). Above this number, both throughput and latency get worse. NOTE: Recent versions have improved concurrency, so in 2022 I would recommend something more like ((4 * core_count) + effective_spindle_count).

Sometimes people will say "I want to support 2000 users, with fast response time." It is pretty much guaranteed that if you try to do that with 2000 actual database connections, performance will be horrible. If you have a machine with four quad-core processors and the active data set is fully cached, you will see much better performance for those 2000 users by funneling the requests through about 35 database connections.

To understand why that is true, this thought experiment should help. Consider a hypothetical database server machine with only one resource to share -- a single core. This core will time-slice equally among all concurrent requests with no overhead. Let's say 100 requests all come in at the same moment, each of which needs one second of CPU time. The core works on all of them, time-slicing among them until they all finish 100 seconds later. Now consider what happens if you put a connection pool in front which will accept 100 client connections but make only one request at a time to the database server, putting any requests which arrive while the connection is busy into a queue. Now when 100 requests arrive at the same time, one client gets a response in 1 second; another gets a response in 2 seconds, and the last client gets a response in 100 seconds. Nobody had to wait longer to get a response, throughput is the same, but the average latency is 50.5 seconds rather than 100 seconds.

A real database server has more resources which can be used in parallel, but the same principle holds, once they are saturated, you only hurt things by adding more concurrent database requests. It is actually worse than the example, because with more tasks you have more task switches, increased contention for locks and cache, L2 and L3 cache line contention, and many other issues which cut into both throughput and latency. On top of that, while a high work_mem setting can help a query in a number of ways, that setting is the limit per plan node for each connection, so with a large number of connections you need to leave this very small to avoid flushing cache or even leading to swapping, which leads to slower plans or such things as hash tables spilling to disk.

Some database products effectively build a connection pool into the server, but the PostgreSQL community has taken the position that since the best connection pooling is done closer to the client software, they will leave it to the users to manage this. Most poolers will have some way to limit the database connections to a hard number, while allowing more concurrent client requests than that, queuing them as necessary. This is what you want, and it should be done on a transactional basis, not per statement or connection.
2
30
0
I have some management commands that are based on gevent. Since my management command makes thousands of requests, I can turn all socket calls into non-blocking calls using gevent. This really speeds up my application, as I can make requests simultaneously. Currently the bottleneck in my application seems to be Postgres. It seems that this is because the psycopg library used for connecting from Django is written in C and does not support asynchronous connections. I've also read that using PgBouncer can speed up Postgres by 2X. This sounds great, but it would be great if someone could explain how PgBouncer works and helps. Thanks
How does pgBouncer help to speed up Django
1.2
0
0
41,803
10,420,966
2012-05-02T20:14:00.000
0
0
0
0
python,linear-algebra,robotics,calibration
10,421,975
1
false
0
0
You really need four data points to characterize three independent axes of movement. Can you add some other constraints, i.e., are the manipulator axes orthogonal to each other, even if not fixed relative to the stage's axes? Do you know the manipulator's alignment roughly, even if not exactly? What takes the most time - moving the stage to re-center? Can you move the manipulator and stage at the same time? How wide is the microscope's field of view? How much distance distortion is there near the edges of the view - does it actually have to be re-centered each time to be accurate? Maybe we could come up with a reverse-screen-distortion mapping instead?
1
0
1
I'm working on a research project involving a microscope (with a camera connected to the view port; the video feed is streamed to an application we're developing) and a manipulator arm. The microscope and manipulator arm are both controlled by a Luigs & Neumann control box (very obsolete - the computer interfaces with it with a serial cable and its response time is slowwww.) The microscope can be moved in 3 dimensions; X, Y, and Z, whose axes are at right angles to one another. When the box is queried, it will return decimal values for the position of each axis of each device. Each device can be sent a command to move to a specific position, with sub-micrometer precision. The manipulator arm, however, is adjustable in all 3 dimensions, and thus there is no guarantee that any of its axes are aligned at right angles. We need to be able to look at the video stream from the camera, and then click on a point on the screen where we want the tip of the manipulator arm to move to. Thus, the two coordinate systems have to be calibrated. Right now, we have achieved calibration by moving the microscope/camera's position to the tip of the manipulator arm, setting that as the synchronization point between the two coordinate systems, and moving the manipulator arm +250um in the X direction, moving the microscope to the tip of the manipulator arm at this new position, and then using the differences between these values to define a 3d vector that corresponds to the distance and direction moved by the manipulator, per unit in the microscope coordinate system. This is repeated for each axis of the manipulator arm. Once this data is obtained, in order to move the manipulator arm to a specific location in the microscope coordinate system, a system of equations can be solved by the program which determines how much it needs to move the manipulator in each axis to move it to the center point of the screen. This works pretty reliably so far. The issue we're running into here is that due to the slow response time of the equipment, it can take 5-10 minutes to complete the calibration process, which is complicated by the fact that the tip of the manipulator arm must be changed occasionally during an experiment, requiring the calibration process to be repeated. Our research is rather time sensitive and this creates a major bottleneck in the process. My linear algebra is a little patchy, but it seems like if we measure the units traveled by the tip of the manipulator arm per unit in the microscope coordinate system and have this just hard coded into the program (for now), it might be possible to move all 3 axes of the manipulator a specific amount at once, and then to derive the vectors for each axis from this information. I'm not really sure how to go about doing this (or if it's even possible to do this), and any advice would be greatly appreciated. If there's any additional information you need, or if you need clarification on anything please let me know.
Manipulator/camera calibration issue (linear algebra oriented)
0
0
0
303
10,421,194
2012-05-02T20:29:00.000
1
0
0
0
python,layout,python-2.7,boxlayout,kivy
22,755,884
2
false
0
1
There is a tricky way to do that: use a GridLayout and set cols to 1.
1
3
0
I am testing kivy and I want to create a BoxLayout so as to stack some buttons. My problem is that the children added to the layout follow a bottom-to-top order, while I want the opposite. Do you know how I can reverse the order? Thanks!
How can I change the order of the BoxLayout in kivy?
0.099668
0
0
2,021
10,421,859
2012-05-02T21:16:00.000
0
0
1
0
python,memory-management,python-2.7
25,157,235
1
true
0
0
You need to use the VS X64 command prompt. I started "VS2013 x64 Cross Tools Command Prompt" (with 2013 Express for Desktop installed) and executed the build command, which completed without any errors.
1
3
0
I'm trying to use guppy to do some memory analysis for my Python program. I am using Windows 7 with Python 2.7 64-bit. I have checked out the latest version of guppy from the trunk: svn co https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy guppy When I do python setup.py build I get a bunch of errors. Has anyone compiled guppy for Windows 7 Python 2.7 64-bit? If so, how? If this isn't possible, what other Python memory analyzers would I be able to use? Thanks.
How do I compile guppy for Windows 64 bit Python 2.7?
1.2
0
0
1,153
10,423,245
2012-05-02T23:44:00.000
3
0
0
1
python,google-app-engine
10,423,664
1
true
1
0
No - get_application_id returns the ID of the app that is actually serving your request. You can examine the hostname to see if the request was directed to oldappid.appspot.com.
1
1
0
I was forced to alias my app name after migrating to the High Replication Datastore. I use the google.appengine.api.app_identity.get_application_id() method throughout my app, but now it returns the new app id instead of the original one even when visiting via the old app id url. Is there a way to output the original app id? thanks
get_application_id() behaviour with aliased app id
1.2
0
0
124
10,423,593
2012-05-03T00:30:00.000
3
0
1
0
python,excel,ms-word,character
10,423,918
2
true
0
0
Try using value.rstrip('\r\n') to remove any carriage returns (\r) or newlines (\n) at the end of your string value.
1
3
0
I am facing an issue with setting the value of an Excel cell. I get data from a table cell in an MS Word document (docx) and print it to the output console. The problem is that the data of the cell is just a word, "Hour", with no other apparent leading or trailing printable characters like whitespace. But when I print it using Python's print() function, it shows an unexpected character, like a small "?" in a rectangle. I don't know where it comes from. And when I write the same variable that holds the word "Hour" to an Excel cell, it shows a bold dot (.) in the cell. What can be the problem? Any help is much appreciated. I am using Python 3.2 and PyWin32 3.2 on Win7. Thanks.
Unwanted character in Excel Cell In Python
1.2
1
0
3,883
10,424,456
2012-05-03T02:48:00.000
4
0
0
1
python,django,amazon-s3,celery,sorl-thumbnail
11,048,085
3
false
1
0
As I understand it, sorl works correctly with S3 storage, but it's very slow. I assume you know which image sizes you need. You should launch the celery task after the image is uploaded. In the task you call sorl.thumbnail.default.backend.get_thumbnail(file, geometry_string, **options); sorl will generate a thumbnail and upload it to S3. The next time you request the image from a template, it's already cached and served directly from Amazon's servers. "a clean way to handle a placeholder thumbnail image while the image is being processed" - For this you will need to override the sorl backend. Add a new argument to the get_thumbnail function, e.g. generate=False, and pass generate=True when you call this function from celery. Then change the function's logic: if the thumbnail is not present and generate is True, work just like the standard backend; but if generate is False, return your placeholder image with text like "We are processing your image now, come back later" and do not call backend._create_thumbnail. You can launch a task in this case too, if you think the thumbnail could have been accidentally deleted. I hope this helps.
1
11
0
I'm surprised I don't see anything but "use celery" when searching for how to use celery tasks with sorl-thumbnails and S3. The problem: using remote storages causes massive delays when generating thumbnails (think 100s+ for a page with many thumbnails) while the thumbnail engine downloads originals from remote storage, crunches them, then uploads back to s3. Where is a good place to set up the celery task within sorl, and what should I call? Any of your experiences / ideas would be greatly appreciated. I will start digging around Sorl internals to find a more useful place to delay this task, but there are a few more things I'm curious about if this has been solved before. What image is returned immediately? Sorl must be told somehow that the image returned is not the real thumbnail. The cache must be invalidated when celery finishes the task. Handle multiple thumbnail generation requests cleanly (only need the first one for a given cache key) For now, I've temporarily solved this by using an nginx reverse proxy cache that can serve hits while the backend spends time generating expensive pages (resizing huge PNGs on a huge product grid) but it's a very manual process.
Pointers on using celery with sorl-thumbnails with remote storages?
0.26052
0
0
1,459
10,424,983
2012-05-03T04:09:00.000
21
0
1
0
python,operators
10,425,003
2
true
0
0
== invokes __eq__(). != invokes __ne__() if it exists; otherwise it is equivalent to not ==. Not unless the difference in (1) matters.
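A contrived sketch of where the two spellings can diverge, since a class may define __ne__ inconsistently with __eq__:

    class Weird:
        def __eq__(self, other):
            return True

        def __ne__(self, other):
            return True  # deliberately NOT the opposite of __eq__

    a, b = Weird(), Weird()
    print(not a == b)  # False -- goes through __eq__
    print(a != b)      # True  -- goes through __ne__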
1
13
0
I realized today while writing some Python that one could write the inequality operator as a!=b or not a==b. This got me curious: Do both ways behave exactly the same, or are there some subtle differences? Is there any reason to use one over the other? Is one more commonly used than the other?
Python inequalities: != vs not ==
1.2
0
0
24,506
10,426,506
2012-05-03T06:52:00.000
1
0
1
0
python,regex,solr
10,426,794
2
false
0
0
Your use case is very basic and doesn't require regex at all with Solr. It looks like you just may have a syntax issue. q=text:day OR text:run should do exactly what you're looking for.
2
0
0
I have indexes created on tables with data of the form: indexname='text'---->Today is a great day for running in the park. Now I want to perform a search on the indexes where 'day' or 'run' appears in the text. I have implemented a query like: q = 'text:(day or run*)' But this query is not returning any results from the indexes. Is this the correct way, or how can I improve my query by applying a regex?
Apply regex on Solr query?
0.099668
0
0
624
10,426,506
2012-05-03T06:52:00.000
0
0
1
0
python,regex,solr
10,434,663
2
false
0
0
Regex and wildcards are slow in search engines. You'll get better performance by pre-processing the terms in a language-sensitive way. You can match "run" to "running" with a stemmer, an analysis step that reduces different forms of a word to a common stem. When the query and the index term are both stemmed, then they will match. You should also look into the Extended Dismax (edismax) search handler. That will do some of the work of turning "day run" into a search for the individual words and the phrase, something like 'day OR run OR "day run"'. Then it can further expand that against multiple fields with different weights, all automatically.
2
0
0
I have indexes created on tables with data of the form: indexname='text'---->Today is a great day for running in the park. Now I want to search the indexes for text where only 'day' or 'run' appears. I have implemented the query like this: q = 'text:(day or run*)'. But this query is not returning any results from the indexes. Is this the correct way, or how can I improve my query by applying a regex?
Apply regex on Solr query?
0
0
0
624
10,427,900
2012-05-03T08:42:00.000
2
1
1
0
python,multithreading,parallel-processing,cpu-usage
10,428,163
3
false
0
0
Your code might be calling functions that use C/C++/etc. underneath. In that case, multiple threads may be used. Are you calling any libraries that are only Python bindings to more efficiently implemented functions?
3
1
1
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so. Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python. If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script. UPDATE: My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is. There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu.
Stop Python from using more than one cpu
0.132549
0
0
1,390
10,427,900
2012-05-03T08:42:00.000
1
1
1
0
python,multithreading,parallel-processing,cpu-usage
10,429,302
3
false
0
0
You can always set your process's affinity so it runs on only one CPU. Use the "taskset" command on Linux, or Process Explorer on Windows. This way, you should be able to tell whether your script has the same performance using one CPU or more.
3
1
1
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so. Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python. If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script. UPDATE: My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is. There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu.
Stop Python from using more than one cpu
0.066568
0
0
1,390
10,427,900
2012-05-03T08:42:00.000
1
1
1
0
python,multithreading,parallel-processing,cpu-usage
10,445,816
3
false
0
0
Could it be that your code uses SciPy or other numeric library for Python that is linked against Intel MKL or another vendor provided library that uses OpenMP? If the underlying C/C++ code is parallelised using OpenMP, you can limit it to a single thread by setting the environment variable OMP_NUM_THREADS to 1: OMP_NUM_THREADS=1 python myscript.py Intel MKL for sure is parallel in many places (LAPACK, BLAS and FFT functions) if linked with the corresponding parallel driver (the default link behaviour) and by default starts as many compute threads as is the number of available CPU cores.
3
1
1
I have a problem when I run a script with python. I haven't done any parallelization in python and don't call any mpi for running the script. I just execute "python myscript.py" and it should only use 1 cpu. However, when I look at the results of the command "top", I see that python is using almost 390% of my cpus. I have a quad core, so 8 threads. I don't think that this is helping my script to run faster. So, I would like to understand why python is using more than one cpu, and stop it from doing so. Interesting thing is when I run a second script, that one also takes up 390%. If I run a 3rd script, the cpu usage for each of them drops to 250%. I had a similar problem with matlab a while ago, and the way I solved it was to launch matlab with -singlecompthread, but I don't know what to do with python. If it helps, I'm solving the Poisson equation (which is not parallelized at all) in my script. UPDATE: My friend ran the code on his own computer and it only takes 100% cpu. I don't use any BLAS, MKL or any other thing. I still don't know what the cause for 400% cpu usage is. There's a piece of fortran algorithm from the library SLATEC, which solves the Ax=b system. That part I think is using a lot of cpu.
Stop Python from using more than one cpu
0.066568
0
0
1,390
10,428,414
2012-05-03T09:17:00.000
4
0
0
0
php,python,facebook
10,429,261
1
true
1
0
Facebook calls an AJAX endpoint every few seconds to keep the client-side UI fresh. The payload from this endpoint contains updates for ticker, newsfeed, notifications, messages and various other statuses. You can view this by opening Facebook in Google Chrome and looking at the network tab in Chrome Developer Tools.
1
2
0
How are the dynamic notification updates displayed in Facebook? Also, here at Stack Overflow, why don't the notifications pop up immediately as they arise? They aren't displayed until I refresh the page.
Dynamic notifications pop up
1.2
0
0
1,343
10,428,816
2012-05-03T09:40:00.000
-2
0
1
0
python,core-foundation
10,430,699
2
false
0
0
Why do you want to use a CFString in Python? A CFString has its own structure defined, and the way it is stored in memory is different from a Python string. It's not possible to do this conversion.
1
1
0
I have a CFString and would like to use it in Python. What is the fastest way to do so? Is it possible to avoid the conversion, i.e. to somehow create a Python string just from the CFString pointer?
Convert CFString into Python string
-0.197375
0
0
576
10,434,260
2012-05-03T15:12:00.000
5
0
1
1
python,linux,debian
28,640,488
5
false
0
0
btw, if you are using bash or running from the shell, and you normally include at the top of the file the following line: #!/usr/bin/python then you can change the line to instead be: #!/usr/bin/python3 That is another way to have pythonX run instead of the default (where X is 2 or 3).
2
18
0
I have both python2 and python3 installed on my Debian machine. But when I try to invoke the Python interpreter by just typing 'python' in bash, python2 pops up and not python3. Since I am working with the latter at the moment, it would be easier to invoke python3 by just typing python. Please guide me through this.
How to make python3.2 interpreter the default interpreter in debian
0.197375
0
0
24,208
10,434,260
2012-05-03T15:12:00.000
9
0
1
1
python,linux,debian
10,468,921
5
false
0
0
Well, you can simply create a virtualenv with the python3.x using this command: virtualenv -p <path-to-python3.x> <virtualenvname>
2
18
0
I have both python2 and python3 installed on my Debian machine. But when I try to invoke the Python interpreter by just typing 'python' in bash, python2 pops up and not python3. Since I am working with the latter at the moment, it would be easier to invoke python3 by just typing python. Please guide me through this.
How to make python3.2 interpreter the default interpreter in debian
1
0
0
24,208
10,434,523
2012-05-03T15:27:00.000
0
0
0
0
mysql,sql,mysql-python
10,434,644
1
false
0
0
This process works best on inserts. 1) Make all your SQL queries into stored procedures; these will eventually become child stored procedures. 2) Create a master stored procedure to run all the other stored procedures. 3) Modify the master stored procedure to accept the values required by the child stored procedures. 4) Modify the master stored procedure to accept commands, using "if" statements to know which child stored procedures to run. 5) If you need to return data from the database, use one stored procedure at a time.
1
0
0
Sometimes an application requires quite a few SQL queries before it can do anything useful. I was wondering if there is a way to send those as a batch to the database, to avoid the overhead of going back and forth between the client and the server? If there is no standard way to do it, I'm using the python bindings of MySQL. PS: I know MySQL has an executemany() function, but that's only for the same query executed many times with different parameters, right?
Grouping SQL queries
0
1
0
92
10,435,715
2012-05-03T16:40:00.000
1
0
0
1
python,macos,bash,shell,installation
10,435,770
3
false
0
0
Something got messed up in your $PATH. Have a look in ~/.profile, ~/.bashrc, ~/.bash_profile, etc., and look for a line starting with export that doesn't end cleanly.
1
4
0
I stupidly downloaded python 3.2.2 and since then writing 'python' in the terminal yields 'command not found'. Also, when starting the terminal I get this: Last login: Wed May 2 23:17:28 on ttys001 -bash: export: `folder]:/Library/Frameworks/Python.framework/Versions/2.7/bin:/opt/local/bin:/opt/local/sbin:/usr/local/git/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Applications/android-sdk-mac_86/tools:/Applications/android-sdk-mac_86/platform-tools:/usr/local/git/bin:/usr/X11/bin:/usr/local/ant/bin': not a valid identifier Why the Android SDK folder is there is beyond me. It's all jazzed up. Any ideas how I can remove the offending file or folder, or fix this problem? I've checked the System Profiler and python 2.6.1 and 2.7.2.5 show up.
Python installation mess on Mac OS X, cannot run python
0.066568
0
0
1,985
10,436,458
2012-05-03T17:30:00.000
-1
0
0
0
python,escaping,whitespace,jinja2,webapp2
10,441,370
8
false
1
0
The easiest way to do this is to escape the field yourself, then add the line breaks. When you pass it in to Jinja, mark it as safe so it's not autoescaped.
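A minimal sketch of that idea, assuming the blog post body is plain text; linebreaks is a hypothetical helper name:

    import jinja2

    def linebreaks(text):
        # escape whatever the user typed, then insert only the tag we trust;
        # the result is a Markup string, which autoescaping leaves alone
        escaped = jinja2.escape(text)
        return escaped.replace('\n', jinja2.Markup('<br>'))

Pass linebreaks(post.body) into the template context and render it with a plain {{ body }}.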
2
10
0
In my web app, the user can make blog posts. When I display the blog post, newlines aren't shown because I didn't replace the new lines with <br> tags. The problem is that I've turned autoescaping on in Jinja, so <br> tags are escaped. I don't want to temporarily disable autoescaping, I want to specifically allow <br> tags. How would I do this?
Allowing tags with Google App Engine and Jinja2
-0.024995
0
0
4,529
10,436,458
2012-05-03T17:30:00.000
-1
0
0
0
python,escaping,whitespace,jinja2,webapp2
10,436,756
8
false
1
0
The solution was to put <pre></pre> tags around the area where I had the content.
2
10
0
In my web app, the user can make blog posts. When I display the blog post, newlines aren't shown because I didn't replace the new lines with <br> tags. The problem is that I've turned autoescaping on in Jinja, so <br> tags are escaped. I don't want to temporarily disable autoescaping, I want to specifically allow <br> tags. How would I do this?
Allowing tags with Google App Engine and Jinja2
-0.024995
0
0
4,529
10,439,104
2012-05-03T20:48:00.000
1
0
1
0
python,python-3.x
18,048,247
8
false
0
0
The common port of PIL to Python 3.x is called "Pillow". I would also suggest the pygame library for simple tasks. It is a library full of features for creating games, and reading some common image formats is among them. It works with Python 3.x as well.
1
18
0
Is there a way to read in a bmp file in Python that does not involve using PIL? PIL doesn't work with version 3, which is the one I have. I tried to use the Image object from graphics.py, Image(anchorPoint, filename), but that only seems to work with gif files.
Reading bmp files in Python
0.024995
0
0
56,618
10,439,481
2012-05-03T21:20:00.000
12
1
0
0
python,pylint
10,439,541
2
false
0
0
You can redirect its output in your shell using > somefile.txt. In case it writes to stderr, use > somefile.txt 2>&1 (the order matters: stdout must be redirected to the file before stderr is duplicated onto it).
1
11
0
Is there a built in way to save the pylint report to a file? It seems it might be useful to do this in order to log progress on a project and compare elements of reports across multiple files as changes are made.
save pylint message to a file
1
0
0
11,281
10,439,654
2012-05-03T21:36:00.000
1
0
0
0
python,ajax,django,sudo,fabric
10,439,756
2
false
1
0
I can't think of a way to do a password prompt only if required... you could prompt before and cache it as required, though, and the backend would have access. To pass the sudo password to the fabric command, you can use sudo -S... i.e. echo password | sudo -S command
2
1
0
I'm working on a deployment tool in Django and fabric. The idea is to put some parameters (like hostname and username) in the initial form, then let the Django app call fabric methods to do the rest and collect the output in the web browser. If there is a password prompt from the OS to fabric (i.e. running sudo commands etc.), I would like to pop up a one-field form for the password (for example using jQuery UI elements). The person will fill in the password field for the prompted user, and fabric will continue to do its things. Is this situation possible to implement? I was thinking about some async calls to the browser, but I have no idea how it can be done from the other side. Probably there is another way. Please let me know if you have any suggestions. Thanks!
Fabric + django asynchronous prompt for sudo password
0.099668
0
0
626
10,439,654
2012-05-03T21:36:00.000
2
0
0
0
python,ajax,django,sudo,fabric
10,439,758
2
true
1
0
Yes, capture the password exception, then pop up the form and run the fabric script again with env.password = userpassword. If you want to continue from where you caught the exception, keep a variable that tracks what has been done so far (e.g. nlinesexecuted) and save it when you catch the exception. Use logic when you rerun the script to continue where you left off.
2
1
0
I'm working on a deployment tool in Django and fabric. The idea is to put some parameters (like hostname and username) in the initial form, then let the Django app call fabric methods to do the rest and collect the output in the web browser. If there is a password prompt from the OS to fabric (i.e. running sudo commands etc.), I would like to pop up a one-field form for the password (for example using jQuery UI elements). The person will fill in the password field for the prompted user, and fabric will continue to do its things. Is this situation possible to implement? I was thinking about some async calls to the browser, but I have no idea how it can be done from the other side. Probably there is another way. Please let me know if you have any suggestions. Thanks!
Fabric + django asynchronous prompt for sudo password
1.2
0
0
626
10,440,277
2012-05-03T22:39:00.000
1
0
0
1
python,distributed-computing
10,440,880
1
true
1
0
The general way to handle this is to have the threads report their status back to the server daemon. If you haven't seen a status update within the last 5N seconds, then you kill the thread and start another. You can keep track of the current active threads that you've spun up in a list, then just loop through them occasionally to determine state. You of course should also fix the errors in your program that are causing threads to exit prematurely. Premature exits and killing a thread could also leave your program in an unexpected, non-atomic state. You should probably also have the server daemon run a cleanup process that makes sure any items in your queue, or whatever you're using to determine the workload, get reset after a certain period of inactivity.
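A rough sketch of the heartbeat idea, using in-process timestamps; do_work_for and restart_worker stand in for your own code:

    import time
    import threading

    N = 10                       # the per-user polling interval, in seconds
    heartbeats = {}              # user_id -> time of last successful iteration
    lock = threading.Lock()

    def worker(user_id):
        while True:
            do_work_for(user_id)         # assumption: your request/response call
            with lock:
                heartbeats[user_id] = time.time()
            time.sleep(N)

    def monitor():
        while True:
            now = time.time()
            with lock:
                stale = [u for u, t in heartbeats.items() if now - t > 5 * N]
            for user_id in stale:
                restart_worker(user_id)  # assumption: replace the hung thread
            time.sleep(N)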
1
1
0
Let's say I have 100 servers, each running a daemon - let's call it server - and that server is responsible for spawning a thread for each user of this particular service (say 1000 threads per server). Every N seconds each thread does something and gets information for that particular user (this request/response model cannot be changed). The problem I have is that sometimes a thread hangs and stops doing anything. I need some way to know that the user's data is stale and needs to be refreshed. The only idea I have is to have the thread update a MySQL record associated with that user every 5N seconds (a last_scanned column in the users table), and have another process check that table every 15N seconds; if the last_scanned column is not current, restart the thread.
Distributed server model
1.2
1
0
221
10,440,667
2012-05-03T23:18:00.000
1
0
1
0
python,multithreading
10,440,758
7
false
0
0
You can let the threads push their results into a Queue.Queue (the thread-safe queue from the standard library; queue.Queue on Python 3). Have another thread wait on this queue and print the message as soon as a new item appears.
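A small sketch of that pattern; the simulated work is just a sleep:

    import time
    import threading
    import Queue                        # "queue" on Python 3

    q = Queue.Queue()

    def worker(n, seconds):
        time.sleep(seconds)             # stand-in for the real work
        q.put('thread %d finished after %ds' % (n, seconds))

    threads = [threading.Thread(target=worker, args=(i, i * 3)) for i in range(5)]
    for t in threads:
        t.start()

    for _ in threads:                   # one message per thread, as each finishes
        print(q.get())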
2
11
0
I've a python program that spawns a number of threads. These threads last anywhere between 2 seconds and 30 seconds. In the main thread I want to track whenever each thread completes and print a message. If I just sequentially .join() all threads and the first thread lasts 30 seconds while the others complete much sooner, I wouldn't be able to print a message sooner -- all messages will be printed after 30 seconds. Basically I want to block until any thread completes. As soon as a thread completes, print a message about it and go back to blocking if any other threads are still alive. If all threads are done then exit the program. One way I could think of is to have a queue that is passed to all the threads and to block on queue.get(). Whenever a message is received from the queue, print it, check if any other threads are alive using threading.active_count(), and if so, go back to blocking on queue.get(). This would work, but here all the threads need to follow the discipline of sending a message to the queue before terminating. I'm wondering if this is the conventional way of achieving this behavior, or are there any other/better ways?
In Python threading, how I can I track a thread's completion?
0.028564
0
0
31,277
10,440,667
2012-05-03T23:18:00.000
6
0
1
0
python,multithreading
10,440,790
7
false
0
0
The thread needs to be checked using the Thread.is_alive() call.
2
11
0
I've a python program that spawns a number of threads. These threads last anywhere between 2 seconds and 30 seconds. In the main thread I want to track whenever each thread completes and print a message. If I just sequentially .join() all threads and the first thread lasts 30 seconds while the others complete much sooner, I wouldn't be able to print a message sooner -- all messages will be printed after 30 seconds. Basically I want to block until any thread completes. As soon as a thread completes, print a message about it and go back to blocking if any other threads are still alive. If all threads are done then exit the program. One way I could think of is to have a queue that is passed to all the threads and to block on queue.get(). Whenever a message is received from the queue, print it, check if any other threads are alive using threading.active_count(), and if so, go back to blocking on queue.get(). This would work, but here all the threads need to follow the discipline of sending a message to the queue before terminating. I'm wondering if this is the conventional way of achieving this behavior, or are there any other/better ways?
In Python threading, how I can I track a thread's completion?
1
0
0
31,277
10,440,924
2012-05-03T23:53:00.000
2
1
1
0
c++,python,configuration
10,441,128
1
true
0
0
An obvious solution that comes to mind is to go with something along the lines of YAML or JSON, for which you should find support across many languages.
1
1
0
There are a lot of questions and answers on how to parse/create config files in Python and C++ individually. In my case, I have one single config file that needs to be processed (read/write) by both Python and C++. In the Python world, ConfigParser is popular, while in C++, libconfig looks nice. But they use different formats. What I am looking for is a stone able to kill two birds at the same time. :)
Config File Process in Python and C++
1.2
0
0
404
10,442,359
2012-05-04T03:35:00.000
0
0
0
1
shell,python-3.x,subprocess,python-idle
11,967,614
1
false
0
0
Use process.stdin.write. Remember to set stdin = subprocess.PIPE when you call subprocess.Popen.
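A minimal sketch; child.py is a hypothetical script that reads a line and prints a reply (for one-shot exchanges, p.communicate() is safer because it avoids pipe-buffer deadlocks):

    import subprocess

    p = subprocess.Popen(['python', 'child.py'],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)

    p.stdin.write('hello\n')      # feeds the child's raw_input()/input()
    p.stdin.flush()
    print(p.stdout.readline())    # reads the child's reply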
1
1
0
I am currently displaying the output of a subprocess on the Python shell (in my case IDLE on Windows) by using a pipe and displaying each line. I want to do this with a subprocess that takes user input, so that the prompt will appear on the Python console, the user can enter the response, and the response can be sent to the subprocess. Is there a way to do this?
python Input delegation for subprocesses
0
0
0
116
10,446,139
2012-05-04T09:23:00.000
3
0
1
0
python
10,446,202
1
true
0
0
It depends what __dict__ you are talking about. Python has a method resolution order that (ignoring the fun of multiple inheritance) works by checking the instance, then the class, then the parent class, then its parent class, etc... So __class__, for instance, is in object.__dict__ - which is why it's defined for all classes, as they inherit from object.
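You can watch the lookup chain directly:

    class Foo(object):
        pass

    f = Foo()
    f.x = 1

    print('x' in f.__dict__)               # True  - instance attribute
    print('__doc__' in Foo.__dict__)       # True  - stored on the class
    print('__class__' in f.__dict__)       # False - not on the instance...
    print('__class__' in object.__dict__)  # True  - ...it lives on object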
1
0
0
In Python, a lot of the "special" attributes of objects are stored in the __dict__ dictionary, like __doc__, __module__ (from what I could see in my experiments). However some are not, like __class__. My question is exactly which attributes are not stored in __dict__ (is that even documented somewhere?), and why are they not?
What attributes are not stored in __dict__ and why are they not?
1.2
0
0
532
10,446,440
2012-05-04T09:45:00.000
1
0
0
1
python,relative-path
10,446,554
1
true
0
0
"Exec format error" occurs when no interpreter is set for the script. Try adding #!/bin/sh at the beginning of the shell script and then executing the Python script again.
1
2
0
Python project looks like this: setup.py README Application scripts hello.py shell_scripts date.sh From hello.py I'm executing the command subprocess.call(['../shell_scripts/date.sh']) and receiving the error OSError: [Errno 8] Exec format error. Note: date.sh is a perfectly valid shell script and is executable. I've also tried os.path.realpath to no avail. I assume this is due to an invalid path?
Relative Python Path to Script
1.2
0
0
259
10,447,858
2012-05-04T11:23:00.000
4
0
0
0
python,plone
10,448,068
1
true
1
0
Not sure you can do this with a content rule; there is no code running at that exact time. You'd need to run an external cron job to trigger a scan for expired events. Why not just use a collection to list expired events in the other location?
1
2
0
I wish to create a content rule for an event such that after the event's expiry date, i.e. its end date, it should be moved to another folder. How do I specify the content rule? Please guide me. Using Plone 4.1.
plone how to add content rule for event which after end date should be moved to another folder
1.2
0
0
213
10,447,970
2012-05-04T11:30:00.000
0
1
0
0
php,python,amazon-web-services,whois
10,679,625
1
false
0
0
The Amazon service you want to use is the server service: EC2. You get full access to a server and, of course, you can perform socket connections on port 43 (the one required by the Whois protocol).
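For reference, the Whois protocol (RFC 3912) is simple enough to speak with a raw socket; a minimal sketch, assuming the .com registry's public whois server:

    import socket

    def whois(domain, server='whois.verisign-grs.com'):
        s = socket.create_connection((server, 43), timeout=10)
        s.sendall(domain + '\r\n')      # the protocol is one line plus CRLF
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return ''.join(chunks)

    print(whois('example.com'))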
1
1
0
I was looking for a Whois API, but most of them charge a heavy price and are not reliable enough. We can code in Python or PHP. We need to build a Whois lookup service to integrate with our site. Which AWS resource do we need for this? We need at least 5k lookups per day. AWS provides S3, Elastic services, and others, and we are confused. Amazon provides a free tier; does it allow whois lookups? Google App Engine never allowed this.
Amazon AWS For Whois?
0
0
1
766
10,451,323
2012-05-04T14:59:00.000
1
0
0
0
python,django
10,451,563
2
false
1
0
In well-designed Django, you should only have to edit the template; good design provides clean separation. It's possible the developer was forced to do something unusual... but you could try to edit the template and see what happens (make a backup first).
2
0
0
Say I find some bug in a web interface. I open Firebug and discover the element and its class and id. From these I can then identify a template which contains variables, tags and so on. How can I move forward and reveal in which .py files these variables are filled in? I know how it works in the Lift framework: when you've found a template, there are elements with attributes bound to snippets, so you can easily proceed to the specific snippets and edit the code. How does it work in Django? Maybe I'm assuming the wrong process... if so, please point me to the right approach.
Django: How can I find methods/functions filling in the specific template
0.099668
0
0
66
10,451,323
2012-05-04T14:59:00.000
1
0
0
0
python,django
10,451,982
2
true
1
0
Determining template variable resolution is all about Context. Use the URL to identify the view being invoked. Look at the view's return and note a) the template being used, and b) any values being passed in the Context used when the template is being rendered. Look at settings.py for the list of TEMPLATE_CONTEXT_PROCESSORS. These are routines that are called automatically and invisibly to add values to the Context being passed to the template. This is sort of a Man Behind the Curtain™ process that can really trip you up if you don't know about it. Check to see if there are any magic template tags being called (either in the template in question, in a template it extends, or in a template that includes the template) that might be modifying the Context. Sometimes I need use an old-school django snippet called {%expr%} that can do evaluation in the template, but I always use it as close to the point of need as possible to highlight the fact it is being used. Note that because of the way Django template variables are resolved, {{foo.something}} could be either a value or a callable method. I have serious issues with this syntax, but that's the way they wrote it.
2
0
0
Say I find some bug in a web interface. I open Firebug and discover the element and its class and id. From these I can then identify a template which contains variables, tags and so on. How can I move forward and reveal in which .py files these variables are filled in? I know how it works in the Lift framework: when you've found a template, there are elements with attributes bound to snippets, so you can easily proceed to the specific snippets and edit the code. How does it work in Django? Maybe I'm assuming the wrong process... if so, please point me to the right approach.
Django: How can I find methods/functions filling in the specific template
1.2
0
0
66
10,453,176
2012-05-04T17:05:00.000
3
0
1
0
python,collections
10,453,273
3
false
0
0
Accessing the middle of a lisp list is also O(n). Python lists are array lists, which is why popping the head is expensive (popping the tail is constant time). What you are looking for is an array with (amortised) constant time deletions at the head; that basically means that you are going to have to build a datastructure on top of list that uses lazy deletion, and is able to recycle lazily-deleted slots when the queue is empty. Alternatively, use a hashtable, and a couple of integers to keep track of the current contiguous range of keys.
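A sketch of the "hashtable plus a couple of integers" idea: O(1) append, popleft and random access by logical index:

    class IndexableDeque(object):
        def __init__(self):
            self._items = {}
            self._head = 0      # key of the current first element
            self._tail = 0      # one past the key of the current last element

        def append(self, value):
            self._items[self._tail] = value
            self._tail += 1

        def popleft(self):
            value = self._items.pop(self._head)
            self._head += 1
            return value

        def __getitem__(self, i):
            return self._items[self._head + i]

        def __len__(self):
            return self._tail - self._head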
1
5
0
What are my options here? I need to call a lot of appends (to the right end) and poplefts (from the left end, naturally), but also to read from the middle of the storage, which will steadily grow, by the nature of the algorithm. I would like to have all these operations in O(1). I could implement it in C easily enough on a circularly-addressed array (what's the word?) which would grow automatically when it's full; but what about Python? Pointers to other languages are appreciated too (I realize the "collections" tag is more Java etc. oriented and would appreciate the comparison, but as a secondary goal). I come from a Lisp background and was amazed to learn that in Python removing a head element from a list is an O(n) operation. A deque could be an answer except the documentation says access is O(n) in the middle. Is there anything else, pre-built?
O(1) indexable deque of integers in Python
0.197375
0
0
1,393
10,453,799
2012-05-04T17:55:00.000
0
0
0
1
python,unix,stdout,stdin
10,455,014
2
false
0
0
You cannot redirect stdin or stdout for a running process. You can, however, add code to your caller -- foo.py -- that will read from foo.py's stdin and send it to bar.py's stdin, and read from bar.py's stdout and write it to foo.py's stdout. In this model, foo.py would connect bar.py's stdin and stdout to pipes and would be responsible for shuttling data between those pipes and the real stdin/stdout.
1
4
0
Is there any way of attaching a console's STDIN/STDOUT to an already running process? Use Case: I have a python script which runs another python script on the command line using popen. Let's say foo.py runs popen to run python bar.py. Then bar.py blocks on input. I can get the PID of python bar.py. Is there any way to attach a new console to the running python instance in order to be able to work interactively with it? This is specifically useful because I want to run pdb inside of bar.py.
Python: interacting with STDIN/OUT of a running process in *nix
0
0
0
2,715
10,453,841
2012-05-04T17:58:00.000
0
1
1
0
c++,python,stdout,stdin
10,453,984
2
false
0
1
Sounds like SWIG might be what you're looking for. Use it to generate an extension module for Python, then call your C++ methods from a Python script.
2
0
0
I have a problem where it is beneficial for me to be able to mix python code and C++ code, and I think that the task is simple enough that it could be done by simply initializing the C++ program from python, and then having the C++ program "wait" for python to give it some input via std in, and then have python "wait" for the C++ program do its computation and return it via std out etc. I feel like this is either trivial or extremely extremely hard. My main problem is that each time I initialize the C++ code it takes an extremely long time, but that would only need to be done once if I can get this idea implemented. Any thoughts?
Mixing python and C++ via std in and std out
0
0
0
314
10,453,841
2012-05-04T17:58:00.000
0
1
1
0
c++,python,stdout,stdin
10,454,012
2
true
0
1
Look at the subprocess library. You can use subprocess.Popen() to create a process from Python, using stdin=PIPE and stdout=PIPE. You can then read from the C++ program's stdout and write to its stdin.
2
0
0
I have a problem where it is beneficial for me to be able to mix python code and C++ code, and I think that the task is simple enough that it could be done by simply initializing the C++ program from python, and then having the C++ program "wait" for python to give it some input via std in, and then have python "wait" for the C++ program do its computation and return it via std out etc. I feel like this is either trivial or extremely extremely hard. My main problem is that each time I initialize the C++ code it takes an extremely long time, but that would only need to be done once if I can get this idea implemented. Any thoughts?
Mixing python and C++ via std in and std out
1.2
0
0
314
10,454,344
2012-05-04T18:36:00.000
1
0
1
0
python
10,456,752
4
false
0
0
Use the ast module, which parses Python code and constructs its syntax tree. You can then apply whatever customized counting algorithm you like to the tree's nodes.
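A minimal sketch: every node that subclasses ast.stmt counts as one statement (some_module.py is a placeholder path):

    import ast

    def count_statements(path):
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        return sum(isinstance(node, ast.stmt) for node in ast.walk(tree))

    print(count_statements('some_module.py'))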
1
1
0
Lines of code are a bad measurement for anything, for reasons not discussed here. But is there a neat way to count statements in a Python source code file?
Counting statements in Python source files
0.049958
0
0
942
10,455,947
2012-05-04T20:48:00.000
2
0
0
1
python,macports,pip,homebrew,easy-install
10,506,147
2
false
0
0
The advantage of using a Python installed via a package manager like Homebrew or MacPorts would be that this provides a simple way of removing the Python installation and reinstalling it. Also, you can install a more recent version than the one Mac OS X provides.
1
19
0
I have a problem that comes from me following tutorials without really understanding what I'm doing. The root of the problem I think is the fact that I don't understand how the OS X filesystem works. The problem is bigger than Python but it was when I started learning about Python that I realized how little I really understand. So in the beginning I started following tutorials which led me to use the easy_install command a lot and when a lot of tutorials recommended PIP I never got it running. So I have run a lot of commands and installed a lot of different packages. As I have understood Lion comes with a python install. I have been using this a lot and from this I have installed various packages with easy_install. Is there any way to go back to default installation and begin from the very beginning? Is this something I want to do? If so why? Is there any advantage of using a Python version I have installed with Homebrew? How can I see from where Python is run when I run the Python command? When I do install something with either easy_install, homebrew, macports etc where do things actually end up?
How do Homebrew, PIP, easy_install etc. work so that I can clean up
0.197375
0
0
9,232
10,457,626
2012-05-05T00:06:00.000
0
0
1
0
python,perforce
10,457,687
3
false
0
0
Perforce doesn't allow you to reserve changelist numbers. If you want to submit an existing (pending) changelist using P4Python, do: p4.run_submit("-c", changelist)
1
5
0
P4.fetch_change() creates a change spec with Change equal to 'new'. I need to create a change spec with an actual number (that won't collide with any other changes). IOW, I need to be able to reserve a changelist number. How can this be done with P4Python? Context: My script takes in an already-existing changelist number. I need to be able to test that the script works correctly.
How to create numbered changelist using P4Python?
0
0
0
3,575
10,457,722
2012-05-05T00:25:00.000
2
0
1
0
python,windows,visual-c++,static-linking
13,320,086
1
true
0
0
Not sure if you still needed this answer, given how long ago you asked this question, but I figured I'd leave some of the info I've found out, since I wondered the same thing. Note: this is based on the source tree for Python 2.7.3. There are a few Python modules that depend on the _ssl/ssl modules, but they all have error checking to support versions of Python without SSL, and will just disable that functionality. The included Python modules that make use of the ssl module are: socket ftplib httplib imaplib poplib smtplib urllib xmlrpclib. Since you're embedding it into your own app, I'd probably also part with _msi (which would allow you to remove the msilib module). If you went ahead and removed the extensions you mentioned, you'd always want to get rid of the following Python modules from the Lib folder: lib-tk ssl wave (I'm assuming you don't need support for parsing wave files, since you dropped winsound) sunau ( ^ ) sunaudio ( ^ ) audiodev ( ^ ) aifc ( ^ ) chunk ( ^ ) toaiff ( ^ ). I'm assuming this is a GUI app, so you probably won't need the following Python modules: curses tty pty rlcompleter. Not sure what your app does/did, though, so I'll be conservative with the rest. As for the builtin modules written in C, I can't guarantee this to be 100% problem-free, but you should be able to remove some of the following, depending on what your application actually needs: _csv _json (though this module offers speedups for the Python-only json module) _hotshot (if you don't need hotshot, which is a logging profiler) imageop. Probably some others here, too.
1
0
0
I would like to patch up the Python source so I can statically link it into my Windows application (I am aware that this is not easy or even encouraged because of how especially the core modules get loaded). Can I leave out certain "core modules" despite the name that suggests that they are required? I'm thinking of _tkinter, _ssl and ssl (not 100% sure whether I want to remove that one, yet), winsound and w9xpopen (it's only going to be used on the NT platform) here. Is that possible or will that break things in subtle ways? NB: please, no need to mention that static linking is bad for some reason or another. For the case I need it, it would be the superior solution by far.
Which Python "core modules" can I leave out from a custom build?
1.2
0
0
137
10,459,041
2012-05-05T05:09:00.000
5
0
1
0
python,django,project
10,459,311
1
false
1
0
The best way is to use a separate virtualenv for each project. There is nothing messy about it (use virtualenvwrapper). Sharing a library between projects is always a potential risk: what if you want to upgrade the library in one project and use an older version in another? Also, pip freeze will list the actual set of apps for the project, not some list you have to filter manually.
1
1
0
The large number of apps/packages which can be used in python/django is a great advantage of both. This also raises a question about handling these installed applications/library, especially when there are multiple environments in which the project has to be deployed. Installing such third party libraries to the system does not seem ideal to me. Thus after some research, I found that there are two possible ways to go namely virtualenv or including the package within the project folder. But the problems are that creating a virtualenv for each project is kind of messy and on the other side, including large packages within the project directory increases the project size and also creates import problems. I have found kind of a middle ground between the above two methods which is to install libraries which can be shared with multiple projects into a virtualenv and smaller project specific libraries within the project. For example, for a django project, I would install django into a virtualenv and other libraries used in the project for example xlwrt, dojango etc are included within a "lib" folder within the project. Is this the best way to go or are there better alternative methods??
Django - Is it better to install packages to virtualenv/system or include them within the project?
0.761594
0
0
121
10,460,601
2012-05-05T09:37:00.000
1
1
1
0
c++,python
10,460,624
2
false
0
0
Python can be used for a server application; many web and FTP servers are written in Python. See the threading library for threads.
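If the script only needs to run continuously rather than serve network requests, a plain loop is enough; run_once stands in for the existing read/process/output code:

    import time

    INTERVAL = 10 * 60              # ten minutes, as in the question

    def main():
        while True:
            started = time.time()
            run_once()              # assumption: your existing processing step
            # sleep away whatever is left of the interval
            time.sleep(max(0, INTERVAL - (time.time() - started)))

    if __name__ == '__main__':
        main()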
1
1
0
I have some python code which runs every 10 minutes or so. It reads in data, does some processing and produces an output. I'd like to change this so that it runs continuously. Is python well suited for running as a server (as in running continuously), or would I be better off converting my application to C++? If I leave it in python, are there any modules you would recommend for achieving this? Thanks
Using python as a server
0.099668
0
0
160
10,460,716
2012-05-05T09:55:00.000
1
0
0
0
python,html,forms,google-app-engine,jinja2
10,460,937
2
true
1
0
Why do this? Any logic that you implement in the template is accessible to you in the controller of your app, including any variables that you place in the template context. If the data has been changed due to interaction with the user, then the best way to retrieve data, in my opinion, is to set up a form and use the normal POST method to send the request and the required data, correctly encoded and escaped, back to your program. In this way, you are protected from XSS issues, among other inconveniences. I would never do any processing in a template, and would only use local logic to modify the presentation itself. EDIT: Taking into account your scenario, I suggest the following: 1) the user presses a button on a page and invokes a GET handler; 2) the GET handler queries a database and receives a list of images; 3) the list is cached, maybe in memcache, and the key is sent with the list of images, encoded as a parameter in the GET URL displayed by the template; 4) the list of images gets passed to the template engine for display; 5) another button is pressed and a different GET handler is invoked, using the key received in the GET URL, after sanitising and validation, to retrieve the cached list. If you don't want the intermediate step of caching a key-value pair, you may want to encode the whole list in the GET URL; the sanitising and validation step should be as easy on the whole list as on a key to the list. Both methods avoid a round trip to the database, protect you from malicious use, and respect the separation of data, presentation, and logic.
2
1
0
I started using Jinja Templating with Python to develop web apps. With Jinja, I am able to send objects from my Python code to my index.html, but is it possible to receive objects from my index.html to my Python code? For example, passing a list back and forth. If so, do you have any examples? Thank You!
Sending objects from Jinja Templates to Python
1.2
0
0
1,379
10,460,716
2012-05-05T09:55:00.000
-1
0
0
0
python,html,forms,google-app-engine,jinja2
10,460,906
2
false
1
0
Just a thought.. Have you tried accessing the variables in the dict you passed to jinja after processing the template?
2
1
0
I started using Jinja Templating with Python to develop web apps. With Jinja, I am able to send objects from my Python code to my index.html, but is it possible to receive objects from my index.html to my Python code? For example, passing a list back and forth. If so, do you have any examples? Thank You!
Sending objects from Jinja Templates to Python
-0.099668
0
0
1,379
10,461,356
2012-05-05T11:22:00.000
2
0
0
0
python,amazon-s3,amazon-ec2,amazon-web-services,boto
10,461,505
2
true
0
0
Using s3cmd tools (http://s3tools.org/s3cmd) it is possible to download/upload files stored in buckets.
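Since the question mentions boto, here is a hedged sketch using boto's S3 API ('my-install-bucket' and the 'modules/' prefix are assumptions); note that S3 "folders" are just key prefixes:

    import boto

    conn = boto.connect_s3()                       # picks up your AWS credentials
    bucket = conn.get_bucket('my-install-bucket')
    for key in bucket.list(prefix='modules/'):
        if key.name.endswith('/'):                 # skip the pseudo-folder entries
            continue
        key.get_contents_to_filename('/tmp/' + key.name.split('/')[-1])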
1
2
0
I would like to install some Python modules on my EC2 instance. I have the files I need for the installation on an S3 bucket. I can also connect from my EC2 instance to the S3 bucket through Python boto, but I cannot access the bucket contents to get the source files I need installed.
How can I navigate into S3 bucket folders from EC2 instance?
1.2
0
1
3,321
10,463,702
2012-05-05T16:19:00.000
1
0
0
0
python,tree,wxpython
10,464,339
2
false
0
1
I don't use wxPython, so I don't have much idea about it. But in general, whenever a key is pressed you can call a callback function and record the time the key was pressed; save it somewhere. When the next key is pressed, get the time and compare the two. If there is no significant delay (you decide the threshold), you can treat both keys as pressed simultaneously (even though they were not).
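For the Shift+Tab case specifically, no timing is needed in wxPython: the key event carries the modifier state of the same key press. A sketch (unindent_four_spaces is a placeholder for the questioner's editing code):

    import wx

    def on_key_down(event):
        if event.GetKeyCode() == wx.WXK_TAB and event.ShiftDown():
            unindent_four_spaces()   # assumption: your existing editing helper
        else:
            event.Skip()             # let every other key through unchanged

    # bound on the text control, e.g.:
    # textctrl.Bind(wx.EVT_KEY_DOWN, on_key_down)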
1
0
0
I am creating a Project Manager using wxPython; it has a splitter window. On one side is a tree that shows the names of, and opens, the files; on the other side is a textctrl used to edit the file. One problem I am having is that I would like it to go back 4 spaces when SHIFT and TAB are pressed; I have code working that adds 4 spaces when TAB is pressed. I also have a problem that when I add a file that is in a different folder from my program's cwd, the tree adds a new node and the file appears under this node, and I am struggling to get the tree to save to a file. Also, I would like to know how to add an icon to an item in the tree from an external png file. I would appreciate any help that could be given with either of these problems.
Multiple key press detection wxPython
0.099668
0
0
1,518
10,464,301
2012-05-05T17:33:00.000
0
0
1
0
python,python-3.x,version,nltk
22,716,959
15
false
0
0
If you're talking about the shell, as in Linux: if you install Python 3, you can invoke it separately with the python3 command, while Python 2 is invoked with just python. At least this is my experience with Ubuntu-like systems; I haven't used other Linux environments. I realize this question is almost a year old, but NLTK has been ported to Python 3 (or at least that was true as of writing this).
1
14
0
I am faced with a unique situation, slightly trivial but painful. I need to use Python 2.6.6 because NLTK is not ported to Python 3 (that's what I could gather). In a different code(which am working concurrently), there is a collections counter function which is available only in Python 3 but not in Python 2.6.6. So, each time I switch between the two codes, I need to install & uninstall the versions. That's such a waste of time. Any suggestions on how I specify which version I want to use?
How to use multiple versions of Python without uninstallation
0
0
0
36,794
10,465,212
2012-05-05T19:23:00.000
2
0
1
0
python,audio,video
10,465,583
1
true
0
0
Demultiplexing the audio stream out of the AV container and decompressing it: you'll want a wrapper for the ffmpeg library. eg try pyffmpeg, AVbin, pymedia. Normalizing: use a Numpy array of per-sample integers, find the max then multiply the array to amplify/attenuate volume. Consider using ReplayGain. Recompressing the audio and remultiplexing with the original video stream into a new container: same libraries as above, but more likely to cause difficulties, especially for proprietary containers and codecs. (eg I believe ffmpeg can only produce a really old WMA version.) It's not going to be straightforward and I'm not sure it will necessarily be worth it, in comparison with using a readymade app. For example ffmpeg itself has a command line you could batch script, and eg avidemux has both command line and GUI interfaces. Also, I suspect you'll find simple peak normalisation won't get you very far in terms of making effective volume levels similar; usually you'll need to use some quantity of dynamic range compression too.
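The normalization step itself is a couple of lines once you have the decoded samples as a NumPy array; a sketch for 16-bit PCM:

    import numpy as np

    def peak_normalize(samples, headroom_db=0.0):
        # scale int16 PCM so the loudest peak hits full scale minus headroom
        floats = samples.astype(np.float64)
        peak = np.abs(floats).max()
        if peak == 0:
            return samples                   # silence: nothing to scale
        target = 32767.0 * (10.0 ** (-headroom_db / 20.0))
        return (floats * (target / peak)).astype(np.int16)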
1
0
0
By "normalize," I mean "increase/decrease the overall volume so that the maximum reaches maximum headroom." I'm part of my school's news crew and teachers send commercials in, but they are often too loud or too soft. I'd like to create a program the normalizes the audio (no compression or limiting). It would generally have to work with .mov and .wmv files. Can anyone guide me toward some good tutorials, libraries, etc.?
Python - How to normalize audio in a video file?
1.2
0
0
1,748
10,466,411
2012-05-05T22:12:00.000
-1
0
0
0
python,pygame
10,779,855
3
false
0
1
Have you tried calling just pygame.quit() or pygame.init()? I don't believe there is a pygame.display.quit().
1
3
0
Having a pygame.display window open, I call pygame.display.quit() upon it in order to destroy the window. Because I need to open the window again, I call pygame.display.init() and pygame.display.set_mode(), but after these two functions are called, nothing happens. Can anyone point me to the root of this problem?
Pygame display module init and quit
-0.066568
0
0
2,849
10,466,590
2012-05-05T22:44:00.000
5
0
0
0
python,pygame
10,466,840
4
true
0
1
No there isn't. All you can do is minimize the window using pygame.display.iconify().
1
6
0
Is there any way to hide the screen of 'pygame.display' and to make it visible afterwards without calling 'pygame.display.quit()'?
Hiding pygame display
1.2
0
0
7,611
10,467,224
2012-05-06T00:54:00.000
0
0
0
0
python,wxpython
10,483,810
3
false
0
1
I'm guessing the OP is talking about an MDI frame, which Microsoft created and has since decided to abandon. I think the OP should check out the wx.agw.aui widget set versus the wx.aui stuff since the former has been updated a lot and wx.aui has not. Plus the agw package is pure Python and thus much more hackable.
1
1
0
I have a quick question about WxPython. I would like to have frames inside of my main frame in a program. The user should not be able to move the frame. Any ideas you guys? Thanks
Frame Inside a Frame WxPython
0
0
0
1,726
10,468,669
2012-05-06T06:31:00.000
1
0
1
1
python,linux,ubuntu,tkinter,pyinstaller
10,468,962
1
true
0
0
The following is reposted from my comment on the question, so that this question may be marked as answered (assuming OP is satisfied with this answer). It was originally posted as a comment because it does not answer the question directly. The reason there aren't many tutorials on how to do this on Linux is because there is not much point to do this on Linux, as the actual Python files can be turned into a package with a set of dependencies and everything. Perhaps you should try that instead; the PyInstaller approach is only worth it if you have a valid reason not to use packages (and such reasons do exist).
1
2
0
I have been searching for tutorials on how to use PyInstaller and can't find one that I can follow. I have been researching this for hours on end and can't find anything that helps me. I am using Linux and was wondering if anyone can help me out from the very beginning, because there is not one part I understand about this. I also have three files that make up one program, and am also using Tkinter, so I don't know if that makes it more difficult.
Python PyInstaller Ubuntu Troubles
1.2
0
0
731
10,476,161
2012-05-07T02:52:00.000
2
0
1
0
python,redhat,ipython
10,488,757
2
false
0
0
IPython is a Python package. When you have multiple Pythons, you must (generally, barring PYTHONPATH hacks) install a package for a given Python. yum would only install it for the System Python, so you must install it separately for your own Python. The solution is to simply pip install ipython with your own Python (install distribute/pip first, if you haven't).
1
0
0
I am guessing that I am not the only one using a non-system Python 2.7.2 for scientific computations on 6.2 PUIAS Linux (RedHat). My Python 2.7.2 is installed in the sandbox and I call it with the command python2.7. When I need to execute scripts this is not a problem. However, I would like to use ipython instead of the default Python shell for interactive use. I added ipython using yum. When I start it, it defaults to the system Python. How do I force it to load Python 2.7? Thanks a bunch!
ipython for Python 2.7.2 users on PUIAS (RedHat) Linux
0.197375
0
0
677
10,477,310
2012-05-07T05:55:00.000
0
0
0
1
python,task,scheduler,bottle
11,097,542
1
true
1
0
I would suggest threading; it allows the web server to be unaffected by the scheduled tasks, which will either be in a queue or written into the code itself.
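A minimal sketch of that layout: a daemon thread ticking alongside bottle's server (run_pending_tasks is a placeholder for whatever the scheduler should do):

    import time
    import threading
    from bottle import route, run

    def scheduler():
        while True:
            run_pending_tasks()     # assumption: your scheduled work
            time.sleep(60)

    @route('/')
    def index():
        return 'hello'

    t = threading.Thread(target=scheduler)
    t.daemon = True                 # dies together with the web server
    t.start()
    run(host='localhost', port=8080)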
1
2
0
Does anyone have any examples of how to integrate a task scheduler in Bottle? Something like APScheduler or sched?
is it possible to run a task scheduler in bottle web framework
1.2
0
0
663
10,479,040
2012-05-07T08:37:00.000
5
0
0
0
python,hard-drive
10,479,073
1
true
0
0
On Linux, you can open('/dev/sdX', 'rb'). However, the easier way is to use the dd command-line utility (but it will only work properly if both disks are exactly the same).
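A minimal imaging loop (needs root; /dev/sdX and backup.img are placeholders); restoring is the same loop with source and destination swapped:

    CHUNK = 1024 * 1024             # read a megabyte at a time

    with open('/dev/sdX', 'rb') as disk, open('backup.img', 'wb') as image:
        while True:
            data = disk.read(CHUNK)
            if not data:
                break
            image.write(data)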
1
4
0
I want to read bytes directly off a hard drive, preferably using Python. How can I do this, provided it is even possible? Also, can I write directly to a hard drive, and how? I want to do this to make a complete clone of a hard drive, and then restore from that backup. I'm quite certain there are easier ways to get what I want done, and this is partly simply curiosity ;)
Python - Reading directly from hard drive
1.2
0
0
1,762
10,479,078
2012-05-07T08:40:00.000
2
0
0
0
python,command-prompt,fabric
30,519,133
6
false
0
0
Putting this as an answer though it's a comment from @BobNadler: run("yes | my_command")
1
32
0
I want to run a command which prompts me to enter yes/no or y/n or whatever. If I just run the command local("my_command") then it stops and asks me for input. When I type what is needed, script continues to work. How can I automatically respond to the prompt?
How to answer to prompts automatically with python fabric?
0.066568
0
0
22,953
10,481,008
2012-05-07T11:08:00.000
1
0
0
0
python,linux,matplotlib,archlinux
10,481,152
2
false
0
0
One thing to take into account for the huge numpy array is that you are not touching it. Memory is allocated lazily by default by the kernel. Try writing some values in that huge array and then check for swapping behaviour.
1
2
1
My program plots a large number of lines (~200k) with matplotlib which is pretty greedy for memory. I usually have about 1.5G of free memory before plotting. When I show the figures, the system starts swapping heavily when there's still about 600-800M of free RAM. This behavior is not observed when, say, creating a huge numpy array, it just takes all the available memory instantaneously. It would be nice to figure out whether this is a matplotlib or system problem. I'm using 64-bit Arch Linux. UPD: The swapiness level is set to 10. Tried setting it to 0, as DoctororDrive suggested, but same thing. However, other programs seem to be ok with filling almost all the memory before the swap is used.
system swaps before the memory is full
0.099668
0
0
199
10,483,013
2012-05-07T13:33:00.000
0
1
0
0
python,oauth,proxy
10,496,221
1
false
0
0
I am guessing you will have to set up your own proxy service for this, i.e. set up your entire API and OAuth logic on a server outside your own country. If you call this proxy service from within your own country, it is probably not apparent that you are actually communicating with Twitter. You will need some sort of cryptographic layer between your client and your proxy/relay service, though, to make it somewhat secure/obscure (your own request-signing mechanism, so to speak), and your client and proxy/relay endpoint should definitely talk over HTTPS/SSL.
1
0
0
Twitter, Facebook and some other websites are blocked in my country, and I want to call their open APIs to do some hacking. I have searched, but nothing solves my problem. Are there any Python libraries that can help me sign the OAuth request through a proxy and get the access token? Thanks.
How can I sign OAuth with proxy
0
0
1
771
10,484,184
2012-05-07T14:50:00.000
1
0
1
1
python,distutils
13,338,067
1
false
0
0
You may use something like the common solution on *nix. Install the config files to %PROGRAMFILES%, and copy them to %APPDATA% when the program detects a particular user is running the program for the first time (which can be detected by checking that the config files are missing).
1
6
0
My setup routine using distutils, which works perfectly fine on Windows XP, does not work on Windows 7. Here are the specifics: My package has a lot of config files which I install into %APPDATA%. On Windows I run setup.py with the bdist_wininst option to create an installer. On Win7 the installer is then executed as Administrator so that the module can be installed into %PROGRAMFILES%\Python etc. The installation does not report any errors, but as you might have guessed, the config files will not have been installed into %APPDATA% nor anywhere else (I searched for them). If I open a cmd as Administrator and install my package with the install option directly (setup.py install), everything works perfectly fine, however. So, what am I missing here? Is this a limitation in the graphical installer or am I doing something wrong?
Installing data files into %APPDATA% with distutils on Windows 7 X64
0.197375
0
0
594
10,487,563
2012-05-07T19:02:00.000
1
0
1
0
python,python-3.x,text,encoding
10,487,881
3
false
0
0
You should open the file with the codecs module to make sure the file gets interpreted as UTF-8: import codecs fd = codecs.open(filename, 'r', encoding='utf-8') data = fd.read()
1
40
0
I keep getting this error while reading a text file. Is it possible to handle/ignore it and proceed? UnicodeEncodeError: ‘charmap’ codec can’t decode byte 0x81 in position 7827: character maps to undefined.
Unicode error handling with Python 3's readlines()
0.066568
0
0
76,913
10,489,126
2012-05-07T21:05:00.000
0
1
0
0
python,import,module,wxwidgets
10,489,311
2
false
0
0
If you start your script with something like #!/usr/local/bin/python (but using the path to your python interpreter) you can run it without including python in your command, like a bash script.
1
1
0
When I import the wx module in a Python interpreter it works as expected. However, when I run a script (i.e. test.py) with wx in the imports list, I need to write "python test.py" in order to run the script. If I try to execute "test.py" I get an import error saying there is no module named "wx". Why do I need to include the word python in my command? PS: the most helpful answer I found was "The Python used for the REPL is not the same as the Python the script is being run in. Print sys.executable to verify." but I don't understand what that means.
importing the wx module in python
0
0
0
383