Dataset schema (column: type, value range or string lengths):
Q_Id: int64 (2.93k to 49.7M)
CreationDate: string (length 23)
Users Score: int64 (-10 to 437)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
DISCREPANCY: int64 (0 to 1)
Tags: string (lengths 6 to 90)
ERRORS: int64 (0 to 1)
A_Id: int64 (2.98k to 72.5M)
API_CHANGE: int64 (0 to 1)
AnswerCount: int64 (1 to 42)
REVIEW: int64 (0 to 1)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (lengths 15 to 5.1k)
Available Count: int64 (1 to 17)
Q_Score: int64 (0 to 3.67k)
Data Science and Machine Learning: int64 (0 to 1)
DOCUMENTATION: int64 (0 to 1)
Question: string (lengths 25 to 6.53k)
Title: string (lengths 11 to 148)
CONCEPTUAL: int64 (0 to 1)
Score: float64 (-1 to 1.2)
API_USAGE: int64 (1 to 1)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (15 to 3.72M)
13,593,922
2012-11-27T22:00:00.000
1
0
1
0
1
python,windows,command-prompt
0
13,594,149
0
4
0
false
0
0
Changing your PATH environment variable should do the trick. Some troubleshooting tips: Make sure you didn't just change the local variable, but the system variable as well, to reflect the new location. Make sure you restarted your command-line window (i.e. close "cmd" or command prompt and reopen it); this refreshes the system variables you just updated. Make sure you removed all references to C:\Python32\ or whatever the old path was (again, check both the local and system PATH; they are both found in the same environment variables window). Check that Python 3.2 is installed where you think it is: rename the directory to something like OLD_Python3.2, go to your CLI, and enter "python". Does it start up? If it does, is it 2.7 or 3.2? If it still starts, something is wrong with your PATH variable. If all else fails, reboot and try again; you might have some persistent environment variable (I don't see how that can be, but hey, we are brainstorming here!), and a reboot would give you a fresh start. If that doesn't work, then I'd think you are doing something else wrong (i.e. user error). CMD has to know where to look for Python before it can execute it, and it knows this from your PATH variable. Now granted, I work almost exclusively in 2.6/2.7, so if they did something to the registry (which I doubt), I wouldn't know about that. Good luck!
1
2
0
0
My command prompt is currently running Python 3.2 by default. How do I set it up to run Python 2.7 by default? I have changed the PATH variable to point to Python 2.7, but that did not work. UPDATE: It still does not work. :( It still runs Python 3; to be specific, it runs Python 3 when I am trying to install Flask, which is what I want to do. More generally, when I simply type python into the command line, it does nothing: I get a "'python' is not recognized as an internal or external command, operable program, or batch file" error. No idea what to do.
Command Prompt: Set up for Python 2.7 by default
0
0.049958
1
0
0
8,222
13,606,584
2012-11-28T13:47:00.000
0
0
0
1
1
python,networking,httplib
1
13,643,155
0
3
0
false
0
0
After a lot more research, the glibc problem jedwards suggested seemed to be the cause. I did not find a solution, but made a workaround for my use case. Since I only use one URL, I added my own "resolv file": a small daemon gets the IP address of the URL when the PHY reports that the cable is connected. This IP is saved to "my own resolv.conf", and from this file the Python script retrieves the IP to use for posts. Not really a good solution, but a solution.
1
1
0
0
I hope this doesn't cross into superuser territory. I have an embedded Linux system, where system processes are naturally quite stripped down. I'm not quite sure which system process monitors the physical layer and starts a DHCP client when the network cable is plugged in, so I made one myself. The problem is that if I have a Python script using HTTP connections running before I have an IP address, it will never get a connection. Even after I have a valid IP, the Python script still gets "Temporary error in name resolution". So how can I get Python to recognize the new connection without restarting the script? Alternatively, am I missing some normal procedure Linux runs at network cable connect? The DHCP client I am using is udhcpc and the Python version is 2.6, using httplib for connections.
Python not getting IP if cable connected after script has started
0
0
1
0
1
772
13,609,985
2012-11-28T16:38:00.000
0
0
0
0
0
javascript,python,html,django,web-applications
0
13,610,314
0
2
0
false
1
0
As Sanjay says, prefer in-memory solutions (online statuses have a quite brief useful life) like the Django cache (Redis or Memcached). If you want a simple way of updating the online status of a user on an already loaded web page, use any lib like jQuery to AJAX-poll a URL giving the status of a user, then update the tiny bit of your page showing the wanted status. Don't poll this page too often; once every 15 seconds seems reasonable.
2
1
0
0
I am quite new to web development and am working on a social networking site. I want to add functionality to show whether a person is online. One of the ways I figure I could do this is by keeping an online-status bit in the database. My question is how to do it dynamically: say the page is loaded and a user (say, a connection) comes online; how do I dynamically change the status of that connection on that page? I wanted to know if there are any tools (libraries) available for this type of tracking. My site is in Python using the Django framework. I think something can be done using JavaScript/jQuery. I want to know if I am going in the right direction, or is there anything else I should look into?
Tracking online status?
1
0
1
0
0
302
13,609,985
2012-11-28T16:38:00.000
1
0
0
0
0
javascript,python,html,django,web-applications
0
13,610,508
0
2
0
false
1
0
Create a new model with a last_activity DateTimeField and a OneToOneField to User. Alternatively, if you are subclassing User, using a custom User in Django 1.5, or using a user profile, just add the field to that model. Write a custom middleware that automatically updates the last_activity field for each user on every request. Write an is_online method in one of your models that uses a timedelta to determine a user's inactivity period and return a boolean for whether they are online; for example, if their last_activity was more than 15 minutes ago, return False. Write a view that is polled via jQuery AJAX to return a particular user's online status.
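A framework-agnostic sketch of the inactivity check described above (the model, middleware, and field names are assumptions; only the timedelta logic is shown):

```python
from datetime import datetime, timedelta

# Hypothetical threshold: a user counts as "online" if their
# last_activity timestamp is at most 15 minutes old.
ONLINE_WINDOW = timedelta(minutes=15)

def is_online(last_activity, now=None):
    # last_activity would come from the DateTimeField that the
    # middleware updates on every request.
    now = now or datetime.utcnow()
    return (now - last_activity) <= ONLINE_WINDOW

now = datetime(2012, 11, 28, 12, 0)
print(is_online(datetime(2012, 11, 28, 11, 50), now))  # True  (10 min ago)
print(is_online(datetime(2012, 11, 28, 11, 30), now))  # False (30 min ago)
```

The polled view would simply call this method and return the boolean as JSON.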
2
1
0
0
I am quite new to web development and am working on a social networking site. I want to add functionality to show whether a person is online. One of the ways I figure I could do this is by keeping an online-status bit in the database. My question is how to do it dynamically: say the page is loaded and a user (say, a connection) comes online; how do I dynamically change the status of that connection on that page? I wanted to know if there are any tools (libraries) available for this type of tracking. My site is in Python using the Django framework. I think something can be done using JavaScript/jQuery. I want to know if I am going in the right direction, or is there anything else I should look into?
Tracking online status?
1
0.099668
1
0
0
302
13,611,126
2012-11-28T17:38:00.000
1
0
0
0
0
python,opencv,machine-learning,computer-vision,object-detection
0
13,612,350
0
1
0
true
0
0
This looks like you first need to determine what features you would like to train your classifier on, since the haar classifier benefits from those extra features. From there you will need to train the classifier: this requires a lot of images that contain cars and a lot that do not, and running the trainer over them so it can tweak its thresholds to classify as well as it can with your selected features. To get a better classifier you will have to figure out the order of your features, and the optimal order in which to combine them, to further examine the object and determine whether it is in fact what you are looking for. Again, this will require a lot of examples for your particular features and your problem as a whole.
1
1
1
1
I am having a little bit of trouble creating a haar classifier. I need to build a classifier to detect cars. At the moment I have made a program in Python that reads in an image and lets me draw a rectangle around the area the object is in; once the rectangle is drawn, it outputs the image name and the top-left and bottom-right coordinates of the rectangle. I am unsure of where to go from here and how to actually build up the classifier. Can anyone offer me any help? EDIT: I am looking for help on how to use opencv_traincascade. I have looked at the documentation but I can't quite figure out how to use it to create the xml file to be used in the detection program.
Creating a haar classifier using opencv_traincascade
0
1.2
1
0
0
1,867
13,619,088
2012-11-29T04:55:00.000
2
0
1
1
0
python,virtualenv
0
13,619,252
0
2
1
true
1
0
Tell Eclipse or Idle that the Python interpreter is django_venv/bin/python instead of /usr/bin/python.
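A quick way to verify which interpreter the IDE actually picked up is to run a two-line script from inside it; when the virtualenv's interpreter is configured, sys.prefix points at the virtualenv root rather than the system Python:

```python
import sys

# Run this from inside the IDE (or from django_venv/bin/python):
# sys.executable is the interpreter binary in use, and sys.prefix
# is the environment root it belongs to.
print(sys.executable)
print(sys.prefix)
```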
1
3
0
0
When I enter my virtual environment (source django_venv/bin/activate), how do I make that environment transfer to apps run outside the terminal, such as Eclipse or even Idle? Even if I run Idle from the virtualenv terminal window command line (by typing idle), none of my pip installed frameworks are available within Idle, such as SQLAlchemy (which is found just fine when running a python script from within the virtual environment).
Virtualenv and python - how to work outside the terminal?
0
1.2
1
0
0
2,074
13,622,895
2012-11-29T09:45:00.000
1
0
0
1
0
python,google-app-engine,web2py
0
13,892,159
0
1
0
false
1
0
Mutual exclusion is already built into the DBMS, so we just have to use that. Let's take an example. First, your table in the model should be defined in such a way that your room number is unique (use a UNIQUE constraint). When User1 and User2 both query for a room, they should get a response saying the room is vacant. When both users send the "BOOK" request for that room at the same time, the booking function should directly insert both users' "BOOK" requests into the db, but only one will actually succeed (because of the UNIQUE constraint) and the other will produce a DAL exception. Catch the exception and respond to the user whose "BOOK" request was unsuccessful, saying "You just missed this room by an instant :-)". Hope this helped.
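The insert-and-catch pattern above can be illustrated with sqlite3 (an in-memory stand-in for the real database; the table and column names are made up):

```python
import sqlite3

# The UNIQUE constraint on room is what provides the mutual exclusion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE booking (room INTEGER UNIQUE, user_name TEXT)")

def book(room, user_name):
    # Insert directly instead of check-then-insert; the constraint
    # arbitrates between two near-simultaneous requests.
    try:
        with conn:
            conn.execute("INSERT INTO booking VALUES (?, ?)", (room, user_name))
        return True
    except sqlite3.IntegrityError:
        return False  # "You just missed this room by an instant"

print(book(101, "User1"))  # True
print(book(101, "User2"))  # False
```

The same idea carries over to the web2py DAL: let the insert fail and catch the exception, rather than querying first.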
1
0
0
0
I'm making a room reservation system in Web2Py over Google App Engine. When a user is booking a room, the system must be sure that the room is really available and that no one else reserved it just a moment before. To be sure, I make a query to see if the room is available, then I make the reservation. The problem is: how can I do this transaction with a kind of mutual exclusion, to be sure that this room is really for this user? Thank you!! :)
Transactions in Web2Py over Google App Engine
0
0.197375
1
0
0
192
13,628,190
2012-11-29T14:42:00.000
0
0
0
0
0
java,python,cookies,http-headers,httpwebrequest
0
13,628,291
0
1
0
true
1
0
When you send the login information (and usually in response to many other requests), the server will set some cookies on the client; you must keep track of them and send them back to the server with each subsequent request. A full implementation would also keep track of how long they are supposed to be stored.
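As for feeding a browser cookie into your own request: in the Python 3 standard library you can parse a raw Set-Cookie value and rebuild a Cookie request header from it (the cookie name and value here are made up):

```python
from http.cookies import SimpleCookie

# A raw Set-Cookie value copied from the browser's dev tools.
raw = "sessionid=abc123; Path=/; HttpOnly"

jar = SimpleCookie()
jar.load(raw)

# Turn the stored cookies back into a Cookie header value for the
# next request to the same site.
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print(cookie_header)  # sessionid=abc123
```

You would then send `cookie_header` as the `Cookie` header of your HTTP request.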
1
0
0
0
I have a crawler that automates the login and crawling for a website, but since the login was changed it is not working anymore. I am wondering: can I feed the browser cookie (i.e., I manually log in) to my HTTP request? Is there anything wrong in principle that would keep this from working? How do I find the browser cookies relevant to the website? And if it works, how do I get the "raw" cookie strings I can stick into my HTTP request? I am quite new to this area, so forgive my ignorant questions. I can use either Python or Java.
How to use browser cookies programmatically
0
1.2
1
0
1
925
13,654,122
2012-11-30T22:29:00.000
2
0
1
0
0
python
0
62,995,730
0
10
0
false
0
0
You can try the following as well: import os; print(os.environ['USERPROFILE']). The advantage of this is that you directly get an output like: C:\Users\user_name
3
39
0
0
I know that using the getpass.getuser() command I can get the username, but how can I insert it into the following script automatically? So I want Python to find the username and then use it in the script itself. Script: os.path.join('..','Documents and Settings','USERNAME','Desktop') (Python version 2.7 being used)
How to make Python get the username in windows and then implement it in a script
0
0.039979
1
0
0
107,727
13,654,122
2012-11-30T22:29:00.000
46
0
1
0
0
python
0
29,154,027
0
10
0
false
0
0
os.getlogin() did not exist for me. I had success with os.getenv('username') however.
3
39
0
0
I know that using the getpass.getuser() command I can get the username, but how can I insert it into the following script automatically? So I want Python to find the username and then use it in the script itself. Script: os.path.join('..','Documents and Settings','USERNAME','Desktop') (Python version 2.7 being used)
How to make Python get the username in windows and then implement it in a script
0
1
1
0
0
107,727
13,654,122
2012-11-30T22:29:00.000
72
0
1
0
0
python
0
13,654,236
0
10
0
true
0
0
os.getlogin() returns the user that is executing the script, so it can be: path = os.path.join('..','Documents and Settings',os.getlogin(),'Desktop') or, using getpass.getuser(): path = os.path.join('..','Documents and Settings',getpass.getuser(),'Desktop') — if I understand what you asked.
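Put together as a runnable snippet (getpass.getuser() is generally the more portable of the two calls, since it falls back through several environment variables):

```python
import getpass
import os

# Resolve the current user's name and splice it into the path,
# exactly as in the question's hard-coded 'USERNAME' placeholder.
user = getpass.getuser()
path = os.path.join('..', 'Documents and Settings', user, 'Desktop')
print(path)
```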
3
39
0
0
I know that using getpass.getuser() command, I can get the username, but how can I implement it in the following script automatically? So i want python to find the username and then implement it in the following script itself. Script: os.path.join('..','Documents and Settings','USERNAME','Desktop')) (Python Version 2.7 being used)
How to make Python get the username in windows and then implement it in a script
0
1.2
1
0
0
107,727
13,655,152
2012-12-01T00:34:00.000
0
0
0
0
0
python,wxpython
0
13,687,868
0
2
0
false
0
1
Probably the only way to do it with the standard wx.Menu is to destroy and recreate the entire menubar, although you might be able to Hide it. Either way, I think it would be easiest to put together a set of methods that creates each menubar on demand; then you can destroy one and create the other. You might also take a look at FlatMenu, since it is pure Python and easier to hack.
1
1
0
0
I'm writing a document-based application in wxPython, by which I mean that the user can have multiple documents open at once in multiple windows or tabs. There are multiple kinds of documents, and the documents can all be in different "states", meaning that different menu options should be available in the main menu. I know how to disable and enable menu items using the wx.EVT_UPDATE_UI event, but I can't figure out how to pull off a main menu that changes structure and content drastically based on which document currently has focus. One of my main issues is that the main menu is created in the top-level window, and it has to invoke methods in grandchildren and great-grandchildren that haven't even been created yet. Contrived example: when a document of type "JPEG" is open, the main menu should look like: File Edit Compression Help. And when the user switches focus (CTRL+Tab) to a document of type "PDF", the main menu should change to: File Edit PDF Publish Help. And the "Edit" menu should contain some different options from when the "JPEG" document was in focus. Currently I'm just creating the menu in a function called create_main_menu in the top-level window, and the document panels have no control over it. What would be necessary to pull off the kind of main menu scheme I describe above, specifically in wxPython?
How do I manage a dynamic, changing main menu in wxPython?
0
0
1
0
0
2,119
13,656,736
2012-12-01T05:34:00.000
1
0
0
0
1
python,architecture,rabbitmq,web-frameworks,gevent
0
13,775,086
0
1
1
false
1
0
My first thought is that you could use a service-oriented architecture to separate these tasks. Each of these services could run a Flask app on a separate port (or machine, or pool of machines) and communicate with the others using simple HTTP. The breakdown might go something like this: GameService handles incoming connections from players and communicates with them through socketio. GameFinderService accepts POST requests from GameService to start looking for games for player X, and GET requests from GameService to get the next best game for player X; you could use Redis as a backing store for this short-lived queue of games per connected player, which gets updated each time GameStatusService (below) notifies us of a change. GameStatusService monitors in-progress games via UDP and, when a notable event occurs (e.g. a new game is created, a player disconnects, etc.), notifies GameFinderService of the change; GameFinderService would then update its queues appropriately for each connected player. Redis is really nice because it serves as a data-structure store that lets you maintain both short- and long-lived data structures, such as queues, without too much overhead.
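The per-player queue at the heart of GameFinderService can be sketched with an in-memory stand-in for Redis (class and method names are illustrative, not part of any real API):

```python
from collections import defaultdict, deque

class GameFinder:
    """In-memory stand-in for the Redis-backed GameFinderService."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def notify(self, player, game):
        # Called when GameStatusService reports a change.
        self.queues[player].append(game)

    def next_game(self, player):
        # GET from GameService: next best game, or None if nothing queued.
        q = self.queues[player]
        return q.popleft() if q else None

finder = GameFinder()
finder.notify("playerX", "game-42")
print(finder.next_game("playerX"))  # game-42
print(finder.next_game("playerX"))  # None
```

In production each queue would live in Redis so that multiple GameService instances could share it.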
1
2
0
0
I'm creating a website that allows players to queue to find similarly skilled players for a multiplayer video game. Simple web backends only modify a database and create a response using a template, but in addition to that, my backend has to: Communicate with players in real-time (via gevent-socketio) while they queue or play Run calculations in the background to find balanced games, slowly compromising game quality as waiting time grows (and inform players via SocketIO when a game has been found) Monitor in progress games via a UDP socket (and if a player disconnects, ask the queue for a substitute) and eventually update the database with the results I know how I would do these things individually, but I'm wondering how I should separate these components and let them communicate. I imagine that my web framework (Flask) shouldn't be very involved at all in these other things. Since I already must use gevent, I'm currently planning to start separate greenlets for each of these tasks. This will work for all my tasks (with the possible exception of the calculations) because they will usually be waiting for something to happen. However, this won't scale at all because I can't run more Flask instances. Everything would be dependent on the greenlets running in just a single thread. So is this the best way? Is there another way to handle separating these tasks (especially with languages I might use in the future that don't have coroutines)? I've heard of RabbitMQ/ZeroMQ and Celery and other such tools, but I wasn't sure how and whether to use them to solve this problem.
When a web backend does more than simply reply to requests, how should my application be structured?
0
0.197375
1
0
0
183
13,662,400
2012-12-01T18:30:00.000
0
0
1
0
0
python,multithreading,asynchronous
0
13,663,583
0
1
0
false
0
0
I didn't really understand what sort of application it's going to be, but I tried to answer your questions: create a thread that queries and then sleeps for a while; create a thread for each user, and close it when the user is gone; create a thread that downloads and then stops. After all, there aren't going to be 500 threads.
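The "query, then sleep for a while" thread can be sketched like this (the appended string is a stand-in for a real Facebook/Twitter query; the interval is made short only so the example finishes quickly):

```python
import threading
import time

results = []

def poll(stop, interval=0.01):
    # Background worker: run one query, then sleep until the next
    # round or until the main thread asks it to stop.
    while not stop.is_set():
        results.append("checked messages")  # stand-in for the real query
        stop.wait(interval)

stop = threading.Event()
worker = threading.Thread(target=poll, args=(stop,))
worker.start()
time.sleep(0.05)   # let it poll a few times
stop.set()         # ask the worker to finish
worker.join()
print(len(results) > 0)  # True
```

Using an Event for both sleeping and shutdown means the thread stops promptly instead of finishing a full sleep.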
1
0
0
0
I'm developing a Python application that "talks" to the user and performs tasks based on what the user says (e.g. User: "Do I have any new Facebook messages?" Answer: "Yes, you have 2 new messages. Would you like to see them?"). Functionality like integration with Facebook or Twitter is provided by plugins. Based on predefined parsing rules, my application calls the plugin with the parsed arguments and uses its response. The application needs to be able to answer multiple queries from different users at the same time (or practically the same time). Currently, I need to call a function, "Respond", with the user input as argument. This has some disadvantages, however: i) The application can only "speak when it is spoken to"; it can't decide to query Facebook for new messages, and tell the user whether there are any, without being told to do that. ii) Having a conversation with multiple users at a time is very hard, because the application can only do one thing at a time: if Alice asks the application to check her Facebook for new messages, Bob can't communicate with the application. iii) I can't develop (and use) plugins that take a lot of time to complete, e.g. downloading a movie, because the application isn't able to do anything while the previous task isn't completed. Multithreading seems like the obvious way to go here, but I'm worried that creating and using 500 threads at a time dramatically impacts performance, so using one thread per query (a query is a statement from the user) doesn't seem like the right option. What would be the right way to do this? I've read a bit about Twisted, and the "reactor" approach seems quite elegant. However, I'm not sure how to implement something like that in my application.
Multithreading or how to avoid blocking in a Python-application
0
0
1
0
0
190
13,664,861
2012-12-01T23:34:00.000
0
0
1
0
0
python,recursion,artificial-intelligence,greedy
0
13,665,044
0
3
0
false
0
0
You could actually brute force the game, and prove that every time there is a winning strategy, your A.I. picks the correct move. Then, you could prove that for every position, your A.I. picks the move which maximizes the chances of having a winning strategy, assuming the other player is playing randomly. There are not that many possibilities, so you should be able to eliminate all of them. You could also significantly diminish the space of possibilities by assuming the other player is actually slightly intelligent, e.g. always tries to block a move which results in immediate victory.
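The brute-force check can be grounded with a plain minimax over the full game tree: a value of 1 means X can force a win, -1 means O can, and 0 means best play from both sides is a draw (the board encoding here is illustrative):

```python
# All eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(board):
        return 0  # full board, no winner: draw
    values = []
    for move in range(9):
        if not board[move]:
            nxt = list(board)
            nxt[move] = player
            values.append(minimax(tuple(nxt), 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

val = minimax((None,) * 9, 'X')
print(val)  # 0: perfect play from both sides is a draw
```

Comparing your A.I.'s chosen move against the minimax value at every position is one way to prove it never throws away a winning strategy.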
2
2
0
0
I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move. (Even if moves are equally correct, it chooses the same one every time; it does not pick a random one.) I also made a function that loops through all possible plays made against the A.I. So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, and calls the recursive function on itself with a new board for each possible move. I do this for when the A.I. goes first and for when the other one goes first, and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses. But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
How can I test if my Tic Tac Toe A.I. is perfect?
0
0
1
0
0
1,040
13,664,861
2012-12-01T23:34:00.000
0
0
1
0
0
python,recursion,artificial-intelligence,greedy
0
18,435,798
0
3
0
false
0
0
One issue with akaRem's answer is that an optimal player shouldn't look like the overall distribution. For example, a player that I just wrote wins about 90% of the time against someone playing randomly and ties 10% of the time. You should only expect akaRem's statistics to match if you have two players against each other playing randomly. Two optimal players would always result in a tie.
2
2
0
0
I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move. (Even if moves are equally correct, it chooses the same one every time; it does not pick a random one.) I also made a function that loops through all possible plays made against the A.I. So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, and calls the recursive function on itself with a new board for each possible move. I do this for when the A.I. goes first and for when the other one goes first, and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses. But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
How can I test if my Tic Tac Toe A.I. is perfect?
0
0
1
0
0
1,040
13,667,690
2012-12-02T08:33:00.000
0
0
1
1
0
python,ide,pymongo
0
32,381,638
0
2
0
false
0
0
If you are on the Windows platform, just install the pymongo .exe file and it will install into the Python directory. Then you will be able to access it in any IDE, such as PyCharm, by typing: import pymongo
1
0
0
0
I am a Python newbie. I want to use the pymongo library to access MongoDB using some convenient IDE, and after looking through the web I decided to use Wing. Can someone point out how to add the pymongo library to the Wing IDE (or to any other IDE, for that matter)? I want to get auto-completion for commands. Thanks
Adding PyMongo to Python IDE
0
0
1
0
0
666
13,675,440
2012-12-03T00:05:00.000
4
1
0
0
0
python,nginx,nosql,openbsd
0
13,675,611
0
2
0
false
0
0
My advice: if you don't know how to use these technologies, don't do it. A few servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out, do it, one by one, not everything at once. There is no magic solution for how to use them.
2
1
0
0
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of HTTP requests and stream videos (mostly from a provider like YouTube or Vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to Linux. The reason we want to use OpenBSD is that it's well known for its security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to handle more HTTP requests than Apache. The reason we want to use NoSQL is that MySQL is known to have scalability problems as the database grows. We want the web pages to load as fast as possible (caching and CDNs will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD, Nginx, Python, NoSQL) instead of the traditional LAMP (Linux, Apache, MySQL, PHP). We're not a very big company, so we're using open-source technologies. Any suggestion on how to use this software as a platform is appreciated; hardware suggestions are also appreciated. Any criticism is also welcome.
How to utilize OpenBSD, Nginx, Python and NoSQL
0
0.379949
1
1
0
1,447
13,675,440
2012-12-03T00:05:00.000
1
1
0
0
0
python,nginx,nosql,openbsd
0
13,676,002
0
2
0
false
0
0
I agree with wdev: the time it takes to learn all this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress uses MySQL, and some of the world's largest websites use it (Google for a list). I can say the same of Linux and PHP. If you design your site using best practices (CSS sprites), Apache versus Nginx will not make a considerable difference in load times if you use a CDN and best practices (caching, gzip, etc.). I strongly urge you to reconsider your decisions; they seem very ill-advised.
2
1
0
0
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of HTTP requests and stream videos (mostly from a provider like YouTube or Vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to Linux. The reason we want to use OpenBSD is that it's well known for its security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to handle more HTTP requests than Apache. The reason we want to use NoSQL is that MySQL is known to have scalability problems as the database grows. We want the web pages to load as fast as possible (caching and CDNs will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD, Nginx, Python, NoSQL) instead of the traditional LAMP (Linux, Apache, MySQL, PHP). We're not a very big company, so we're using open-source technologies. Any suggestion on how to use this software as a platform is appreciated; hardware suggestions are also appreciated. Any criticism is also welcome.
How to utilize OpenBSD, Nginx, Python and NoSQL
0
0.099668
1
1
0
1,447
13,702,106
2012-12-04T11:42:00.000
1
0
1
0
1
python,qt,windows-8,pyside,stackless
1
16,126,325
0
2
0
false
0
1
I had the same problem with PySide 1.1.2 and Qt 4.8.4. The solution for me was to set the compatibility mode of the Python executable to Windows 7 via right-click on the executable -> Properties -> Compatibility -> "Run this program in compatibility mode for: Windows 7". Hope that helps.
2
2
0
0
I cannot get my code to run on my Win8 laptop. I am working with a combination of: Stackless Python 2.7.2, Qt 4.8.4, PySide 1.1.2, Eclipse/PyDev and WingIDE. This works well on my Win7 PC, but now I have bought a demo laptop with Windows 8. As far as I know everything is installed the same way as on my PC. When I run my program (same code) now, I get a warning: "Qt: Untested Windows version 6.2 detected!" OK, so that could be the source of my problem, but I also get errors: sometimes the program just quits after the warning above (I think only in Eclipse); sometimes I get an APPCRASH (I think only in Eclipse); sometimes I get the exception: TypeError: Error when calling the metaclass bases: mro() returned base with unsuitable layout (''); sometimes I get the exception: TypeError: Error when calling the metaclass bases: multiple bases have instance lay-out conflict. Especially the last two don't seem like a Windows problem, but I don't see any other difference from my Win7 install. Does anyone have any idea what is going on or how to fix this? Did I miss a step in the installation, or is it some incompatibility maybe? Cheers, Lars. Does anyone have some input on this?
windows 8 incompatibility?
1
0.099668
1
0
0
2,146
13,702,106
2012-12-04T11:42:00.000
0
0
1
0
1
python,qt,windows-8,pyside,stackless
1
13,881,726
0
2
0
false
0
1
Try using Hyper-V. Note, however, that Hyper-V is not installed by default in Windows 8; you need to enable it under "Turn Windows features on or off."
2
2
0
0
I cannot get my code to run on my Win8 laptop. I am working with a combination of: Stackless Python 2.7.2, Qt 4.8.4, PySide 1.1.2, Eclipse/PyDev and WingIDE. This works well on my Win7 PC, but now I have bought a demo laptop with Windows 8. As far as I know everything is installed the same way as on my PC. When I run my program (same code) now, I get a warning: "Qt: Untested Windows version 6.2 detected!" OK, so that could be the source of my problem, but I also get errors: sometimes the program just quits after the warning above (I think only in Eclipse); sometimes I get an APPCRASH (I think only in Eclipse); sometimes I get the exception: TypeError: Error when calling the metaclass bases: mro() returned base with unsuitable layout (''); sometimes I get the exception: TypeError: Error when calling the metaclass bases: multiple bases have instance lay-out conflict. Especially the last two don't seem like a Windows problem, but I don't see any other difference from my Win7 install. Does anyone have any idea what is going on or how to fix this? Did I miss a step in the installation, or is it some incompatibility maybe? Cheers, Lars. Does anyone have some input on this?
windows 8 incompatibility?
1
0
1
0
0
2,146
13,711,765
2012-12-04T20:56:00.000
0
0
1
1
0
python
0
13,711,840
0
2
0
false
0
0
Install both Pythons and change the PATH in Windows. By default they install into directories like C:\Python27 and C:\Python32. Since Windows stops searching as soon as it finds the first python on the PATH, put the version you want as the default first, e.g. C:\Python27 before C:\Python32; this way you can run both of them.
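The "first match on PATH wins" behavior can be sketched with a tiny simulation (the directory names are the usual install defaults, not guaranteed):

```python
# Windows resolves "python" by scanning PATH entries left to right and
# running the first match, so the order of the two install directories
# decides which version is the default.
def default_python(path_value, python_dirs):
    for entry in path_value.split(';'):  # ';' is the Windows PATH separator
        if entry in python_dirs:
            return entry
    return None

dirs = {r'C:\Python27', r'C:\Python32'}
print(default_python(r'C:\Python27;C:\Python32', dirs))  # C:\Python27
print(default_python(r'C:\Python32;C:\Python27', dirs))  # C:\Python32
```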
1
1
0
0
I try to save a Python script as a shortcut that I want to run. It opens but then closes right away. I know why it is doing this: it is opening my Windows command line in Python 3.2, but the script is in Python 2.7. I need both versions on my PC; my question is how do I change the cmd default. I have tried "open with" on the shortcut icon and it just continues to default to 3.2. Help please
changing python windows command line
1
0
1
0
0
538
13,722,760
2012-12-05T11:57:00.000
4
1
1
0
0
python
0
13,722,800
0
2
0
false
0
0
Unit testing is the best way to handle this. If you think the testing is taking too much time, ask yourself how much time you are losing on defects - identifying, diagnosing and rectifying - after you have released the code. In effect, you are testing in production, and there's plenty of evidence to show that defects found later in the development cycle can be orders of magnitude more expensive to fix.
1
0
0
0
I've always had trouble with dynamic languages like Python. Several troubles: Typo errors: I can use pylint to reduce some of these, but there are still errors that pylint cannot figure out. Object type errors: I often forget what type a parameter is - int? str? some object? I also forget the types of some objects in my code. Unit tests might help me sometimes, but I don't always have enough time to do them. When I need a script to do a small job, it is 100-200 lines of code - not big - but I don't have time to write unit tests, because I need to use the script as soon as possible. So, many errors appear. So, any ideas on how to reduce the number of these troubles?
How to reduce errors in dynamic language such as python, and improve my code quality?
0
0.379949
1
0
0
166
13,728,325
2012-12-05T16:54:00.000
1
1
0
1
0
python,z3
0
13,730,652
0
2
0
true
0
0
Yes, you can do it by including the build directory in your LD_LIBRARY_PATH and PYTHONPATH environment variables.
1
2
0
0
I'm trying to use Z3 from its python interface, but I would prefer not to do a system-wide install (i.e. sudo make install). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's python directory. Best case, I would like run z3 directly from the build directly, in the same way I use the z3 binary (build/z3). Does anyone know how to, or have script, to run the z3py directly from the build directory, without doing an install?
Can I use Z3Py withouth doing a system-wide install?
0
1.2
1
0
0
540
13,736,310
2012-12-06T03:07:00.000
1
0
1
1
0
python,loops
0
13,736,441
0
1
0
false
0
0
The simplest solution is to add a variable outside of the loop which stores the last time the data size was checked. Every time through your loop, compare the current time against that stored time and check whether more than X time has elapsed.
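The timestamp approach the answer describes might look like this minimal sketch (the interval constant and function name are illustrative, not from the question):

```python
import time

CHECK_INTERVAL = 24 * 60 * 60  # run the file-size check once per day
last_check = time.time()       # time of the most recent check

def due_for_check(now=None):
    """Return True if CHECK_INTERVAL has elapsed since the last check."""
    global last_check
    if now is None:
        now = time.time()
    if now - last_check >= CHECK_INTERVAL:
        last_check = now  # reset the reference point for the next day
        return True
    return False

# In the main loop: call due_for_check() each cycle; it is cheap, and
# only returns True (triggering the trim of old data) once per day.
```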
1
0
0
0
I'm just starting out with Python, and I need help understanding how to do the main loop of my program. I have a source file with two columns of data, temperature & time. This file gets updated every 60 seconds by a bash script. I successfully wrote these separate programs: 1. A program that can read the last 1440 lines of the source data and plot a day graph. 2. A program that can read the last 10080 lines of the source data and plot a week graph. 3. A program that can read the source data and just display the last recorded temperature. 4. A program that checks the size of the source file and deletes data over X days old. I want to put it all together so that a user can toggle between the 3 different display types. I understand that this would work in a main loop, with the input checked in the loop and a function called based on what is returned. But I don't know how to handle the file size check. I don't want it checked every time the loop cycles; I would like it to run once a day. Thanks in advance!
Understanding an element of the main loop
0
0.197375
1
0
0
70
13,748,166
2012-12-06T16:33:00.000
7
0
0
0
1
python,django,multithreading,session,race-condition
0
13,924,932
1
3
1
true
1
0
Yes, it is possible for a request to start before another has finished. You can check this by printing something at the start and end of a view and launching a bunch of requests at the same time. Indeed the session is loaded before the view and saved after the view. You can reload the session using request.session = engine.SessionStore(session_key) and save it using request.session.save(). Reloading the session, however, discards any data added to the session before that (in the view or before it), and saving before reloading would defeat the point of loading late. A better way would be to save the files to the database as a new model. The essence of the answer is in the discussion of Thomas' answer, which was incomplete, so I've posted the complete answer.
1
8
0
0
Summary: is there a race condition in Django sessions, and how do I prevent it? I have an interesting problem with Django sessions which I think involves a race condition due to simultaneous requests by the same user. It has occured in a script for uploading several files at the same time, being tested on localhost. I think this makes simultaneous requests from the same user quite likely (low response times due to localhost, long requests due to file uploads). It's still possible for normal requests outside localhost though, just less likely. I am sending several (file post) requests that I think do this: Django automatically retrieves the user's session* Unrelated code that takes some time Get request.session['files'] (a dictionary) Append data about the current file to the dictionary Store the dictionary in request.session['files'] again Check that it has indeed been stored More unrelated code that takes time Django automatically stores the user's session Here the check at 6. will indicate that the information has indeed been stored in the session. However, future requests indicate that sometimes it has, sometimes it has not. What I think is happening is that two of these requests (A and B) happen simultaneously. Request A retrieves request.session['files'] first, then B does the same, changes it and stores it. When A finally finishes, it overwrites the session changes by B. Two questions: Is this indeed what is happening? Is the django development server multithreaded? On Google I'm finding pages about making it multithreaded, suggesting that by default it is not? Otherwise, what could be the problem? If this race condition is the problem, what would be the best way to solve it? It's an inconvenience but not a security concern, so I'd already be happy if the chance can be decreased significantly. Retrieving the session data right before the changes and saving it right after should decrease the chance significantly I think. 
However I have not found a way to do this for the request.session, only working around it using django.contrib.sessions.backends.db.SessionStore. However I figure that if I change it that way, Django will just overwrite it with request.session at the end of the request. So I need a request.session.reload() and request.session.commit(), basically.
Django session race condition?
0
1.2
1
0
0
1,895
13,772,857
2012-12-07T23:52:00.000
0
0
0
0
0
python,mysql,flot
0
13,774,224
0
1
0
false
0
0
Install an httpd server. Install PHP. Write a PHP script to fetch the data from the database and render it as a web page. This is a fairly elaborate request with relatively few details given. More information will allow us to give better answers.
1
0
1
0
So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB. Table: sensors. Each record contains 3 columns. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format: 23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123 I am completely lost on how to complete this project. I have read many pages showing examples but don't really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server and that is where I stand. Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step-by-step for me to follow? Thanks for your help
Plotting data using Flot and MySQL
0
0
1
1
0
428
13,776,973
2012-12-08T11:20:00.000
1
0
1
1
0
python,bash,shell,race-condition
0
13,777,096
0
3
0
false
0
0
The only sure way that no two scripts will act on the same file at the same time is to employ some kind of file locking mechanism. A simple way to do this could be to rename the file before beginning work, by appending some known string to the file name. The work is then done and the file deleted. Each script tests the file name before doing anything, and moves on if it is 'special'. A more complex approach would be to maintain a temporary file containing the names of files that are 'in process'. This file would obviously need to be removed once everything is finished.
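The rename-based claim the answer describes could be sketched like this (the suffix is an arbitrary marker; this relies on os.rename being atomic, which holds on POSIX filesystems within one filesystem):

```python
import os

def claim_file(path, suffix=".inprogress"):
    """Try to claim a file by renaming it with a marker suffix.

    os.rename() is atomic on POSIX, so at most one worker succeeds;
    the others get an OSError and simply move on to the next file.
    Returns the claimed path, or None if another process got there first
    (or the file was already deleted).
    """
    claimed = path + suffix
    try:
        os.rename(path, claimed)
        return claimed
    except OSError:
        return None
```

Each worker would list the directory, skip names ending in the suffix, and only process a file when claim_file() returns a path; after processing, it deletes the claimed file.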
1
1
0
0
I have a directory with thousands of files and each of them has to be processed (by a python script) and subsequently deleted. I would like to write a bash script that reads a file in the folder, processes it, deletes it and moves onto another file - the order is not important. There will be n running instances of this bash script (e.g. 10), all operating on the same directory. They quit when there are no more files left in the directory. I think this creates a race condition. Could you give me an advice (or a code snippet) how to make sure that no two bash scripts operate on the same file? Or do you think I should rather implement multithreading in Python (instead of running n different bash scripts)?
Multiple processes reading&deleting files in the same directory
0
0.066568
1
0
0
1,245
13,780,732
2012-12-08T18:52:00.000
0
0
0
0
0
python,django,background-process,backend
0
13,783,426
0
3
0
false
1
0
Why don't you have a URL or Python script that triggers whatever sort of calculation you need every time it's run, and then fetch that URL or run that script via a cronjob on the server? From your question it doesn't seem like you need a whole lot more than that.
1
4
0
0
Newbie question about Django app design: I'm building a reporting engine for my web site, and I have a big (and growing) amount of data, and some algorithm which must be applied to it. The calculations promise to be heavy on resources, and it would be wasteful if they were performed by requests of users. So, I'm thinking of putting them into a background process which would execute continuously and from time to time return results, which could be fed to the Django view routine for producing HTML output on demand. And my question is - what is the proper design approach for building such a system? Any thoughts?
Django - how to set up asynchronous longtime background data processing task?
1
0
1
0
0
2,371
13,783,071
2012-12-08T23:33:00.000
4
0
0
0
0
python,list,numpy,min
0
37,094,880
0
2
0
false
0
0
numpy.argpartition(cluster, 3) would be much more effective.
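A short sketch of what the answer suggests (the sample values here are made up; in the question they would be a slice of sumErrors):

```python
import numpy as np

# Hypothetical error values standing in for one cluster slice.
cluster = np.array([5.2, 1.1, 9.7, 0.3, 4.4, 2.8])

# argpartition puts the indices of the 3 smallest values in the first
# 3 positions (in arbitrary order) without sorting the whole array.
lowest = np.argpartition(cluster, 3)[:3]

# Order those three indices by their actual values, if order matters.
lowest = lowest[np.argsort(cluster[lowest])]
print(lowest.tolist())  # indices of the three smallest values -> [3, 1, 5]
```

Because nothing is deleted from the array, the recovered indices still map back to the original IDs, which addresses the "without mutation" part of the question.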
1
14
1
0
So I have this list called sumErrors that's 16000 rows and 1 column, and this list is already presorted into 5 different clusters. What I'm doing is slicing the list for each cluster and finding the index of the minimum value in each slice. However, I can only find the first minimum index using argmin(). I don't think I can just delete the value, because otherwise it would shift the slices over, and the indices are what I have to recover the original IDs. Does anyone know how to get argmin() to spit out the indices of the lowest three? Or perhaps a more optimal method? Maybe I should just assign ID numbers, but I feel like there may be a more elegant method.
Finding the indices of the top three values via argmin() or min() in python/numpy without mutation of list?
0
0.379949
1
0
0
11,034
13,784,459
2012-12-09T03:52:00.000
0
1
1
1
0
python,linux,startup,runlevel
0
13,876,262
0
1
0
true
0
0
I may as well answer my own question with my findings. On Debian, Ubuntu and CentOS systems there is a file named /etc/rc.local. If you use Python's file I/O to edit that file, you can put in a command that will be run at the end of all the multi-user boot levels. This facility is still present on systems that use upstart. On BSD I have no idea - if you know how to make something run on startup there, please comment to improve this answer. Arch Linux and Fedora use systemd to start daemons - see the Arch wiki page for systemd. Basically you need to create a systemd service and symlink it. (Thanks Emil Ivanov)
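Editing /etc/rc.local from Python could be sketched like this (the daemon path is a placeholder; rc.local conventionally ends with an "exit 0" line, and the file must remain executable):

```python
def add_startup_command(rc_local_text, command):
    """Insert command just before the trailing 'exit 0' line, so it runs
    at the end of multi-user boot; append it if no such line exists."""
    lines = rc_local_text.splitlines()
    if "exit 0" in lines:
        lines.insert(lines.index("exit 0"), command)
    else:
        lines.append(command)
    return "\n".join(lines) + "\n"

# Usage (as root): read /etc/rc.local, transform the text, write it back.
```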
1
0
0
0
I would like to find out how to write Python code which sets up a process to run on startup, in this case runlevel 2. I have done some reading, yet it has left me unclear as to which method is most reliable on different systems. I originally thought I would just edit /etc/inittab with Python's file I/O, but then I found out that my computer's inittab was empty. What should I do? Which method of setting something to start on boot is most reliable? Does anyone have any code snippets lying around?
Programmatically setting a process to execute at startup (runlevel 2)?
0
1.2
1
0
0
302
13,798,520
2012-12-10T09:53:00.000
3
0
0
1
0
python,bash,upnp,server-push,samsung-smart-tv
0
13,798,803
0
1
0
true
0
0
You will still need a DLNA server to host your videos on. Via UPnP you only hand the URL to the TV, not the video directly. Once you have a video hosted on a DLNA server, you can find out its URL by playing it in Windows Media Player (which has DLNA support) or by using UPnP Inspector (which I recommend anyway if you are going to be working with UPnP). You can then push this URL to the TV, which will download and play the video if its format is supported. I do not know my way around Python, but since UPnP is HTTP-based, you will need to send an HTTP request with appropriate UPnP headers (see Wikipedia or test it yourself with UPnP Inspector) and the proper XML-formatted body for the function you are trying to use. The UPnP function I worked with to push a link to the TV is "SetAVTransportURI", but it might differ for your TV. Use UPnP Inspector to find the correct one, including its parameters. In summary: Get a DLNA server to host your videos on. Find out the links to those videos using UPnP Inspector or other DLNA clients. Find out the UPnP function that sends a URL to your TV (again, I recommend UPnP Inspector; you can explore and call all functions with it). Implement a call to that function in your script.
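The SetAVTransportURI call the answer mentions is a plain SOAP-over-HTTP POST. Building the request might look like this sketch (the control URL and exact service type vary per TV; read them from the device description XML, e.g. with UPnP Inspector):

```python
# Standard AVTransport:1 SOAP envelope for SetAVTransportURI.
SOAP_TEMPLATE = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetAVTransportURI xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>
      <CurrentURI>{uri}</CurrentURI>
      <CurrentURIMetaData></CurrentURIMetaData>
    </u:SetAVTransportURI>
  </s:Body>
</s:Envelope>"""

def build_set_uri_request(media_url):
    """Return (headers, body) for a SetAVTransportURI SOAP request."""
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION":
            '"urn:schemas-upnp-org:service:AVTransport:1#SetAVTransportURI"',
    }
    return headers, SOAP_TEMPLATE.format(uri=media_url)
```

The body would then be POSTed to the AVTransport service's control URL on the TV; a subsequent Play call on the same service starts playback.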
1
2
0
0
I would like to make a simple script to push a movie to a Smart TV. I have already installed miniupnp and ushare, but I don't want to browse to a folder via the TV's Smart Apps; I want to push the movie to the TV, to save time, and in the future, why not do the same directly from a NAS. Does anyone have an idea how to do this? The application SofaPlay does this well, but only from my Mac. Thanks
uPnP pushing video to Smart TV/Samsung TV on OSX/Mac
0
1.2
1
0
0
3,338
13,803,315
2012-12-10T14:51:00.000
0
1
0
0
0
python,eclipse
0
13,866,443
0
1
0
false
0
0
You can use the NetBeans 6.5 IDE; it provides Python support.
1
0
0
0
I have installed PyCUDA without any difficulty but am having trouble linking it to my Eclipse environment. Does anyone know how I can link PyCUDA and the Eclipse IDE? Thanks in advance
PyCuda and Eclipse
0
0
1
0
0
284
13,823,554
2012-12-11T15:52:00.000
0
0
0
0
0
python,elasticsearch
0
14,884,970
0
2
0
false
0
0
It sounds like you have an issue unrelated to the client. If you can pare down what's being sent to ES and represent it in a simple curl command it will make what's actually running slowly more apparent. I suspect we just need to tweak your query to make sure it's optimal for your context.
2
4
0
0
I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.) I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run. Questions: Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up? How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))? Any advice is greatly appreciated!
Elastic search client for Python: advice?
0
0
1
0
1
748
13,823,554
2012-12-11T15:52:00.000
2
0
0
0
0
python,elasticsearch
0
14,870,170
0
2
0
true
0
0
We use pyes, and it's pretty neat. With it you can use the Thrift protocol, which is faster than the REST service.
2
4
0
0
I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.) I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run. Questions: Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up? How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))? Any advice is greatly appreciated!
Elastic search client for Python: advice?
0
1.2
1
0
1
748
13,830,334
2012-12-11T23:36:00.000
1
0
1
0
0
python,variables
0
13,830,530
0
3
0
false
0
0
You need to store it on disk. Unless you want to be really fancy, you can just use something like CSV, JSON, or YAML to make structured data easier. Also check out the python pickle module.
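A minimal pickle-based sketch of what the answer suggests (the file name is arbitrary; the dictionary maps book titles to page numbers):

```python
import os
import pickle

STATE_FILE = "bookmarks.pkl"  # hypothetical file name

def load_pages():
    """Return the saved {title: page} dict, or {} on first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return {}

def save_pages(pages):
    """Persist the dict so it survives program restarts."""
    with open(STATE_FILE, "wb") as f:
        pickle.dump(pages, f)
```

The program would call load_pages() at startup and save_pages() whenever the user updates a page number, so the "variable" effectively becomes permanent.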
1
6
0
0
I'm writing a small program that helps you keep track of what page you're on in your books. I'm not a fan of bookmarks, so I thought "What if I could create a program that would take user input, and then display the number or string of text that they wrote, which in this case would be the page number they're on, and allow them to change it whenever they need to?" It would only take a few lines of code, but the problem is, how can I get it to display the same number the next time I open the program? The variables would reset, would they not? Is there a way to permanently change a variable in this way?
Change variable permanently
0
0.066568
1
0
0
5,917
13,832,095
2012-12-12T02:59:00.000
1
0
1
0
0
python
0
13,832,200
0
3
0
false
0
0
While the approach of @Makato is certainly right, for your 'diff'-like application you want to capture the inode stat() information of the files in your directory, pickle that Python object, and compare snapshots from day to day looking for updates; this is one way to do it - overkill maybe - but more suitable than saving to and parsing text files IMO.
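The stat-snapshot idea might be sketched like this (the snapshot dict could be pickled between runs, as the answer suggests; function names are illustrative):

```python
import os

def snapshot(directory):
    """Map each entry name to (size, mtime) taken from os.stat()."""
    snap = {}
    for name in os.listdir(directory):
        st = os.stat(os.path.join(directory, name))
        snap[name] = (st.st_size, st.st_mtime)
    return snap

def added_since(old, new):
    """Names present in the new snapshot but not in the old one."""
    return sorted(set(new) - set(old))
```

Comparing yesterday's unpickled snapshot with today's gives the count of files added per day, which is what the questioner's tracker needs.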
1
2
0
0
Is it possible for the user to input a specific directory on their computer, and for Python to write all of the file/folder names to a text document? And if so, how? The reason I ask is because, if you haven't seen my previous posts, I'm learning Python, and I want to write a little program that tracks how many files are added to a specific folder on a daily basis. This program isn't a project of mine or anything, but I'm writing it to help me further learn Python. The more I write, the more I seem to learn. I don't need any other help than the original question so far, but help will be appreciated! Thank you! EDIT: If there's a way to do this with "pickle", that'd be great! Thanks again!
PYTHON - Search a directory
1
0.066568
1
0
0
7,504
13,846,155
2012-12-12T18:22:00.000
1
1
1
1
0
python,egg,python-internals
0
13,846,221
0
1
0
true
0
0
No, that is not a bug. Eggs, when being created, have their bytecode compiled in a build/bdist.<platform>/egg/ path, and you see that reflected in the co_filename variable. The bdist stands for binary distribution.
1
0
0
0
When tracing (using sys.settrace) .egg execution under the Python 2.7 interpreter, frame.f_code.co_filename, instead of <path-to-egg>/<path-inside-egg>, equals something like build/bdist.linux-x86_64/egg/<path-inside-egg>. Is it a bug? And how do I reveal the real path to the egg? In Python 2.6 and Python 3 everything works as expected.
Strange co_filename for file from .egg during tracing in Python 2.7
0
1.2
1
0
0
84
13,858,776
2012-12-13T11:22:00.000
1
0
1
0
0
python,debugging,pdb
0
13,961,662
0
4
0
false
0
0
Using pdb, any function call can be stepped into. For any other statement, pdb can print the values of the relevant names in the line. What additional functionality are you looking for that isn't covered? If you're trying to 'step into' things like a list comprehension, that won't work from a pure Python perspective because it's a single opcode. At some point for every expression you'll need to tell your students 'and this is where Python goes into the C implementation and evaluates this...'.
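At finer-than-line granularity, you can at least show students the bytecode steps of an expression with the standard dis module (opcode names differ between CPython versions, so treat the listing as illustrative rather than a stable API):

```python
import dis

# Compile a bare expression; "eval" mode gives just its evaluation code.
code = compile("a + b * c", "<expr>", "eval")

# One instruction per evaluation step; b * c is computed before the add.
for instr in dis.get_instructions(code):
    print(instr.opname, instr.argrepr)
```

This doesn't single-step evaluation the way the asker wants, but it does expose the sub-expression order (the three loads, then the multiply, then the add) that a visualizer would animate.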
1
7
0
0
I want to build a visual debugger, which helps programming students to see how expression evaluation takes place (how subexpressions get evaluated and "replaced" by their values, something like expression evaluation visualizer in Excel). Looks like you can't step through this process with Python's pdb, as its finest step granularity is line of code. Is it somehow possible to step through Python bytecode? Any other ideas how to achieve this goal? EDIT: I need a lightweight solution that can be built on top of CPython standard library.
How to step through Python expression evaluation process?
1
0.049958
1
0
0
1,143
13,859,124
2012-12-13T11:44:00.000
0
0
0
0
0
python,html,parsing
0
18,350,504
0
3
0
true
1
0
No, to this moment there is no such HTML parser, and every parser has its own limitations.
1
1
0
0
I want to parse HTML code in Python and have tried Beautiful Soup and PyQuery already. The problem is that those parsers modify the original code, e.g. insert some tag or similar. Is there any parser out there that does not change the code? I tried HTMLParser but no success! :( It doesn't modify the code and just tells me where tags are placed, but it fails in parsing web pages like mail.live.com. Any idea how to parse a web page just like a browser does?
python html parser which doesn't modify actual markup?
0
1.2
1
0
1
273
13,884,439
2012-12-14T18:37:00.000
2
0
1
0
0
python,audio
1
13,884,538
0
2
0
false
0
0
The answer is highly platform dependent and more details are required. Different operating systems have different ways of handling Interprocess Communication, or IPC. If you're using a UNIX-like environment, there is a rich set of IPC primitives to work with: pipes, SYS V message queues, shared memory, sockets, etc. In your case I think it would make sense to use a pipe or a socket, depending on whether A and B are running in the same process or not. Update: In your case, I would use Python's subprocess and/or os modules and a pipe. The idea here is to create calling contexts to the two APIs in processes which share a parent process, which has also created a unidirectional named pipe and passed it to its children. Then, data written to the named pipe in create_recorder will immediately be available for read()ing from the named pipe.
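The simplest of these primitives is available directly from Python; a toy unidirectional pipe looks like this (the payload is a stand-in for real audio bytes):

```python
import os

# os.pipe() returns (read_fd, write_fd); bytes written to one end
# become readable at the other, with no file on disk involved.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"raw audio frames")
os.close(write_fd)            # closing the write end signals EOF

data = os.read(read_fd, 4096)
os.close(read_fd)
print(data)  # b'raw audio frames'
```

For the question's case a named pipe (os.mkfifo) would be needed so a path can be handed to create_recorder, though, as the asker's update notes, that particular library rejects filenames that don't end in .wav.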
1
1
0
0
Suppose I have two functions drawn from two different APIs, function A and B. By default, function A outputs audio data to a wav file. By default, function B takes audio input from a wav file and process it. Is it possible to stream the data from function A to B? If so, how do I do this? I work on lubuntu if that is relevant. This is function A I'm thinking about from the PJSUA python API: create_recorder(self, filename) Create WAV file recorder. Keyword arguments filename -- WAV file name Return: WAV recorder ID And this is function B from the Pocketsphinx Python API decode_raw(...) Decode raw audio from a file. Parameters: fh (file) - Filehandle to read audio from. uttid (str) - Identifier to give to this utterance. maxsamps (int) - Maximum number of samples to read. If not specified or -1, the rest of the file will be read. update: When I try to pass the filename of a socket or named pipe, it outputs this error message, seems that the C function that the python bindings use doesn't like anything but .wav files... Why would that be? pjsua_aud.c .pjsua_recorder_create() error: unable to determine file format for /tmp/t_fifo. Exception: Object: LIb, operation=create(recorder), error=Option/operation is not supported (PJ_ENOTSUP) I need to use a value returned by create_recorder(), it is an int that is used to get the wav recorder id (which is not passed on directly to decode_raw() but rather passed on to some other function.
Redirecting audio output from one function to another function in python
1
0.197375
1
0
0
1,006
13,889,066
2012-12-15T03:39:00.000
3
0
1
0
0
python
0
13,889,088
0
5
0
false
0
0
You can do timings within Python, but if you want to know the overall CPU consumption of your program, that is kind of silly to do. The best thing to do is to just use the GNU time program. It even comes standard in most operating systems.
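If you do want the measurement from inside Python on Unix, the resource module can report child CPU time; note the caveat for the question's "hard mode" (the counter aggregates over all waited-for children, so other children exiting in the same window pollute the delta):

```python
import resource
import subprocess
import sys

def run_measured(cmd):
    """Run cmd and return (returncode, CPU seconds used by children).

    Caveat: RUSAGE_CHILDREN sums over every child waited for so far, so
    the delta is only accurate if no other child exits in the meantime.
    """
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    rc = subprocess.call(cmd)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return rc, cpu

rc, cpu = run_measured([sys.executable, "-c", "sum(range(10**6))"])
```

For accurate per-process numbers with parallel children, wrapping each command in /usr/bin/time (as the answer suggests) sidesteps the aggregation problem entirely.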
1
7
0
0
Pretty simple, I'd like to run an external command/program from within a Python script, once it is finished I would also want to know how much CPU time it consumed. Hard mode: running multiple commands in parallel won't cause inaccuracies in the CPU consumed result.
Run an external command and get the amount of CPU it consumed
0
0.119427
1
0
0
2,139
13,892,113
2012-12-15T12:20:00.000
6
0
0
0
0
python,rss
0
13,892,148
0
1
0
false
0
0
Each RSS feed has some format. See what Content-Type the server returns for the given URL. However, this is not definitive, and a server may not necessarily return the correct header. Try to parse the content of the URL as RSS and see if it is successful - this is likely the only definitive proof that a given URL is an RSS feed.
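A heuristic combining both signals the answer mentions might look like this sketch (a pure check; fetching the URL and parsing it properly remains the definitive test):

```python
def looks_like_feed(content_type, body_start):
    """Guess whether a response is an RSS/Atom feed.

    Checks the Content-Type header first, then sniffs the start of the
    body, since servers often mislabel feeds as text/html or text/xml.
    """
    ct = (content_type or "").lower()
    if "rss" in ct or "atom" in ct:
        return True
    head = (body_start or "").lstrip().lower()
    return "<rss" in head or "<feed" in head
```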
1
0
0
0
I am trying to find a way to detect whether a given URL has an RSS feed or not. Any suggestions?
given a URL in python, how can i check if URL has RSS feed?
1
1
1
0
1
812
13,899,823
2012-12-16T08:46:00.000
2
0
0
0
0
python,pip,web2py
0
14,100,013
0
2
0
true
1
0
I think I can give my answer to my own question: we don't need to install web2py; just download it and run it with Python (python web2py.py).
1
5
0
0
I tried to install web2py with pip. The installation completed successfully, but after that I don't know how to start the server. I know there are three scripts, 'w2p_clone', 'w2p_apps' and 'w2p_run', but I don't know how to use them. Also, I did not set up a virtualenv for web2py; however, even without a virtualenv I can start the web2py server from the source code (like python web2py.py). I just want to know how to use pip install for web2py. Thank you very much.
pip install web2py
0
1.2
1
0
0
2,902
13,906,679
2012-12-16T23:50:00.000
0
1
1
0
0
c++,python,memory,ram
0
13,928,292
0
4
0
false
0
0
Your problem description is kind of vague and can be read in several different ways. One way in which I read this is that you have some kind of ASCII representation of a data structure on disk. You read this representation into memory, and then grep through it one or more times looking for things that match a given regular expression. Speeding this up depends a LOT on the data structure in question. If you are simply doing line splitting, then maybe you should just read the whole thing into a byte array using a single read instruction. Then you can alter how you grep to use a byte-array grep that doesn't span multiple lines. If you fiddle the expression to always match a whole line by putting ^.*? at the beginning and .*?$ at the end (the ? forces a minimal instead of maximal munch) then you can check the size of the matched expression to find out how many bytes forward to go. Alternately, you could try using the mmap module to achieve something similar without having to read anything and incur the copy overhead. If there is a lot of processing going on to create your data structure and you can't think of a way to use the data in the file in a very raw way as a simple byte array, then you're left with various other solutions depending, though of these it sounds like creating a daemon is the best option. Since your basic operation seems to be 'tell me which tables entries match a regexp', you could use the xmlrpc.server and xmlrpc.client libraries to simply wrap up a call that takes the regular expression as a string and returns the result in whatever form is natural. The library will take care of all the work of wrapping up things that look like function calls into messages over a socket or whatever. Now, your idea of actually keeping it in memory is a bit of a red-herring. I don't think it takes 30 minutes to read 2G of information from disk these days. It likely takes at most 5, and likely less than 1. 
So you might want to look at how you're building the data structure to see if you could optimize that instead. What pickle and/or marshal will buy you is highly optimized code for building the data structure out of a serialized form. This will cause the data structure creation to possibly be constrained by disk read speeds instead. That means the real problem you're addressing is not reading it off disk each time, but building the data structure in your own address space. And holding it in memory and using a daemon isn't a guarantee that it will stay in memory. It just guarantees that it stays built up as the data structure you want within the address space of a Python process. The os may decide to swap that memory to disk at any time. Again, this means that focusing on the time to read it from disk is likely not the right focus. Instead, focus on how to efficiently re-create (or preserve) the data structure in the address space of a Python process. Anyway, that's my long-winded ramble on the topic. Given the vagueness of your question, there is no definite answer, so I just gave a smorgasbord of possible techniques and some guiding ideas.
3
2
0
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program. This process takes about 30 minutes. This program will eventually run on a machine with over 128GB of RAM. The Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (that I'm currently recalculating every time) actually changes rarely. Is there a way that I can create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts) and then use it every time when needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate this object if needed. EDIT: The only solution I can think of is just to keep the program running 24/7 (as a daemon??) and then issuing commands to it as needed.
Storing large python object in RAM for later use
1
0
1
0
0
2,201
13,906,679
2012-12-16T23:50:00.000
2
1
1
0
0
c++,python,memory,ram
0
13,924,546
0
4
0
false
0
0
We regularly load and store much larger chunks of memory than 2 GB in no time (seconds). We can get 350 MB/s from our 3-year-old SAN. The bottlenecks/overheads seem to involve mainly Python object management. I find that using marshal is much faster than cPickle. Allied with the use of data structures which involve minimal Python object handles, this is more than fast enough. For data structures, you can use either array.array or numpy. array.array is slightly more portable (no extra libraries involved) but numpy is much more convenient in many ways. For example, instead of having 10 million integers (Python objects), you would create a single array.array('i') with 10 million elements. The best part of using marshal is that it is a very simple format you can write to and read from easily using C/C++ code.
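marshal only handles core types, so an array.array would go through its raw bytes; a sketch of the round trip (writing the payload to a file instead of a bytes object works the same way):

```python
import marshal
from array import array

data = array("i", range(1000))  # 1000 C ints behind a single Python object

# marshal can't serialize array.array directly, so store (typecode, bytes).
payload = marshal.dumps((data.typecode, data.tobytes()))

# Deserialize: rebuild the array from its typecode and raw bytes.
typecode, raw = marshal.loads(payload)
restored = array(typecode)
restored.frombytes(raw)
```

Because the payload is essentially one contiguous byte blob plus a tag, loading it avoids creating millions of individual Python objects, which is the point the answer makes about object-management overhead.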
3
2
0
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program. This process takes about 30 minutes. This program will eventually run on a machine with over 128GB of RAM. The Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) actually changes rarely. Is there a way that I can create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it every time it is needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate this object if needed. EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon?) and then issue commands to it as needed.
Storing large python object in RAM for later use
1
0.099668
1
0
0
2,201
13,906,679
2012-12-16T23:50:00.000
2
1
1
0
0
c++,python,memory,ram
0
13,906,794
0
4
0
false
0
0
You could try pickling your object and saving it to a file, so that each time the program runs it just has to deserialise the object instead of recalculating it. Hopefully the server's disk cache will keep the file hot if necessary.
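A sketch of that caching pattern; the object and cache filename are stand-ins for the real parsed table, and a real version would also compare timestamps against the input table to detect staleness:

```python
import os
import pickle
import tempfile

def build_table():
    # Stand-in for the 30-minute parsing step.
    return {'line%d' % i: i for i in range(100)}

cache = os.path.join(tempfile.gettempdir(), 'table_cache_demo.pkl')

# Rebuild only when the cache is missing.
if not os.path.exists(cache):
    with open(cache, 'wb') as f:
        pickle.dump(build_table(), f, protocol=pickle.HIGHEST_PROTOCOL)

# Every later run deserialises instead of recalculating.
with open(cache, 'rb') as f:
    table = pickle.load(f)

assert table == build_table()
```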
3
2
0
0
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved? Background: I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program. This process takes about 30 minutes. This program will eventually run on a machine with over 128GB of RAM. The Python object takes about 2GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) actually changes rarely. Is there a way that I can create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it every time it is needed? NOTE: The Python object will not be modified after creation. However, I need to be able to recreate this object if needed. EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon?) and then issue commands to it as needed.
Storing large python object in RAM for later use
1
0.099668
1
0
0
2,201
13,910,576
2012-12-17T08:23:00.000
0
0
1
0
0
python,sqlalchemy
0
13,910,851
0
3
0
false
0
0
This is a classic case of buffering. Try a reasonably large chunk; reduce it if the flushes cause long pauses, or increase it if your profile shows too much CPU time spent in I/O calls. To implement it, use an array: on each "write" you append an item to the array. Have a separate "flush" function that writes the whole thing. On each append you check whether the array has reached its maximum size; if so, write all the items out and clear the array. At the end, call the flush function once more to write the partially filled array.
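A minimal sketch of that buffering scheme; the names are illustrative, and in the SQLAlchemy setting from the question the flush function would add the pending objects and call session.commit():

```python
class BufferedWriter:
    """Accumulate items and flush them in chunks to limit disk I/O."""

    def __init__(self, flush_fn, max_items=1000):
        self.flush_fn = flush_fn
        self.max_items = max_items
        self.items = []

    def add(self, item):
        self.items.append(item)
        # Flush automatically once the buffer reaches its maximum size.
        if len(self.items) >= self.max_items:
            self.flush()

    def flush(self):
        # Write out whatever is buffered, including a partially filled array.
        if self.items:
            self.flush_fn(self.items)
            self.items = []

written = []
writer = BufferedWriter(written.extend, max_items=3)
for n in range(7):
    writer.add(n)
writer.flush()  # final call picks up the last partial chunk
assert written == list(range(7))
```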
1
0
0
0
I developed an application which parses a lot of data, but if I commit the data after parsing all the data, it will consume too much memory. However, I cannot commit it each time, because it costs too much hard disk I/O. Therefore, my question is how can I know how many uncommitted items are in the session?
Find out how many uncommitted items are in the session
0
0
1
0
0
1,666
13,920,682
2012-12-17T19:19:00.000
1
0
1
0
0
python,.net,powerbuilder
0
13,943,263
0
1
0
true
0
1
A PowerBuilder application can load a DataWindow from a PBL (it doesn't have to be in the library path), modify it, and save it back to the PBL. I've written a couple of tools that do that. PowerBuilder will allow you to modify the DataWindow according to its object model using the Modify method. I don't know why anyone would want to reinvent all of this. I recall seeing Python bindings for PB somewhere. You could get the DW syntax from PB, call out to Python, then save it back in PB. But you'd have to do all the parsing in Python, whereas PB already understands the DW. Finally, I'm surprised Terry didn't plug PBL Peeper. You could use PBL Peeper to export the DataWindows, massage them to your heart's content in Python, then import them back into PB.
1
1
0
0
I want to get the content of DataWindow from PBL (PowerBuilder Library) file and edit it in place. The idea is to read the pbl file, and access individual DataWindows to modify source code. Somehow, I have managed to do the first part with PblReader .NET library using IronPython. It allows me to read PBL files, and access DataWindow source code. However it doesn't support modifications. I would like to know if anyone have an idea for editing PBL files?
idea/solution how to edit PBL (PowerBuilder Library) files?
0
1.2
1
0
0
3,599
13,936,563
2012-12-18T15:47:00.000
5
0
0
0
0
python,netcdf
0
16,386,862
0
5
0
false
0
0
If you want to only use the netCDF-4 API to copy any netCDF-4 file, even those with variables that use arbitrary user-defined types, that's a difficult problem. The netCDF4 module at netcdf4-python.googlecode.com currently lacks support for compound types that have variable-length members or variable-length types of a compound base type, for example. The nccopy utility that is available with the netCDF-4 C distribution shows it is possible to copy an arbitrary netCDF-4 file using only the C netCDF-4 API, but that's because the C API fully supports the netCDF-4 data model. If you limit your goal to copying netCDF-4 files that only use flat types supported by the googlecode module, the algorithm used in nccopy.c should work fine and should be well-suited to a more elegant implementation in Python. A less ambitious project that would be even easier is a Python program that would copy any netCDF "classic format" file, because the classic model supported by netCDF-3 has no user-defined types or recursive types. This program would even work for netCDF-4 classic model files that also use performance features such as compression and chunking.
1
5
0
0
I would like to make a copy of netcdf file using Python. There are very nice examples of how to read or write netcdf-file, but perhaps there is also a good way how to make the input and then output of the variables to another file. A good-simple method would be nice, in order to get the dimensions and dimension variables to the output file with the lowest cost.
copy netcdf file using python
1
0.197375
1
0
0
3,485
13,938,903
2012-12-18T18:07:00.000
2
1
0
0
0
c#,python,asp.net,web-services,perl
0
13,939,065
0
3
0
false
0
0
a) WebSockets in conjunction with Ajax to update only parts of the site would work; the disadvantage is that the clients' infrastructure (proxies) must support them (which is currently not the case 99% of the time). b) With existing infrastructure the approach is long polling. You make an XmlHttpRequest using JavaScript. In case no data is present, the request is blocked on the server side for, say, 5 to 10 seconds. In case data is available, you immediately answer the request. The client then immediately sends a new request. I managed to get >500 updates per second using a Java client connecting via proxy (HTTP) to a webserver (real-time stock data displayed). You need to bundle several updates with each request in order to get enough throughput.
1
2
0
0
I wonder how to update fast numbers on a website. I have a machine that generates a lot of output, and I need to show it on line. However my problem is the update frequency is high, and therefore I am not sure how to handle it. It would be nice to show the last N numbers, say ten. The numbers are updated at 30Hz. That might be too much for the human eye, but the human eye is only for control here. I wonder how to do this. A page reload would keep the browser continuously loading a page, and for a web page something more then just these numbers would need to be shown. I might generate a raw web engine that writes the number to a page over a specific IP address and port number, but even then I wonder whether this page reloading would be too slow, giving a strange experience to the users. How should I deal with such an extreme update rate of data on a website? Usually websites are not like that. In the tags for this question I named the languages that I understand. In the end I will probably write in C#.
Rapid number updates on a website
0
0.132549
1
0
1
114
13,940,449
2012-12-18T19:52:00.000
2
1
0
0
1
python,html,tidesdk
0
13,948,150
0
1
0
false
1
0
The current version ships a very old WebKit, which is why the HTML5 support is lacking. Audio and video tags are currently not supported on Windows because the underlying WebKit implementation (WinCairo) does not support them. We are working on the first part, moving to the latest WebKit; once that is completed, we are also planning to work on audio/video support on Windows.
1
0
0
0
I am trying to stream audio in my TideSDK application, but it seems to be quite difficult. HTML5 audio does not work for me, and neither do video tags. The player simply keeps loading. I've tested and confirmed that my code works in many other browsers. My next attempt was VLC via Python bindings. But without any confirmation, I do believe you need to have VLC installed for the vlc.py file to work? Basically, what I want to do is play audio in a sophisticated way (probably through Python) and wrap it in my TideSDK application. I want it to work out of the box: nothing for my end users to install. I am, by the way, pretty new to the whole Python thing, but I learn fast, so I'd love to see some examples on how to get started! Perhaps a quite quirky way to do it would be by using Flash, but I'd love not to. For those of you who are not familiar with TideSDK, it's a way to build desktop applications with HTML, CSS, Python, Ruby and PHP.
Streaming audio in Python with TideSDK
1
0.379949
1
0
0
401
13,965,403
2012-12-20T04:37:00.000
0
0
0
0
0
python,parsing,selenium,lxml,xpath
0
40,276,743
0
2
0
false
1
0
I prefer to use lxml, because lxml is more efficient than Selenium for extracting large numbers of elements. You can use Selenium to get the source of the web page and parse that source with lxml's xpath() instead of the native find_elements_by_xpath in Selenium.
1
1
0
1
So I have been trying to figure our how to use BeautifulSoup and did a quick search and found lxml can parse the xpath of an html page. I would LOVE if I could do that but the tutorial isnt that intuitive. I know how to use Firebug to grab the xpath and was curious if anyone has use lxml and can explain how I can use it to parse specific xpath's, and print them.. say 5 per line..or if it's even possible?! Selenium is using Chrome and loads the page properly, just need help moving forward. Thanks!
Can I parse xpath using python, selenium and lxml?
0
0
1
0
1
1,853
13,999,970
2012-12-22T04:07:00.000
0
0
0
0
0
python,qt,user-interface,qt4,pyqt
0
14,000,161
0
2
0
true
1
1
You need to connect your view to the dataChanged ( const QModelIndex & topLeft, const QModelIndex & bottomRight ) signal of the model: This signal is emitted whenever the data in an existing item changes. If the items are of the same parent, the affected ones are those between topLeft and bottomRight inclusive. If the items do not have the same parent, the behavior is undefined. When reimplementing the setData() function, this signal must be emitted explicitly.
1
3
0
0
By default, the built-in views in PyQt can auto-refresh itself when its model has been updated. I wrote my own chart view, but I don't know how to do it, I have to manually update it for many times. Which signal should I use?
PyQt - Automatically refresh a custom view when the model is updated?
0
1.2
1
0
0
5,032
14,000,083
2012-12-22T04:34:00.000
1
0
1
0
0
list,python-2.7,indexing
0
14,000,721
0
2
0
false
0
0
Let's use a list with the following values: '1', '2', '3', '4', '5', '6'. Steps to get the negative index of any value: Step 1. Get the normal_index of the value. For example, the normal index of value '4' is 3. Step 2. Get the count of the list. In our example the list_count is 6. Step 3. Get the negative index of the requested value: negative_index = normal_index - list_count, which is 3 - 6 = -3.
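A quick sketch of that computation:

```python
def negative_index(seq, value):
    # A negative index is just the positive index minus the sequence length.
    return seq.index(value) - len(seq)

items = ['1', '2', '3', '4', '5', '6']

assert negative_index(items, '4') == -3
# Both index forms address the same element.
assert items[-3] == items[3] == '4'
```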
1
2
0
0
I have a string of variable length and I know the index position is 25. As it's variable in his length (>= 25), I need a way to locate the negative index of that same position for easier data manipulation. Do you have any idea how this can be done?
How do you find the negative index of a position if you know the index?
1
0.099668
1
0
0
67
14,003,386
2012-12-22T13:51:00.000
1
0
0
0
0
python,windows
1
14,003,444
0
1
0
false
1
0
Use sys.getfilesystemencoding(). That should allow you to convert all paths that look OK. However, there can always be illegally encoded files or folders; you have to decide how to deal with those within the framework of your application. Some apps may ignore such files, others keep the name as a binary blob.
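A sketch of one way to handle such a name in Python 3; the byte value is the one from the question, and surrogateescape is a common lossless fallback, not necessarily what your framework expects:

```python
import sys

print(sys.getfilesystemencoding())  # e.g. 'mbcs' on older Windows, 'utf-8' elsewhere

raw = b'C:\\FXG\x99.nfo'  # 0x99 is not a valid UTF-8 start byte

try:
    path = raw.decode('utf-8')
except UnicodeDecodeError:
    # Keep the name representable without losing the original bytes.
    path = raw.decode('utf-8', errors='surrogateescape')

# The round trip is lossless, so the original filename can be recovered.
assert path.encode('utf-8', errors='surrogateescape') == raw
```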
1
0
0
0
I'm writing a small application which saves file paths to a database (using django). I assumed file paths are utf-8 encoded, but I ran into the following file name: C:\FXG™.nfo which is apparently not encoded in utf-8. When I do filepath.decode('utf-8') I get the following error: UnicodeDecodeError: 'utf8' codec can't decode byte 0x99 in position 30: invalid start byte (I trimmed the file name, so the position is wrong here). How do I know how the file paths are encoded in a way that this will work for every file name?
File path encoding in Windows
0
0.197375
1
0
0
1,880
14,004,835
2012-12-22T17:13:00.000
1
0
0
1
1
python,ncurses,curses
0
72,080,593
0
3
0
false
0
1
While I didn't use curses in Python, I am currently working with it in C99, compiled using clang on macOS Catalina. It seems that nodelay() does not work unless you slow down the program step to at least 1/10 of a second, e.g. usleep(100000). I suppose that buffer reading is not fast enough, and getch() or wgetch(win*) simply doesn't manage to get the keyboard input, which somehow causes it to fail (no message whatsoever, not even a "Segmentation fault"). For this reason, it's better to use halfdelay(1), which equals nodelay(win*, true) combined with usleep(100000). I know this is a very old thread (2012), but the problem is still present in 2022, so I decided to reply.
2
7
0
0
I've written a curses program in python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt). EDIT This code will reproduce the bug: sc = curses.initscr() sc.nodelay(1) # But removing this line allows the program to run properly for angry in range(20): sc.addstr(angry, 1, "hi") Here's my full code import curses, time, sys, random def paint(x, y, i): #... def string(s, y): #... def feed(): #... sc = curses.initscr() curses.start_color() curses.curs_set(0) sc.nodelay(1) ######################################### # vars + colors inited for angry in range(20): try: dir = chr(sc.getch()) sc.clear() feed() #lots of ifs body.append([x, y]) body.pop(0) for point in body: paint(*point, i=2) sc.move(height-1, 1) sc.refresh() time.sleep(wait) except Exception as e: print sys.exc_info()[0], e sc.getch() curses.beep() curses.endwin() Why is this happenning, and how can I use nodelay() safely?
nodelay() causes python curses program to exit
0
0.066568
1
0
0
5,827
14,004,835
2012-12-22T17:13:00.000
0
0
0
1
1
python,ncurses,curses
0
14,006,585
0
3
0
false
0
1
I see no difference when running your small test program with or without the sc.nodelay() line. Neither case prints anything on the screen...
2
7
0
0
I've written a curses program in python. It runs fine. However, when I use nodelay(), the program exits straight away after starting in the terminal, with nothing shown at all (just a new prompt). EDIT This code will reproduce the bug: sc = curses.initscr() sc.nodelay(1) # But removing this line allows the program to run properly for angry in range(20): sc.addstr(angry, 1, "hi") Here's my full code import curses, time, sys, random def paint(x, y, i): #... def string(s, y): #... def feed(): #... sc = curses.initscr() curses.start_color() curses.curs_set(0) sc.nodelay(1) ######################################### # vars + colors inited for angry in range(20): try: dir = chr(sc.getch()) sc.clear() feed() #lots of ifs body.append([x, y]) body.pop(0) for point in body: paint(*point, i=2) sc.move(height-1, 1) sc.refresh() time.sleep(wait) except Exception as e: print sys.exc_info()[0], e sc.getch() curses.beep() curses.endwin() Why is this happenning, and how can I use nodelay() safely?
nodelay() causes python curses program to exit
0
0
1
0
0
5,827
14,008,232
2012-12-23T02:46:00.000
1
0
0
0
0
python,mysql,django,security,encryption
0
14,008,320
0
2
0
false
1
0
Your question embodies a contradiction in terms: either you don't want reversibility or you do. You will have to choose. The usual technique is to hash the passwords and to provide a way for the user to reset his own password on sufficient alternative proof of identity. You should never display a password to anybody, for legal non-repudiation reasons. If you don't know what that means, ask a lawyer.
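A sketch of the usual hash-and-verify technique using only the standard library; the iteration count and salt size here are illustrative choices, not prescriptions:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100000):
    # A fresh random salt per password defeats precomputed tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100000):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password('s3cret')
assert verify_password('s3cret', salt, digest)
assert not verify_password('wrong', salt, digest)
```

Note that this stores only a verifier, never the password itself, which is exactly why a "show me my password" feature becomes impossible and a reset flow is needed instead.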
2
2
0
0
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password. Resetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
Storing MySQL Passwords
0
0.099668
1
1
0
408
14,008,232
2012-12-23T02:46:00.000
4
0
0
0
0
python,mysql,django,security,encryption
0
14,008,264
0
2
0
true
1
0
Though this is not the answer you were looking for, you only have three possibilities: (1) store the passwords in plaintext (ugh!); (2) store them with reversible encryption, e.g. RSA (http://stackoverflow.com/questions/4484246/encrypt-and-decrypt-text-with-rsa-in-php); (3) do not store them at all, so clients can only reset a password, not view it. The second choice is a secure way, as RSA is also used within the TLS encryption of the HTTPS protocol used by your bank of choice ;)
2
2
0
0
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel, however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a a client forgets their password. Resetting the password is something we would try to avoid as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
Storing MySQL Passwords
0
1.2
1
1
0
408
14,013,214
2012-12-23T17:49:00.000
0
0
0
0
0
python,django,linux,deployment
0
14,013,487
0
1
0
false
1
0
If the names on the development and production server are the same, then record the rename commands in a shell-script. You can run that on both the development and the production server...
1
0
0
0
I have a development & production server and some large video files. The large files need to be renamed. I don't know how to automatically change the file names in the production environment when I change their name in the development environment. I think using git is very inefficient for large files. On the development environment I copied only the first 5 seconds of the videos. I'll be using Django with South to synchronize the database and git to synchronize the code.
synchronizing file names across servers
0
0
1
0
0
67
14,028,164
2012-12-25T06:09:00.000
3
0
1
0
0
python,list,shallow-copy,deep-copy
0
14,028,181
0
2
0
false
0
0
The new list is a copy of references. g[0] and a[0] both reference the same object. Thus this is a shallow copy. You can see the copy module's deepcopy method for recursively copying containers, but this isn't a common operation in my experience. Stylistically, I prefer the more explicit g = list(a) to create a copy of a list, but creating a full slice has the same effect.
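A small sketch of the difference between the two copy kinds:

```python
import copy

a = [[1, 2], [3, 4]]

shallow = a[:]            # same effect as list(a): copies references only
deep = copy.deepcopy(a)   # recursively copies the nested lists too

a[0].append(99)

assert shallow[0] == [1, 2, 99]  # the shallow copy sees the mutation
assert deep[0] == [1, 2]         # the deep copy does not
```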
1
1
0
0
How is Deep copy being done in python for lists? I am a little confused for copying of lists. Is it using shallow copy or deep copy? Also, what is the syntax for sublists? is it g=a[:]?
python lists copying is it deep copy or Shallow copy and how is it done?
0
0.291313
1
0
0
1,094
14,029,177
2012-12-25T08:57:00.000
0
0
1
0
0
python,list
0
14,030,552
0
4
0
false
0
0
In C language terms, a Python list is like a PyObject *mylist[100], except it's dynamically allocated. It's a contiguous chunk of memory storing references to Python objects.
3
0
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
1
0
1
0
0
91
14,029,177
2012-12-25T08:57:00.000
1
0
1
0
0
python,list
0
14,029,213
0
4
0
false
0
0
Arrays in Python are done via the array module. They do not store different datatypes; they store arrays of specific numerical values. I think you mean the list type. It doesn't contain values, it just contains references to objects, which can be any type of object at all. Neither reserves space for elements in advance (well, they do, but that's an internal implementation detail). The space for an element is added when it is added to the list/array. The list type is indeed less efficient than the array type, which is why the array type exists.
3
0
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
1
0.049958
1
0
0
91
14,029,177
2012-12-25T08:57:00.000
1
0
1
0
0
python,list
0
14,029,199
0
4
0
false
0
0
Python reserves only enough space in a list for a reference to the various objects; it is up to the objects' allocators to reserve enough space for them when they are instantiated.
3
0
0
0
How does the language know how much space to reserve for each element? Or does it reserve the maximum space possible required for a datatype? (Talk about large floating point numbers). In that case isn't it a bit inefficient?
How dynamic arrays store different datatypes?
1
0.049958
1
0
0
91
14,032,521
2012-12-25T17:04:00.000
5
0
1
0
0
python,list,sorting,alphabetical
0
26,274,529
0
6
0
false
0
0
ListName.sort() will sort it alphabetically. You can add reverse=True in the brackets to reverse the order of the items: ListName.sort(reverse=True). The default, reverse=False, keeps ascending order.
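Applied to the list from the question; note that plain sort() compares case-sensitively (uppercase before lowercase), so a key of str.lower is one way to get a case-insensitive alphabetical order:

```python
words = ['Stem', 'constitute', 'Sedge', 'Eflux', 'Whim', 'Intrigue']

words.sort(key=str.lower)  # case-insensitive alphabetical order
assert words == ['constitute', 'Eflux', 'Intrigue', 'Sedge', 'Stem', 'Whim']

words.sort(key=str.lower, reverse=True)  # descending order
assert words[0] == 'Whim'
```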
1
207
0
0
I am a bit confused regarding data structure in python; (),[], and {}. I am trying to sort a simple list, probably since I cannot identify the type of data I am failing to sort it. My list is simple: ['Stem', 'constitute', 'Sedge', 'Eflux', 'Whim', 'Intrigue'] My question is what type of data this is, and how to sort the words alphabetically?
Python data structure sort list alphabetically
0
0.16514
1
0
0
435,587
14,034,401
2012-12-25T22:21:00.000
4
0
0
0
0
python,qt,cookies,qwebkit
0
14,039,648
0
1
0
true
0
1
You can get/set the cookie jar through QWebView.page().networkAccessManager().cookieJar()/setCookieJar(). The Browser demo included with Qt (in C++) shows how to read and write cookies to disk.
1
3
0
0
I need to store cookies persistently in an application that uses QWebKit. I understand that I have to create a subclass of QNetworkCookieJar and attach it to a QNetworkAccessManager. But how do I attach this QNetworkAccessManager to my QWebView or get the QNetworkAccessManager used by it? I use Python 3 and PyQt if that is important.
Permanent cookies with QWebKit -- where to get the QNetworkAccessManager?
0
1.2
1
0
0
1,268
14,036,549
2012-12-26T06:00:00.000
0
1
0
1
0
python,uwsgi,pythonpath
0
14,039,533
0
1
0
false
0
0
You can specify multiple --pythonpath options, but PYTHONPATH should be honoured (just be sure it is correctly set by your init script; you can try setting it from the command line and running uwsgi in the same shell session).
1
1
0
0
It's weird because, when I run a normal Python script on the server, it runs, but when I run it via uWSGI, it can't import certain modules. There is a bash script that starts uwsgi and passes a path via the --pythonpath option. Is this an additional path, or do all the paths have to be given here? If yes, how do I separate multiple paths given by this option?
Does uwsgi server read the paths in the environment variable PYTHONPATH?
0
0
1
0
0
1,207
14,043,045
2012-12-26T16:05:00.000
1
0
0
1
0
python,google-app-engine,google-cloud-datastore
0
14,043,190
0
2
0
false
0
0
Every blob you upload creates a new version of that blob (with that filename) in the blobstore. Of course, you can delete the old version(s) of the blob if you uploaded a new version. But to make sure you have the latest version of a blob (of a filename), you have to store the filename in the datastore and keep a reference to the latest version. This reference holds the blob_key.
1
0
0
0
I know that I can grab a blob by BlobKey, but how do I get the blobkey associated with a given filename? In short, I want to implement "get file by filename" I can't seem to find any built-in functionality for this.
Downloading a Blob by Filename in Google App Engine (Python)
0
0.099668
1
0
0
259
14,049,028
2012-12-27T03:31:00.000
3
0
1
0
0
python,self
0
14,049,063
0
2
0
false
0
0
It's not really all right. self makes your variable available at object scope. That way you need to make sure that the names of your variables are unique throughout the complete object, rather than within localized scopes, amongst other side effects that might or might not be unwanted. In your particular case it might not be an issue, but it's a very bad practice in general. Know your scoping and use it wisely. :)
2
3
0
0
I have a habit to declare new variables with self. in front to make it available to all methods. This is because sometimes I thought I don't need the variable in other methods. But halfway through I realized that I need it to be accessible in other methods. Then I have to add self. in front of all that variable. So my question is, besides needing to type 5 characters more each time I use a variable, are there any other disadvantages? Or, how do you overcome my problem?
Is declaring [almost] everything with self. alright (Python)?
1
0.291313
1
0
0
184
14,049,028
2012-12-27T03:31:00.000
14
0
1
0
0
python,self
0
14,049,045
0
2
0
true
0
0
Set a property on self only when the value is part of the overall object state. If it's only part of the method state, then it should be method-local, and should not be a property of self.
2
3
0
0
I have a habit to declare new variables with self. in front to make it available to all methods. This is because sometimes I thought I don't need the variable in other methods. But halfway through I realized that I need it to be accessible in other methods. Then I have to add self. in front of all that variable. So my question is, besides needing to type 5 characters more each time I use a variable, are there any other disadvantages? Or, how do you overcome my problem?
Is declaring [almost] everything with self. alright (Python)?
1
1.2
1
0
0
184
14,051,324
2012-12-27T07:57:00.000
0
0
0
1
0
python,pypy
1
69,422,294
0
2
0
false
0
0
For anyone coming here in the future: as of Oct 3, 2021, pypy3 does accept the -O flag and turns off assert statements.
2
5
0
0
$ ./pypy -O Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26) [PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: `` amd64 and ppc are only available in enterprise version'' >>>> assert 1==2 Traceback (most recent call last): File "", line 1, in AssertionError >>>> But when i execute $ python -O Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> assert 1==2 >>>
how to disable pypy assert statement?
0
0
1
0
0
342
14,051,324
2012-12-27T07:57:00.000
5
0
0
1
0
python,pypy
1
14,051,708
0
2
0
true
0
0
PyPy does silently ignore -O. The reasoning behind it is that we believe -O that changes semantics is seriously broken, but well, I guess it's illegal. Feel free to post a bug (that's also where such reports belong, on bugs.pypy.org)
2
5
0
0
$ ./pypy -O Python 2.7.2 (a3e1b12d1d01, Dec 04 2012, 13:33:26) [PyPy 1.9.1-dev0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: `` amd64 and ppc are only available in enterprise version'' >>>> assert 1==2 Traceback (most recent call last): File "", line 1, in AssertionError >>>> But when i execute $ python -O Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> assert 1==2 >>>
how to disable pypy assert statement?
0
1.2
1
0
0
342
14,051,766
2012-12-27T08:42:00.000
1
0
0
1
0
python,linux,command-line
0
14,052,830
0
2
1
false
0
0
You can check the state of the printer using the lpstat command (man lpstat). To wait for a process to finish, get the PID of the process and pass it to the wait command as an argument.
1
4
0
0
I have a bunch of files that I need to print via a PDF printer, and after a file is printed I need to perform additional tasks, but only when it is finally completed. So to do this from my Python script I call the command "lpr path/to/file.doc -P PDF". But this command immediately returns 0, and I have no way to track when the printing process is finished, whether it was successful or not, etc. There is an option to send an email when printing is done, but waiting for an email after I start printing looks very hacky to me. Do you have some ideas how to get this done? Edit 1: There are plenty of ways to check if the printer is printing something at the current moment. Therefore, after I start printing something, I run the lpq command every 0.5 seconds to find out if it is still printing. But this looks to me like not the best way to do it. I want to be able to get alerted or something when the actual printing process is finished, and whether it was successful or not, etc.
How to check if pdf printing is finished on linux command line
0
0.099668
1
0
0
2,252
14,074,805
2012-12-28T19:40:00.000
1
0
1
0
0
python,configobj
0
14,075,363
0
6
0
false
0
0
ConfigObj is for reading and writing ini-style config files. You are apparently trying to use it to write bash scripts. That's not something that is likely to work. Just write the bash script the way you want it to be, perhaps using a template or something instead. Making ConfigObj not write the spaces around the = probably requires that you subclass it. I would guess that you have to modify the write method, but only reading the code can help there. :-)
1
2
0
0
Simple question: is it possible to make configobj not put a space before and after the '=' in a configuration entry? I'm using configobj to read and write a file that is later processed by a bash script, so writing an entry like VARIABLE = "value" breaks the bash script; it always needs to be VARIABLE="value". Or if someone has another suggestion about how to read and write a file with this kind of entries (and restrictions), that's fine too. Thanks
Make python configobj to not put a space before and after the '='
1
0.033321
1
0
0
1,931
14,079,587
2012-12-29T07:13:00.000
7
0
1
0
0
python,integer
0
14,079,631
0
4
0
false
0
0
There's no need. The interpreter handles allocation behind the scenes, effectively promoting from one type to another as needed without you doing anything explicit.
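A quick demonstration of that automatic handling (in Python 2 this promoted int to long behind the scenes; in Python 3 there is a single arbitrary-precision int type):

```python
# Python 3 integers are arbitrary precision: no overflow, no declared width.
small = 2 ** 31 - 1          # would fit in a C int32_t
big = 2 ** 64 + 1            # far past 64 bits, still a plain int

print(type(small) is type(big))   # same type either way
print(big * big)                   # exact result, no wraparound
```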
1
3
0
0
I want to differentiate between 32-bit and 64-bit integers in Python. In C it's very easy, as we can declare variables using int64_t and int32_t. But in Python, how do we differentiate between 32-bit integers and 64-bit integers?
How to determine whether the number is 32-bit or 64-bit integer in Python?
0
1
1
0
0
5,901
14,101,063
2012-12-31T11:36:00.000
3
0
0
0
0
c#,python,ui-automation,pywinauto
0
14,104,632
0
2
0
false
0
1
The short answer is that there's no good way to automate sub-controls of a DataGridView using PyWinAuto. If you want to read data out of a DataGridView (e.g. read the text contents of a cell, or determine whether a checkbox is checked), you are completely out of luck. If you want to control a DataGridView, there are two approaches that you can try: clicking at various coordinate offsets. sending keypresses to it to mimic keyboard navigation. These may work if your DataGridView has a small amount of data in it, but once the DataGridView starts needing scrollbars you're out of luck. Furthermore, clicking at offsets is sensitive to the sizes of the rows and columns, and if the columns can be resized then this approach will never be reliable.
1
0
0
0
I am working on GUI automation for a Visual Studio C# desktop application. It has a DataGridView, and inside the grid there are combo boxes and check boxes. I tried to automate these using pywinauto, but I can only get the grid layout control; I cannot get the controls for the inner elements (I tried print_control_identifiers(), Swapy, AutoIT Window Info, and winspy as well). Can anyone tell me how to automate a C# DataGridView and its sub-controls using pywinauto for a desktop application?
C# GUI automation using PyWinAuto
0
0.291313
1
0
0
1,878
14,113,906
2013-01-01T20:23:00.000
0
0
0
1
0
python,macos,command-line
0
14,113,933
0
3
0
false
0
0
Add the directory it is stored in to your PATH variable? From your prompt, I'm guessing you're using an sh-like shell and from your tags, I'm further assuming OS X. Go into your .bashrc and make the necessary changes.
2
0
0
0
I have just written a Python script to do some batch file operations. I was wondering how I could keep it in some common path like the rest of the command line utilities such as cd, ls, grep etc. What I expect is something like this to be done from any directory - $ script.py arg1 arg2
How to execute a python command line utility from the terminal in any directory
0
0
1
0
0
170
14,113,906
2013-01-01T20:23:00.000
1
0
0
1
0
python,macos,command-line
0
14,113,928
0
3
0
true
0
0
Just put the script directory into the PATH environment variable, or alternatively put the script in a location that is already in the PATH. On Unix systems, you usually use /home/<nick>/bin for your own scripts and add that to the PATH.
2
0
0
0
I have just written a Python script to do some batch file operations. I was wondering how I could keep it in some common path like the rest of the command line utilities such as cd, ls, grep etc. What I expect is something like this to be done from any directory - $ script.py arg1 arg2
How to execute a python command line utility from the terminal in any directory
0
1.2
1
0
0
170
14,120,045
2013-01-02T10:02:00.000
3
0
0
1
1
python,linux,audio
0
14,120,125
0
2
0
true
0
0
mpd should be perfect for you. It is a daemon and can be controlled by various clients, ranging from GUI-less command-line clients like mpc, through curses-based clients like ncmpc and ncmpcpp, up to several full-featured desktop clients. mpd + mpc should do the job for you, as mpc can easily be controlled from the command line and can also provide various status information about the currently played song and other things. There is already a Python client library available for mpd as well - python-mpd.
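One way to drive mpc from a Python server is the subprocess module; a sketch, assuming mpd and mpc are installed (the track path is a placeholder, and the real calls are commented out so the snippet stands alone):

```python
import subprocess

def mpc_command(*args):
    """Build the argument list for an mpc invocation."""
    return ["mpc"] + list(args)

def mpc(*args):
    """Run an mpc command and return its stdout (status text, errors, etc.)."""
    result = subprocess.run(mpc_command(*args), capture_output=True, text=True)
    return result.stdout

# Typical calls:
# mpc("add", "some_album/track01.mp3")   # queue a file from the mpd library
# mpc("play")                            # start playback
# mpc("toggle")                          # pause/resume
# mpc("volume", "70")                    # set volume to 70%
# print(mpc("status"))                   # current song and player state
```

The python-mpd library mentioned above talks to the daemon's socket directly and avoids spawning a process per command, which may be preferable for a long-running server.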
1
3
0
0
I want to use my Raspberry Pi as a media station. It should be able to play songs via commands over the network. These commands should be handled by a server written in Python. Therefore, I need a way to control audio playback via Python. I decided to use a command line music player for Linux, since those should offer the most flexibility for audio file formats. Also, Python libraries like PyAudio and PyMedia don't seem to work for me. I don't really have great expectations of the music player. It must be possible to play and pause sound files in as many codecs as possible and turn the volume up and down. Also, it has to be a headless player, since I am not running any desktop environment. There are a lot of players like that out there, it seems. mpg123, for example, works well for all I need. The problem I have now is that all of these players seem to have a user interface written in ncurses, and I have no idea how to access this with the Python subprocess module. So, I either need a music player which comes with Python bindings or one which can be controlled from the command line via the subprocess module. At least these are the solutions I have thought of so far. Does anyone know about a command line audio player for Linux that would solve my problem? Or is there any other way? Thanks in advance
Python-controllable command line audio player for Linux
1
1.2
1
0
0
2,048
14,130,010
2013-01-02T22:09:00.000
0
0
1
0
0
java,python,image,image-recognition
0
14,130,289
0
3
0
false
0
0
If I were going to do this, I would use normalized floats. The letter A would be: [(0.0,0.0),(0.5,1.0),(1.0,0.0),(0.1,0.5)(0.9,0.5)] Update (further explanation) So my thought is that you should be able to uniquely identify a letter with an array of normalized points. Points would be at important features of the letter such as start, end, and midpoints of a line. Curves would be sliced up into multiple smaller line segments, which would also be represented by points. To use this model, the source image would be analyzed. You would then analyze the image for text. You could use edge detection and other methods to find text. You'd also have to analyze for any transforms on the text. After you figure out the transform of the text, you would split the text into characters, then analyze the character for the points of important features. Then you would write an algorithm to normalize the points, and determine which model represents the found points most accurately.
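The matching step at the end can be sketched with a simple nearest-point distance; the 'A' model is the one from the answer, while the 'I' model and the scoring rule are illustrative guesses, not a production OCR metric:

```python
import math

# Normalized point models for a few letters.
MODELS = {
    "A": [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0), (0.1, 0.5), (0.9, 0.5)],
    "I": [(0.5, 0.0), (0.5, 0.5), (0.5, 1.0)],
}

def distance(points_a, points_b):
    """Sum, over points in a, of the distance to the nearest point in b."""
    return sum(min(math.hypot(ax - bx, ay - by) for bx, by in points_b)
               for ax, ay in points_a)

def classify(points):
    """Pick the model whose points best match the detected feature points."""
    return min(MODELS, key=lambda letter: distance(points, MODELS[letter]))
```

A real implementation would use a symmetric distance (compare both directions) so that a model with few points cannot trivially match a letter with many.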
3
2
0
0
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J". How would I create the model?
Creating an Image model
0
0
1
0
0
203
14,130,010
2013-01-02T22:09:00.000
0
0
1
0
0
java,python,image,image-recognition
0
14,130,097
0
3
0
false
0
0
You can use the OpenCV library for Java; it already contains an implementation of template matching. But a better approach for image recognition is to use machine learning or a neural network.
3
2
0
0
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J". How would I create the model?
Creating an Image model
0
0
1
0
0
203
14,130,010
2013-01-02T22:09:00.000
2
0
1
0
0
java,python,image,image-recognition
0
14,130,216
0
3
0
false
0
0
It seems you want to do OCR (Optical Character Recognition). If this is only part of a larger project, try OpenCV. Even if you are making a commercial product, it has a permissive BSD license. If you have set out to make a library of your own, read some of the papers available through any good search engine. There are many tutorials for machine learning and neural nets, which could produce the image models you want.
3
2
0
0
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J". How would I create the model?
Creating an Image model
0
0.132549
1
0
0
203
14,139,377
2013-01-03T12:54:00.000
0
0
0
1
0
python,batch-file,cross-platform,py2exe,scientific-computing
0
14,139,446
0
3
0
false
0
0
I would recommend using py2exe for the windows side, and then BuildApplet for the mac side. This will allow you to make a simple app you double click for your less savvy users.
2
5
0
0
EDIT One option I contemplated but don't know enough about is to e.g. for windows write a batch script to: Search for a Python installation, download one and install if not present Then install the bundled package using distutils to also handle dependencies. It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas? Original Question In brief What approach would you recommend for the following scenario? Linux development environment for creation of technical applications Deployment now also to be on Windows and Mac Existing code-base in Python wine won't install windows version of Python No windows install CDs available to create virtual windows/mac machines Porting to java incurs large overhead because of existing code-base Clients are not technical users, i.e. providing standard Python packages not sufficient - really requires installable self-contained products Background I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too. In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine. While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code. I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
Cross-platform deployment and easy installation
0
0
1
0
0
1,886
14,139,377
2013-01-03T12:54:00.000
2
0
0
1
0
python,batch-file,cross-platform,py2exe,scientific-computing
0
14,139,409
0
3
0
false
0
0
py2exe works pretty well, I guess you just have to setup a Windows box (or VM) to be able to build packages with it.
2
5
0
0
EDIT One option I contemplated but don't know enough about is to e.g. for windows write a batch script to: Search for a Python installation, download one and install if not present Then install the bundled package using distutils to also handle dependencies. It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas? Original Question In brief What approach would you recommend for the following scenario? Linux development environment for creation of technical applications Deployment now also to be on Windows and Mac Existing code-base in Python wine won't install windows version of Python No windows install CDs available to create virtual windows/mac machines Porting to java incurs large overhead because of existing code-base Clients are not technical users, i.e. providing standard Python packages not sufficient - really requires installable self-contained products Background I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too. In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine. While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code. I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
Cross-platform deployment and easy installation
0
0.132549
1
0
0
1,886
14,152,548
2013-01-04T06:55:00.000
0
1
0
1
0
python,python-3.x
0
64,151,445
0
3
0
false
0
0
In the general case, no; many Python 2 scripts will not run on Python 3, and vice versa. They are two different languages. Having said that, if you are careful, you can write a script which will run correctly under both. Some authors take extra care to make sure their scripts will be compatible across both versions, commonly using additional tools like the six library (the name is a pun; you can get to "six" by multiplying "two by three" or "three by two"). However, it is now 2020, and Python 2 is officially dead. Many maintainers who previously strove to maintain Python 2 compatibility while it was still supported will now be relieved and often outright happy to pull the plug on it going forward.
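A minimal script that runs unchanged under both interpreters, in the careful style described above: the env shebang picks up whichever `python` is on PATH, and the `__future__` imports paper over the print and division differences.

```python
#!/usr/bin/env python
from __future__ import print_function, division

import sys

def check():
    """Report which interpreter ran the script."""
    return "running under Python %d.%d" % sys.version_info[:2]

if __name__ == "__main__":
    print(check())
```

On a mixed fleet of servers, `chmod +x check.py` plus that shebang line is what lets `./check.py` find an interpreter without typing `python` or `python3` explicitly.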
2
7
0
0
I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers simply as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this? My question is how the script check.py finds the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
can one python script run both with python 2.x and python 3.x
1
0
1
0
0
5,143
14,152,548
2013-01-04T06:55:00.000
0
1
0
1
0
python,python-3.x
0
14,152,613
0
3
0
false
0
0
Considering that Python 3.x is not entirely backwards compatible with Python 2.x, you would have to ensure that the script was compatible with both versions. This can be done with some help from the 2to3 tool, but may ultimately mean running two distinct Python scripts.
2
7
0
0
I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers simply as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this? My question is how the script check.py finds the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
can one python script run both with python 2.x and python 3.x
1
0
1
0
0
5,143
14,153,954
2013-01-04T08:57:00.000
2
1
0
0
0
python,outlook,smtplib
0
14,154,176
0
1
0
false
0
0
You can send a copy of that email to yourself, with a header that tags the email as sent by yourself, then have another script (using an IMAP library, perhaps) move the email into the Outlook Sent Items folder.
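A simpler variant of this idea is to APPEND the sent message directly into the Sent Items mailbox over IMAP, right after smtplib sends it. A sketch, where the host names, the folder name "Sent Items", and the credentials are assumptions - Office 365 details may differ:

```python
import imaplib
import smtplib
import time
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    """Construct a plain-text email message."""
    msg = MIMEText(body)
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    return msg

def send_and_file(msg, password):
    """Send the message via SMTP, then append a copy to Sent Items via IMAP."""
    with smtplib.SMTP("smtp.office365.com", 587) as smtp:
        smtp.starttls()
        smtp.login(msg["From"], password)
        smtp.send_message(msg)
    imap = imaplib.IMAP4_SSL("outlook.office365.com")
    imap.login(msg["From"], password)
    imap.append('"Sent Items"', "\\Seen",
                imaplib.Time2Internaldate(time.time()), msg.as_bytes())
    imap.logout()
```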
1
4
0
0
I am using hosted exchange Microsoft Office 365 email and I have a Python script that sends email with smtplib. It is working very well. But there is one issue, how can I get the emails to show up in my Outlook Sent Items?
How can I see emails sent with Python's smtplib in my Outlook Sent Items folder?
0
0.379949
1
0
1
1,550
14,162,140
2013-01-04T17:45:00.000
0
0
1
0
1
python
0
14,162,418
0
1
0
false
0
0
If what you want is a C API version of exec, maybe try PyRun_File and its ilk? Not sure exactly what you're trying to accomplish though.
1
1
0
0
I am trying to set local/global variables in a PyRun_InteractiveLoop call. I can't figure out how to do it since, unlike its exec counterparts, the loop doesn't accept globals/locals args. What am I missing?
PyRun_InteractiveLoop globals/locals
0
0
1
0
0
199
14,177,436
2013-01-05T23:11:00.000
0
0
0
0
1
python-3.x,amazon-web-services,amazon-s3
0
70,791,596
0
2
0
false
1
0
Using the AWS CLI: aws s3 ls s3://<bucketname> --region <bucket-region> --no-sign-request
2
3
0
0
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
Get publicly accessible contents of S3 bucket without AWS credentials
0
0
1
0
1
1,214
14,177,436
2013-01-05T23:11:00.000
4
0
0
0
1
python-3.x,amazon-web-services,amazon-s3
0
14,199,730
0
2
0
false
1
0
If the bucket's permissions allow Everyone to list it, you can just do a simple HTTP GET request to http://s3.amazonaws.com/bucketname with no credentials. The response will be XML with everything in it, whether those objects are accessible by Everyone or not. I don't know if boto has an option to make this request without credentials. If not, you'll have to use lower-level HTTP and XML libraries. If the bucket itself does not allow Everyone to list it, there is no way to get a list of its contents, even if some of the objects in it are publicly accessible.
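That unauthenticated GET returns XML in S3's ListBucketResult format, which the standard library can parse without boto. A sketch - the sample response below is hand-written, and the namespace string is the one S3 has historically used:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen  # what you'd use for the real GET

S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def list_keys(xml_text):
    """Extract object keys from a ListBucketResult response body."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(S3_NS + "Key")]

# In real use: xml_text = urlopen("http://s3.amazonaws.com/bucketname").read()
SAMPLE = (
    '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Contents><Key>photos/cat.jpg</Key></Contents>"
    "<Contents><Key>readme.txt</Key></Contents>"
    "</ListBucketResult>"
)
```

Listings are paginated (1000 keys per response), so a full crawl would also need to follow the IsTruncated/NextMarker fields.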
2
3
0
0
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
Get publicly accessible contents of S3 bucket without AWS credentials
0
0.379949
1
0
1
1,214
14,191,034
2013-01-07T06:34:00.000
1
0
0
0
0
python,django
0
14,191,085
0
1
0
true
1
0
What we usually do in our Django projects is create versions of all configuration files for each platform (dev, prod, etc...) and use symlinks to select the correct one. Now that Windows supports links properly, this solution fits everybody. If you insist on another configuration file, try making it a Python file that just imports the proper configuration file, so instead of name="development" you'll have something like execfile('development_settings.py')
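The execfile idea can be sketched like this; Python 3 spells execfile as exec(open(...).read()), and both APP_ENV and the inline settings strings here are hypothetical stand-ins for real development_settings.py / production_settings.py files on disk:

```python
import os

# Stand-ins for the per-environment settings files.
SETTINGS_SOURCES = {
    "development": 'name = "development"\nversion = "1.0"',
    "production": 'name = "production"\nversion = "1.0"',
}

def load_settings(env=None):
    """Execute the settings source chosen by the APP_ENV variable."""
    env = env or os.environ.get("APP_ENV", "development")
    namespace = {}
    exec(SETTINGS_SOURCES[env], namespace)  # execfile('..._settings.py') in Py2
    return namespace
```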
1
0
0
0
I'm building a project with Django, so I can make two settings files for Django: production_settings.py and development_settings.py. However, I also need some configuration files for my project, and I'm using ConfigParser to parse those files, e.g. [Section] name = "development" version = "1.0". How do I split this configuration file into production and development versions?
Python project settings for production and development
0
1.2
1
0
0
150
14,191,462
2013-01-07T07:12:00.000
0
0
0
0
0
python,python-2.7
0
14,193,534
0
1
0
false
0
1
The question is vague, but I guess you should use c_char_p instead of POINTER(c_char).
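If you are stuck with a POINTER(c_char) value (for example from a function's restype), the standard-library helpers can turn it back into bytes; a small round-trip demonstration:

```python
import ctypes

# A writable buffer standing in for memory handed back by the C library.
buf = ctypes.create_string_buffer(b"hello")

# The awkward type in question: a bare pointer-to-char.
p = ctypes.cast(buf, ctypes.POINTER(ctypes.c_char))

# Read it back as Python bytes (stops at the NUL terminator).
as_bytes = ctypes.string_at(p)

# Alternatively, cast to c_char_p and take its .value.
as_value = ctypes.cast(p, ctypes.c_char_p).value
```

Going the other direction, assigning a Python byte string to a c_char_p argument works directly, which is why declaring the argtype as c_char_p is usually the cleaner fix.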
1
0
0
0
I call a foreign C library in my program using ctypes, and I don't know how to assign a variant of POINTER(c_char) type to a string.
How can I assign a variant of POINTER(c_char) type to a string?
0
0
1
0
0
149
14,201,551
2013-01-07T18:08:00.000
3
0
0
1
0
python,linux,alsa,udev,soundcard
0
14,203,491
1
2
0
false
0
0
The sound card limit is defined as the symbol SNDRV_CARDS in include/sound/core.h. When I increased this seven years ago, I did not go beyond 32 because the card index is used as a bit index for the variable snd_cards_lock in sound/core/init.c, and I did not want to change more than necessary. If you make snd_cards_lock a 64-bit variable, change all accesses to use a 64-bit type, and adjust any other side effect that I might have forgotten about, you should be able to get the kernel to have more ALSA cards. This limit also exists in the alsa-lib package; you will have to change at least the check in snd_ctl_hw_open in src/control/control_hw.c.
1
6
0
0
I'm working on an educational multiseat project where we need to connect 36 keyboards and 36 USB sound cards to a single computer. We're running Ubuntu Linux 12.04 with the 3.6.3-030603-generic kernel. So far we've managed to get the input from the 36 keyboards, and recognized the 36 sound cards without getting a kernel panic (which happened before updating the kernel). We know the 36 sound cards have been recognized because $ lsusb | grep "Audio" -c outputs 36. However, $ aplay -l lists 32 playback devices in total (including the "internal" sound card). Also, $ alsamixer -c 32 says "invalid card index: 32" (it works just from 0 through 31; 32 in total too). So my question is, how can I access the other sound cards if they're not even listed with these commands? I'm writing an application in Python, and there are some libraries to choose from, but I'm afraid they'll also be limited to 32 devices in total because of this. Any guidance will be useful. Thanks.
Need more than 32 USB sound cards on my system
0
0.291313
1
0
0
1,112
14,242,764
2013-01-09T17:18:00.000
0
0
1
0
1
python,numpy,scipy,python-module
0
14,242,912
0
1
0
false
0
0
Use the --user option to easy_install or setup.py to indicate where the installation is to take place. It should point to a directory where you have write access. Once the module has been built and installed, you then need to set the environment variable PYTHONPATH to point to that location. When you next run the python command, you should be able to import the module.
1
1
1
0
I am totally new to Python, and I have to use some modules in my code, like numpy and scipy, but I have no permission on my hosting to install new modules using easy_install or pip (and of course I don't know how to install new modules in a directory where I do have permission; I have SSH access). I downloaded numpy and used from numpy import *, but it doesn't work. I also tried the same thing with scipy: from scipy import *, but it also doesn't work. How do I load/use new Python modules such as numpy and scipy without installing them?
use / load new python module without installation
0
0
1
0
0
1,476
14,244,195
2013-01-09T18:41:00.000
1
0
0
0
0
python,slider,wxpython,color-mapping
0
14,260,324
0
1
0
true
0
1
I think you're looking for one of the following widgets: ColourDialog, ColourSelect, PyColourChooser or CubeColourDialog They all let you choose colors in different ways and they have a slider to help adjust the colours too. You can see each of them in action in the wxPython demo (downloadable from the wxPython web page)
1
0
0
0
I haven't seen an example of this, but I wanted to know if anyone knows how to implement a colorbar with an adjustable slider using wxPython. Basically, the slider should change the levels of the colorbar and thereby adjust the colormap. If anyone has an idea of how to do this, and possibly some example code, it would be much appreciated.
colorbar with a slider using wxpython
0
1.2
1
0
0
386
14,254,203
2013-01-10T09:08:00.000
15
0
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
0
34,036,255
0
6
0
false
0
0
The simple answer: multiply the results - it's the same thing. Naive Bayes is based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features - meaning you calculate the Bayes probability for each specific feature without regard to the others - so the algorithm multiplies the probability from one feature with the probability from the next (and we totally ignore the denominator, since it is just a normalizer). So the right answer is: calculate the probability from the categorical variables. calculate the probability from the continuous variables. multiply 1. and 2.
3
76
1
0
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
0
1
1
0
0
29,596
14,254,203
2013-01-10T09:08:00.000
0
0
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
0
69,929,209
0
6
0
false
0
0
You will need the following steps: Calculate the probability from the categorical variables (using predict_proba method from BernoulliNB) Calculate the probability from the continuous variables (using predict_proba method from GaussianNB) Multiply 1. and 2. AND Divide by the prior (either from BernoulliNB or from GaussianNB since they are the same) AND THEN Divide 4. by the sum (over the classes) of 4. This is the normalisation step. It should be easy enough to see how you can add your own prior instead of using those learned from the data.
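The arithmetic in steps 3-5 can be checked with plain numbers; the per-class probabilities below are made up for illustration, each already of the form P(class | features):

```python
# Hypothetical predict_proba outputs for one sample, two classes.
bernoulli_proba = [0.6, 0.4]   # from the categorical features
gaussian_proba = [0.3, 0.7]    # from the continuous features
prior = [0.5, 0.5]             # class prior, counted twice in the raw product

# Step 3: multiply; step 4: divide out the duplicated prior.
combined = [b * g / p for b, g, p in zip(bernoulli_proba, gaussian_proba, prior)]

# Step 5: renormalise over the classes.
total = sum(combined)
posterior = [c / total for c in combined]
```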
3
76
1
0
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
0
0
1
0
0
29,596
14,254,203
2013-01-10T09:08:00.000
74
0
0
0
0
python,machine-learning,data-mining,classification,scikit-learn
0
14,255,284
0
6
0
true
0
0
You have at least two options: Transform all your data into a categorical representation by computing percentiles for each continuous variables and then binning the continuous variables using the percentiles as bin boundaries. For instance for the height of a person create the following bins: "very small", "small", "regular", "big", "very big" ensuring that each bin contains approximately 20% of the population of your training set. We don't have any utility to perform this automatically in scikit-learn but it should not be too complicated to do it yourself. Then fit a unique multinomial NB on those categorical representation of your data. Independently fit a gaussian NB model on the continuous part of the data and a multinomial NB model on the categorical part. Then transform all the dataset by taking the class assignment probabilities (with predict_proba method) as new features: np.hstack((multinomial_probas, gaussian_probas)) and then refit a new model (e.g. a new gaussian NB) on the new features.
3
76
1
0
I'm using scikit-learn in Python to develop a classification algorithm to predict the gender of certain customers. Amongst others, I want to use the Naive Bayes classifier but my problem is that I have a mix of categorical data (ex: "Registered online", "Accepts email notifications" etc) and continuous data (ex: "Age", "Length of membership" etc). I haven't used scikit much before but I suppose that that Gaussian Naive Bayes is suitable for continuous data and that Bernoulli Naive Bayes can be used for categorical data. However, since I want to have both categorical and continuous data in my model, I don't really know how to handle this. Any ideas would be much appreciated!
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn
0
1.2
1
0
0
29,596
14,255,289
2013-01-10T10:08:00.000
2
0
0
1
0
python,python-2.7,twisted,failover
0
14,266,178
0
2
0
false
0
0
ReconnectingClientFactory doesn't have this capability. You can build your own factory which implements this kind of reconnection logic, mostly by hooking into the clientConnectionFailed factory method. When this is called and the reason seems to you like that justifies switching servers (eg, twisted.internet.error.ConnectionRefused), pick the next address on your list and use the appropriate reactor.connectXYZ method to try connecting to it. You could also try constructing this as an endpoint (which is the newer high-level connection setup API that is preferred by some), but handling reconnection with endpoints is not yet a well documented topic.
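A sketch of the failover bookkeeping that clientConnectionFailed could drive; the Twisted calls appear only in comments so the snippet stands alone, and the server list is illustrative:

```python
from itertools import cycle

class ServerRotation:
    """Cycle through (host, port) pairs, advancing on each connection failure."""

    def __init__(self, addresses):
        self._cycle = cycle(addresses)
        self.current = next(self._cycle)

    def next_server(self):
        """Move to the next address in the list, wrapping around at the end."""
        self.current = next(self._cycle)
        return self.current

# Inside your factory subclass:
#     def clientConnectionFailed(self, connector, reason):
#         host, port = self.rotation.next_server()
#         reactor.connectTCP(host, port, self)
```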
1
4
0
0
I have a twisted ReconnectingClientFactory, and I can successfully connect to a given IP and port with this factory. It works well. reactor.connectTCP(ip, port, myHandsomeReconnectingClientFactory) In this situation, when the server goes away, myHandsomeReconnectingClientFactory tries to reconnect to the same IP and port (as expected). My goal is to connect to a backup server (which has a different IP and port) when the server serving on the given IP and port is gone. Any ideas/comments on how to achieve this goal will be appreciated.
Twisted: ReconnectingClientFactory connection to different servers
0
0.197375
1
0
0
1,504