Dataset columns (type and value/length range):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string, length 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, length 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, length 25 to 6.53k
Title: string, length 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
10,930,459
2012-06-07T11:00:00.000
1
0
0
0
0
python,mysql,django
1
10,935,789
0
2
0
false
1
0
You could use a middleware with a process_view method and a try/except wrapping your call. Or you could decorate your views and wrap the call there. Or you could use class-based views with a base class that has a method decorator on its dispatch method, or an overridden dispatch(). Really, you have plenty of solutions. Now, as said above, you might want to modify your desktop application too!
1
1
0
0
I have a desktop application that sends POST requests to a server where a Django app stores the results. The DB server and web server are not on the same machine, and sometimes the connectivity is lost for a very short time, which results in a connection error on some requests: OperationalError: (2003, "Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (110)"). On a "normal" website I guess you wouldn't worry too much: the browser displays a 500 error page and the visitor tries again later. In my case, losing info posted by a request is not an option, and I am wondering how to handle this. I'd like to catch this exception, wait for the connectivity to come back (lag is not a problem) and then continue the process. But as the exception can occur almost anywhere in the code, I'm a bit stuck on how to proceed. Thanks for your advice.
Django: how to properly handle a database connection error
0
0.099668
1
1
0
2,536
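A minimal sketch (not from the original thread) of the middleware idea in the answer above: wrap the view call, catch the MySQL connection error, and retry a few times before giving up. The class name, retry counts and the 503 fallback are illustrative assumptions, and old-style (Django 1.x) middleware is assumed to match the era of the question.

```python
import time
from django.http import HttpResponse

try:
    from django.db import OperationalError      # available in newer Django
except ImportError:
    from MySQLdb import OperationalError        # raw driver exception, as in the question

class RetryOnDBErrorMiddleware(object):
    """Old-style middleware: add to MIDDLEWARE_CLASSES."""
    MAX_RETRIES = 5
    DELAY_SECONDS = 2

    def process_view(self, request, view_func, view_args, view_kwargs):
        for _ in range(self.MAX_RETRIES):
            try:
                return view_func(request, *view_args, **view_kwargs)
            except OperationalError:
                time.sleep(self.DELAY_SECONDS)   # wait for connectivity to return
        # Tell the desktop client to retry later instead of silently dropping the POST.
        return HttpResponse("database unavailable, retry later", status=503)
```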
10,956,683
2012-06-08T22:21:00.000
0
0
0
0
0
python,ruby-on-rails,server-side
0
33,303,096
0
2
0
false
1
0
Are you sure your database is well maintained and efficient (good indexes, normalised, clean, etc.)? Or could you make use of message queues? You keep your Rails CRUD app, and the jobs are just added to a queue. Python scripts on the backend (or on a different machine) read from the queue, process the data, then insert the results back into the database or add them to a results queue, or wherever you want to read them from.
1
0
0
0
I am a data scientist and database veteran but a total rookie in web development, and I have just finished developing my first Ruby on Rails app. This app accepts data from users submitting it to my frontend webpage and returns stats on the data submitted. Some users have been submitting way too much data - it's getting slow, and I think I'd better push the data crunching to a backend Python or Java app, not the database. I don't even know where to start. Any ideas on how to best architect this application? The job flow is: data is submitted from the frontend app, which pushes it to the backend for my server app to process and send back to my Ruby on Rails page. Any good tutorials that cover this? Please help! What should I be reading up on?
Ruby on Rails frontend and server side processing in Python or Java.. HOW, What, Huh?
0
0
1
0
0
694
10,957,467
2012-06-09T00:34:00.000
1
0
1
0
0
python,list,comparison
0
10,957,519
0
2
0
false
0
0
The most straightforward way: Create an object for each one, with your comparison function as __cmp__ (Python 2.x), or define __lt__ and __eq__ (Python 3.x). Stash them in a list named list_. Find the least-valued one using min(list_). An optimization that might help, if practical: if you can come up with a way of mapping your objects to (possibly large) integers, such that the integer for x is < the integer for y iff the original object ox is < the original object oy, then take the min of the integers. This should speed things up slightly, if it's workable for your types.
1
1
0
0
I have a function that takes as parameters 2 objects: a and b The function checks (with a very long algorithm) which one of these objects is better. If a is better it returns -1, if b is better it returns 1, if they tied it returns 0 My problem is: I have 21 of these objects in a list. I need to find out, using the function above (the function cannot be changed, the only way is to compare 2 objects, it's a very complicated and long algorithm), which one of these 21 objects is the best. I tried thinking for hours how to do it efficiently without doing the same comparison too many times, how to write an algorithm that will find out which one is the best (and if two of them are tied and they both are the best, it doesn't matter which one to take, though I don't think it's even possible for a tie to happen), and I couldn't come up with anything good. The function's name is handCompare(a, b) The objects are found in a list called Combos, len(combos) is 21 I need an algorithm that will find out the best item in the combos list Thanks for reading and I hope you can help :)
Python, using a comparison function for finding the best object
0
0.099668
1
0
0
103
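The answer above boils down to making the expensive comparison usable by min(). A minimal sketch, assuming the question's handCompare/combos names and substituting a trivial placeholder comparison; functools.cmp_to_key exists in Python 2.7 and 3.x.

```python
from functools import cmp_to_key

def handCompare(a, b):
    # Placeholder for the real, expensive comparison:
    # -1 if a is better, 1 if b is better, 0 on a tie (here: smaller number wins).
    return (a > b) - (a < b)

combos = [5, 3, 9, 1, 7]                              # stand-in for the 21 hand objects
best = min(combos, key=cmp_to_key(handCompare))       # n - 1 comparisons: 20 for 21 items
print(best)                                           # 1
```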
10,960,672
2012-06-09T11:39:00.000
4
0
0
0
0
python
0
10,960,826
0
1
0
true
0
0
Unless you entirely control the system, I think you'd be better off abandoning this particular pursuit. Modern filesystems and media (e.g. SSD wear-leveling) can result in data being retained physically on disk even if you overwrite it in place. Best practice in my book is to fill the disk with random data, then exclusively use whole-disk encryption.
1
0
0
0
I'm going to make a kind of remover that erases files so they can never be recovered. I don't know the algorithm, but I think it is possible to get the exact file location on disk and write something like nulls there. So I'm searching the os module and others, but I don't know how to do that... Is there a function, or another way? Or do I just have to open the file in binary mode and overwrite it with nulls?
How can I get the exact memory address on HDD of a file?
0
1.2
1
0
0
130
10,972,821
2012-06-10T22:25:00.000
0
0
0
0
0
python,pygtk
0
10,973,513
0
1
1
true
0
1
So, I understand now: it turns out that when items are reordered in IconView, gtk.ListStore.reorder or something similar is called. What that means is that all I needed to do was to use gtk.ListStore.get_iter() or gtk.ListStore.get_iter_first() and all the problems are solved. How trivial! All I needed to do was iterate over it, it seems.
1
0
0
0
Using PyGtk's IconView, I can set the icons to be reorderable by calling gtk.IconView.set_reorderable(True). My question is what is the best way to retrieve the new order? That is, how should I access a property of each of the elements in the new order? An iterator of sorts? I am using gtk.ListStore to store the data. I know this might sound trivial but I have virtually no experience in Python or PyGtk (or GTK in general) so I'd like to know the right way! Thanks!
Reordering in IconView (PyGtk)
0
1.2
1
0
0
135
10,994,405
2012-06-12T10:04:00.000
2
1
0
0
0
python,caching,timeit
0
10,994,548
0
1
0
false
0
0
I think the memory allocation is the problem. The Python interpreter itself holds a memory pool, which starts with no (or little?) memory pooled. After the first run of your program, much memory is allocated (from the system) and freed (to the pool), and the following runs then get memory from the pool, which is much faster than asking the system for memory. But this makes sense only if your algorithm consumes a lot of memory.
1
1
0
0
I am testing several different algorithms (for solving 16x16 sudokus) against each other, measuring their performance using the timeit module. However, it appears that only the first of the timeit.repeat() iterations is actually calculated, because the other iterations are gotten much faster. Testing a single algorithm with t.repeat(repeat=10, number=1) the following results are gotten: [+] Results for......: solve1 (function 1/1) [+] Fastest..........: 0.00003099 [+] Slowest..........: 32.38717794 [+] Average*.........: 0.00003335 (avg. calculated w/o min/max values) The first out of 10 results always takes an much larger time to complete, which seems to be explainable only by the fact that iterations 2 to 10 of the timeit.repeat() loop somehow use the cached results of the loop's previous iterations. When actually using timeit.repeat() in a for loop to compare several algorithms against each other, again it appears that the solution to the puzzle is calculated only once: [+] Results for......: solve1 (function 1/3) [+] Fastest..........: 0.00003099 [+] Slowest..........: 16.33443809 [+] Average*.........: 0.00003263 (avg. calculated w/o min/max values) [+] Results for......: solve2 (function 2/3) [+] Fastest..........: 0.00365305 [+] Slowest..........: 0.02915907 [+] Average*.........: 0.00647599 (avg. calculated w/o min/max values) [+] Results for......: solve3 (function 3/3) [+] Fastest..........: 0.00659299 [+] Slowest..........: 0.02440906 [+] Average*.........: 0.00717765 (avg. calculated w/o min/max values) The really weird thing is that relative speed (in relation to each other) of the algorithms is consistent throughout measurements, which would indicate that all algorithms are calculating their own results. Is this extreme increase in performance due to the fact that a large part of the intermediate results (gotten when computing the first solution) are still in some sort of cache, reserved by the python proces? Any help/insights would be greatly appreciated.
Python timeit: results cached instead of calculated?
0
0.379949
1
0
0
1,635
10,999,627
2012-06-12T15:14:00.000
0
0
0
0
0
python,twisted
0
11,001,717
0
1
0
true
0
0
Your factory's buildProtocol can return anything you want it to return. That's up to you. However, you might find that things are a lot simpler if you just use two different factories. That does not preclude sharing state. Just have them share a bunch of attributes, or collect all your state together onto a single new object and have the factories share that object.
1
0
0
0
I am trying to implement a network protocol that listens on 2 separate TCP ports. One is for control messages and one is for data messages. I understand that I need two separate protocol classes since there are two ports involved. I would like to have one factory that creates both of these protocols, since there is state and data shared between them and they essentially implement one protocol. Is this possible? If yes, how? If not, how can I achieve something similar? I understand that it is unusual to divide a protocol between 2 ports, but that is the given situation. Thanks
A single factory for multiple protocols?
1
1.2
1
0
1
153
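A rough sketch of the two-factories-sharing-state arrangement suggested in the answer above; the protocol classes, port numbers and the shared object are invented for illustration. Twisted's default Factory.buildProtocol sets protocol.factory, which is how each protocol reaches the shared state.

```python
from twisted.internet import reactor
from twisted.internet.protocol import Factory, Protocol

class SharedState(object):
    def __init__(self):
        self.sessions = {}

class ControlProtocol(Protocol):
    def dataReceived(self, data):
        self.factory.shared.sessions['last_control'] = data

class DataProtocol(Protocol):
    def dataReceived(self, data):
        self.factory.shared.sessions['last_data'] = data

class ControlFactory(Factory):
    protocol = ControlProtocol
    def __init__(self, shared):
        self.shared = shared

class DataFactory(Factory):
    protocol = DataProtocol
    def __init__(self, shared):
        self.shared = shared

shared = SharedState()
reactor.listenTCP(4000, ControlFactory(shared))   # control port
reactor.listenTCP(4001, DataFactory(shared))      # data port
reactor.run()
```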
11,013,911
2012-06-13T11:28:00.000
-2
0
1
0
0
python,google-app-engine,python-2.7
0
11,013,947
0
3
0
false
1
0
This is impossible (without an external service). DBs are made for this: to store data longer than one request. What you could do is save the dict "in" the user's session, but I don't recommend that. Unless you have millions of entries, every DB is fast enough, even SQLite.
1
3
0
0
I am new to Python and have been studying its fundamentals for 3 months now, learning types, functions and algorithms. Now I have started practicing web app development with the GAE framework. Goal: have a very large dictionary which can be accessed from all .py files throughout the web app without having it stored more than once or re-created each time someone visits a URL of the app. I want to render a simple DB table into a dictionary, hoping for a speed gain since it will be in memory. I am also planning on creating an in-memory DAWG/trie. I don't want this dictionary to be created each time a page is called; I want it to be stored in memory once, kept there, used and accessed by all sessions and, if possible, modified too. How can I achieve this? Like a simple in-memory DB, but actually a Python dictionary? Thank you.
python: how to have a dictionary which can be accessed from all the app
0
-0.132549
1
0
0
576
11,013,976
2012-06-13T11:31:00.000
1
0
0
0
0
python,mysql,ruby-on-rails,database,triggers
0
11,014,025
0
1
0
true
1
0
Yes, refactor the code to put a data web service in front of the database and let the Ruby and Python apps talk to the service. Let it maintain all integrity and business rules. "Don't Repeat Yourself" - it's a good rule.
1
2
0
0
Okay. We have a Rails webapp which stores data in a MySQL database. The table design was not read-efficient, so we resorted to creating a separate set of read-only tables in MySQL and made all our internal API calls use those tables for reads. We used callbacks to keep the data in sync between both sets of tables. Now we have another Python app which is going to work with the same database - so how do we proceed in maintaining data integrity? Active Record callbacks can't be used anymore. We know we can do it with triggers, but is there any other, more elegant way to do this? How do people maintain the integrity of such derived data?
Maintaining data integrity in mysql when different applications are accessing it
1
1.2
1
1
0
296
11,015,320
2012-06-13T12:56:00.000
14
0
1
0
0
python,trie,dawg
0
11,015,381
0
15
0
false
0
0
There's no "should"; it's up to you. Various implementations will have different performance characteristics, take various amounts of time to implement, understand, and get right. This is typical for software development as a whole, in my opinion. I would probably first try having a global list of all trie nodes so far created, and representing the child-pointers in each node as a list of indices into the global list. Having a dictionary just to represent the child linking feels too heavy-weight, to me.
1
148
0
0
I'm interested in tries and DAWGs (directed acyclic word graphs) and I've been reading a lot about them, but I don't understand what the output trie or DAWG file should look like. Should a trie be an object of nested dictionaries, where each letter is divided into letters, and so on? Would a lookup performed on such a dictionary be fast if there are 100k or 500k entries? How do I implement word blocks consisting of more than one word, separated by - or space? How do I link a prefix or suffix of a word to another part of the structure? (for a DAWG) I want to understand the best output structure in order to figure out how to create and use one. I would also appreciate knowing what the output of a DAWG should be, along with the trie. I do not want to see graphical representations with bubbles linked to each other; I want to know the output object once a set of words is turned into tries or DAWGs.
How to create a trie in Python
0
1
1
0
0
138,151
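For the nested-dictionary representation the question above asks about, a minimal sketch; the _end_ sentinel is a common convention, not something from the thread itself.

```python
_END = '_end_'

def make_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[_END] = True          # marks a complete word
    return root

def in_trie(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return _END in node

trie = make_trie(['cat', 'car', 'dog'])
print(in_trie(trie, 'car'), in_trie(trie, 'ca'))   # True False
```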
11,023,530
2012-06-13T21:25:00.000
4
0
0
0
0
python,html,directory,ip-address
0
11,023,595
0
5
0
false
0
0
HTTP does not work with "files" and "directories". Pick a different protocol.
2
17
0
0
How can I list files and folders if I only have an IP-address? With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well? I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
Python to list HTTP-files and directories
0
0.158649
1
0
1
57,862
11,023,530
2012-06-13T21:25:00.000
13
0
0
0
0
python,html,directory,ip-address
0
11,024,116
0
5
0
false
0
0
You cannot get the directory listing directly via HTTP, as another answer says. It's the HTTP server that "decides" what to give you. Some will give you an HTML page displaying links to all the files inside a "directory", some will give you some page (index.html), and some will not even interpret the "directory" as one. For example, you might have a link to "http://localhost/user-login/": This does not mean that there is a directory called user-login in the document root of the server. The server interprets that as a "link" to some page. Now, to achieve what you want, you either have to use something other than HTTP (an FTP server on the "ip address" you want to access would do the job), or set up an HTTP server on that machine that provides for each path (http://192.168.2.100/directory) a list of files in it (in whatever format) and parse that through Python. If the server provides an "index of /bla/bla" kind of page (like Apache server do, directory listings), you could parse the HTML output to find out the names of files and directories. If not (e.g. a custom index.html, or whatever the server decides to give you), then you're out of luck :(, you can't do it.
2
17
0
0
How can I list files and folders if I only have an IP-address? With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well? I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
Python to list HTTP-files and directories
0
1
1
0
1
57,862
11,027,366
2012-06-14T05:49:00.000
2
0
0
0
0
python,user-interface,wxpython,alignment,screen-resolution
0
11,036,132
0
1
0
false
0
1
Sizers automatically adjust your application widgets for screen resolution, resize, etc. If they aren't doing this automatically, then there's probably something buggy in your code. Since I don't have your code to look at, try going over the wxPython tutorials on sizers very carefully. I found the book wxPython In Action very useful on this topic.
1
0
0
0
So, here I was, happy that I wrote the whole code for an awesome-looking GUI using wxPython in a day, but that evaporated when I found that the panels get out of the way, leaving a lot of empty space on the sides, or get congested (you know how!) on a different screen resolution. What I want to ask is: which properties of a GUI should I adjust or care about so that the GUI's aspect ratio, frame alignment, panel alignments, sizer ratios etc. remain intact? If there are any methods to do so, please suggest them. Thanks in advance. :)
wxPython : Adjusting the panels and sizers for different screen resolutions
0
0.379949
1
0
0
284
11,033,892
2012-06-14T13:15:00.000
3
0
0
0
0
python,postgresql,web-applications,sqlalchemy,turbogears2
0
11,034,199
0
2
0
true
0
0
If two transactions try to set the same value at the same time, one of them will fail. The one that loses will need error handling. For your particular example, you will want to query for the number of parts and update the number of parts in the same transaction. There is no race condition on sequence numbers: save a record that uses a sequence number and the DB will automatically assign it. Edit: Note, as Limscoder points out, you need to set the isolation level to Repeatable Read.
1
2
0
0
We are writing an inventory system and I have some questions about SQLAlchemy (PostgreSQL) and transactions/sessions. This is a web app using TG2; not sure this matters, but too much info is never bad. How can I make sure that when changing inventory quantities I don't run into race conditions? If I understand it correctly, if user one is going to decrement inventory on an item to, say, 0, and user two is also trying to decrement the inventory to 0, then if user one's session hasn't been committed yet, user two's starting inventory number is going to be the same as user one's, resulting in a race condition when both commit, one overwriting the other instead of having a compound effect. If I wanted to use a PostgreSQL sequence for things like order/invoice numbers, how can I get/set next values from SQLAlchemy without running into race conditions? EDIT: I think I found the solution: I need to use with_lockmode, using "for update" or "for share". I am going to leave this open for more answers or for others to correct me if I am mistaken. TIA
SQLAlchemy(Postgresql) - Race Conditions
1
1.2
1
1
0
2,935
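A minimal sketch of the row-locking approach the question's EDIT converges on, with an invented InventoryItem model. Newer SQLAlchemy spells the lock query.with_for_update(); the older releases contemporary with the question used query.with_lockmode('update').

```python
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class InventoryItem(Base):                  # hypothetical model for illustration
    __tablename__ = 'inventory_items'
    id = Column(Integer, primary_key=True)
    quantity = Column(Integer, nullable=False)

def decrement_stock(session, item_id, amount):
    # SELECT ... FOR UPDATE: the row lock serializes concurrent decrements.
    item = (session.query(InventoryItem)
                   .filter_by(id=item_id)
                   .with_for_update()       # older SQLAlchemy: .with_lockmode('update')
                   .one())
    if item.quantity < amount:
        raise ValueError("not enough stock")
    item.quantity -= amount
    session.commit()                        # commit releases the lock
```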
11,034,268
2012-06-14T13:34:00.000
0
0
1
0
0
python
0
11,034,398
0
2
0
false
0
0
I think you can use MongoDB, where you can have a list field with all possible spellings of an author's name. For example, if you have the handwritten name "black" and you can't tell which letter it is, for example "c" or "e", you can store the original name as "black" and add "blaek" to the list of possible names.
1
0
0
1
We have scanned thousands of old documents and entered key data into a database. One of the fields is author name. We need to search for documents by a given author but the exact name might have been entered incorrectly as on many documents the data is handwritten. I thought of searching for only the first few letters of the surname and then presenting a list for the user to select from. I don't know at this stage how many distinct authors there are, I suspect it will be in the hundreds rather than hundreds of thousands. There will be hundreds of thousands of documents. Is there a better way? Would an SQL database handle it better? The software is python, and there will be a list of documents each with an author.
Search unreliable author names
0
0
1
0
0
73
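One standard-library way to realize the "present a candidate list to the user" idea in the question above is difflib; the author names here are invented sample data and the cutoff is an assumed threshold.

```python
import difflib

known_authors = ['Blackwood', 'Blakemore', 'Blacker', 'Brackley', 'Clarkson']

def candidate_authors(entered_name, limit=5):
    # cutoff is a similarity threshold between 0 and 1; tune it on real data
    return difflib.get_close_matches(entered_name, known_authors,
                                     n=limit, cutoff=0.6)

print(candidate_authors('Blakwood'))   # likely starts with 'Blackwood'
```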
11,042,172
2012-06-14T22:22:00.000
0
0
0
0
0
python,qt,mvvm,pyside,architectural-patterns
0
18,681,903
0
3
0
false
0
1
I don't know how far you want to take MVVM, but at a basic level it comes with Qt, and I've been using it for a long time. You have a business-specific model, say tied to a database. Then you create a view-specific viewmodel as a proxy model. You can stack a few layers of those, depending on what you need. Then you show that using a view. As long as everything is set up right, it will "just work". Now, if you want to use a model to configure a view, Qt doesn't provide anything directly for you. You'd need to write a factory class that can use viewmodel data to instantiate and set up the view for you. Everything depends on how far you want to take it, and what architectural benefits it gives you.
2
16
0
0
I've been trying to find a way to implement MVVM with PySide but haven't been able to. I think that there should be a way to create Views from ViewModels with QItemEditorFactory, and to do data binding I think I can use QDataWidgetMapper. Do you have any ideas on how MVVM may be implemented with Qt and PySide? Even if there are some resources in C++ I'll try to translate them to python. Thanks.
MVVM pattern with PySide
0
0
1
0
0
4,531
11,042,172
2012-06-14T22:22:00.000
-2
0
0
0
0
python,qt,mvvm,pyside,architectural-patterns
0
17,227,321
0
3
0
false
0
1
An obvious answer for me is that MVVM is suited to WPF and some other technologies that welcome this pattern, so you have to find out whether it's possible to apply this pattern to other technologies. Please read up on MVVM on the wiki.
2
16
0
0
I've been trying to find a way to implement MVVM with PySide but haven't been able to. I think that there should be a way to create Views from ViewModels with QItemEditorFactory, and to do data binding I think I can use QDataWidgetMapper. Do you have any ideas on how MVVM may be implemented with Qt and PySide? Even if there are some resources in C++ I'll try to translate them to python. Thanks.
MVVM pattern with PySide
0
-0.132549
1
0
0
4,531
11,046,836
2012-06-15T08:03:00.000
5
0
1
0
0
python,tkinter,python-multithreading
0
11,049,545
0
2
0
true
0
1
It is not multithreaded. Tkinter works by pulling objects off of a queue and processing them. Usually what is on this queue are events generated by the user (mouse movements, button clicks, etc). This queue can contain other things, such as job created with after. So, to Tkinter, something submitted with after is just another event to be processed at a particular point in time.
1
1
0
0
I'm writing a physics simulation program and found after() useful. At one point I wanted to create a thread for the physics calculation and simulation, but when I finally noticed that function, I used it instead. So, I'm curious about how Tkinter implements that function. Is it multi-threaded?
Python: Does after() in Tkinter have a multi-threading approach?
0
1.2
1
0
0
718
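A tiny sketch of the single-threaded after() scheduling described in the answer above: the callback is just another event processed by Tkinter's own loop, not a separate thread. The 16 ms interval is arbitrary.

```python
try:
    import tkinter as tk        # Python 3
except ImportError:
    import Tkinter as tk        # Python 2, the version used in this thread

def step():
    print("physics tick")       # runs on the Tkinter event loop, no extra thread
    root.after(16, step)        # re-schedule itself roughly every 16 ms

root = tk.Tk()
root.after(16, step)
root.mainloop()
```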
11,053,550
2012-06-15T15:18:00.000
1
0
0
0
0
java,php,python,setter,getter
0
11,053,683
0
1
0
true
1
0
In my opinion, there shouldn't be any undocumented attributes in a class. PHP and other languages allow you to just stick attributes on a class from anywhere, whether they've been defined in the class or not. I think that's bad practice for the reasons you describe and more: It's hard to read. It makes it harder for other programmers (including your future self) to understand what's going on. It prevents auto-complete functionality in IDEs from working. It often makes the domain layer too dependent on the persistence layer. Whether you use getters and setters to access the defined attributes of a class is a little more fungible to me. I like things to be consistent, so if I have a class that has a getChildren() method to lazy load some array of objects, then I don't make the $children attribute public, and I tend to make other attributes private as well. I think that's a little more a matter of taste, but I find it annoying to access some attributes in a class directly ($object->name;) and others by getters/setters.
1
0
0
0
In Java I use getters/setters when I have simple models/POJOs. I find that the code becomes self-documenting when you do it this way: calling getName() will return a name, and I don't need to care how it's mapped to some database and so on. Problems arise when using languages where getters and setters start feeling clunky, like Python, and I often hear people saying that they are bad. For example, some time ago I had a PHP project in which some of the data was just queried from the database and the column values mapped to objects/dictionaries. What I found was that code like this was annoyingly hard to read: you can't really just read the code; you read the code, then you notice that the values are fetched from the database, and now you have to look through the database schema all the time to understand it, whereas otherwise all you would have to do is look at the class definition, knowing that there won't be any undocumented magic keys there. So my question is: how do you document code without getters and setters?
Documentation when not using getters and setters
0
1.2
1
0
0
162
11,081,209
2012-06-18T10:45:00.000
1
0
0
1
0
python,json,rest
0
11,184,777
0
1
0
true
0
0
avasal, you were right. I did it by pip install python-rest-client
1
1
0
0
I need to use python-rest-client package into my project. I tried several times for installing python-rest-client into my linux python, it never worked. But it works well in Windows python. Would anybody tell me how to install python-rest-client in linux python.
how to install python-rest-client lib in linux
0
1.2
1
0
1
1,855
11,083,921
2012-06-18T13:27:00.000
4
0
0
0
0
python,machine-learning,svm,regression,libsvm
0
11,172,695
0
2
0
false
0
0
libsvm might not be the best tool for this task. The problem you describe is called multivariate regression, and usually for regression problems, SVM's are not necessarily the best choice. You could try something like group lasso (http://www.di.ens.fr/~fbach/grouplasso/index.htm - matlab) or sparse group lasso (http://spams-devel.gforge.inria.fr/ - seems to have a python interface), which solve the multivariate regression problem with different types of regularization.
1
3
1
0
I would like to ask if anyone has an idea or example of how to do support vector regression in python with high dimensional output( more than one) using a python binding of libsvm? I checked the examples and they are all assuming the output to be one dimensional.
Support Vector Regression with High Dimensional Output using python's libsvm
0
0.379949
1
0
0
3,847
11,091,052
2012-06-18T21:07:00.000
2
0
1
1
0
python,installation,python-idle
0
11,091,290
0
1
0
true
0
0
try making a .py file and then try to open it, and a window should appear asking you what to open it with, and then select idle in your program files.
1
1
0
0
Just installed Python 2.7.3 on a Windows7 machine. How do I get .py files to be associated with python (they are with notepad ATM) and how do I get the context menu shortcut for "edit in IDLE"? Somehow I didn't get that on this particular computer.
IDLE not integrated in desktop
0
1.2
1
0
0
397
11,094,920
2012-06-19T05:33:00.000
0
0
0
1
0
python,tornado
0
11,096,932
0
4
0
false
0
0
If you want to daemonize Tornado, use supervisord. If you want to access Tornado at an address like http://mylocal.dev/, you should look at nginx and use it as a reverse proxy. And it can be bound to a specific port as in Lafada's answer.
2
9
0
1
Is it possible to run Tornado such that it listens to a local port (e.g. localhost:8000). I can't seem to find any documentation explaining how to do this.
How do you run the Tornado web server locally?
1
0
1
0
0
19,432
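Putting the answers above together, a minimal sketch of a Tornado app bound to a local address and started by running the file directly; the handler content is invented.

```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from localhost")

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8000, address="127.0.0.1")      # only reachable locally
    tornado.ioloop.IOLoop.instance().start()   # run with: python server.py
```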
11,094,920
2012-06-19T05:33:00.000
1
0
0
1
0
python,tornado
0
39,968,411
0
4
0
false
0
0
Once you've defined an application (like in the other answers) in a file (for example server.py), you simply save and run that file. python server.py
2
9
0
1
Is it possible to run Tornado such that it listens to a local port (e.g. localhost:8000). I can't seem to find any documentation explaining how to do this.
How do you run the Tornado web server locally?
1
0.049958
1
0
0
19,432
11,095,220
2012-06-19T06:00:00.000
0
0
0
1
0
python,hadoop
1
11,098,023
0
1
0
true
0
0
Problem solved by adding the file needed with the -file option or file= option in conf file.
1
0
1
1
I need to read in a dictionary file to filter content specified in the hdfs_input, and I have uploaded it to the cluster using the put command, but I don't know how to access it in my program. I tried to access it using path on the cluster like normal files, but it gives the error information: IOError: [Errno 2] No such file or directory Besides, is there any way to maintain only one copy of the dictionary for all the machines that runs the job ? So what's the correct way of access files other than the specified input in hadoop jobs?
How to read other files in hadoop jobs?
1
1.2
1
0
0
91
11,098,131
2012-06-19T09:26:00.000
2
0
0
0
0
java,python,binding,word-wrap,cpython
0
11,286,405
0
2
0
false
1
0
I've used JPype in a similar instance with decent results. The main task would be to write wrappers to translate your java api into a more pythonic api, since raw JPype usage is hardly any prettier than just writing java code.
1
6
0
0
How can we write a python (with CPython) binding to a Java library so that the developers that want to use this java library can use it by writing only python code, not worrying about any Java code?
how to write python wrapper of a java library
0
0.197375
1
0
0
10,944
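A minimal sketch of the JPype wrapping idea from the answer above: hide the raw JPype calls behind small, pythonic functions. The jar path and Java class name are placeholders; only the JPype calls themselves are real API.

```python
import jpype

def start(jar_path):
    # Start the JVM once, pointing the classpath at the Java library to wrap.
    if not jpype.isJVMStarted():
        jpype.startJVM(jpype.getDefaultJVMPath(),
                       "-Djava.class.path=%s" % jar_path)

def word_wrap(text, width):
    # Hypothetical Java API (com.example.TextWrapper) hidden behind a Python function.
    Wrapper = jpype.JClass("com.example.TextWrapper")
    return str(Wrapper.wrap(text, width))
```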
11,100,066
2012-06-19T11:33:00.000
3
0
0
0
0
python,numpy,save,boolean
0
11,101,558
0
1
0
false
0
0
That's correct: bools are integers, so you can always go between the two. import numpy as np; arr = np.array([True, True, False, False]); np.savetxt("test.txt", arr, fmt="%5i"). That gives a file containing 1 1 0 0.
1
2
1
0
The following saves the floating-point values of a matrix to a text file: numpy.savetxt('bool', mat, fmt='%f', delimiter=','). How do I save a boolean matrix? What is the fmt for saving a boolean matrix?
how to save a boolean numpy array to textfile in python?
0
0.53705
1
0
0
3,549
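A short round trip expanding the answer above: the booleans are written as integers and converted back with astype(bool) on reload. File name is illustrative.

```python
import numpy as np

mat = np.array([[True, False], [False, True]])
np.savetxt('bool.txt', mat, fmt='%d', delimiter=',')         # writes 1s and 0s
loaded = np.loadtxt('bool.txt', delimiter=',').astype(bool)  # back to booleans
assert (loaded == mat).all()
```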
11,120,427
2012-06-20T13:15:00.000
1
0
0
0
0
python,qt,pyqt,grid-layout
0
11,120,871
0
1
0
true
0
1
You can reimplement keyPressEvent() method for the main widget to catch the pressed keys. Then you can access the desired widget in your layout by calling QGridLayout::itemAtPosition (int row, int column) and then set focus to it.
1
0
0
0
How can I change behavior of how items are selected in QGridLayout by cursor keys? I want to move selection horizontally by left/right cursor keys and vertically by up/down keys. Who is responsible for it? Layout, items container or tab order?
Custom QGridLayout items selection behaviour
0
1.2
1
0
0
706
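A rough PyQt4 sketch of the answer above: reimplement keyPressEvent on the container widget and move focus via QGridLayout.itemAtPosition(row, column). The grid of buttons and the wrap-around behaviour are invented for illustration.

```python
import sys
from PyQt4 import QtCore, QtGui

class Grid(QtGui.QWidget):
    def __init__(self, rows=3, cols=3):
        super(Grid, self).__init__()
        self.rows, self.cols, self.row, self.col = rows, cols, 0, 0
        layout = QtGui.QGridLayout(self)
        for r in range(rows):
            for c in range(cols):
                layout.addWidget(QtGui.QPushButton("%d,%d" % (r, c)), r, c)

    def keyPressEvent(self, event):
        moves = {QtCore.Qt.Key_Left:  (0, -1), QtCore.Qt.Key_Right: (0, 1),
                 QtCore.Qt.Key_Up:    (-1, 0), QtCore.Qt.Key_Down:  (1, 0)}
        if event.key() in moves:
            dr, dc = moves[event.key()]
            self.row = (self.row + dr) % self.rows
            self.col = (self.col + dc) % self.cols
            item = self.layout().itemAtPosition(self.row, self.col)
            if item and item.widget():
                item.widget().setFocus()     # move selection with the arrow keys
        else:
            super(Grid, self).keyPressEvent(event)

app = QtGui.QApplication(sys.argv)
w = Grid()
w.show()
sys.exit(app.exec_())
```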
11,129,844
2012-06-20T23:59:00.000
0
0
1
0
0
python
0
11,130,087
0
8
1
false
0
0
I would be tempted to research a little into some GUI that could output graphviz (DOT format) with annotations, so you could create the rooms and links between them (a sort of graph). Then later, you might want another format to support heftier info. But should make it easy to create maps, links between rooms (containing items or traps etc..), and you could use common libraries to produce graphics of the maps in png or something. Just a random idea off the top of my head - feel free to ignore!
3
5
0
0
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. As an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
Optimal format for simple data storage in python
0
0
1
1
0
4,619
11,129,844
2012-06-20T23:59:00.000
5
0
1
0
0
python
0
11,129,974
0
8
1
false
0
0
Though there are good answers here already, I would simply recommend JSON for your purposes for the sole reason that since you're a new programmer it will be the most straightforward to read and translate as it has the most direct mapping to native Python data types (lists [] and dictionaries {}). Readability goes a long way and is one of the tenets of Python programming.
3
5
0
0
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. As an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
Optimal format for simple data storage in python
0
0.124353
1
1
0
4,619
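A minimal sketch of the JSON approach recommended in the answer above: rooms and items live in a plain text file and map directly onto dicts and lists. The file layout and field names are invented.

```python
import json

SAMPLE = '''
{
  "rooms": [
    {"name": "cellar", "description": "Dark and damp.", "items": ["torch"]},
    {"name": "hall",   "description": "Echoing and empty.", "items": []}
  ]
}
'''

data = json.loads(SAMPLE)                 # or: json.load(open("world.json"))
rooms = {room["name"]: room for room in data["rooms"]}
print(rooms["cellar"]["items"])           # ['torch']
```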
11,129,844
2012-06-20T23:59:00.000
1
0
1
0
0
python
0
11,129,853
0
8
1
false
0
0
If you want editability, YAML is the best option of the ones you've named, because it doesn't have <> or {} required delimiters.
3
5
0
0
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability. As an example, I've been working on text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms leads to massive blocks of object creation code. I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could parsed into an object with relative ease. What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML. From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively. JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally. Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use. I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
Optimal format for simple data storage in python
0
0.024995
1
1
0
4,619
11,130,434
2012-06-21T01:32:00.000
2
0
0
1
0
python,google-app-engine,jinja2,authentication
0
11,132,393
0
1
0
true
1
0
Your choices are Google's own authentication, OpenID, some third party solution or roll your own. Unless you really know what you are doing, do not choose option 4! Authentication is very involved, and if you make a single mistake or omission you're opening yourself up to a lot of pain. Option 3 is not great because you have to ensure the author really knows what they are doing, which either means trusting them or... really knowing what you're doing! So I'd suggest you chose between Google's authentication and OpenID. Both are well trusted; Google is going to be easier to implement because there are several OpenID account providers you have to test against; but Google authentication may turn away some users who refuse to have Google accounts.
1
1
0
0
With ease of implementation being a strong factor, but security also an issue, what would the best user authentication method for Google App Engine be? My goal is to have a small, very specific social network. I know how to make my own, but making it hack-proof is a little out of my league right now. I have looked at OpenID and a few others. I am using Jinja2 as my template system and writing all of my web app code in Python. Thanks!
top user authentication method for google app engine
0
1.2
1
0
0
637
11,134,610
2012-06-21T08:50:00.000
1
0
1
0
0
python,vector
0
11,134,685
0
2
0
false
0
0
The plane perpendicular to a vector ⟨A, B, C⟩ has the general equation Ax + By + Cz + K = 0.
1
0
0
0
I have an object A moving with velocity (v1, v2, v3) in 3D space. The object's position is (px, py, pz). Now I want to add certain particles around object A (within radius dis) on the plane perpendicular to its velocity direction. I found something called the "cross product", but it seemed to be of no use in this case. Can anyone help? I'm new to Python and don't really know how to crack it.
Find perpendicular to given vector (Velocity) in 3D
0
0.099668
1
0
0
2,293
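Extending the answer above, a small NumPy sketch (names invented) that builds two unit vectors spanning the plane perpendicular to the velocity and places points at radius dis around the position. The cross product is exactly the tool for this, despite the question's doubt: crossing the velocity with a non-parallel axis gives one in-plane direction, and crossing again gives the other.

```python
import numpy as np

def perpendicular_basis(v):
    v = np.asarray(v, dtype=float)
    # Cross with whichever axis is least aligned with v, to avoid a zero vector.
    helper = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < abs(v[2]) else np.array([0.0, 0.0, 1.0])
    u1 = np.cross(v, helper)
    u1 /= np.linalg.norm(u1)
    u2 = np.cross(v, u1)
    u2 /= np.linalg.norm(u2)
    return u1, u2

pos = np.array([0.0, 0.0, 0.0])
vel = np.array([1.0, 2.0, 3.0])
dis = 0.5

u1, u2 = perpendicular_basis(vel)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
particles = [pos + dis * (np.cos(a) * u1 + np.sin(a) * u2) for a in angles]
```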
11,142,397
2012-06-21T16:15:00.000
-1
0
1
0
0
python,list,tuples,immutability
0
35,129,521
0
7
0
false
0
0
Instead of a tuple, you can use a frozenset. frozenset creates an immutable set. You can build a frozenset from a list and access every element of the frozenset using a single for loop.
1
117
0
0
Does python have immutable lists? Suppose I wish to have the functionality of an ordered collection of elements, but which I want to guarantee will not change, how can this be implemented? Lists are ordered but they can be mutated.
Does Python have an immutable list?
0
-0.028564
1
0
0
87,513
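A tiny illustration of the points in this record: a tuple is the built-in immutable ordered collection, while a frozenset is immutable but unordered (and drops duplicates).

```python
ordered_fixed = tuple(['cat', 'dog', 'rat'])   # keeps order, cannot be changed
try:
    ordered_fixed[0] = 'mouse'
except TypeError as exc:
    print(exc)                                  # tuples do not support item assignment

unordered_fixed = frozenset(['cat', 'dog', 'rat'])  # immutable, but no order
```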
11,142,503
2012-06-21T16:22:00.000
0
0
1
0
0
python,pdf-generation,reportlab
0
11,369,181
0
2
0
false
0
0
If you can keep track of page numbers, then just add a PageBreak or canvas.showPage() command at the appropriate times.
1
2
0
0
I am trying to typeset a large document using ReportLab and Python 2.7. It has a number of sections (about 6 in a 1,000 page document) and I would like each to start on odd-numbered/right-hand page. I have no idea though whether the preceding page will be odd or even and so need the ability to optionally throw an additional blank page before a particular paragraph style (like you sometimes get in manuals where some pages are "intentionally left blank"). Can anyone suggest how this could be done, as the only conditional page break I can find works on the basis of the amount of text on the page not a page number. I also need to make sure that the blank page is included in the PDF so that double-sided printing works.
Throw blank even-numbered/left pages
0
0
1
0
0
463
11,165,937
2012-06-23T01:02:00.000
1
0
1
0
0
python,python-2.7
0
11,165,954
0
1
0
true
0
0
Put the directory containing your module (let's call it functions.py) into the PYTHONPATH environment variable. Then you'll be able to use import functions to get your functions. Pip also seems to have support for this: pip install -e src/mycheckout, for example, but I don't quite understand the ramifications of that.
1
0
0
0
I'm building a personal module of functions, generic functions for my scientific work. It's not finished, so I would like to keep it in its development folder for the time being without installing it the way you install every other module with pip, etc. Now, I also need to work on other non-related projects but still need the functions. My question is: with those 2 projects in completely independent folders, how do I import one for use in the other? Thanks. EDIT: Just another quick one. If both were inside their respective folders but with the same root, would there be a better/easier way to do this?
Importing unfinished modules
1
1.2
1
0
0
56
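A minimal sketch of the path idea in the answer above, done from inside the consuming project instead of via the environment variable. The directory path and the helper call are placeholders.

```python
import sys
sys.path.insert(0, '/home/me/dev/my-functions')   # folder containing functions.py

import functions
functions.some_helper()                            # hypothetical function from the module
```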
11,166,014
2012-06-23T01:15:00.000
5
0
1
0
0
python,django,installation,duplicates
0
11,166,438
0
3
0
false
1
0
Check out virtualenv and virtualenvwrapper
2
7
0
1
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
How to tell if you have multiple Django's installed
0
0.321513
1
0
0
2,635
11,166,014
2012-06-23T01:15:00.000
9
0
1
0
0
python,django,installation,duplicates
0
11,166,539
0
3
0
true
1
0
Open a terminal and type python, then type import django, then type django, and it will tell you the path to the Django you are importing. Go to that folder [it should look something like this: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/] and look for more than one instance of django (if there is more than one, they will be right next to each other). Delete the one(s) you don't want.
2
7
0
1
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
How to tell if you have multiple Django's installed
0
1.2
1
0
0
2,635
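A quick check complementing the accepted answer above: print which Django the interpreter actually imports and which version it is.

```python
import django
print(django.get_version())   # e.g. "1.4.1"
print(django.__path__)        # directory the import came from
```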
11,170,827
2012-06-23T15:46:00.000
-1
0
1
0
0
python,version,virtualenv
0
60,839,180
1
8
0
false
0
0
I had this problem and just decided to rename one of the programs from python.exe to python2.7.exe. Now I can specify on command prompt which program to run easily without introducing any scripts or changing environmental paths. So i have two programs: python2.7 and python (the latter which is v.3.8 aka default).
3
95
0
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
1
-0.024995
1
0
0
224,173
11,170,827
2012-06-23T15:46:00.000
-1
0
1
0
0
python,version,virtualenv
0
56,130,973
1
8
0
false
0
0
While working with different versions of Python on Windows, I am using this method to switch between versions. I think it is better than messing with shebangs and virtualenvs 1) install python versions you desire 2) go to Environment Variables > PATH (i assume that paths of python versions are already added to Env.Vars.>PATH) 3) suppress the paths of all python versions you dont want to use (dont delete the paths, just add a suffix like "_sup") 4) call python from terminal (so Windows will skip the wrong paths you changed, and will find the python.exe at the path you did not suppressed, and will use this version after on) 5) switch between versions by playing with suffixes
3
95
0
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
1
-0.024995
1
0
0
224,173
11,170,827
2012-06-23T15:46:00.000
0
0
1
0
0
python,version,virtualenv
0
11,170,838
1
8
0
false
0
0
You can't do this from within the Python program, because the shell decides which version to use if you use a shebang line. If you aren't using a shell with a shebang line and just type python myprogram.py, it uses the default version, unless you decide specifically which Python version to use by typing pythonXXX myprogram.py. Once your Python program is running, you have already decided which Python executable to use to get the program running. virtualenv is for segregating Python versions and environments; it specifically exists to eliminate conflicts.
3
95
0
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
1
0
1
0
0
224,173
11,181,195
2012-06-24T21:07:00.000
2
0
1
0
0
python
0
11,181,226
0
4
0
false
0
0
I would create a separate module called random_words, or something like that, hiding the list inside it and encapsulating the choice(word_list) call inside an interface function. As for loading the words from a file: well, since I would need to type them anyway, and a Python file is just a text file in the end, I would type them right there, probably one per line for easy maintenance.
2
0
0
0
I am writing a game in Python in which I must periodically pull a random word from a list of words. When I prototyped my game I declared a word_list = ['cat','dog','rat','house'] of ten words at the top of one of my modules. I then use choice(word_list) to get a random word. However, I must change this temporary hack into something more elegant, because I need to increase the size of the word list to 5,000+ words. If I do this in my current module it will look ridiculous. Should I put all of these words in a flat txt file and then read from that file as I need words? If so, how would I best do that? Put each word on a separate line and then read one random line? I'm not sure what the most efficient way is.
Where should I declare a list of 5,000+ words?
0
0.099668
1
0
0
290
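A minimal sketch of the random_words module suggested in the answer above, with the words kept one per line in a plain text file as the question considers; the file name is illustrative.

```python
# random_words.py
import random

with open('words.txt') as f:                              # one word per line
    _WORD_LIST = [line.strip() for line in f if line.strip()]

def random_word():
    return random.choice(_WORD_LIST)
```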
11,181,195
2012-06-24T21:07:00.000
3
0
1
0
0
python
0
11,181,204
0
4
0
false
0
0
Read the words from the file at startup (or at least the line indexes), and use as required.
2
0
0
0
I am writing a game in Python in which I must periodically pull a random word from a list of words. When I prototyped my game I declared a word_list = ['cat','dog','rat','house'] of ten words at the top of one of my modules. I then use choice(word_list) to get a random word. However, I must change this temporary hack into something more elegant, because I need to increase the size of the word list to 5,000+ words. If I do this in my current module it will look ridiculous. Should I put all of these words in a flat txt file and then read from that file as I need words? If so, how would I best do that? Put each word on a separate line and then read one random line? I'm not sure what the most efficient way is.
Where should I declare a list of 5,000+ words?
0
0.148885
1
0
0
290
11,217,855
2012-06-27T00:39:00.000
1
0
0
0
0
python,math,geometry,gis
0
11,217,921
0
2
0
false
0
0
You could recursively split the quad in half on the long sides until the resulting area is small enough.
1
1
1
0
It is pretty easy to split a rectangle/square into smaller regions and enforce a maximum area of each sub-region. You can just divide the region into regions with sides length sqrt(max_area) and treat the leftovers with some care. With a quadrilateral however I am stumped. Let's assume I don't know the angle of any of the corners. Let's also assume that all four points are on the same plane. Also, I don't need for the the small regions to be all the same size. The only requirement I have is that the area of each individual region is less than the max area. Is there a particular data structure I could use to make this easier? Is there an algorithm I'm just not finding? Could I use quadtrees to do this? I'm not incredibly versed in trees but I do know how to implement the structure. I have GIS work in mind when I'm doing this, but I am fairly confident that that will have no impact on the algorithm to split the quad.
Split quadrilateral into sub-regions of a maximum area
0
0.099668
1
0
0
1,141
11,223,147
2012-06-27T09:27:00.000
1
0
0
0
0
python,sqlite
0
11,224,222
0
4
0
false
0
0
If you're not after just parameter substitution, but full construction of the SQL, you have to do that using string operations on your end. The ? replacement always just stands for a value. Internally, the SQL string is compiled to SQLite's own bytecode (you can find out what it generates with EXPLAIN thesql) and ? replacements are done by just storing the value at the correct place in the value stack; varying the query structurally would require different bytecode, so just replacing a value wouldn't be enough. Yes, this does mean you have to be ultra-careful. If you don't want to allow updates, try opening the DB connection in read-only mode.
3
0
0
0
I'm trying to create a Python script that constructs valid SQLite queries. I want to avoid SQL injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the final query with the parameters substituted in. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
0
0.049958
1
1
0
1,125
11,223,147
2012-06-27T09:27:00.000
1
0
0
0
0
python,sqlite
0
11,224,475
0
4
0
true
0
0
If you're trying to transmit changes to the database to another computer, why do they have to be expressed as SQL strings? Why not pickle the query string and the parameters as a tuple, and have the other machine also use SQLite parameterization to query its database?
3
0
0
0
I'm trying to create a Python script that constructs valid SQLite queries. I want to avoid SQL injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the final query with the parameters substituted in. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
0
1.2
1
1
0
1,125
11,223,147
2012-06-27T09:27:00.000
0
0
0
0
0
python,sqlite
0
11,224,003
0
4
0
false
0
0
"I want how to get the parsed 'sql param'." It's all open source, so you have full access to the code doing the parsing/sanitization. Why not just read that code and find out how it works and whether there's some (possibly undocumented) implementation that you can reuse?
3
0
0
0
I'm trying to create a Python script that constructs valid SQLite queries. I want to avoid SQL injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the final query with the parameters substituted in. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
0
0
1
1
0
1,125
11,230,979
2012-06-27T16:31:00.000
1
0
1
0
0
python,django,pycharm
0
11,231,045
0
3
0
true
1
0
If your site loads, you should import the models into one of your Django views. In a view you can do whatever you like with the models.
1
1
0
0
So I have a chunk of code that declares some classes, creates data, uses django to actually save them to the database. My question is how do I actually execute it? I am using PyCharm and have the file open. But I have no clue how to actually execute it. I can execute line by line in Django Console, but if it's more than that it can't handle the indentation. The project itself runs fine (127.0.0.1 loads my page). How can I accomplish this? I am sorry if this a completely obvious answer, I've been struggling with this for a bit.
How to run a file that uses django models (large block of code) in Pycharm
0
1.2
1
0
0
1,138
11,231,244
2012-06-27T16:48:00.000
1
0
0
0
0
python,ipv6,urllib,ipv4
0
11,231,476
0
1
0
false
0
0
I had a look at the source code. Unfortunately, urllib.urlopen() seems to use httplib.HTTP(), which doesn't even allow setting a source address. urllib2.urlopen() uses httplib.HTTPConnection(), which you could inherit from to create a class that by default sets a source address of '0.0.0.0' instead of ''. Then you could inject that overridden class into the urllib2 machinery by creating a "new" HTTPHandler() (look at how it's done in urllib2.py) and a new opener via build_opener() and/or install_opener(). Sorry for not being very exact, but I have never done such a thing and don't know exactly how it works.
1
2
0
0
What is the way to do urlopen in python such that even if the underlying machine has ipv6 networking enabled, the request is sent via ipv4 instead of ipv6?
how to do urlopen over ipv4 by default
0
0.197375
1
0
1
2,773
11,267,463
2012-06-29T18:57:00.000
0
0
1
0
1
python,visual-studio,python-2.7,compilation,windows-7-x64
0
11,285,320
0
5
0
false
0
0
There are several ways to do it, apparently: build using MinGW; build Python 2.7 using VS 2008 Express (I'm not sure which version is right for building 3.2, but it could be VS 2010); or compile Python x64 from source using your desired VS, though you'll have compatibility issues with other pre-built packages.
1
17
0
0
I'm starting out some projects in words processing and I needed NumPy and NLTK. That was the first time I got to know easy_install and how to compile new module of python into the system. I have Python 2.7 x64 plus VS 11 and VS 12. Also Cygwin (the latest one I guess). I could see in the file that compiles using VS that it looks for VS env with the same version as the one that compiled the python code, why? When I hardcoded 11.0 which is my version, numpy failed to build on several strange errors regarding vcvarsall (it found vcvarsall, probably misused it). Can't I build python binaries on Windows? If not, can I cross compile on Linux for Windows? (using the same method as Google for the Android SDK)
Compiling Python modules on Windows x64
0
0
1
0
0
20,743
11,279,779
2012-07-01T05:20:00.000
-1
0
1
0
0
c++,python,algorithm
0
11,280,036
0
5
0
false
0
0
This is actually a classic interview question and the answer they were expecting was that you first sort the urls and then make a binary search. If it doesn't fit in memory, you can do the same thing with a file.
2
0
0
0
Recently I was asked this question in an interview. I gave an answer in O(n) time but in two passes. Also he asked me how to do the same if the url list cannot fit into the memory. Any help is very much appreciated.
Finding a unique url from a large list of URLs in O(n) time in a single pass
0
-0.039979
1
0
0
1,129
11,279,779
2012-07-01T05:20:00.000
6
0
1
0
0
c++,python,algorithm
0
11,279,881
0
5
0
true
0
0
If it all fits in memory, then the problem is simple: Create two sets (choose your favorite data structure), both initially empty. One will contain unique URLs and the other will contain URLs that occur multiple times. Scan the URL list once. For each URL, if it exists in the unique set, remove it from the unique set and put it in the multiple set; otherwise, if it does not exist in the multiple set, add it to the unique set. If the set does not fit into memory, the problem is difficult. The requirement of O(n) isn't hard to meet, but the requirement of a "single pass" (which seems to exclude random access, among other things) is tough; I don't think it's possible without some constraints on the data. You can use the set approach with a size limit on the sets, but this would be easily defeated by unfortunate orderings of the data and would in any event only have a certain probability (<100%) of finding a unique element if one exists. EDIT: If you can design a set data structure that exists in mass storage (so it can be larger than would fit in memory) and can do find, insert, and deletes in O(1) (amortized) time, then you can just use that structure with the first approach to solve the second problem. Perhaps all the interviewer was looking for was to dump the URLs into a data base with a UNIQUE index for URLs and a count column.
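A small Python sketch of the in-memory, single-pass approach described above:

```python
def find_unique_urls(urls):
    unique, seen_multiple = set(), set()
    for url in urls:                      # single pass over the list
        if url in unique:
            unique.discard(url)           # second sighting: no longer unique
            seen_multiple.add(url)
        elif url not in seen_multiple:
            unique.add(url)               # first sighting
    return unique

# find_unique_urls(['a', 'b', 'a', 'c']) -> {'b', 'c'}
```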
2
0
0
0
Recently I was asked this question in an interview. I gave an answer in O(n) time but in two passes. Also he asked me how to do the same if the url list cannot fit into the memory. Any help is very much appreciated.
Finding a unique url from a large list of URLs in O(n) time in a single pass
0
1.2
1
0
0
1,129
11,282,099
2012-07-01T12:51:00.000
0
0
0
0
1
python,wxwidgets
0
14,640,674
0
1
0
true
0
1
When you create a wx widget, the normal pattern of the create function is wx.SomeUIObject(parent, id, ...). The parent could be set when you create the dialog.
1
0
0
0
I've tried setting dlg.CenterOnParent() but that doesn't work. I assume it's because my MessageDialog is not setting the frame as the parent. In that case how would I do this?
How do I center a wx.MessageDialog to the frame?
0
1.2
1
0
0
176
11,284,600
2012-07-01T18:29:00.000
1
0
0
0
0
python,django
0
11,284,649
0
1
0
false
1
0
For example: field_name_exists = field_name in ModelName._meta.get_all_field_names()
1
0
0
0
Given a django class and a field name how can you test to see whether the class has a field with the given name? The field name is a string in this case.
Test for existence of field in django class
0
0.197375
1
0
0
63
11,286,809
2012-07-02T00:56:00.000
0
0
0
0
0
python,google-app-engine,youtube,youtube-api,jinja2
0
64,008,461
0
3
0
false
1
0
Use this approach when getting the embed link from a list value. In the template, inside the iframe, use src="{{results[0].video_link}}", where "video_link" is the field name.
1
4
0
0
I have a website that gets a lot of links to youtube and similar sites and I wanted to know if there is anyway that I can make a link automatically appear as a video. Like what happens when you post a link to a video on facebook. You can play it right on the page. Is there a way to do this without users actually posting the entire embed video HTML code? By the way I am using google app engine with python and jinja2 templating.
how to make youtube videos embed on your webpage when a link is posted
0
0
1
0
0
4,555
11,287,862
2012-07-02T04:40:00.000
3
0
1
1
1
python,vim,macvim
0
11,288,495
0
2
0
true
0
0
For Python 3, simply execute :!python3 % Furthermore, you might also want to map it to a hotkey in your settings, like what I did: noremap <D-r> <esc>:w<CR>:!python3 %<CR> so that you can just press Command+r to execute the current file with Python 3 anytime (it will be saved automatically).
1
0
0
0
I just set up IDE env for Python 3. I was wondering how I can run the file being currently edited in vim. I remembered that the command was ":python %", but it did not work for Python 3. Thank you very much.
In Macvim with +python3 supported, which command should I use to execute the current file itself?
0
1.2
1
0
0
428
11,288,320
2012-07-02T05:46:00.000
3
0
0
0
0
django,django-admin,python-2.7,django-urls,django-1.4
0
11,288,438
0
2
0
false
1
0
Just put your desired URL mapping before the admin mapping in your root urls.py. The first match for the request will be taken, because Django goes through the URL mappings from top to bottom. Just remember not to use a URL the admin normally needs or provides, because the admin's own mapping will never match with a custom mapping in front of it. HTH!
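A rough sketch of such a root urls.py in Django 1.4 style; 'myapp.views.my_view' and the URL name are placeholders for your own view:

```python
# urls.py (project root), Django 1.4 style
from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    # Custom mapping first, so it wins over the admin include below.
    url(r'^admin/my_url/$', 'myapp.views.my_view', name='admin_my_url'),
    url(r'^admin/', include(admin.site.urls)),
)
```

Because the pattern is named, reverse('admin_my_url') will also resolve it.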
1
7
0
0
I am using django 1.4 and Python 2.7. I just have a simple requirement where I have to add a new URL to the django admin app. I know how to add URLs which are for the custom apps but am unable figure out how to add URLs which are of the admin app. Please guide me through this. Basically the full URL should be something like admin/my_url. UPDATE I want a way after which I can as well reverse map the URL using admin.
New URL on django admin independent of the apps
0
0.291313
1
0
0
10,538
11,289,652
2012-07-02T07:50:00.000
1
0
0
0
0
python,image,compare
0
11,289,709
0
3
0
false
0
0
If you want to check whether they are binary equal, you can compute a checksum on each and compare them. If you want to check whether they are similar in some other way, it will be more complicated and definitely would not fit into a simple answer posted on Stack Overflow. It just depends on how you define similarity, but in any case it would require good programming skills and a lot of code.
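For the binary-equality case, a minimal sketch using hashlib (any digest would do; md5 is just an example here):

```python
import hashlib

def same_file(path_a, path_b):
    # Binary equality only: identical bytes give identical digests.
    def digest(path):
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(8192), b''):
                h.update(chunk)
        return h.hexdigest()
    return digest(path_a) == digest(path_b)
```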
1
0
1
0
I'm looking for an algorithm to compare two images (I work with python). I found the PIL library, numpy/scipy and opencv. I know how to transform to greyscale or binary and make a histogram... that's ok, but I don't know what I have to do with the two images to say "yes they're similar // they're probably similar // they don't match". Do you know the right way to go about it?
How to compare image with python
0
0.066568
1
0
0
1,355
11,295,714
2012-07-02T14:30:00.000
0
0
1
0
0
python,string,algorithm
0
11,295,792
0
8
0
false
0
0
Can't you just scan from the last character to the first character and stop when the next char doesn't equal the previous one? Then split at that index.
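A short sketch of that backwards scan, using the function name from the question:

```python
def seperate(s):
    # Walk backwards while the characters still match the last one.
    i = len(s) - 1
    while i > 0 and s[i - 1] == s[-1]:
        i -= 1
    return [s[:i], s[i:]]

# seperate('44664212666666') -> ['44664212', '666666']
# seperate('58834888888888') -> ['58834', '888888888']
```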
1
0
0
0
I want to know how to split a string like 44664212666666 into [44664212, 666666] or 58834888888888 into [58834, 888888888] without knowing where the first occurrence of the last recurring digit occurs. So, passing it to a function, say seperate(str) --> [non_recurring_part, end_recurring_digits]
Splitting a number pattern
0
0
1
0
0
121
11,302,656
2012-07-02T23:40:00.000
3
1
1
0
0
python,c,shared-memory
0
11,305,191
0
4
0
false
0
0
If you don't want pickling, multiprocessing.sharedctypes might fit. It's a bit low-level, though; you get single values or arrays of specified types. Another way to distribute data to child processes (one way) is multiprocessing.Pipe. That can handle Python objects, and it's implemented in C, so I cannot tell you whether it uses pickling or not.
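A minimal sketch combining the two suggestions (a lock-free shared array of ctypes doubles plus a Pipe for the result); note it only covers fixed ctypes types, not arbitrary Python objects:

```python
from multiprocessing import Process, Pipe, sharedctypes
import ctypes

def worker(shared, conn):
    # The child reads the shared block directly; the payload is not pickled.
    conn.send(sum(shared))
    conn.close()

if __name__ == '__main__':
    # A raw (lock-free) array of doubles living in shared memory.
    shared = sharedctypes.RawArray(ctypes.c_double, [1.0, 2.0, 3.0])
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(shared, child_conn))
    p.start()
    print(parent_conn.recv())   # 6.0
    p.join()
```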
1
15
0
0
I'm trying to figure out a way to share memory between python processes. Basically there are objects that exist that multiple python processes need to be able to READ (only read) and use (no mutation). Right now this is implemented using redis + strings + cPickle, but cPickle takes up precious CPU time so I'd like to not have to use that. Most of the python shared memory implementations I've seen on the internets seem to require files and pickles, which is basically what I'm doing already and exactly what I'm trying to avoid. What I'm wondering is if there'd be a way to write, like... basically an in-memory python object database/server and a corresponding C module to interface with the database? Basically the C module would ask the server for an address to write an object to, the server would respond with an address, then the module would write the object, and notify the server that an object with a given key was written to disk at the specified location. Then when any of the processes wanted to retrieve an object with a given key they would just ask the db for the memory location for the given key, the server would respond with the location and the module would know how to load that space in memory and transfer the python object back to the python process. Is that wholly unreasonable or just really damn hard to implement? Am I chasing after something that's impossible? Any suggestions would be welcome. Thank you internet.
Shared memory between python processes
1
0.148885
1
0
0
19,934
11,319,116
2012-07-03T21:01:00.000
4
0
0
0
0
python,django,ide,eric-ide
0
11,370,886
0
2
0
false
1
0
In the Plugins dropdown menu click on Plugin repository... Make sure that the repository URL is: http://eric-ide.python-projects.org/plugins4/repository.xml and then click on update. The Django plugin will show up in the list of available plugins, click on it and then click the download button. That should download the plugin for you. After that you need to actually install the plugin as well: In the Plugins dropdown menu click on Install plugins. Then select your newly downloaded Django plugin and install it. Good luck!
1
0
0
0
I can't believe I have to ask this, but I have spent almost three hours looking for the answer. Anyway, I have Eric IDE 4 installed on my linux distro. I can't seem to download any plugins to the plugins repository. The only one I really want is the Django plugin so when I start a new project in Eric, the Django option shows. The plugin repository just shows me an empty folder for .eric4/eric4plugins and there's no follow up as to where I can get the plugins from somewhere else. Actually, there was a hinting at it on the Eric docs site, but what I ended up getting was the ENTIRE trunk for eric. And the plugins that came with the trunk are just the bare bones ones that ship with it. I didn't get the Django one and the documentation on the Eric site is seriously lacking and overly complex. Anyone know how I can just get the Django snap in?
How to install the Django plugin for the Eric IDE?
0
0.379949
1
0
0
4,909
11,324,804
2012-07-04T07:59:00.000
1
0
1
0
0
python,algorithm,math,boolean
0
26,163,606
0
6
0
false
0
0
Unfortunately, most of the given suggestions may not actually give @turtlesoup what he/she is looking for. @turtlesoup asked for a way to minimize the number of characters for a given boolean expression. Most simplification methods don't target the number of characters as a focus for simplification. When it comes to minimization in electronics, users typically want the fewest number of gates (or parts). This doesn't always result in a shorter expression in terms of the "length" of the expression -- most times it does, but not always. In fact, sometimes the expression can become larger, in terms of length, though it may be simpler from an electronics standpoint (requires fewer gates to build). boolengine.com is the best simplification tool that I know of when it comes to boolean simplification for digital circuits. It doesn't allow hundreds of inputs, but it allows 14, which is a lot more than most simplification tools. When working with electronics, simplification programs usually break down the expression into sum-of-product form. So the expression '(ab)+'cd becomes 'c+'b+'a+d. The "simplified" result requires more characters to print as an expression, but is easier to build from an electronics standpoint. It only requires a single 4-input OR gate and 3 inverters (4 parts). Whereas the original expression would require 2 AND gates, 2 inverters, and an OR gate (5 parts). After giving @turtlesoup's example to BoolEngine, it shows that BC(A+D)+DE becomes E+D+ABC. This is a shorter expression, and will usually be. But certainly not always.
1
7
0
0
I'm trying to write out a piece of code that can reduce the LENGTH of a boolean expression to the minimum, so the code should reduce the number of elements in the expression to as little as possible. Right now I'm stuck and I need some help =[ Here's the rule: there can be arbitrary number of elements in a boolean expression, but it only contains AND and OR operators, plus brackets. For example, if I pass in a boolean expression: ABC+BCD+DE, the optimum output would be BC(A+D)+DE, which saves 2 unit spaces compared to the original one because the two BCs are combined into one. My logic is that I will attempt to find the most frequently appeared element in the expression, and factor it out. Then I call the function recursively to do the same thing to the factored expression until it's completely factored. However, how can I find the most common element in the original expression? That is, in the above example, BC? It seems like I would have to try out all different combinations of elements, and find number of times each combination appears in the whole expression. But this sounds really naive. Second Can someone give a hint on how to do this efficiently? Even some keywords I can search up on Google will do.
algorithm - minimizing boolean expressions
0
0.033321
1
0
0
8,347
11,329,588
2012-07-04T12:56:00.000
0
1
0
0
1
php,python,mysql,json,pingdom
0
11,329,769
0
2
1
false
0
0
The most basic solution with the setup you have now would be to: Get a list of all events, ordered by server ID and then by time of the event Loop through that list and record the start of a new event / end of an old event for your new database when: the server ID changes the time between the current event and the previous event from the same server is bigger than a certain threshold you set. Store the old event you were monitoring in your new database The only complication I see, is that the next time you run the script, you need to make sure that you continue monitoring events that were still taking place at the time you last ran the script.
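A rough sketch of that grouping loop; the row layout (server_id, epoch-seconds timestamp) and the 15-minute threshold are assumptions, not part of the original setup:

```python
THRESHOLD = 15 * 60   # seconds of silence that closes an outage (assumed)

def group_outages(rows):
    """rows: (server_id, timestamp) tuples, sorted by server then time."""
    outages = []
    current = None    # [server_id, start, end] of the outage being built
    for server_id, ts in rows:
        if current and current[0] == server_id and ts - current[2] <= THRESHOLD:
            current[2] = ts                       # same outage: extend it
        else:
            if current:
                outages.append(tuple(current))    # close the previous outage
            current = [server_id, ts, ts]         # start a new one
    if current:
        outages.append(tuple(current))
    return outages
```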
1
0
0
0
Not sure if the title is a great way to word my actual problem and I apologize if this is too general of a question but I'm having some trouble wrapping my head around how to do something. What I'm trying to do: The idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monitoring. This would give a historical record of downtime and an easy way to retroactively tell what happened. The database will be queried by a fairly simple PHP form where one could browse these outages by date or server hostname etc. What I have so far: I have a python script that runs as a cron periodically to call the Pingdom API to get a list of current down alerts reported by the pingdom service. For each down alert, a row is inserted into a database containing a hostname, time stamp, pingdom check id, etc. I then have a simple php form that works fine to query for down alerts. The problem: What I have now is missing some important features and isn't quite what I'm looking for. Currently, querying this database would give me a simple list of down alerts like this: Pindom alerts for Test_Check from 2012-05-01 to 2012-06-30: test_check was reported DOWN at 2012-05-24 00:11:11 test_check was reported DOWN at 2012-05-24 00:17:28 test_check was reported DOWN at 2012-05-24 00:25:24 test_check was reported DOWN at 2012-05-24 00:25:48 What I would like instead is something like this: test_check was reported down for 15 minutes (2012-05-24 00:11:11 to 2012-05-24 00:25:48)(link to comment on this outage)(link to info on this outage). In this ideal end result, there would be one row containing a outage ID, hostname of the server pingdom is reporting down, the timestamp for when that box was reported down originally and the timestamp for when it was reported up again along with a 'comment' field I (and other admins) would use to add notes about this particular event after the fact. I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table and I'm not quite sure how I would work out either of those options. I'm a little lost as to how I will go about combining several down alerts that occur within a short period of time into a single 'outage' that would be inserted into a separate table in the existing MySQL database where individual down alerts are currently being stored. This would allow me to comment and add specific details for future reference and would generally make this thing a lot more usable. I'm not sure if I should try to do this when pulling the alerts from pingdom or if I should re-process the alerts after they're collected to populate the new table and I'm not quite sure how I would work out either of those options. I've been wracking my brain trying to figure out how to do this. It seems like a simple concept but I'm a somewhat inexperienced programmer (I'm a Linux admin by profession) and I'm stumped at this point. I'm looking for any thoughts, advice, examples or even just a more technical explanation of what I'm trying to do here to help point me in the right direction. I hope this makes sense. Thanks in advance for any advice :)
How can I combine rows of data into a new table based on similar timestamps? (python/MySQL/PHP)
1
0
1
1
0
131
11,338,044
2012-07-05T04:59:00.000
49
0
1
0
0
python,multiprocessing
0
11,338,089
0
3
0
true
0
0
That is the difference. One reason why you might use imap instead of map is if you wanted to start processing the first few results without waiting for the rest to be calculated. map waits for every result before returning. As for chunksize, it is sometimes more efficient to dole out work in larger quantities because every time the worker requests more work, there is IPC and synchronization overhead.
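A small illustration of the difference (the worker function and pool size here are arbitrary):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(4)
    # map blocks until every result is ready, then returns a list.
    print(pool.map(square, range(10)))
    # imap hands results back one by one, as soon as each is finished,
    # so you can start consuming before the whole job is done.
    for result in pool.imap(square, range(10), chunksize=2):
        print(result)
    pool.close()
    pool.join()
```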
1
56
0
0
I'm trying to learn how to use Python's multiprocessing package, but I don't understand the difference between map and imap. Is the difference that map returns, say, an actual array or set, while imap returns an iterator over an array or set? When would I use one over the other? Also, I don't understand what the chunksize argument is. Is this the number of values that are passed to each process?
Python Multiprocessing: What's the difference between map and imap?
0
1.2
1
0
0
25,529
11,341,112
2012-07-05T09:05:00.000
3
0
0
0
0
python,plone
0
11,350,843
0
2
0
false
1
0
Cue tune from Hotel California: "You can check out any time you like, but you can never leave." You do not not really want to disable all downloading, I believe that you really just want to disable downloads from all users but Owner. There is no practical use for putting files into something with no vehicle for EVER getting them back out... ...so you need to solve this problem with workflow: Use a custom workflow definition that has a state for this behavior ("Confidential"). Ensure that "View" permission is not inherited from folder above in the permissions for this state, and check "Owner" (and possibly "Manager" if you see fit) as having "View" permission. Set the confidential state as the default state for files. You can do this using Workflow policy support ("placeful workflows") in parts of the site if you do not wish to do this site-wide. Should you wish to make the existence of the items viewable, but the download not, you are best advised to create a custom permission and a custom type to protect downloading with a permission other than "View" (but you still should use workflow state as permission-to-role mapping templates).
2
1
0
0
I wish to make the uploaded file contents only viewable on the browser i.e using atreal.richfile.preview for doc/xls/pdf files. The file should not be downloadable at any cost. How do I remove the hyperlink for the template in a particular folder for all the files in that folder? I use Plone 4.1 There is AT at_download.
In plone how can I make an uploaded file as NOT downloadable?
0
0.291313
1
0
0
362
11,341,112
2012-07-05T09:05:00.000
1
0
0
0
0
python,plone
0
11,355,784
0
2
0
true
1
0
Script (Python) at /mysite/portal_skins/archetypes/at_download: just customize it to contain nothing. Thought this would be helpful to someone who would like to keep files/image files in Plone confidential by sharing the folders with view permission and disabling the checkout and copy options for the role created.
2
1
0
0
I wish to make the uploaded file contents only viewable on the browser i.e using atreal.richfile.preview for doc/xls/pdf files. The file should not be downloadable at any cost. How do I remove the hyperlink for the template in a particular folder for all the files in that folder? I use Plone 4.1 There is AT at_download.
In plone how can I make an uploaded file as NOT downloadable?
0
1.2
1
0
0
362
11,342,620
2012-07-05T10:38:00.000
0
0
0
0
0
python,wxpython,wxwidgets
0
11,382,239
0
1
0
true
0
1
I solved this by getting the width of the parent widget inside the scrolledpanel, instead of the width of the scrolledpanel itself. Sometimes the answer is so obvious :)
1
0
0
0
I'm trying to use the ScrolledPanel in wx.lib.scrolledpanel, and i would like to check if the scrollbar of the ScrolledPanel is currently visible, so i can give my StaticText the correct wrap width. Because when the scrollbar is visible i need to remove another 10 pixels or so from the wrap width... Anyone any idea how this is done? Thanks!
Check if wx.lib.scrolledpanel.ScrolledPanel is currently scrolling
0
1.2
1
0
0
134
11,349,476
2012-07-05T17:30:00.000
1
0
0
0
0
python,django,data-importer
0
16,125,317
0
8
0
false
1
0
I have done the same thing. Firstly, my script was already parsing the emails and storing them in a db, so I set the db up in settings.py and used python manage.py inspectdb to create a model based on that db. Then it's just a matter of building a view to display the information from your db. If your script doesn't already use a db it would be simple to create a model with what information you want stored, and then force your script to write to the tables described by the model.
4
1
0
0
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information. I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app? The Django server is running on a Linux box under Apache / FastCGI if that makes a difference. [Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?... The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
How can I periodically run a Python script to import data into a Django app?
1
0.024995
1
0
0
3,419
11,349,476
2012-07-05T17:30:00.000
0
0
0
0
0
python,django,data-importer
0
11,349,554
0
8
0
false
1
0
When you are saying " get that data into the DJango app" what exactly do you mean? I am guessing that you are using some sort of database (like mysql). Insert whatever data you have collected from your cronjob into the respective tables that your Django app is accessing. Also insert this cron data into the same tables that your users are accessing. So that way your changes are immediately reflected to the users using the app as they will be accessing the data from the same table.
4
1
0
0
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information. I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app? The Django server is running on a Linux box under Apache / FastCGI if that makes a difference. [Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?... The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
How can I periodically run a Python script to import data into a Django app?
1
0
1
0
0
3,419
11,349,476
2012-07-05T17:30:00.000
0
0
0
0
0
python,django,data-importer
0
11,349,556
0
8
0
false
1
0
Best way? Make a view on the django side to handle receiving the data, and have your script do a HTTP POST on a URL registered to that view. You could also import the model and such from inside your script, but I don't think that's a very good idea.
4
1
0
0
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information. I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app? The Django server is running on a Linux box under Apache / FastCGI if that makes a difference. [Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?... The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
How can I periodically run a Python script to import data into a Django app?
1
0
1
0
0
3,419
11,349,476
2012-07-05T17:30:00.000
1
0
0
0
0
python,django,data-importer
0
16,125,548
0
8
0
false
1
0
Forget about this being a Django app for a second. It is just a load of Python code. What this means is, your Python script is absolutely free to import the database models you have in your Django app and use them as you would in a standard module in your project. The only difference here, is that you may need to take care to import everything Django needs to work with those modules, whereas when a request enters through the normal web interface it would take care of that for you. Just import Django and the required models.py/any other modules you need for it work from your app. It is your code, not a black box. You can import it from where ever the hell you want. EDIT: The link from Rohan's answer to the Django docs for custom management commands is definitely the least painful way to do what I said above.
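A minimal sketch of such a standalone script for a Django 1.4-era project; 'myproject', 'myapp' and the Message model are placeholders, not anything from the question:

```python
# standalone_import.py, run by cron outside the web server
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

from myapp.models import Message   # hypothetical model in your Django app

def store(sender, body):
    # Same ORM call a view would make; the row lands in the app's database.
    Message.objects.create(sender=sender, body=body)
```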
4
1
0
0
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information. I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app? The Django server is running on a Linux box under Apache / FastCGI if that makes a difference. [Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?... The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
How can I periodically run a Python script to import data into a Django app?
1
0.024995
1
0
0
3,419
11,349,709
2012-07-05T17:46:00.000
1
0
0
1
0
jquery,python,html,google-app-engine,steam-web-api
0
11,350,089
0
2
0
false
1
0
Since you have the steam id from the service, you can then make another request to their steam community page via the id. From there you can use Beautiful Soup to parse the DOM and grab the required information for your project. Now onto your question: you can have all this happen within a request in a handler if you are using a web framework such as Tornado; the handler can return JSON in the page and you can render this JSON using your javascript code. Look into a web framework for python such as Tornado or Django to help you with returning and displaying the data.
1
0
0
1
So basically, at the moment, we are trying to write a basic HTML 5 page that, when you press a button, returns whether the user, on Steam, is in-game, offline, or online. We have looked at the Steam API, and to find this information, it requires the person's 64 bit ID (steamID64) and we, on the website, are only given the username. In order to find their 64 bit id, we have tried to scrape off of a website (steamidconverter.com) to get the user's 64 bit id from their username. We tried doing this through the javascript, but of course we ran into the cross domain block, not allowing us to access that data from our google App Engine website. I have experience in Python, so I attempted to figure out how to get the HTML from that website (in the form of steamidconverter.com/(personsusername)) with Python. That was a success in scraping, thanks to another post on Stack Overflow. BUT, I have no idea how to get that data back to the javascript and get it to do the rest of the work. I am stumped and really need help. This is all on google App Engine. All it is at the moment, is a button that runs a simple javascript that attempts to use JQuery to get the contents of the page back, but fails. I don't know how to integrate the two! Please Help!
Scraper Google App Engine for Steam
0
0.099668
1
0
0
614
11,361,488
2012-07-06T11:39:00.000
3
1
1
0
1
python,design-patterns,singleton
0
11,362,386
0
3
0
false
0
0
I want to keep things in classes to retain consistency Why? Why is consistency important (other than being a hobgoblin of little minds)? Use classes where they make sense. Use modules where they don't. Classes in Python are really for encapsulating data and retaining state. If you're not doing those things, don't use classes. Otherwise you're fighting against the language.
2
0
0
0
I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality good. I have a problem with one instance that needs to be available to almost all other instances. Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages. I have one main instance that creates BusMonitor, Reactor and a few more instances. Now I want Reactor to be able to use the BusMonitor instance; how can I do that according to design patterns? Setting it as a variable on Reactor seems ugly to me: self._reactor.set_busmonitor(self._busmonitor), and I would have to do that for every instance that needs access to BusMonitor. Importing this instance seems even worse. Although I could make BusMonitor a Singleton, I mean not as a Class but as a Module, and then import this module, but I want to keep things in classes to retain consistency. What approach would be the best?
Python app design patterns - instance must be available for most other instances
0
0.197375
1
0
0
105
11,361,488
2012-07-06T11:39:00.000
0
1
1
0
1
python,design-patterns,singleton
0
11,471,003
0
3
0
true
0
0
I found a good way, I think. I made a module with the class BusMonitor, and in the same module, after the class definition, I create an instance of this class. Now I can import it from everywhere in the project and I retain consistency using classes and encapsulation.
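A tiny sketch of that module-level instance pattern (module and method names here are illustrative):

```python
# busmonitor.py
class BusMonitor(object):
    def log(self, message):
        print('[bus] %s' % message)

# Single shared instance, created once when the module is first imported.
bus_monitor = BusMonitor()

# elsewhere, e.g. in reactor.py:
#   from busmonitor import bus_monitor
#   bus_monitor.log('frame received')
```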
2
0
0
0
I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality good. I have a problem with one instance that needs to be available to almost all other instances. Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages. I have one main instance that creates BusMonitor, Reactor and a few more instances. Now I want Reactor to be able to use the BusMonitor instance; how can I do that according to design patterns? Setting it as a variable on Reactor seems ugly to me: self._reactor.set_busmonitor(self._busmonitor), and I would have to do that for every instance that needs access to BusMonitor. Importing this instance seems even worse. Although I could make BusMonitor a Singleton, I mean not as a Class but as a Module, and then import this module, but I want to keep things in classes to retain consistency. What approach would be the best?
Python app design patterns - instance must be available for most other instances
0
1.2
1
0
0
105
11,371,057
2012-07-06T23:51:00.000
2
1
0
0
0
python,profiling
0
11,371,096
0
1
0
true
0
0
If you only need to know the amount of time spent in the Python code, and not (for example), where in the Python code the most time is spent, then the Python profiling tools are not what you want. I would write some simple C code that sampled the time before and after the Python interpreter invocation, and use that. Or, C-level profiling tools to measure the Python interpreter as a C function call. If you need to profile within the Python code, I wouldn't recommend writing your own profile function. All it does is provide you with raw data, you'd still have to aggregate and analyze it. Instead, write a Python wrapper around your Python code that invokes the cProfile module to capture data that you can then examine.
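A minimal sketch of such a wrapper; main.run() is a placeholder for whatever entry point the C code normally invokes:

```python
# profile_entry.py, invoked instead of the usual entry point
import cProfile
import pstats
import main   # hypothetical module whose run() the C side normally calls

# Capture profiling data to a file, then print the 20 heaviest callers.
cProfile.run('main.run()', 'embedded.prof')
pstats.Stats('embedded.prof').sort_stats('cumulative').print_stats(20)
```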
1
1
0
0
I have used the Python's C-API to call some Python code in my c code and now I want to profile my python code for bottlenecks. I came across the PyEval_SetProfile API and am not sure how to use it. Do I need to write my own profiling function? I will be very thankful if you can provide an example or point me to an example.
Profiling Python via C-api (How to ? )
0
1.2
1
0
0
276
11,379,910
2012-07-08T01:03:00.000
0
0
1
0
0
python,matplotlib
0
68,635,240
0
2
0
false
0
0
Simply put, you can use the following commands to set the range of the ticks and change the size of the tick labels: import matplotlib.pyplot as plt; set the tick positions for the x-axis and y-axis with plt.xticks(range(0, 24, 2)) and plt.yticks(range(0, 24, 2)); change the tick label size for the x-axis and y-axis with plt.xticks(fontsize=12) and plt.yticks(fontsize=12).
1
14
1
0
While plotting using Matplotlib, I have found how to change the font size of the labels. But, how can I change the size of the numbers in the scale? For clarity, suppose you plot x^2 from (x0,y0) = 0,0 to (x1,y1) = (20,20). The scale in the x-axis below maybe something like 0 1 2 ... 20. I want to change the font size of such scale of the x-axis.
How do I change the font size of the scale in matplotlib plots?
0
0
1
0
0
44,462
11,389,331
2012-07-09T05:04:00.000
0
0
1
0
0
python,subprocess
0
11,389,343
0
2
0
false
0
0
Instead of process.communicate(), use process.stdout.read()
2
2
0
0
How can I read from output PIPE multiple times without using process.communicate() as communicate closes the PIPE after reading the output but I need to have sequential inputs and outputs. For example, 1) process.stdin.write('input_1') 2) After that, I need to read the output PIPE (how can I accomplish that without using communicate as it closes the PIPE) and then give another input as 3) process.stdin.write('input_2') 4) And then read the output of step 3 But if I use process.communicate after giving first input then it closes the output PIPE and i am unable to give second input as the PIPE is closed. Kindly help please.
Python Sub-process (Output PIPE)
0
0
1
0
0
176
11,389,331
2012-07-09T05:04:00.000
1
0
1
0
0
python,subprocess
0
11,389,339
0
2
0
false
0
0
flush() stdin, then read() stdout.
2
2
0
0
How can I read from output PIPE multiple times without using process.communicate() as communicate closes the PIPE after reading the output but I need to have sequential inputs and outputs. For example, 1) process.stdin.write('input_1') 2) After that, I need to read the output PIPE (how can I accomplish that without using communicate as it closes the PIPE) and then give another input as 3) process.stdin.write('input_2') 4) And then read the output of step 3 But if I use process.communicate after giving first input then it closes the output PIPE and i am unable to give second input as the PIPE is closed. Kindly help please.
Python Sub-process (Output PIPE)
0
0.099668
1
0
0
176
11,392,302
2012-07-09T09:26:00.000
0
0
0
0
0
python,xmpp,ejabberd
0
11,941,846
0
2
0
false
0
0
It is possible for a component to subscribe to a user's presence exactly the same way a user does. Also it is possible for the user to subscribe to a component's presence. You just have to follow the usual pattern, i.e. the component/user sends a <presence/> of type subscribe which the user/component can accept by sending a <presence/> of type subscribed. You can also have the user just send a presence to the component directly. There is no need to write custom hooks or create proxy users.
2
1
0
0
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com. I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa. The clients themselves don't have any contacts. Currently, I have set up my clients to send their available presence stanzas to [email protected], and I do get their online/offline presence notifications. But I feel this isn't the right approach. I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence. Is there a way to do that? Is my component setup correct? or should I think about using an xmpp plugin/module/etc.. Thanks
Getting ejabberd to notify an external module on client presence change
0
0
1
0
1
1,392
11,392,302
2012-07-09T09:26:00.000
5
0
0
0
0
python,xmpp,ejabberd
0
11,926,839
0
2
0
true
0
0
It is not difficult to write a custom ejabberd module for this. It will need to register to presence change hooks in ejabberd, and on each presence packet route a notification towards your external component. There is a pair of hooks, 'set_presence_hook' and 'unset_presence_hook', that your module can register to, to be informed when a user starts/ends a session. If you need to track other presence statuses, there is also a hook 'c2s_update_presence' that fires on any presence packets sent by your users. Another possibility, without using a custom module, is using shared rosters. Add [email protected] to the shared rosters of all your users, but in this case they will see this item reflected in their roster.
2
1
0
0
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com. I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa. The clients themselves don't have any contacts. Currently, I have set up my clients to send their available presence stanzas to [email protected], and I do get their online/offline presence notifications. But I feel this isn't the right approach. I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence. Is there a way to do that? Is my component setup correct? or should I think about using an xmpp plugin/module/etc.. Thanks
Getting ejabberd to notify an external module on client presence change
0
1.2
1
0
1
1,392
11,404,994
2012-07-10T00:11:00.000
0
0
0
0
0
python,wxpython,wxwidgets
0
11,411,866
0
2
0
false
0
1
Embedding one GUI application inside another is not a simple thing. Applications are written to provide their own main frame, for example. You could try to position Notepad to a particular place on the screen instead. If you're really talking about Notepad, then you have a different course of action. Notepad is nothing more than a text control with some code to save and load the contents to a file.
1
0
0
0
I am looking for a way to embed an .exe into a frame. (MDI) I am not sure how this can be done. I am using wxpython 2.9 and there is nothing online about this (until now).
Embed .exe in wxpython
0
0
1
0
0
389
11,406,085
2012-07-10T03:11:00.000
0
0
1
0
0
python,memory,data-mining
0
11,406,222
0
4
0
false
0
0
First thought - switch to 64-bit python and increase your computer's virtual memory settings ;-) Second thought - once you have a large dictionary, you can sort on key and write it to file. Once all your data has been written, you can then iterate through all the files simultaneously, comparing and writing out the final data as you go.
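A rough sketch of the second thought: dump each partial dict sorted by key, then stream-merge the files; the tab-separated key/JSON-list layout is just one possible on-disk format and assumes keys contain no tabs:

```python
import heapq
import json

def dump_sorted(partial_dict, path):
    # One "key<TAB>json-list-of-strings" line per entry, sorted by key.
    with open(path, 'w') as f:
        for key in sorted(partial_dict):
            f.write('%s\t%s\n' % (key, json.dumps(partial_dict[key])))

def merge_sorted_files(paths, out_path):
    files = [open(p) for p in paths]
    # Each stream yields (key, line) pairs in key order.
    streams = [((line.split('\t', 1)[0], line) for line in f) for f in files]
    with open(out_path, 'w') as out:
        current_key, merged = None, []
        for key, line in heapq.merge(*streams):
            values = json.loads(line.split('\t', 1)[1])
            if key != current_key:
                if current_key is not None:
                    out.write('%s\t%s\n' % (current_key, json.dumps(merged)))
                current_key, merged = key, []
            merged.extend(values)   # combine the value lists for one key
        if current_key is not None:
            out.write('%s\t%s\n' % (current_key, json.dumps(merged)))
    for f in files:
        f.close()
```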
2
2
0
0
I am working on a research project in big data mining. I have written the code currently to organize the data I have into a dictionary. However, The amount of data is so huge that while forming the dictionary, my computer runs out of memory. I need to periodically write my dictionary to main memory and create multiple dictionaries this way. I then need to compare the resulting multiple dictionaries, update the keys and values accordingly and store the whole thing in one big dictionary on disk. Any idea how I can do this in python? I need an api that can quickly write a dict to disk and then compare 2 dicts and update keys. I can actually write the code to compare 2 dicts, that's not a problem but I need to do it without running out of memory.. My dict looks like this: "orange" : ["It is a fruit","It is very tasty",...]
Integrating multiple dictionaries in python (big data)
0
0
1
0
0
442
11,406,085
2012-07-10T03:11:00.000
0
0
1
0
0
python,memory,data-mining
0
11,406,103
0
4
0
false
0
0
You should use a database such as PostgreSQL.
2
2
0
0
I am working on a research project in big data mining. I have written the code currently to organize the data I have into a dictionary. However, The amount of data is so huge that while forming the dictionary, my computer runs out of memory. I need to periodically write my dictionary to main memory and create multiple dictionaries this way. I then need to compare the resulting multiple dictionaries, update the keys and values accordingly and store the whole thing in one big dictionary on disk. Any idea how I can do this in python? I need an api that can quickly write a dict to disk and then compare 2 dicts and update keys. I can actually write the code to compare 2 dicts, that's not a problem but I need to do it without running out of memory.. My dict looks like this: "orange" : ["It is a fruit","It is very tasty",...]
Integrating multiple dictionaries in python (big data)
0
0
1
0
0
442
11,411,182
2012-07-10T10:17:00.000
0
0
0
0
0
python,webdriver
0
11,412,106
0
1
0
false
0
0
You can use the get_attribute(name) method on a webelement to retrieve attributes.
1
0
0
0
Can anyone please tell me how to find the x-offset and y-offset default value of a Slider in a webpage using python for selenium webdriver. Thanks in Advance !
How to find x and y-offset for slider in python for a web-application
0
0
1
0
1
636
11,420,053
2012-07-10T18:59:00.000
2
0
0
0
0
python,c
0
11,420,313
0
4
0
false
0
1
There's also numpy, which can be reasonably fast when dealing with "array operations" (sometimes called vector operations, but I find that term confusing with SIMD terminology). You'll probably need numpy if you decide to go the cython route, so if the algorithm isn't too complicated, you might want to see if it is good enough with numpy by itself first. Note that there are two different routes you can take here. You can use subprocess, which basically issues system calls to some other program that you have written. This is slow because you need to start a new process, send the data into the process and then read the data back from the process. In other words, the data gets replicated multiple times for each call. The second route is calling a C function from python. Since CPython (the reference and most common python implementation) is written in C, you can create C extensions. They're basically compiled libraries that adhere to a certain API. Then CPython can load those libraries and use the functions inside, passing pointers to the data. In this way, the data isn't actually replicated -- you're working with the same block of memory in python that you're using in C. The downside here is that the C API is a little complex. That's where 3rd party extensions and existing libraries come in (numpy, cython, ctypes, etc). They all have different ways of pushing computations into C functions without you having to worry about the C API. Numpy removes loops so you can add, subtract, multiply arrays quickly (among MANY other things). Cython translates python code to C which you can then compile and import; typically to gain speed here you need to provide additional hints which allow cython to optimize the generated code. ctypes is a little fragile since you have to re-specify your C function prototype, but otherwise it's pretty easy as long as you can compile your library into a shared object... The list could go on. Also note that if you're not using numpy, you might want to check out pypy. It claims to run your python code faster than CPython.
1
2
0
0
I'm new to programming and was wondering how I can have a python program execute and communicate with a c program. I am doing a mathematical computation in python, and was wondering if I could write up the main computation in C, that way the computation runs faster. I've been reading about "calling c functions from python", "including C or C++ code directly in your Python code", and "using c libraries from python". Is this the same thing? I want a python program to execute a c program and receive the results. What does it mean to "call C library functions" from python? Would it allow the python script to use c libraries or allow the script to execute code within a c compiler? thanks
executing c program from a python program
0
0.099668
1
0
0
634
11,421,476
2012-07-10T20:32:00.000
0
0
0
0
0
python,eclipse,networkx
1
11,421,571
0
2
0
false
0
0
I think there are two options: Rebuild your interpreter Add it to your python path by appending the location of networkx to sys.path in python
2
3
0
0
I'm using PyDev in eclipse with Python 2.7 on windows 7. I Installed networkx and it is properly running within Python shell but in eclipse it is showing error as it is unable to locate networkx can anyone tell me how to remove this error?
Integrate networkx in eclipse on windows
0
0
1
0
1
1,086
11,421,476
2012-07-10T20:32:00.000
4
0
0
0
0
python,eclipse,networkx
1
11,421,549
0
2
0
true
0
0
You need to rebuild your interpreter: go to Project > Properties > PyDev - Interpreter/Grammar, click "Click here to configure", remove the existing interpreter, hit the "Auto config" button and follow the prompts. Kind of a pain, but it's the only way I've found to autodiscover newly installed packages.
2
3
0
0
I'm using PyDev in eclipse with Python 2.7 on windows 7. I Installed networkx and it is properly running within Python shell but in eclipse it is showing error as it is unable to locate networkx can anyone tell me how to remove this error?
Integrate networkx in eclipse on windows
0
1.2
1
0
1
1,086
11,427,168
2012-07-11T06:57:00.000
1
0
0
0
0
javascript,python
0
11,427,225
0
1
0
false
1
1
It takes time for the image to get from your phone to your server to your desktop client. There's nothing you can do to change that. The best you can hope to do is to benchmark your entire application, figure out where are your bottlenecks, and hope it's not the network connection itself.
1
0
0
0
I am capturing a mobile snapshot (android) through monkeyrunner and, with the help of some python script (i.e. for the socket connection), I made it display in an html page. But there is some time delay between the image I see in my browser and the one on the android device. How can I synchronise these things so that the mobile screen snapshot is visible at the same time in the browser?
how to draw the image faster in canvas
0
0.197375
1
0
0
41
11,442,944
2012-07-11T23:18:00.000
0
0
0
1
0
python,for-loop,subprocess
0
20,869,726
0
2
0
false
0
0
A common way to detect things that have stopped working is to have them emit a signal at roughly regular intervals and have another process monitor the signal. If the monitor sees that no signal has arrived after, say, twice the interval it can take action such as killing and restarting the process. This general idea can be used not only for software but also for hardware. I have used it to restart embedded controllers by simply charging a capacitor from an a.c. coupled signal from an output bit. A simple detector monitors the capacitor and if the voltage ever falls below a threshold it just pulls the reset line low and at the same time holds the capacitor charged for long enough for the controller to restart. The principle for software is similar; one way is for the process to simply touch a file at intervals. The monitor checks the file modification time at intervals and if it is too old kills and restarts the process. In OP's case the subprocess could write a status code to a file to say how far it has got in its work.
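A minimal sketch of such a watchdog on the monitoring side; the heartbeat path, the interval and worker.py are placeholders, and the worker is assumed to touch the heartbeat file periodically while it is healthy:

```python
import os
import subprocess
import time

HEARTBEAT = '/tmp/worker.heartbeat'   # file the worker touches while alive
STALE_AFTER = 120                     # roughly twice the touch interval

proc = subprocess.Popen(['python', 'worker.py'])   # hypothetical subprocess
while True:
    time.sleep(30)
    if proc.poll() is not None:
        break                                      # finished on its own
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT)
    except OSError:
        age = STALE_AFTER + 1                      # never wrote a heartbeat
    if age > STALE_AFTER:
        proc.kill()                                # looks hung: kill it...
        proc = subprocess.Popen(['python', 'worker.py'])   # ...and restart
```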
2
2
0
0
I am running an os.system(cmd) in a for-loop. Since sometimes it hangs, I am trying to use process=subprocess.pOpen(cmd) in a for-loop. But I want to know the following: If I do sleep(60) and then check if the process is still running by using process.poll(), how do I differentiate between process actually running even after 1 minute and process that hung? If I kill the process which hung, will the for-loop still continue or will it exit? Thanks!
python handling subprocess
0
0
1
0
0
144
11,442,944
2012-07-11T23:18:00.000
4
0
0
1
0
python,for-loop,subprocess
0
11,443,356
0
2
0
true
0
0
I don't know of any general way to tell whether a process is hung or working. If a process hangs due to a locking issue, then it might consume 0% CPU and you might be able to guess that it is hung and not working; but if it hangs with an infinite loop, the process might make the CPU 100% busy but not accomplish any useful work. And you might have a process communicating on the network, talking to a really slow host with long timeouts; that would not be hung but would consume 0% CPU while waiting. I think that, in general, the only hope you have is to set up some sort of "watchdog" system, where your sub-process uses inter-process communication to periodically send a signal that means "I'm still alive". If you can't modify the program you are running as a sub-process, then at least try to figure out why it hangs, and see if you can then figure out a way to guess that it has hung. Maybe it normally has a balanced mix of CPU and I/O, but when it hangs it goes in a tight infinite loop and the CPU usage goes to 100%; that would be your clue that it is time to kill it and restart. Or, maybe it writes to a log file every 30 seconds, and you can monitor the size of the file and restart it if the file doesn't grow. Or, maybe you can put the program in a "verbose" mode where it prints messages as it works (either to stdout or stderr) and you can watch those. Or, if the program works as a daemon, maybe you can actively query it and see if it is alive; for example, if it is a database, send a simple query and see if it succeeds. So I can't give you a general answer, but I have some hope that you should be able to figure out a way to detect when your specific program hangs. Finally, the best possible solution would be to figure out why it hangs, and fix the problem so it doesn't happen anymore. This may not be possible, but at least keep it in mind. You don't need to detect the program hanging if the program never hangs anymore! P.S. I suggest you do a Google search for "how to monitor a process" and see if you get any useful ideas from that.
2
2
0
0
I am running an os.system(cmd) in a for-loop. Since sometimes it hangs, I am trying to use process=subprocess.pOpen(cmd) in a for-loop. But I want to know the following: If I do sleep(60) and then check if the process is still running by using process.poll(), how do I differentiate between process actually running even after 1 minute and process that hung? If I kill the process which hung, will the for-loop still continue or will it exit? Thanks!
python handling subprocess
0
1.2
1
0
0
144
11,445,143
2012-07-12T04:47:00.000
1
0
0
0
0
python,ios,django,apple-push-notifications
0
11,471,280
0
1
0
false
1
0
Never mind, I tried it again using an absolute file path, and it worked after I restarted Django.
1
1
0
0
I'm using Django to send iOS push notifications. To do that, I need to access a .pem certificate file currently stored on the server in my app directory along with views.py, models.py, admin.py, etc. When I try to send a push notification from a python shell on the server, everything works fine. But when I try to send a push notification by accessing a Django view, it doesn't send and gives me an SSLError. I think that this is because Django can't find the .pem certificate file. In short, I'm wondering how a Django view function can read in another file on the server.
Django Access File on Server
0
0.197375
1
0
0
846
11,460,864
2012-07-12T21:20:00.000
0
0
0
0
0
python,wxpython,wxwidgets
0
11,471,540
0
1
0
true
0
0
I think all you have to do is change the font size of the item. That's what it looks like in the wxPython demo anyway. You could ask on the wxPython users group though. The author of that widget is on there most of the time and is very helpful.
1
0
0
0
I gave my CustomTreeCtrl the TR_HAS_VARIABLE_ROW_HEIGHT style. But I am not sure where to go from there to change the height of the items inside the tree. I can't really find anything in the API or online.
how to use CustomTreeCtrl with the TR_HAS_VARIABLE_ROW_HEIGHT style to change the items height?
0
1.2
1
0
1
138
11,470,856
2012-07-13T12:49:00.000
2
0
0
0
0
python,rest,http-status-codes
0
11,470,884
0
1
0
true
0
0
Use an existing web framework such as Flask or Django. Doing this by yourself with sockets is way too much work, it's not worth it.
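For example, a minimal Flask sketch returning 201 on create and 200 on the other verbs (the routes here are arbitrary):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/items', methods=['POST'])
def create():
    return 'created', 201          # creation -> 201

@app.route('/items/<item_id>', methods=['GET', 'PUT', 'DELETE'])
def handle(item_id):
    return 'ok', 200               # GET/PUT/DELETE -> 200

if __name__ == '__main__':
    app.run()
```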
1
2
0
0
I want to make a very simple application in Python which: when REST calls PUT/DEL/GET are received, then the response code is 200; when a REST create call is received, then the response code is 201. I tried with sockets but I don't know how to send 201.
Recive REST and Response http codes in Python
0
1.2
1
0
1
81
11,472,810
2012-07-13T14:45:00.000
2
0
1
0
0
python,setuptools
0
55,708,951
0
5
0
false
0
0
Another way to do it is to use wildcards. This does not apply to >= 0.5.0, < 0.7.0, but in case you decide that all maintenance releases should be supported (e.g. 0.5.0 to 0.5.x), you can use == 0.5.*, e.g. docutils == 0.3.*
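A sketch of how both forms look in a setup.py; the project name and the choice of docutils as the dependency are placeholders:

from setuptools import setup

setup(
    name="example-package",  # placeholder project name
    version="0.1.0",
    install_requires=[
        # The range asked about: at least 0.5.0 but below 0.7.0
        "docutils >= 0.5.0, < 0.7.0",
        # Wildcard alternative: any 0.5.x maintenance release
        # "docutils == 0.5.*",
    ],
)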
1
61
0
0
I want to make a package depend on a particular version range, e.g. >= 0.5.0, < 0.7.0. Is this possible with the install_requires option, and if so, how should it be written?
How to specify version ranges in install_requires (setuptools, distribute)
1
0.07983
1
0
0
21,786
11,475,925
2012-07-13T18:06:00.000
5
0
0
0
1
python,ironpython
0
11,476,423
0
1
0
true
0
1
There are a few reasons why IronPython is slow to start up. First, if you didn't use the installer (which will ngen the assemblies), the JIT compiler has to convert the IronPython assemblies from MSIL bytecode to native code, and that takes time, as it's a lot of code. So use the installer or manually ngen the assemblies. Second, the actual Python code is also JIT compiled, although not right away, to reduce the penalty; startup time used to be much worse when all Python code was JITted. The .NET JIT isn't fast enough for my liking. Finally, it's not a powerhouse of a laptop. That said, even on my SSD-equipped quad core it still takes a few seconds to get started. IronPython's startup time has improved a lot, to the point where it's now really hard to optimize further - profiling is hard (small sample size) and there are no obvious wins. It's "uniformly slow code" now, unfortunately. IronPython's strength right now lies in long-running processes where the JIT can get some big wins, and not in short ones where it's more of a hindrance.
1
2
0
0
I am launching IronPython 2.7.3 on Windows 7 and it is taking more than 15 seconds. Why is it so slow, and how can I fix it? The computer is a Samsung NP300E5A (Celeron B800, 2 GB) notebook.
IronPython launching very slowly
0
1.2
1
0
0
345
11,479,978
2012-07-14T00:33:00.000
5
0
1
0
1
python
0
11,479,983
0
1
0
true
0
0
You want os.walk(). It will give you a list of files and folders in each directory under the starting directory.
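A short sketch of collecting every file beneath a starting folder with os.walk(); the starting directory "." is just a placeholder:

import os

def all_files(start_dir):
    # os.walk visits every folder below start_dir; dirnames are the
    # sub-folders and filenames are the plain files in each folder.
    collected = []
    for dirpath, dirnames, filenames in os.walk(start_dir):
        for name in filenames:
            collected.append(os.path.join(dirpath, name))
    return collected

print(all_files("."))  # every file beneath the current directory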
1
0
0
0
I have been at this for a few hours... how do I use Python to get all enclosing files from a folder? By enclosing I mean all the files nested within a folder, within a folder, etc. - so all the files beyond a certain point, using Python. I have tried glob.glob() and listdir() to do this, but those only work on the first level of the folder. I could get this to work if there was a way for Python to differentiate between a file and a folder. Any suggestions?
Python get list of all enclosing files
1
1.2
1
0
0
104
11,497,376
2012-07-16T02:14:00.000
4
0
1
0
0
python,line-breaks,file-writing
0
11,497,399
0
15
0
false
0
0
Most escape characters in string literals from Java are also valid in Python, such as "\r" and "\n".
3
410
0
0
In comparison to Java (in a string), you would do something like "First Line\r\nSecond Line". So how would you do that in Python, for purposes of writing multiple lines to a regular file?
How do I specify new lines on Python, when writing on files?
0
0.053283
1
0
0
2,399,304
11,497,376
2012-07-16T02:14:00.000
6
0
1
0
0
python,line-breaks,file-writing
0
11,497,389
0
15
0
false
0
0
The same way with '\n', though you'd probably not need the '\r'. Is there a reason you have it in your Java version? If you do need/want it, you can use it in the same way in Python too.
3
410
0
0
In comparison to Java (in a string), you would do something like "First Line\r\nSecond Line". So how would you do that in Python, for purposes of writing multiple lines to a regular file?
How do I specify new lines on Python, when writing on files?
0
1
1
0
0
2,399,304
11,497,376
2012-07-16T02:14:00.000
10
0
1
0
0
python,line-breaks,file-writing
0
11,497,390
0
15
0
false
0
0
In Python you can just use the new-line character, i.e. \n
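A small sketch of writing multiple lines to a file with "\n"; the filename and line contents are placeholders:

# Each "\n" starts a new line; "\r\n" is only needed if a Windows-style
# line ending is explicitly required by whatever reads the file.
with open("output.txt", "w") as f:
    f.write("First Line\nSecond Line\n")
    # writelines() does not add separators, so include them yourself:
    f.writelines(line + "\n" for line in ["Third Line", "Fourth Line"])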
3
410
0
0
In comparison to Java (in a string), you would do something like "First Line\r\nSecond Line". So how would you do that in Python, for purposes of writing multiple lines to a regular file?
How do I specify new lines on Python, when writing on files?
0
1
1
0
0
2,399,304
11,505,231
2012-07-16T13:20:00.000
2
0
0
0
0
python,json,web-services,web-frameworks
0
11,505,774
0
1
0
true
1
0
I'm no doubt going to be shot down for this answer, but it needs to be said... You're going to write a service that allows tens of transfers a second, with very large file sizes... Uptime is going to be essential, and so are transfer speeds etc... If this is for a business, and not just a personal pet project, get the person responsible for the IT budget to give "Box" or "DropBox" some pennies and use their services (I am not affiliated with either company). On a business level, this gets you up and running straight away, and would probably end up cheaper than you coding, designing, debugging, paying for EC2 etc... More related to your question: Flask seems to be an up-and-coming and usable "simple" framework. That should provide all the functionality without all the bells and whistles. The other framework I would spend time looking at would be Pyramid - which, when using a very basic starter template, is very simple, but you've got the machinery behind it to really get quite complex things done. (You can mix URL dispatch and traversal where necessary, for instance.)
1
0
0
0
I'm going to write a web service which will allow upload/download of files and manage permissions and users. It will be the interface with which a desktop app or mobile app will communicate. I was wondering which of the web frameworks I should use to do that? It is a sort of remote storage for media files. I am going to host the web service on EC2 in a Linux environment. It should be fast (obviously), because it will have to handle tens of requests per second, transferring lots of data (GBs)... Communication will be done using JSON... But how do I deal with binary data? If I use base64, it will grow by 33%... I think web2py should be OK, because it is a very stable and mature project, but I wanted other suggestions before choosing. Thank you.
Python web framework suggestion for a web service
0
1.2
1
0
1
170