Dataset schema (column: dtype, observed min to max; for string columns, the range of string lengths). Each record below lists its values in this column order.
- Q_Id: int64, 337 to 49.3M
- CreationDate: string, lengths 23 to 23
- Users Score: int64, -42 to 1.15k
- Other: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- System Administration and DevOps: int64, 0 to 1
- Tags: string, lengths 6 to 105
- A_Id: int64, 518 to 72.5M
- AnswerCount: int64, 1 to 64
- is_accepted: bool, 2 classes
- Web Development: int64, 0 to 1
- GUI and Desktop Applications: int64, 0 to 1
- Answer: string, lengths 6 to 11.6k
- Available Count: int64, 1 to 31
- Q_Score: int64, 0 to 6.79k
- Data Science and Machine Learning: int64, 0 to 1
- Question: string, lengths 15 to 29k
- Title: string, lengths 11 to 150
- Score: float64, -1 to 1.2
- Database and SQL: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- ViewCount: int64, 8 to 6.81M
10,113,892
2012-04-11T20:58:00.000
1
1
1
0
python
10,114,026
2
false
0
0
You shouldn't usually have to worry about __module__. Sometimes it's used for dark magic, or for knowing where a function came from (when debugging, for example), but most of the time everyone ignores it. If you're really worried, set __module__ = "dynamically_defined_function" or something similar.
1
17
0
I'm dynamically defining functions in a module and then updating the module's __all__ and the function's __name__ attribute to match the name it will have inside the module. I was wondering if it is a good idea to update the function's __module__ attribute as well, to point to the module where the function will reside. The docs say __module__ is: "The name of the module the function was defined in, or None if unavailable." The code that creates the function resides in a different module which is pretty much unrelated to the module where the function will reside, and there is no reference to the function in that module. I've done some poking around on the mailing list, but I'm a bit confused as to what the semantics of __module__ are, and whether I should set it to None, to the module where the function resides, or to the module containing the code that created the function. Gonna leave it be for now, but I'm interested to see if anyone knows the answer.
semantics of __module__
0.099668
0
0
22,298
10,114,431
2012-04-11T21:43:00.000
0
0
0
0
mongodb,python-2.7,windows-server-2008-r2,pymongo,distributed-transactions
10,157,192
1
false
0
0
Try it with journaling turned off and see if the problem remains.
1
0
0
Looking for any advice I can get. I have 16 virtual CPUs all writing to a single remote MongoDB server. The machine that's being written to is a 64-bit machine with 32GB RAM, running Windows Server 2008 R2. After a certain amount of time, all the CPUs stop cold (no gradual performance reduction), and any attempt to get a Remote Desktop Connection hangs. I'm writing from Python via pymongo, and the insert statement is "[collection].insert([document], safe=True)" I decided to more actively monitor my server as the distributed write job progressed, remoting in from time to time and checking the Task Manager. What I see is a steady memory creep, from 0.0GB all the way up to 29.9GB, in a fairly linear fashion. My leading theory is therefore that my writes are filling up the memory and eventually overwhelming the machine. Am I missing something really basic? I'm new to MongoDB, but I remember that when writing to a MySQL database, inserts are typically followed by commits, where it's the commit statement that actually makes sure the record is written. Here I'm not doing any commits...? Thanks, Dave
Distributed write job crashes remote machine with MongoDB server
0
1
0
97
10,115,481
2012-04-11T23:38:00.000
0
0
1
0
python,qt
10,115,583
1
false
0
1
The icon size and the font can be set on the view widget with setIconSize and setFont.
1
1
0
So I'm using PyQt and I'm trying to populate my list with items that contain an icon and some text beside it. My icons get shrunk down when inserted into the list, and my font size is super small. How do I increase the size of everything (i.e. the size of the icon and the text)?
Resizing row height of list widget
0
0
0
748
10,116,602
2012-04-12T02:25:00.000
2
1
0
1
python,native,popen
10,116,621
1
true
0
0
Forking a separate process to do something is almost always much more expensive than calling a function that does the same thing. But if that Python function is very inefficient, and the OS forks new processes quickly (i.e., it is a UNIX variant), you could imagine a rare case where this is not true -- but it will definitely be rare.
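The trade-off above can be sanity-checked with a rough micro-benchmark. In this sketch a freshly spawned Python interpreter stands in for the external executable; absolute numbers will vary a lot by OS and workload:

```python
import subprocess
import sys
import timeit

def in_process():
    # The "library function" path: stays inside the current interpreter.
    return "hello"

def via_popen():
    # The "Popen" path: spawn a whole interpreter just to print a string.
    out = subprocess.run([sys.executable, "-c", "print('hello')"],
                         capture_output=True, text=True)
    return out.stdout.strip()

t_func = timeit.timeit(in_process, number=100)
t_proc = timeit.timeit(via_popen, number=5)
print(f"function: {t_func:.6f}s for 100 calls")
print(f"Popen:    {t_proc:.6f}s for 5 calls")
```

On most machines the per-call cost of the Popen path is several orders of magnitude higher, which matches the answer's point.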
1
3
0
compared to invoking a Python library function that does the same thing. I have some legacy code that uses Popen to invoke an executable with some parameters. Now there is a Python library that supports that same function, and I was wondering what the performance implications are.
What is the performance overhead of Popen in python
1.2
0
0
601
10,122,625
2012-04-12T11:28:00.000
0
0
0
1
python,memory-management,suds,gevent
23,891,675
1
false
0
0
try client.clone() or client(..., cache=DocumentCache())
1
2
0
I have a large WSDL file that takes 30MB to initialize with suds. I use gevent to spawn 100 greenlets that I use as workers for an external service. How can I use a single instance of the suds Client but still get 100 parallel connections? It is a huge waste of memory to initialize all those suds Clients. What I really need is 100 transports and one single suds Client instance to translate XML messages in and out. Any help?
How to use suds in a memory efficient way?
0
0
0
599
10,122,992
2012-04-12T11:50:00.000
1
0
0
0
python,django,python-2.7,django-piston
10,225,848
1
false
1
0
Maybe you have defined the fields parameter to fetch many related objects (objects with foreign keys, one-to-one keys, and many-to-many relations to the object being fetched). This will slow down your response. Can you post your code?
1
1
0
When returning the result from the read method, it takes a huge amount of time to generate/send the response (for around 30,000 records with 6 columns, it takes around 14 seconds). Is this normal, and does it usually take this much time? If not, what can I do to reduce the time? What/where could I refer to? Any help?
Django-piston takes huge response time in read
0.197375
0
0
149
10,123,958
2012-04-12T12:53:00.000
1
0
0
1
django,google-app-engine,python-2.7
10,139,804
2
false
1
0
After a week of racking my brain, I finally figured out the problem. The gaesessions code was the culprit. We put DEFAULT_LIFETIME = datetime.timedelta(hours=1) and originally it was DEFAULT_LIFETIME = datetime.timedelta(days=7). Not sure why running it through a debugger such as Wing or PyCharm would prevent the browser from getting a session. The interesting thing is that the code change with hours=1 works fine on Linux with the Wing debugger. Very strange!
1
2
0
I am having a problem that has baffled me for over a week. I have a project that is written in python with Django on Google App Engine. The project has a login page and when I run the application in Google App Engine or from the command line using dev_server.py c:\project, it works fine. When I try to run the application through a debugger like Wing or Pycharm, I cannot get past the login page. After trying to login, it takes me back to the login screen again. When I look at the logs, it shows a 302 (redirect) in the debugger but normally it shows a 200 (OK). Could someone explain why this would be happening? Thanks -Dimitry
Debugger in python for Google App Engine and Django
0.099668
0
0
337
10,125,009
2012-04-12T13:53:00.000
0
1
0
0
python,serial-port,pyserial
67,317,060
5
false
0
0
For me the problem was that the input buffer was overflowing when receiving data from the Arduino. All I had to do was call mySerialPort.flushInput() and it worked. I don't know why mySerialPort.flush() didn't work; flush() must only flush the outgoing data. All I know is that mySerialPort.flushInput() solved my problems.
1
6
0
I am reading data from a microcontroller via serial, at a baudrate of 921600. I'm reading a large amount of ASCII CSV data, and since it comes in so fast, the buffer gets filled and all the rest of the data gets lost before I can read it. I know I could manually edit the pyserial source code for serialwin32 to increase the buffer size, but I was wondering if there is another way around it? I can only estimate the amount of data I will receive, but it is somewhere around 200kB of data.
Pyserial buffer fills faster than I can read
0
0
0
12,564
10,125,860
2012-04-12T14:38:00.000
0
1
0
1
unit-testing,google-app-engine,python-2.7
21,678,252
2
false
1
0
The app.yaml configuration is not applied when doing unit tests with a webtest app and NoseGAE. use_library does not work either. The right solution for this case is to provide the proper Python path to the preferred lib version, e.g. PYTHONPATH=../google_appengine/lib/django-1.5 when running nosetests.
1
1
0
I'm in the process of migrating my Google App Engine solution from Python 2.5 to 2.7. The application migration was relatively easy, but I'm struggling with the unit tests. In the 2.5 version I was using the use_library function to set the Django version to 1.2, but this isn't supported anymore on 2.7; now I set the default version in app.yaml. When I run my unit tests, the default Django version becomes 0.96 and I can't manage to set 1.2 as the default version. Does anyone know how I can set the default libraries for the unit tests, so they match the settings in app.yaml?
How to set the default libraries when doing unit tests under Python 2.7
0
0
0
111
10,125,983
2012-04-12T14:44:00.000
1
0
1
0
python,windows,com
10,127,663
1
true
0
0
I'm sure you're using your powers for good, but the approach you suggest is essentially removing the safeguard that UAC was intended to provide for your users. If they have configured UAC to run at a lower-than-maximum protection level, it might be possible to do what you want, but in general the UAC prompt is displayed in the secure desktop (that's why that background goes dark) and so no process of yours can automatically click its buttons.
1
1
0
So I'm using Python (though another language suggestion like C# or VB is fine too). I want to have a program launch an EXE file installer, and then tell that installer that it is alright to run the program using the UAC. I would also like to be able to select buttons (click here to install!). What library or language can do this? Where do I start? Would it deal with COM objects, or...?
UAC and Windows box selections in Windows
1.2
0
0
94
10,130,367
2012-04-12T19:25:00.000
0
0
0
0
python,scrapy,scrapyd
10,136,943
1
false
1
0
I think I had a similar situation. The reason the processes were dying was that the spiders were generating an exception, making the process stop. To find the exception, look at the log files somewhere in the .scrapy folder. For each started crawler process, scrapy creates a log file with the job id in its name.
1
2
0
I am facing a problem with crawler processes dying unexpectedly. I am using scrapy 0.14; the problem existed in 0.12 as well. The scrapyd log shows entries like: Process died: exitstatus=None. The spider logs don't show the spider-closed information, which matches my database status as well. Has anybody else faced a similar situation? How can I trace the reason for these processes vanishing? Any ideas or suggestions?
Crawler processes dying unexpectedly
0
0
0
493
10,131,506
2012-04-12T20:44:00.000
0
1
0
0
python,http,cgi
40,729,146
2
false
0
0
It seems you're looking to cache queries made to your site. After calculating a response, save a record with the request URL, method, params, and response in your preferred storage. Depending on your environment and the number of requests, you may choose a database or the filesystem. However, you need to take into account that some of the result data may change, in which case you'd need to remove cached data that depends on that data.
1
0
0
I have a CGI script written in Python that is receiving some complex HTTP request, one that could be POST or GET. I am looking for a simple way to log the request so I can replay it later any number of times.
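One hedged sketch of such request logging, using JSON files on disk; the field names and directory layout here are made up for illustration, not a standard scheme:

```python
import json
import os
import time

LOG_DIR = "request_log"  # illustrative location

def save_request(method, path, params, body=""):
    """Persist one request as a JSON file so it can be replayed later."""
    os.makedirs(LOG_DIR, exist_ok=True)
    record = {"method": method, "path": path,
              "params": params, "body": body, "ts": time.time()}
    fname = os.path.join(LOG_DIR, f"req_{int(record['ts'] * 1e6)}.json")
    with open(fname, "w") as f:
        json.dump(record, f)
    return fname

def load_request(fname):
    """Read a saved request back; a replay tool would re-issue it from this."""
    with open(fname) as f:
        return json.load(f)

saved = save_request("POST", "/cgi-bin/script.py", {"q": "hello"}, body="x=1")
print(load_request(saved)["method"])  # POST
```

In a real CGI script, method, path, params, and body would come from os.environ and the cgi/stdin machinery rather than being passed in by hand.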
How can I save a HTTP request from a python cgi scripts so I can easily repeat it?
0
0
1
91
10,132,266
2012-04-12T21:39:00.000
0
0
0
0
python,django
10,133,338
1
false
1
0
I've been doing Django projects for 4 years, and all I've kept from all those projects is a few context processors. Just look at it as always project-specific until you need it in another project. So my answer is: write what you want and how you want; if you need the stuff elsewhere, then you'll separate it from the project. Don't optimize prematurely.
1
0
0
I've been reading about good practices regarding Django project management. As I understand it, it's good to: split the project into multiple small applications with specific responsibilities, and always code thinking in redistributable components. The second point has become quite important to me since I usually work on more than one project, so whenever I can, I modularize my components into installable packages which I can later reuse. The question is... to what extent is this good practice? How should I handle very simple components which are also highly reusable by other applications? An example would be a simple reusable templatetag, which may be 40~60 lines of code + tests. If it doesn't do any project-specific operations, I don't see it fitting in any of my project's apps, but I also find it too small to have an application of its own. Is it?
Is it bad practice to have independent Django applications to handle simple one-file components?
0
0
0
105
10,132,344
2012-04-12T21:45:00.000
3
0
1
1
macos,osx-lion,python-2.7,virtualenv
10,132,440
2
false
0
0
Creating a virtualenv actually creates a new folder with that name. You have to find that folder.
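Since a virtualenv is just a folder, one rough way to locate a forgotten one is to scan for the bin/activate script it contains. This is a sketch, not a standard tool, and it assumes a Unix layout (on Windows it would be Scripts/activate):

```python
import os

def find_virtualenvs(root, max_depth=3):
    """Scan for folders that look like virtualenvs (contain bin/activate)."""
    root = os.path.abspath(root)
    found = []
    for dirpath, dirnames, _ in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        if depth >= max_depth:
            dirnames[:] = []  # don't descend any deeper
            continue
        if os.path.isfile(os.path.join(dirpath, "bin", "activate")):
            found.append(dirpath)
            dirnames[:] = []  # no need to look inside a virtualenv
    return found

# e.g. find_virtualenvs(os.path.expanduser("~"))
```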
1
9
0
Silly question...I created a virtualenv months ago and can't remember what it's called. Where can I find it? OSX 10.7 Python 2.7.1 Virtualenv 1.6.4 Thanks!
List of virtualenvs
0.291313
0
0
11,296
10,132,427
2012-04-12T21:53:00.000
22
0
0
0
c++,python,qt,qt4,pyqt
10,132,617
4
false
0
1
What is the advantage of using the native C++ Qt over PyQt? Speed/power/control. A PyQt application will still require Python; a C++/Qt application compiles to a native exe. By using C++ you'll get access to third-party libraries that won't be available in Python, plus you'll eliminate the "middle man" - the layer that sits between your program and the Qt DLLs - and potentially you can get better performance. For example, I would not write an archiver or an mp3 decompressor in Python, although it certainly can be done. However, that comes at a cost: C++ does not have a garbage collector, is much more complex, has "slower" development (compilation time), and requires years to master, and you'll get better performance only if your bottleneck is within the interpreter (i.e. scripted-language overhead). That is, C++ gives more power at the cost of greater responsibility and longer development time. If you don't need that, then you don't have a reason to stick with C++. The choice of language depends on your application/situation and your personal preferences. If you need to make an application SOON, or make a mockup, then it's reasonable to use the language you're familiar with. If you have serious performance problems, then it's reasonable to hire a skilled C++ programmer to do the job - make a native application, profile it, optimize, etc. Please note that a language is a tool. If you want to use your language for everything simply because you like the language, you're not working efficiently. EDIT: Personally, I would not use Python for a larger application I'm expected to maintain for a long time. However, this is because the language is not exactly compatible with my mindset (reliance on Murphy's Law) and, as a result, I'm not comfortable with it. A person who thinks differently will probably be much more comfortable with Python and might even think that C++ is too restrictive.
Another thing: judging from my experience of writing Blender plugins and various Python scripts, there are some serious performance overheads that appear because the language is scripted, and very heavy list/map/array manipulation that can be performed FAST for free in C++ might take 5x..10x longer in Python. Some people might insist that this can be fixed; however, the cost of this "fixing" might outweigh the benefits you get from using a scripted language. Regardless of my preference, I still use Python for making utility scripts that need to run several utilities, split/splice/parse their text output and do something with it (C++ isn't very good in these situations), and I'd still provide Python bindings (assuming Lua is no good) in a program that must be extensible. In the end it comes down to selecting the most suitable tool - if C++ will not give you any benefit compared to Python, then there's no reason to switch.
3
14
0
I want to develop in Qt, and I already know Python. I am learning C++, so what are the advantages of programming Qt in C++ over Python? C++ seems more complicated, and seems like there is not much gain.
What is the advantage of using the native C++ Qt over PyQt
1
0
0
17,063
10,132,427
2012-04-12T21:53:00.000
1
0
0
0
c++,python,qt,qt4,pyqt
10,132,490
4
false
0
1
In short, I believe that unless you have strong performance requirements, you should stick with Python. Also, as Greg mentions, your program will be more portable with Python than with C++. I love C++, but these days, for most projects, I mostly turn to Python if not Java. However, if I were writing a game or a graphics application, I might consider C++.
3
14
0
I want to develop in Qt, and I already know Python. I am learning C++, so what are the advantages of programming Qt in C++ over Python? C++ seems more complicated, and seems like there is not much gain.
What is the advantage of using the native C++ Qt over PyQt
0.049958
0
0
17,063
10,132,427
2012-04-12T21:53:00.000
6
0
0
0
c++,python,qt,qt4,pyqt
10,132,451
4
false
0
1
If you're planning on distributing your app, it's much easier to deliver a self-contained compiled executable than relying on your end users to install Python and PyQt first. But that may or may not be a consideration for you.
3
14
0
I want to develop in Qt, and I already know Python. I am learning C++, so what are the advantages of programming Qt in C++ over Python? C++ seems more complicated, and seems like there is not much gain.
What is the advantage of using the native C++ Qt over PyQt
1
0
0
17,063
10,132,632
2012-04-12T22:12:00.000
0
0
0
1
python,command-line,vlc
12,893,390
2
false
0
0
To close VLC after any action, append vlc://quit to your command line.
1
0
0
I need to make VLC download and then play songs. I'm planning on using os.popen to issue commands to the VLC command line (I'm having some problems getting the Python bindings working...). My question is: is there any callback I can get when VLC is done downloading, so that I know when to start streaming?
Callback when VLC done on command line
0
0
0
414
10,134,617
2012-04-13T02:45:00.000
2
0
0
0
gearman,python-gearman
10,164,124
1
true
0
0
Workers process one request at a time. You have a few options: 1) You can run multiple workers (this is the most common method). Workers sit in poll() when they aren't processing so this model works pretty well. 2) Write a fork() implementation around the worker. This way you can fire up a set number of worker processes, but don't have to monitor multiple processes.
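The "run multiple workers" option can be illustrated with a fixed-size pool. This sketch uses threads purely for illustration (real Gearman deployments run separate worker processes), and download() is a placeholder for the real "URLDownload" task:

```python
from concurrent.futures import ThreadPoolExecutor

def download(url):
    # Placeholder for the real "URLDownload" work.
    return f"downloaded {url}"

urls = [f"http://example.com/file{i}" for i in range(10)]

# max_workers caps how many tasks run concurrently, analogous to
# running that many single-job Gearman worker processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(download, urls))

print(results[0])  # downloaded http://example.com/file0
```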
1
0
0
For example: I have a task named "URLDownload"; the task's function is to download a large file from the internet. Now I have a Worker Process running, but there are about 1000 files to download. It is easy for a Client Process to create 1000 tasks and send them to the Gearman server. My question is: will the Worker Process do the tasks one by one, or will it accept multiple tasks at one time? If the Worker Process can accept multiple tasks, how can I limit the task pool size in the Worker Process?
is PYTHON Gearman Worker accept multi-tasks
1.2
0
1
594
10,137,594
2012-04-13T08:30:00.000
0
0
0
0
python,networking,ethernet
16,929,947
6
false
0
0
Another roundabout way to get a system's MAC ID is to use the ping command to ping the system's name, then perform an arp -a request against the IP address that was pinged. The downfall of doing it that way, though, is that you need to read the ping response into memory in Python and perform a readline operation to retrieve the IP address, then read the corresponding arp data into memory, and finally write the system name, IP address, and MAC ID of the machine in question either to the display or to a text file. I'm trying to do something similar as a system verification check to improve the automation of a test procedure, and the script is in Python for the time being.
1
1
0
The goal is to collect the MAC address of the connected local NIC, not a list of all local NICs :) By using socket and connect (to a website), I can just use getsockname() to get the IP that is being used to connect to the Internet. But from the IP, how can I then get the MAC address of the local NIC? The main reason for the question is that there may be multiple NICs.
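A hedged sketch of the first half of this (finding the outbound IP via a connected UDP socket, which sends no packets), plus a cruder MAC lookup via uuid.getnode(). Note the caveat: getnode() returns a MAC of some local interface and is not guaranteed to be the one on the connected NIC:

```python
import socket
import uuid

def outbound_ip():
    """IP the OS would use for outbound traffic (no packet is sent)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    finally:
        s.close()

def some_local_mac():
    """A MAC address of *some* local interface, formatted aa:bb:cc:dd:ee:ff."""
    node = uuid.getnode()
    return ":".join(f"{(node >> i) & 0xff:02x}" for i in range(40, -1, -8))

try:
    print("outbound IP:", outbound_ip())
except OSError:
    print("no route to the internet right now")
print("a local MAC:", some_local_mac())
```

Matching that IP to the right interface's MAC reliably needs a per-platform lookup (e.g. parsing ifconfig/ipconfig or using a third-party library), which is exactly the gap the question is about.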
Python - Get MAC address of only the connected local NIC
0
0
1
8,993
10,138,917
2012-04-13T10:08:00.000
20
1
1
0
python,pydev,pylint
10,140,373
6
false
0
0
As said by cfedermann, you can specify messages to be disabled in a ~/.pylintrc file (notice you can generate a stub file using pylint --generate-rcfile if you don't want to use inline comments). You'll also see in the generated file, in the [BASIC] section, options like "method-rgx", "function-rgx", etc., which you can configure as you like to support camelCase style rather than PEP 8 underscore style.
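For illustration, a fragment of what such a ~/.pylintrc might look like; the regexes shown are one possible camelCase-friendly choice, not canonical values:

```ini
; Sketch of a ~/.pylintrc fragment (generate a full stub with
; `pylint --generate-rcfile`). These regexes allow camelCase names.
[BASIC]
method-rgx=[a-z_][a-zA-Z0-9]{2,30}$
function-rgx=[a-z_][a-zA-Z0-9]{2,30}$
argument-rgx=[a-z_][a-zA-Z0-9]{2,30}$
variable-rgx=[a-z_][a-zA-Z0-9]{2,30}$
```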
1
40
0
I am using PyDev, where I have set up pylint. The problem is that pylint reports warnings even inside comments. I was looking to disable any sort of checking inside any line or block comment. Also, I wish to follow the camelCase naming convention instead of underscores for variables and arguments in my code. Is there any way to specify such rules without littering my code with pylint: disable comments?
Can Pylint error checking be customized?
1
0
0
33,391
10,140,560
2012-04-13T12:11:00.000
3
1
0
0
python,user-interface,pyqt,pygtk,glade
10,141,980
4
false
0
0
There is no "best/documented GUI framework". There are many GUI toolkits, all more-or-less equally powerful. Tkinter, PyQT, wxPython... all have their strengths and weaknesses. Pick any one of them and start learning. I recommend Tkinter for learning, mainly because you probably already have it. Once you understand the fundamentals of event based programming (and Tkinter provides a fairly gentle way to learn that), you'll be in a better position to judge which of the available toolkits fits your definition of "best".
1
4
0
I've stepped into Python GUI programming and I wanted to know the best-documented GUI builder, like Glade, which I'm using right now; however, I struggle a lot to find good tutorials or documentation, mostly in the event handling area. I would also like to understand which is the best/most documented GUI framework. Thanks to anyone who will answer.
what the best documented python friendly GUI builder like GLADE
0.148885
0
0
6,486
10,141,603
2012-04-13T13:19:00.000
1
0
1
0
python,python-2.7
10,142,014
3
false
0
0
For parsing similar lines of text, like log files, I often use regular expressions from the re module. Though split() would also work well for separating fields which don't contain spaces and the parts of the date, using regular expressions allows you to also make sure the format matches what you expect and, if need be, warn you of a weird-looking input line. Using regular expressions, you could get the individual fields of the date and time and construct date or datetime objects from them (both from the datetime module). Once you have those objects, you can compare them to other similar objects and write new entries, formatting the dates as you like. I would recommend parsing the whole input file (assuming you're reading a file) and writing a whole new output file instead of trying to alter it in place. As for keeping track of the date and time counts, when your input isn't too large, using a dictionary is normally the easiest way to do it. When you encounter a line with a certain ID, find the entry corresponding to this ID in your dictionary, or add a new one if it's not there. This entry could itself be a dictionary using dates and times as keys and whose values are the counts of each encountered. I hope this answer will guide you on the way to a solution even though it contains no code.
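A sketch of the regex approach described above. The patterns are tailored to the two sample lines in the question and would need hardening for real data:

```python
import re

DATE_RE = re.compile(r"\b\d{2}[/-]\d{2}[/-]\d{4}\b")  # 25/01/1986, 25-01-1988
TIME_RE = re.compile(r"\b\d+\s+minutes?\b")           # "5 minutes"
ID_RE = re.compile(r"\bID\d+\b")

def tag_line(line):
    """Replace dates/times with placeholders and count them for the line's ID."""
    rec_id = ID_RE.search(line).group()
    n_dates = len(DATE_RE.findall(line))
    n_times = len(TIME_RE.findall(line))
    line = DATE_RE.sub("date", line)
    line = TIME_RE.sub("time", line)
    return line, rec_id, n_dates, n_times

line, rec_id, n_dates, n_times = tag_line(
    "ID674021384 25/01/1986 heloo hi thanks 5 minutes and 25-01-1988.")
print(line)                          # ID674021384 date heloo hi thanks time and date.
print(rec_id, n_dates, n_times)      # ID674021384 2 1
```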
1
0
0
In Python I need logic for the scenario below; I am using the split function for this. I have strings which contain input as shown below. "ID674021384 25/01/1986 heloo hi thanks 5 minutes and 25-01-1988." "ID909900000 25-01-1986 hello 10 minutes." The output should be as shown below, which replaces the date format with "date" and the time format with "time". "ID674021384 date hello hi thanks time date." "ID909900000 date hello time." And I also need a count of dates and times for each ID, as shown below: ID674021384 DATE:2 TIME:1 ID909900000 DATE:1 TIME:1
Find and replace logic in Python
0.066568
0
0
363
10,143,637
2012-04-13T15:19:00.000
1
1
1
0
python,generator
10,144,447
3
false
0
0
"Nested" iterators amount to the composition of the functions that the iterators implement, so in general they pose no particularly novel performance considerations. Note that because generators are lazy, they also tend to cut down on memory allocation as compared with repeatedly allocating one sequence to transform into another.
1
1
1
Okay, so I probably shouldn't be worrying about this anyway, but I've got some code that is meant to pass a (possibly very long, possibly very short) list of possibilities through a set of filters, maps, and other things, and I want to know if my implementation will perform well. As an example of the type of thing I want to do, consider this chain of operations: get all numbers from 1 to 100; keep only the even ones; square each number; generate all pairs [i, j] with i in the list above and j in [1, 2, 3, 4, 5]; keep only the pairs where i + j > 40. Now, after doing all this nonsense, I want to look through this set of pairs [i, j] for a pair which satisfies a certain condition. Usually, the solution is one of the first entries, in which case I don't even look at any of the others. Sometimes, however, I have to consume the entire list, and I don't find the answer and have to throw an error. I want to implement my "chain of operations" as a sequence of generators, i.e., each operation iterates through the items generated by the previous generator and "yields" its own output item by item (a la SICP streams). That way, if I never look at the last 300 entries of the output, they don't even get processed. I know that itertools provides things like imap and ifilter for doing many of the types of operations I would want to perform. My question is: will a series of nested generators be a major performance hit in the cases where I do have to iterate through all possibilities?
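The chain of operations from the question can be written as lazy generator expressions; the final condition here (first pair whose i is divisible by 3) is an arbitrary stand-in:

```python
nums = range(1, 101)
evens = (n for n in nums if n % 2 == 0)
squares = (n * n for n in evens)
pairs = ([i, j] for i in squares for j in [1, 2, 3, 4, 5])
kept = (p for p in pairs if p[0] + p[1] > 40)

# Stop at the first match; everything past it is never computed.
first = next(p for p in kept if p[0] % 3 == 0)
print(first)  # [36, 5]
```

Each stage only pulls items from the previous one on demand, which is exactly the short-circuiting behavior the question is after.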
How fast are nested python generators?
0.066568
0
0
1,253
10,143,825
2012-04-13T15:31:00.000
3
0
1
0
python,pylint,docstring,pep8,code-standards
10,146,456
2
false
0
0
As a pylint maintainer, I can tell you this is definitely a bug. @Jacxel: if you have trouble registering on logilab.org, you can still post the problem on the [email protected] mailing list. Thanks.
1
8
0
So I'm looking at some code and bringing it up to PEP 8 standard with the help of pylint, and I noticed that if I was using triple quotes for a print statement where the text went past 120 chars (we are allowing 120 instead of 79), pylint didn't complain. Is this a bug in pylint? Does it think it might be a comment and is more lenient with the length of lines, or does it not care how far over you go with strings in triple quotes because you may want to format them that way? For clarity: yes, pylint works normally in every other case of going over the line length.
Is docstring max line-length different to normal PEP8 standard?
0.291313
0
0
3,793
10,144,158
2012-04-13T15:51:00.000
1
0
0
1
python,zeromq
14,329,435
2
false
1
0
What I see as the only possibility is to use the DEALER-ROUTER combination. DEALER at the frontend, ROUTER at the backend. Every frontend server shall contain a DEALER socket for every backend server (for broadcast) and one DEALER socket on top connected to all the backend servers at once for the round-robin thing. Now let me explain why. You can't really use PUB-SUB in such a critical case, because that pattern can very easily drop messages silently, it does not queue. So in fact the message posted to PUB can arrive to any subset of SUB since it's (dis)connecting in the background. For this reason you need to simulate broadcast by looping over DEALER sockets assigned to all the background servers. It will queue messages if the backend part is not connected, but beware of the HWM. The only final solution is to use heartbeat to know when a backend is dead and destroy the socket assigned to it. A ROUTER socket at the background is a logical solution since you can asynchronously accept any number of requests and since it's a ROUTER socket it is super easy to send the response back to the frontend that requested the task. By having a single ROUTER in the background servers you can make it in a way that they are not even aware of the fact that there is a broadcast happening, they see everything as a direct request to them. Broadcasting is purely a frontend thing. The only issue with this solution might be that if your backend server is not fast enough, all the frontend servers may fill it up so that it reaches the HWM and starts dropping the packages. You can prevent this by having more threads/processes processing the messages from the ROUTER socket. zmq_proxy() is a useful function for this stuff. Hope this helps ;-)
1
4
0
I'm trying to design a ZeroMQ architecture for N front-end servers and M back-end workers, where the front-end servers send tasks to the back-end ones. The front-end servers have information about the back-end ones, but the back-end ones do not know about the front-ends. I have two types of tasks: one type should use round robin and go to just one back-end server, while the other should be broadcast to all back-end servers. I don't want a central broker, as it would be a single point of failure. For the first type of task the request/response pattern seems to be the right one, while for the second it would be the publisher/subscriber pattern. But what about a pattern combining the two? Is there any pattern that would allow me to select at send time whether I want to send a message to all back-end servers or just one random one? The solution I've come up with is to just use publisher/subscriber and prepend messages with a back-end server ID, or some magic value if it's addressed to all. However, this would create a lot of unnecessary traffic. Is there a cleaner and more efficient way to do it?
ZeroMQ selective pub/sub pattern?
0.099668
0
0
956
10,144,325
2012-04-13T16:03:00.000
1
0
0
0
python,map,hadoop,mapreduce,reduce
10,144,991
2
false
0
0
Yes, you need to use Hadoop Streaming if you want to write Python code for running MapReduce jobs.
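A minimal sketch of the Hadoop Streaming model the answer refers to: the mapper is an ordinary script that reads lines on stdin and writes tab-separated key/value pairs to stdout. The word-count logic here is just a stand-in for real email processing:

```python
def map_lines(lines):
    """Emit (word, 1) pairs, one per output line, word-count style."""
    for line in lines:
        for word in line.strip().split():
            yield f"{word.lower()}\t1"

# In a real job this would be `for pair in map_lines(sys.stdin): print(pair)`,
# with Hadoop feeding stdin and collecting stdout, e.g. launched via
# `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py ...`.
for pair in map_lines(["Re: meeting notes", "meeting moved"]):
    print(pair)
```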
1
2
1
I'm just starting out with Hadoop and writing some MapReduce jobs. I was looking for help on writing an MR job in Python that allows me to take some emails and put them into HDFS, so I can search the text or attachments of the emails. Thank you!
Emails and Map Reduce Job
0.099668
0
0
251
10,145,201
2012-04-13T17:06:00.000
1
0
0
0
python,sql,sql-server,sql-server-2005,oracle-sqldeveloper
10,145,890
2
true
0
0
First of all, what you need to do is profile the SQL Server to see what activity is happening: look for slow-running queries and CPU and memory bottlenecks. Also, you can include the timeout in the connection string like this: "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=SSPI;Connection Timeout=30"; and extend that number if you want. But remember that "timeout" doesn't mean the lifetime of the connection; it is just the time to wait while trying to establish a connection before terminating. I think this problem is more about database performance, or maybe a network issue.
1
0
0
We moved our SQL Server 2005 database to a new physical server, and since then it has been terminating any connection that persists for 30 seconds. We are experiencing this in Oracle SQL Developer and when connecting from Python using pyodbc. Everything worked perfectly before, and now Python returns this error after 30 seconds: ('08S01', '[08S01] [FreeTDS][SQL Server]Read from the server failed (20004) (SQLExecDirectW)')
SQL Server 2005 terminating connections after 30 sec
1.2
1
0
549
10,146,086
2012-04-13T18:11:00.000
0
0
0
0
python,django,checkbox,web
10,147,278
1
false
1
0
Your browser is what posts the value as 'on'. This is normal behavior for checkbox inputs without a value="blah" attribute set. If it is always posting as 'on', even when the checkbox isn't checked, then perhaps something on the browser side is setting this.
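A sketch of what this means server-side. `request.POST` is simulated with a plain dict here, since the browser behavior is what matters; note that Django's `forms.BooleanField` normally does this check for you during form cleaning.

```python
def checkbox_checked(post_data, name):
    # An unchecked checkbox is omitted from the POST body entirely;
    # a checked one without a value attribute is submitted as 'on'.
    return post_data.get(name) == 'on'

checked = checkbox_checked({'subscribe': 'on'}, 'subscribe')    # box was ticked
unchecked = checkbox_checked({}, 'subscribe')                   # box was not ticked
```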
1
0
0
I am finding trouble in posting the state of a checkbox in a Django form (Django v1.2). Here's the field in my model: subscribe = models.BooleanField(default=False, verbose_name="Subscribe") In the relevant template file: {{ form.subscribe }} This renders the checkbox as un-checked initially. But when I post the form (without touching anything else), django sends u'subscribe': [u'on'] in request.POST. That is, the response always contains u'subscribe': [u'on'] irrespective of whether the checkbox is checked or not. When the checkbox is not checked, the <input> tag in template is rendered as <input type="checkbox" name="subscribe" id="id_subscribe" /> And, when the checkbox is checked, it is rendered as <input type="checkbox" name="subscribe" id="id_subscribe" checked="checked" /> Am I missing anything here?
Trouble posting un-checked checkbox value in django
0
0
0
1,133
10,146,087
2012-04-13T18:11:00.000
0
0
0
0
django,mongodb,python-imaging-library,sorl-thumbnail
11,557,675
1
true
0
0
Only clear the collection if low disk usage is more important to you than fast access times. The downsides are that your users will all hit un-cached thumbs simultaneously (And simultaneously begin recomputing them). Just run python manage.py thumbnail cleanup This cleans up the Key Value Store from stale cache. It removes references to images that do not exist and thumbnail references and their actual files for images that do not exist. It removes thumbnails for unknown images.
1
2
0
I've extended sorl-thumbnail's KVStoreBase class, and made a key-value backend that uses a single MongoDB collection. This was done in order to avoid installing a discrete key-value store (e.g. Redis). Should I clear the collection every once in a while? What are the downsides?
Using sorl-thumbnail with MongoDB storage
1.2
1
0
298
10,151,806
2012-04-14T07:16:00.000
5
0
0
0
python,python-c-api
10,152,614
1
true
0
1
Yes, use tp_as_mapping instead. Its mp_subscript takes a PyObject *, so you can use anything as the index/key. To understand how they relate, have a look at the source of PyObject_GetItem(), which (as the docs say) is the equivalent of the Python o[key] expression. You will see that it first tries tp_as_mapping, and if that's not there and the key is an int, it tries tp_as_sequence.
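At the Python level, the behavior that the mapping protocol enables looks like this. This is a toy class of my own, not the NumPy implementation; it just shows that `obj[a, b]` delivers the whole tuple to `__getitem__`, which is exactly what `mp_subscript` receives in C.

```python
class Grid:
    # A toy 2-D container whose __getitem__ receives the whole tuple key,
    # the Python-level counterpart of tp_as_mapping's mp_subscript.
    def __init__(self, rows, cols):
        self._data = {(r, c): 0 for r in range(rows) for c in range(cols)}

    def __getitem__(self, key):
        row, col = key               # key arrives as the tuple (row, col)
        return self._data[(row, col)]

    def __setitem__(self, key, value):
        self._data[tuple(key)] = value

g = Grid(2, 2)
g[0, 1] = 7                          # sugar for g[(0, 1)] = 7
value = g[0, 1]
```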
1
3
0
Is it possible to define a class with a __getitem__ that takes a tuple argument using the Python C-API? The sq_item member of the tp_as_sequence member of a PyTypeObject must be a ssizeargfunc, so I don't see how to do it. (But I assume that the NumPy ndarray does it.)
__getitem__ method with tuple argument using Python C-API
1.2
0
0
853
10,152,055
2012-04-14T08:01:00.000
3
0
0
1
python,google-app-engine
10,152,181
1
true
1
0
How about writing the XML data to the blobstore and then write a handler that uses send_blob to download to your local file system? You can use the files API to write to the blobstore from you application.
1
0
0
I have an application I am developing on top of GAE, using Python APIs. I am using the local development server right now. The application involves parsing large block of XML data received from outside service. So the question is - is there an easy way to get this XML data exported out of the GAE application - e.g., in regular app I would just write it to a temp file, but in GAE app I can not do that. So what could I do instead? I can not easily run all the code that produces the service call outside of GAE since it uses some GAE functions to create the call, but it would be much easier if I could take the XML result out and develop/test the parser part outside and then put it back to GAE app. I tried to log it using logging and then extract it from the console, but when XML is getting big it doesn't work well. I know there's bulk data import/export APIs but seems to be an overkill for extracting just this one piece of information to write it to data store and then export the whole store. So how to do it in the best way?
Getting a piece of information from development GAE server to local filesystem
1.2
0
1
66
10,152,317
2012-04-14T08:55:00.000
0
1
0
0
python
10,152,343
2
false
0
0
You should open a serial (COM) port to the GSM modem from Python and use AT commands; search Wikipedia for those, they are what is used to communicate with GSM devices.
1
0
0
How can I call a phone number using GSM and Python? I have tried much software on Windows, but I can't find how to do it using Python.
How to call a phone number using gsm & python
0
0
0
1,039
10,152,721
2012-04-14T10:03:00.000
3
0
0
0
python,flask
10,152,732
3
false
1
0
Flask is great for all kind of projects. As long as you don't need Django's ORM (and all batteries like admin's pages), Flask is the right choice.
2
3
0
I'm about to try Flask framework and look if it fits my needs. I worked with Django, it is cool, but I want to try Flask. I have one small and maybe one medium-sized project and wanted to ask if Flask is the right framework to use for those? Do you guys have experience by running medium-sized (or even large-scale) projects with Flask? Would be nice to hear facts and not just things like "I like Django because it is cool" or "I like Flask just because it is small" :) Anyway, I will try to play with it, just from curiosity.
Flask framework for a small and medium size projects
0.197375
0
0
1,818
10,152,721
2012-04-14T10:03:00.000
0
0
0
0
python,flask
70,235,186
3
false
1
0
Normally I use Flask. But for rapid development I just wanted to build a project with Django; however, I didn't realize it would be a burden. When I don't want to use its default authentication mechanism and instead go with what I have in mind, it is hard to get rid of the built-in structure. On the other hand, using Flask is way more flexible and maintainable. As I said, I just wanted to use it for faster development, but it took a while to accept the stiffness of Django. So it is not my personal favorite. Just wanted to write this even though it is an old post.
2
3
0
I'm about to try Flask framework and look if it fits my needs. I worked with Django, it is cool, but I want to try Flask. I have one small and maybe one medium-sized project and wanted to ask if Flask is the right framework to use for those? Do you guys have experience by running medium-sized (or even large-scale) projects with Flask? Would be nice to hear facts and not just things like "I like Django because it is cool" or "I like Flask just because it is small" :) Anyway, I will try to play with it, just from curiosity.
Flask framework for a small and medium size projects
0
0
0
1,818
10,155,542
2012-04-14T17:04:00.000
0
0
0
0
python,cluster-analysis,dbscan
10,155,633
1
false
0
0
DBSCAN's parameters are pretty often hard to estimate. Did you think about the OPTICS algorithm? In that case you only need min_samples, which would correspond to the minimal cluster size. Otherwise, for DBSCAN I've done it in the past by trial and error: try some values and see what happens. A general rule to follow is that if your dataset is noisy, you should use a larger min_samples, and the right values are also correlated with the number of dimensions (10 in this case).
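One common heuristic for picking epsilon is the k-distance plot: compute each point's distance to its min_samples-th nearest neighbor, sort the values, and look for the "knee" where they jump. A brute-force sketch (O(n²), for illustration only; for 14k points you would use a KD-tree or scikit-learn's neighbor utilities):

```python
import math

def kth_neighbor_distances(points, k):
    # For each point, the distance to its k-th nearest neighbour.
    # Sorted ascending, this is the curve you inspect for a knee.
    result = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        result.append(dists[k - 1])
    return sorted(result)

points = [(0, 0), (0, 1), (1, 0), (10, 10)]
curve = kth_neighbor_distances(points, k=1)
# The jump between the dense cluster (distances around 1) and the outlier
# (around 13.5) suggests choosing an eps somewhere between them.
```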
1
0
1
I have written code in Python to implement the DBSCAN clustering algorithm. My dataset consists of 14k users, with each user represented by 10 features. I am unable to decide what exactly to use as the values of min_samples and epsilon as input. How should I decide that? The similarity measure is Euclidean distance (hence it becomes even tougher to decide). Any pointers?
Deciding input values to DBSCAN algorithm
0
0
0
2,333
10,156,386
2012-04-14T18:56:00.000
6
0
1
0
python,syntax-highlighting,python-idle
10,156,434
4
false
0
0
I usually have to save the file as .py before IDLE will do any syntax highlighting at all. Not sure why it would highlight for a few and then stop though. I've never had that happen.
4
6
0
Using IDLE and Python version 2.7.3. Only when I start a new file it highlights for a few lines and then just stops after I press F5. So all my text becomes plain black. If there are equally good/better command line and editor combinations out there, you may always suggest them.
IDLE won't highlight my syntax
1
0
0
21,324
10,156,386
2012-04-14T18:56:00.000
17
0
1
0
python,syntax-highlighting,python-idle
17,601,763
4
true
0
0
This happened to me too. Save it as .py (manually type .py in the document name), and the highlighting will come back.
4
6
0
Using IDLE and Python version 2.7.3. Only when I start a new file it highlights for a few lines and then just stops after I press F5. So all my text becomes plain black. If there are equally good/better command line and editor combinations out there, you may always suggest them.
IDLE won't highlight my syntax
1.2
0
0
21,324
10,156,386
2012-04-14T18:56:00.000
6
0
1
0
python,syntax-highlighting,python-idle
22,746,427
4
false
0
0
Check the key binding for the toggle-auto-coloring option under Options -> Configure IDLE -> Keys -> Custom Key Bindings. The default is Ctrl+/. This should allow you to turn the syntax highlighting back on. (You can't toggle it off though, heh) Works for me on both IDLE 2.7 and IDLE 3.3.3.
4
6
0
Using IDLE and Python version 2.7.3. Only when I start a new file it highlights for a few lines and then just stops after I press F5. So all my text becomes plain black. If there are equally good/better command line and editor combinations out there, you may always suggest them.
IDLE won't highlight my syntax
1
0
0
21,324
10,156,386
2012-04-14T18:56:00.000
2
0
1
0
python,syntax-highlighting,python-idle
10,157,208
4
false
0
0
Is this under Windows? Is it possible your file association for Python files has changed? (not quite sure why/how this could happen, but perhaps something worth checking)
4
6
0
Using IDLE and Python version 2.7.3. Only when I start a new file it highlights for a few lines and then just stops after I press F5. So all my text becomes plain black. If there are equally good/better command line and editor combinations out there, you may always suggest them.
IDLE won't highlight my syntax
0.099668
0
0
21,324
10,157,380
2012-04-14T21:07:00.000
2
0
0
0
python,mysql,mysql-python
10,157,409
3
false
0
0
Go to cPanel and add the wildcard % in the remote MySQL connection options (cPanel > Remote MySQL).
1
0
0
I am currently writing a script in Python which uploads data to a localhost MySql DB. I am now looking to relocate this MySql DB to a remote server with a static IP address. I have a web hosting facility but this only allows clients to connect to the MySql DB if I specify the domain / IP address from which clients will connect. My Python script will be ran on a number of computers that will connect via a mobile broadband dongle and therefore, the IP addresses will vary on a day-to-day basis as the IP address is allocated dynamically. Any suggestions on how to overcome this issue either with my web hosting facility (cPanel) or alternatively, any suggestions on MySql hosting services that allow remote access from any IP addresses (assuming they successfully authenticate with passwords etc...) Would SSH possibly address this and allow me to transmit data?
Remote Access to MySql DB (Hosting Options)
0.132549
1
0
1,360
10,158,096
2012-04-14T23:05:00.000
1
1
0
1
python,image,nginx
10,165,928
1
false
1
0
Yes, set the proxy_max_temp_file_size to zero, or some other reasonably small value. Another option (which might be a better choice) is to set the proxy_temp_path to faster storage so that nginx can do a slightly better job of insulating the application from buggy or malicious hosts.
1
1
0
My Python application sits behind an Nginx instance. When I upload an image, which is one of the purposes of my app, I notice that nginx first saves the image in the filesystem (observed with 'watch ls -l /tmp') and then hands it over to the app. Can I configure Nginx to keep image POSTs in memory? My intent is to avoid touching the slow filesystem (the server runs on an embedded device).
Nginx: Speeding up Image Upload?
0.197375
0
0
673
10,158,613
2012-04-15T00:37:00.000
0
0
0
0
python,pandas
12,068,757
3
false
0
0
I had the same error. I did not build pandas myself, so I thought I should not get this error as mentioned on the pandas site. So I was confused about how to resolve it. The pandas site says that matplotlib is an optional dependency, so I didn't install it initially. But interestingly, after installing matplotlib the error disappeared. I am not sure what effect it had; it found something!
2
4
1
I had pandas 0.71 before today. I tried to update, and I simply ran the .exe file supplied by the website. Now when I try import pandas it gives me an error: ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory. I am new to Python and pandas in general. Anything will help. Thanks.
Importing Confusion Pandas
0
0
0
2,351
10,158,613
2012-04-15T00:37:00.000
1
0
0
0
python,pandas
11,630,790
3
false
0
0
Had the same issue. Resolved by checking dependencies - make sure you have numpy > 1.6.1 and python-dateutil > 1.5 installed.
2
4
1
I had pandas 0.71 before today. I tried to update, and I simply ran the .exe file supplied by the website. Now when I try import pandas it gives me an error: ImportError: C extensions not built: if you installed already verify that you are not importing from the source directory. I am new to Python and pandas in general. Anything will help. Thanks.
Importing Confusion Pandas
0.066568
0
0
2,351
10,159,430
2012-04-15T04:11:00.000
0
0
1
1
python,c
10,159,496
1
false
0
0
There are a lot of important details to your scenario that aren't mentioned, but working on the assumption that you can't write a locking mechanism into the C program and then use it in the Python program (for example, you're using an existing application on your system), you could look into os.stat and check the last modified time, st_mtime. That of course relies on you knowing that an m_time older than some threshold means the C program is done writing and the file won't be touched again. If the file handle is kept open in the C program at all times and written to occasionally, then there are not a lot of easy options for knowing when it is and isn't being written to.
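A sketch of the polling idea described above. The interval and check count are arbitrary and should be tuned to how often the C program flushes; the demo at the bottom runs it on a file nobody is writing to, and note this is a heuristic, not a guarantee.

```python
import os
import tempfile
import time

def is_stable(path, interval=0.5, checks=2):
    # Consider the file 'done' when its size and mtime stop changing
    # across consecutive samples spaced `interval` seconds apart.
    last = None
    for _ in range(checks):
        st = os.stat(path)
        snapshot = (st.st_mtime, st.st_size)
        if last is not None and snapshot != last:
            return False
        last = snapshot
        time.sleep(interval)
    return True

# Demo: a freshly written file that is no longer being appended to.
fd, path = tempfile.mkstemp()
os.write(fd, b'appended data')
os.close(fd)
stable = is_stable(path, interval=0.05)
os.unlink(path)
```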
1
0
0
I have a C program running in a thread that appends some data to a file. I want to run a Python thread that will copy that same file (which the C thread is writing) after some time interval. Is there any safe way to do this? I am doing this on Linux.
Python thread waiting for copying the file
0
0
0
115
10,162,707
2012-04-15T14:04:00.000
1
0
1
0
python,jupyter-notebook,ipython,jupyter
64,937,562
15
false
0
0
The best way now is to use the "Quit" button that is just to the left of the "Logout" button. I have to admit that I do not understand the utility of the Logout button. However, I am glad that they have added the exceedingly useful Quit button.
5
142
0
How to close IPython Notebook properly? Currently, I just close the browser tabs and then use Ctrl+C in the terminal. Unfortunately, neither exit() nor ticking Kill kernel upon exit does help (they do kill the kernel they but don't exit the iPython).
How to close IPython Notebook properly?
0.013333
0
0
228,853
10,162,707
2012-04-15T14:04:00.000
0
0
1
0
python,jupyter-notebook,ipython,jupyter
56,401,587
15
false
0
0
Step 1: in the shell press Ctrl+Z (or Ctrl+C). Step 2: close the web browser.
5
142
0
How to close IPython Notebook properly? Currently, I just close the browser tabs and then use Ctrl+C in the terminal. Unfortunately, neither exit() nor ticking Kill kernel upon exit does help (they do kill the kernel they but don't exit the iPython).
How to close IPython Notebook properly?
0
0
0
228,853
10,162,707
2012-04-15T14:04:00.000
3
0
1
0
python,jupyter-notebook,ipython,jupyter
45,340,171
15
false
0
0
Actually, I believe there's a cleaner way than killing the process(es) using kill or task manager. In the Jupyter Notebook Dashboard (the browser interface you see when you first launch 'jupyter notebook'), browse to the location of notebook files you have closed in the browser, but whose kernels may still be running. iPython Notebook files appear with a book icon, shown in green if it has a running kernel, or gray if the kernel is not running. Just select the tick box next to the running file, then click on the Shutdown button that appears above it. This will properly shut down the kernel associated with that specific notebook.
5
142
0
How to close IPython Notebook properly? Currently, I just close the browser tabs and then use Ctrl+C in the terminal. Unfortunately, neither exit() nor ticking Kill kernel upon exit does help (they do kill the kernel they but don't exit the iPython).
How to close IPython Notebook properly?
0.039979
0
0
228,853
10,162,707
2012-04-15T14:04:00.000
0
0
1
0
python,jupyter-notebook,ipython,jupyter
35,571,575
15
false
0
0
In the browser session you can also go to Kernel and then click Restart and Clear Output.
5
142
0
How to close IPython Notebook properly? Currently, I just close the browser tabs and then use Ctrl+C in the terminal. Unfortunately, neither exit() nor ticking Kill kernel upon exit does help (they do kill the kernel they but don't exit the iPython).
How to close IPython Notebook properly?
0
0
0
228,853
10,162,707
2012-04-15T14:04:00.000
5
0
1
0
python,jupyter-notebook,ipython,jupyter
17,741,797
15
false
0
0
Try killing the pythonw process from the Task Manager (if Windows) if nothing else works.
5
142
0
How to close IPython Notebook properly? Currently, I just close the browser tabs and then use Ctrl+C in the terminal. Unfortunately, neither exit() nor ticking Kill kernel upon exit does help (they do kill the kernel they but don't exit the iPython).
How to close IPython Notebook properly?
0.066568
0
0
228,853
10,163,877
2012-04-15T16:35:00.000
1
1
0
1
python,bash,ubuntu,file-monitoring
10,176,476
3
true
0
0
One technique I use works with FTP. You issue a command to the FTP server to transfer the file to an auxiliary directory. Once the command completes, you send a second command to the server, this time telling it to rename the file from the aux directory to the final destination directory. If you're using inotify or polling the directory, the filename won't appear until the rename has completed; thus, you're guaranteed that the file is complete. I'm not familiar with rsync, so I don't know if it has a similar rename capability.
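The same principle applies to the rsync-then-mv plan in the question: stage the file somewhere else on the same filesystem, then rename it into the watched directory, since a rename within one filesystem is atomic and inotify never sees a partial file. (rsync itself, as I understand it, normally writes to a hidden temporary name and renames on completion, which already gives you most of this when source and destination are on the same mount.) A local sketch of the pattern:

```python
import os
import tempfile

# Staging and watched directories on the same filesystem (both under /tmp here).
stage_dir = tempfile.mkdtemp(prefix='incoming_')
watch_dir = tempfile.mkdtemp(prefix='encode_')

staged = os.path.join(stage_dir, 'movie.mkv')
with open(staged, 'wb') as f:
    f.write(b'fake video payload')   # stands in for the rsync'd data

final = os.path.join(watch_dir, 'movie.mkv')
os.replace(staged, final)            # atomic rename; the equivalent of 'mv'

arrived = os.path.exists(final) and not os.path.exists(staged)
```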
1
6
0
I have a video encoding script that I would like to run as soon as a file is moved into a specific directory. If I use something like inotify, how do I ensure that the file isn't encoded until it is done moving? I've considered doing something like: Copy (rsync) file into a temporary directory. Once finished, move (simple 'mv') into the encode directory. Have my script monitor the encode directory. However, how do I get step #2 to work properly and only run once #1 is complete? I am using Ubuntu Server 11.10 and I'd like to use bash, but I could be persuaded to use Python if that'd simplify issues. I am not "downloading" files into this directory, per se; rather I will be using rsync the vast majority of the time. Additionally, this Ubuntu Server is running on a VM. I have my main file storage mounted via NFS from a FreeBSD server.
Move file to another directory once it is done transferring
1.2
0
0
1,292
10,165,046
2012-04-15T18:56:00.000
0
0
0
0
python,django,forms,django-forms
10,165,663
2
false
1
0
The short answer is yes. You would have to be careful with your template and views. Can you please share your code... view, django models and template. Are you using model forms? Why are you keeping them as separate models (tables)? My suggestion is if you don't need to keep the models separate, edit the Product model to include Pictures. Then your form will suit your needs nicely. Hope this helps. If not, share code.
1
2
0
I have this table Products: size color etc and another table Pictures: product_id picture and I have generated form from Products table, but I also need there field for adding a picture to that product. Is it possible to add a field to the product generated form for a picture? Thanks in advance.
django: add form field to generated form from another table
0
0
0
1,987
10,165,342
2012-04-15T19:34:00.000
0
1
0
0
python,buildout
10,168,036
2
false
0
0
Putting it in find-links should work. I've done that in the past. You have to make sure the link is formatted correctly, like any other Python egg.
1
0
0
zc.recipe.egg allows you to install any egg and its scripts with buildout. However, zc.recipe.egg relies on find-links and index behavior, inherited from setuptools I guess. It wants an egg server / HTML page to scan. What if I just want to point zc.recipe.egg to a direct egg download URL, how would I do that? It looks like putting it in find-links is a no-go.
Buildout and zc.recipe.egg - specifying egg download URL directly?
0
0
0
748
10,165,457
2012-04-15T19:49:00.000
11
0
1
0
python,python-3.x
10,165,484
7
false
0
0
L[3] is 5, L[L[3]] is L[5] is 0, and 0 - 1 is -1.
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
1
0
0
203
10,165,457
2012-04-15T19:49:00.000
0
0
1
0
python,python-3.x
10,166,018
7
false
0
0
Here is the answer broken down in steps: L = [ 8, 6, 7, 5, 3, 0, 9 ]. L[3] = 5, so L[L[3]] = L[5] = 0, and therefore L[L[3]] - 1 = -1.
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
0
0
0
203
10,165,457
2012-04-15T19:49:00.000
0
0
1
0
python,python-3.x
10,165,499
7
false
0
0
It says: get the number at L[3], which is 5, then use that, so get the number at L[5], which is 0, then subtract 1... ta dah!
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
0
0
0
203
10,165,457
2012-04-15T19:49:00.000
0
0
1
0
python,python-3.x
10,165,497
7
false
0
0
L[2] = 7 and the list has only 7 elements, that is, indices 0..6. So accessing L[L[2]] = L[7] is out of range...
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
0
0
0
203
10,165,457
2012-04-15T19:49:00.000
0
0
1
0
python,python-3.x
10,165,496
7
false
0
0
Let's walk through the answer by showing the intermediate results, even though they are not displayed: L[3] is 5, L[L[3]] is therefore L[5] = 0, and L[L[3]] - 1 = 0 - 1 = -1
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
0
0
0
203
10,165,457
2012-04-15T19:49:00.000
4
0
1
0
python,python-3.x
10,165,510
7
false
0
0
These sorts of problems are best worked one step at a time, from the inside out. So, L[3] gives you 5. Use this value (5) as the index into the list again, i.e., L[5], and this gives you 0. Finally, 0 - 1 = -1, your answer.
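The inside-out evaluation can be spelled out step by step in code, including why the asker's L[L[2]] attempt raises an error:

```python
L = [8, 6, 7, 5, 3, 0, 9]

inner = L[3]          # 5
outer = L[inner]      # L[5] -> 0
result = outer - 1    # -1

# Why L[L[2]] fails: L[2] is 7, but valid indices are only 0..6.
try:
    L[L[2]]
    out_of_range = False
except IndexError:
    out_of_range = True
```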
6
3
0
I'm studying for a computer science final right now and I do not understand how this works at all: L = [ 8, 6, 7, 5, 3, 0, 9 ] 30. L[L[3]] - 1 is? (A) * -1 (B) an error (C) 8 (D) 4 (E) 7 The answer is -1. So to test how this works I just did L[L[3]] and the answer is 0, then I did L[L[4]] and that equals 5, then I did L[L[1]] and that gave a 9, but if I do L[L[2]] I get a list index out of range error. I'm beyond confused here. Can anyone help, please?
What is L[L[3]] in a list?
0.113791
0
0
203
10,167,900
2012-04-16T01:57:00.000
2
1
1
0
java,php,python,constructor
10,167,913
3
true
0
0
Both of those languages instantiate it before calling the constructor. In Java, you have access to this, in Python self. Also, in Java, it's like a method, except with no return type. In Python, the syntax is exactly that of a method (__init__).
2
1
0
PHP uses __construct() to set properties on a newly created object. From what I understand, it's not really a constructor, but a method. Why? Also, for less... inconsistent languages like Java or Python, does the object get instantiated before or after the constructor is called? And how is this different from the PHP way? Thanks!
Why is the PHP constructor a method?
1.2
0
0
244
10,167,900
2012-04-16T01:57:00.000
0
1
1
0
java,php,python,constructor
10,169,430
3
false
0
0
In every object-oriented language (that I know of; I'm hardly an expert in all of them), the constructor is called after the object is created, to initialise the contents of the object. No code in the constructor creates the object, or can in any way influence the creation process[1]. (Note I don't refer to memory; in languages like C++ and Java "the object has been created" means the memory its fields occupy has been allocated, whereas in Python "the object has been created" means there is a dictionary that will hold the attributes of the object once they are assigned.) In most OO languages that I know of, constructors also have extremely similar syntax to methods, and I don't see any conceptual difficulty in thinking about them as methods in most senses (in Python the __init__ method is literally a method in every sense; there's just a protocol by which the runtime system invokes it on new objects after they're created). [1] Python additionally has a feature that does let you control the object creation process; but you don't do it with the __init__ method (the special method that most closely corresponds to constructors from Java/PHP), you do it with __new__.
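A small Python demonstration of that ordering, using both hooks (an illustrative toy class, not anything from the question):

```python
class Tracked:
    events = []

    def __new__(cls):
        # Runs first: the instance is created (allocated) here.
        cls.events.append('new')
        return super().__new__(cls)

    def __init__(self):
        # Runs second, on the already-created object bound to self.
        Tracked.events.append('init')
        self.ready = True

t = Tracked()
```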
2
1
0
PHP uses __construct() to set properties on a newly created object. From what I understand, it's not really a constructor, but a method. Why? Also, for less... inconsistent languages like Java or Python, does the object get instantiated before or after the constructor is called? And how is this different from the PHP way? Thanks!
Why is the PHP constructor a method?
0
0
0
244
10,168,761
2012-04-16T04:34:00.000
1
0
0
0
python,django,convention,directory-structure,django-1.4
10,169,482
3
false
1
0
STATIC_ROOT is just a file path where the staticfiles contrib app will collect and deposit all static files. It is a location to collect items, that's all. The key thing is that this location is temporary storage and is used mainly when packaging your app for deployment. The staticfiles app searches for items to collect from any directory called static in any apps that are listed in INSTALLED_APPS and in addition any extra file path locations listed in STATICFILES_DIRS. For my projects I create a deploy directory in which I create a www folder that I use for static files, and various other files used only when deploying. This directory is at the top level of the project. You can point the variable to any location to which your user has write permissions, it doesn't need to be in the project directory.
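A sketch of the corresponding settings. All paths here are assumptions mirroring the deploy/www layout described above; in a real settings.py you would derive the base directory from __file__ rather than hard-coding it.

```python
from pathlib import Path

# Hypothetical project root (an assumption for this sketch).
BASE_DIR = Path('/srv/myproject')

# Where 'collectstatic' deposits everything: temporary, deploy-time storage.
STATIC_ROOT = BASE_DIR / 'deploy' / 'www' / 'static'

# Extra source locations searched in addition to each app's 'static' dir.
STATICFILES_DIRS = [BASE_DIR / 'myproject' / 'static']
```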
1
7
0
This is the new project structure (from the Django 1.4 release notes). myproject |-- manage.py |-- myproject | |-- __init__.py | |-- settings.py | |-- urls.py | `-- wsgi.py `-- polls |-- __init__.py |-- models.py |-- tests.py `-- views.py What I am not sure about is whether I should point STATIC_ROOT to myproject/myproject/static/ (together with settings.py, urls.py...) OR The top-level directory myproject/static (next to myproject, myapp1, myapp2)?
Static folders structure in Django 1.4?
0.066568
0
0
3,453
10,169,290
2012-04-16T05:55:00.000
1
1
1
0
python,git,continuous-integration,buildbot
10,268,147
3
false
0
0
Currently GitPoller can only watch a single branch at a time. However, you can have as many GitPollers as you want.
1
6
0
I'm looking for a way to have a GitPoller change source watch all branches instead of just one. For now, either I specify branch='some branch' in the GitPoller constructor, or it defaults to master. Even better would be to be able to specify some ref pattern to watch. Is that something one does already? Or does one need to code another kind of GitPoller? Thanks.
How to have a buildbot GitPoller change source watch all branches?
0.066568
0
0
2,319
10,169,500
2012-04-16T06:19:00.000
1
0
0
0
python,url,web,siteminder
19,700,519
3
false
0
0
Agree with Martin - you need to just replicate what the browser does. SiteMinder will pass you a token once you are successfully authenticated. I have to do this as well and will post once I find a good way.
1
3
0
I am trying to access and parse a website at work using Python. The site's authorization is done via SiteMinder, so the usual urllib/urllib2 user/password approach does not work. Does anyone have an idea how to do that? Thanks, NoamM
Use Python/urllib to access web sites with "siteminder" authentication?
0.066568
0
1
2,741
10,169,574
2012-04-16T06:26:00.000
9
0
0
1
python,django,google-app-engine
10,192,419
1
false
1
0
Are you using appstats? It looks like this can happen when appstats is recording state about your app, especially if you're storing lots of data on the stack. It isn't harmful, but you won't be able to see everything when inspecting calls in appstats.
1
9
0
I got this error while rendering Google App Engine code. Does anybody have knowledge about this error?
Full proto too large to save, cleared variables
1
0
0
1,244
10,169,949
2012-04-16T07:06:00.000
0
0
0
0
python,string,floating-point,xls,xlrd
10,169,963
2
false
0
0
Did you try using int(phoneNumberVar) or in your case int(8889997777.0)?
2
2
0
I am trying to read a phone number field from an xls file using xlrd (Python), but I always get a float, e.g. I get the phone number as 8889997777.0. How can I get rid of the float formatting and convert it to a string, so I can store it in my local MongoDB from Python as a regular phone number, e.g. 8889997777?
python xlrd reading phone number from xls becomes float
0
1
0
2,371
10,169,949
2012-04-16T07:06:00.000
4
0
0
0
python,string,floating-point,xls,xlrd
10,170,261
2
false
0
0
You say: python xlrd reading phone number from xls becomes float. This is incorrect. It is already a float inside your xls file; xlrd reports exactly what it finds. You can use str(int(some_float_value)) to do what you want to do.
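A minimal sketch of what this answer suggests (the function name here is made up for illustration): the cell already comes back from xlrd as a float, so int() drops the fraction and str() turns it into a plain string suitable for storing in MongoDB.

```python
# The cell value arrives from xlrd as a float; int() drops the fraction
# and str() produces the plain digit string the asker wants to store.
def phone_to_string(cell_value):
    return str(int(cell_value))

print(phone_to_string(8889997777.0))  # -> 8889997777
```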
2
2
0
I am trying to read a phone number field from an xls file using xlrd (Python), but I always get a float, e.g. I get the phone number as 8889997777.0. How can I get rid of the floating-point format and convert it to a string, so I can store it in my local MongoDB within Python as a regular phone number string, e.g. 8889997777?
python xlrd reading phone number from xls becomes float
0.379949
1
0
2,371
10,171,249
2012-04-16T09:00:00.000
1
0
0
0
python,text,grid,tkinter,row
10,172,879
2
false
0
1
You can put multiple items in one cell but it is highly unusual, may have surprising behavior, and there are better ways to accomplish the same effect. For example, the grid is invisible so you can have as many rows as you want to achieve any look you can imagine. Also, the definition of "item" is pretty loose -- you can create a frame, and in that frame put two labels, and that frame can go in a single row using grid to give the appearance of two lines of text in a single grid row. You can also use a text widget which lets you put as many lines of text that you want.
1
0
0
Does anybody know if it's possible to put two lines of text in a single row using grid in TKinter? If I make the font small enough, can I distribute the text in two lines?
Two lines of text in a single grid row
0.099668
0
0
1,178
10,178,313
2012-04-16T16:52:00.000
1
0
0
0
python,django,virtualenv
10,178,380
1
true
1
0
Are you running ./manage.py shell or python manage.py shell? It can make a difference. Using the ./ version uses the shebang line for the interpreter and normally results in using the system-level interpreter. As you've seen yourself, running python uses the virtualenv's version, so python manage.py shell should as well.
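A quick way to check which interpreter a given invocation is using, as this answer suggests, is to print sys.executable from inside the shell (a minimal diagnostic sketch):

```python
import sys

# Run this inside `python manage.py shell` and again inside `./manage.py shell`;
# if the two paths differ, the shebang line is picking up the system interpreter.
print(sys.executable)  # path of the interpreter actually running
print(sys.prefix)      # points into the virtualenv when one is active
```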
1
0
0
I have created a virtualenv for developing in Django but Django is not using the correct instance of Python. Here's what I've found out: C:\Python27 is not in my path. If I run python from a command prompt it says it's not recognized When I start up the virtualenv, run python and check sys.executable it does point to the virtualenv's instance of python and sys.path is also pointing to the correct place When I run manage.py shell from within the virtualenv and check the sys.executable and sys.path they are both pointing to the C:\python27 installation Any ideas as to what's going on?
Django not using correct Python instance from within virtualenv
1.2
0
0
332
10,178,875
2012-04-16T17:36:00.000
0
0
0
0
python,qt4,pyqt4,markdown,rtf
24,965,112
2
false
0
1
I came here because I'm looking for a solution for the same task. Here is what I would (or hopefully will) try: Subclass QTextEdit, which can display both plain and rich text. supply two string properties, one containing Markdown source, the other generated HTML. For entering "edit mode" (however your UI will handle this) self.setText(self.markdown) self.setReadOnly(False) For leaving "edit mode": self.markdown = self.toPlainText() self.toHtml() # convert self.markdown to self.html # don't know yet how to achieve that self.setHtml(self.html) self.setReadOnly(True) For displaying the HTML one can use a CSS stylesheet. As UI interface I could imagine: clicking on the readonly display mode switches to edit mode, [Ctrl]-[Enter] triggers HTML generation.
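The self.toHtml() step above is the open question in this answer; a real implementation would likely use the markdown package, but a tiny regex-based subset is enough to sketch the idea. The function name and the supported syntax here are assumptions, not part of the original answer:

```python
import re

def markdown_to_html(text):
    # Handle only bold, italic and inline code -- a stand-in for a real
    # converter such as the `markdown` package.
    html = re.sub(r'\*\*(.+?)\*\*', r'<b>\1</b>', text)
    html = re.sub(r'\*(.+?)\*', r'<i>\1</i>', html)
    html = re.sub(r'`(.+?)`', r'<code>\1</code>', html)
    return html

print(markdown_to_html('**bold**, *italic* and `code`'))
# -> <b>bold</b>, <i>italic</i> and <code>code</code>
```

The resulting HTML string would then be passed to self.setHtml() for the read-only display mode.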
1
0
0
I'd like to make a small desktop editor to take notes, that uses markdowns to format text quickly. The application should transcribe markdowns instantaneously or after clicking on a button. For this I'd like to use Qt4 and Python. What, in your opinion, is the most efficient way to proceed? In the case the rich text is rendered after pressing a button, I suppose I could use QTextEdit widget for the edit-mode, but what to use to display the rich text? I want this to look good. Should I render the text in HTML? Or something else? Please advise.
How to make an editor using markdowns with Qt4 and Python?
0
0
0
1,474
10,180,941
2012-04-16T20:09:00.000
0
0
1
0
python,macos,emacs,macports,enthought
10,421,030
2
false
0
0
It would be cleaner to install the additional packages once more for the Enthought Python. Trying to reuse packages from another installation seems neither clean nor safe to me.
1
2
0
I have two installations of Python 2.7.2 -- from MacPorts and Enthought -- on my Mac. I use the Enthought Python as the primary one; however, the MacPorts distribution has several additional packages like pymacs, rope etc., which I would like to make available to the Enthought Python. (I'm actually trying to use Emacs w/ Enthought Python, but also make use of the MacPorts-installed Rope, Pymacs for code completion in Emacs). Is there a clean way to make the MacPorts packages available to the Enthought Python without breaking anything?
Using MacPorts-installed Python packages with Enthought(or some other) Python on OS X?
0
0
0
556
10,181,609
2012-04-16T20:57:00.000
1
0
0
0
python,django,apache,debugging
10,182,151
1
true
1
0
I find it easiest to setup a whole extra subdomain for testing different versions of a django site. It really is probably bad form that django doesn't plain out give examples of how to do this as it leads to people doing odd things. My setup is nginx/ uwsgi emperor so posting config file examples prob won't help you much.
1
1
0
I have a production server which uses Apache / FastCGI / DJango to serve up my website. This works well and I have some cunning settings which mean that if a maintenance file exists, the world sees the maintenance message but my IP address can still work on the site. The detection of the maintenance file is done at the apache level, but is there a way I can set the DEBUG setting (normally configured through settings.py) so that debug is enabled for my IP address?
Is it possible to enable debug for specific hosts in django?
1.2
0
0
401
10,182,828
2012-04-16T22:52:00.000
0
0
0
0
python,django,installation,setup.py,django-1.4
10,183,287
1
false
1
0
Try adding "C:\Python26\;C:\Python26\Scripts;" to your PATHenvironmental variable and then running django-admin.py startproject mysite.
1
0
0
On windows 7 I have python2.6.6 installed at C:\Python26 I wanted to install Django, so I: downloaded and untarred the files into C:\Python26\Django-1.4 ran python setup.py install verified it was installed by opening IDLE and typing, import django The next part is the problem... in the tutorial, it says to now run django-admin.py startproject mysite, however django-admin.py wasn't found, and while looking for it, I discovered that there is a duplication in the directories C:\Python26\Django-1.4\build\lib\django C:\Python26\Django-1.4\django I didn't see anything in setup.cfg that would allow me to make sure that didn't happen or to pick a different setup folder, etc... but in the file C:\Python26\Django-1.4\INSTALL, it is stated that "AS AN ALTERNATIVE, you can just copy the entire "django" directory to python's site-packages directory" So for my question: besides avoiding this duplication of code in the Django directories, what else is the difference with using the setup.py install command versus copying the directory? Are there other pros/cons?
Django: setup.py versus copying directory
0
0
0
560
10,182,841
2012-04-16T22:53:00.000
1
0
1
0
python,performance,dictionary,nested,tuples
10,182,927
2
false
0
0
Somewhat surprisingly, the dictionary of dictionaries is faster than the tuple-keyed dictionary in both CPython 2.7 and PyPy 1.8. I didn't check on space, but you can do that with ps.
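A microbenchmark along the lines this answer describes might look like the following sketch (the key space and repeat count are arbitrary choices, and the timings will of course vary by interpreter):

```python
import timeit

# Build the two layouts over the same 100x100 key space so the
# comparison is apples to apples.
flat = {(i, j): i * j for i in range(100) for j in range(100)}
nested = {i: {j: i * j for j in range(100)} for i in range(100)}

# Both layouts answer the same lookup identically...
assert flat[(42, 7)] == nested[42][7] == 294

# ...and timeit shows which access pattern is faster on your interpreter.
t_flat = timeit.timeit(lambda: flat[(42, 7)], number=100000)
t_nested = timeit.timeit(lambda: nested[42][7], number=100000)
print('tuple key: %.4fs, nested: %.4fs' % (t_flat, t_nested))
```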
1
6
0
What is more efficient in terms of memory and speed between d[(first,second)] and d[first][second], where d is a dictionary of either tuples or dictionaries?
Two-dimensional vs. One-dimensional dictionary efficiency in Python
0.099668
0
0
1,909
10,184,591
2012-04-17T03:20:00.000
0
0
0
1
python,google-app-engine,memcached,google-cloud-datastore
10,188,925
5
false
1
0
I think you could create tasks which will persist the data. This has the advantage that, unlike memcache, tasks are persisted, so no chats would be lost. When a new chat comes in, create a task to save the chat data and do the persist in the task handler. You could either configure the task queue to pull at 1 per second (or slightly slower) and save each bit of chat data held in the task, or persist the incoming chats in a temporary table (in different entity groups) and have tasks periodically pull all unsaved chats from the temporary table, persist them to the chat entity, then remove them from the temporary table.
2
3
0
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
Synchronize Memcache and Datastore on Google App Engine
0
0
0
1,816
10,184,591
2012-04-17T03:20:00.000
0
0
0
1
python,google-app-engine,memcached,google-cloud-datastore
10,191,468
5
false
1
0
I think you would be fine using the chat session as the entity group and saving the chat messages under it. This once-per-second limit is not the reality; you can update/save at a higher rate. I'm doing it all the time and I don't have any problem with it. Memcache is volatile and is the wrong choice for what you want to do. If you start encountering issues with the write rate, you can start setting up tasks to save the data.
2
3
0
I'm writing a chat application using Google App Engine. I would like chats to be logged. Unfortunately, the Google App Engine datastore only lets you write to it once per second. To get around this limitation, I was thinking of using a memcache to buffer writes. In order to ensure that no data is lost, I need to periodically push the data from the memcache into the data store. Is there any way to schedule jobs like this on Google App Engine? Or am I going about this in entirely the wrong way? I'm using the Python version of the API, so a Python solution would be preferred, but I know Java well enough that I could translate a Java solution into Python.
Synchronize Memcache and Datastore on Google App Engine
0
0
0
1,816
10,186,367
2012-04-17T06:39:00.000
3
0
1
0
python,oop
10,186,430
2
false
0
0
Those are default parameters. You could initialize a class instance like my_class(1, 2, 3) and it would set derp1 to 1, derp2 to 2 and derp3 to 3. Because defaults are provided, you could also call it like my_class(5) and it would set derp1 to 5, derp2 to 0 and derp3 to 0. Keep in mind that the derp variables are local to the __init__ function so you need to assign them to some class variable if you want to hold onto them. For example, in __init__, you could save off the derp1 value by doing self.derpaderp = derp1 and then refer to that value as self.derpaderp elsewhere in your class.
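Putting the answer's pieces together in one runnable sketch (derp1..derp3 mirror the question's hypothetical names):

```python
class MyClass(object):
    # Each parameter defaults to 0 when the caller omits it.
    def __init__(self, derp1=0, derp2=0, derp3=0):
        # Assign the locals to the instance so they outlive __init__.
        self.derp1 = derp1
        self.derp2 = derp2
        self.derp3 = derp3

a = MyClass(1, 2, 3)  # all three supplied
b = MyClass(5)        # derp2 and derp3 fall back to their defaults
print(a.derp1, a.derp2, a.derp3)  # -> 1 2 3
print(b.derp1, b.derp2, b.derp3)  # -> 5 0 0
```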
1
1
0
I am tasked with converting some Python code to Java. I have some experience with Python, but am unfamiliar with some of its features. I see an __init__ method which I understand is essentially a constructor. I expect to see arguments like this: def __init__(self,derp1, derp2, derp3): But in one part of the code, I see: def __init__(self,derp1=0, derp2=0, derp3=0): Now, to me, it looks like this is some sort of conditional constructor, used specifically when (self,0,0,0) has been passed. This shouldn't be the case, because there is no alternative constructor. It also shouldn't be an inline assignment, as that just doesn't make sense. I've tried Googling to figure out what this means, but I'm not having much luck. I appreciate any help you can offer.
Python __init__/General Function Inline Equals In Argument(s):
0.291313
0
0
335
10,187,072
2012-04-17T07:35:00.000
29
0
1
0
python,win64
12,448,411
4
true
0
0
This appears to be working for me on Windows 7 64 bit. Choose one version to be your default installation, e.g. 64 bit, and install it first. Before doing anything else install the other version. Specify a different installation directory and in the Customize Python 2.7.3 screen select Register Extensions and select Entire feature will be unavailable.
2
32
0
I have Windows Vista 64. I have some projects requiring Python 2.7.3 64 bit and others requiring Python 2.7.3 32 bit (because some extensions do not work in 64 bit). How do I prevent the Python 2.7.3 MSI installer (32 or 64 bit) from deleting the other version? Side by side worked for me with Python 2.7.2 without problems.
How do I install Python 2.7.3 32 bit and 64 bit on Windows side by side
1.2
0
0
33,163
10,187,072
2012-04-17T07:35:00.000
-1
0
1
0
python,win64
72,223,277
4
false
0
0
Could installing the 32-bit Python under one user account and the 64-bit Python under another user account solve this problem?
2
32
0
I have Windows Vista 64. I have some projects requiring Python 2.7.3 64 bit and others requiring Python 2.7.3 32 bit (because some extensions do not work in 64 bit). How do I prevent the Python 2.7.3 MSI installer (32 or 64 bit) from deleting the other version? Side by side worked for me with Python 2.7.2 without problems.
How do I install Python 2.7.3 32 bit and 64 bit on Windows side by side
-0.049958
0
0
33,163
10,189,273
2012-04-17T10:14:00.000
12
0
1
0
python,python-2.7
10,189,937
4
true
0
0
No, there's no advantage to iterkeys over viewkeys, in the same way that there's no advantage to keys over either of them. iterkeys is only around for back compatibility. Indeed, in Python 3, viewkeys is the only behaviour that still exists, and it has been renamed to keys - the viewkeys method is actually a backport of the Python 3 behaviour.
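Since viewkeys is what became plain keys() in Python 3, the set-like behaviour this answer mentions can be sketched there directly (under Python 2.7 the same calls would be d.viewkeys() & e.viewkeys(), and so on):

```python
d = {'a': 1, 'b': 2}
e = {'b': 3, 'c': 4}

# Key views support set operations without building intermediate lists.
assert d.keys() & e.keys() == {'b'}            # intersection
assert d.keys() - e.keys() == {'a'}            # difference
assert d.keys() | e.keys() == {'a', 'b', 'c'}  # union

# A view also tracks later mutation of the underlying dict.
view = d.keys()
d['z'] = 9
assert 'z' in view
```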
1
27
0
In Python 2.7, dictionaries have both an iterkeys method and a viewkeys method (and similar pairs for values and items), giving two different ways to lazily iterate over the keys of the dictionary. The viewkeys method provides the principal feature of iterkeys, with iter(d.viewkeys()) effectively equivalent to d.iterkeys(). Additionally, objects returned by viewkeys have convenient set-like features. There thus are strong reasons to favor viewkeys over iterkeys. What about the other direction? Apart from compatibility with earlier versions of Python, are there any ways in which iterkeys would be preferable to viewkeys? Would anything be lost by just always using viewkeys?
For a Python dictionary, does iterkeys offer any advantages over viewkeys?
1.2
0
0
41,165
10,193,285
2012-04-17T14:33:00.000
5
0
1
0
python,scipy,enthought
10,308,121
1
false
0
0
If you have academic affiliation (or tell them you do), they will send you the link to get the full version for free. Don't abuse that privilege!
1
1
0
Because I don't have sudo permission on our Linux server, I need to compile scipy and numpy from source. After several failures (mainly related to ATLAS), I gave up and installed the Enthought Python Distribution Free instead. However, every time I use ipython, I get the following message. Enthought Python Distribution (free version) -- www.enthought.com (type 'upgrade' or see www.enthought.com/epd/upgrade to get the full EPD) It is annoying. Can I set something somewhere so that the message doesn't appear? Thanks in advance.
scipy ipython Enthought
0.761594
0
0
279
10,195,955
2012-04-17T17:20:00.000
1
0
0
0
python,django,validation,analytics,tracking
10,196,121
1
false
1
0
The easy way would be to grab the code from the tracking site and hard-code everything but the unique portion (usually a user ID number), offer the user a choice of approved trackers (with a radio button) and have them paste in their ID, then insert that value when you render the page. If I remember correctly, blogger worked like this before they tied it in directly with analytics via the google account.
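A hypothetical sketch of this answer's idea: everything but the user's ID is hard-coded per approved tracker, and the ID is validated before being interpolated, so a typo cannot corrupt the page's HTML. The template, function name, and ID format are all assumptions for illustration:

```python
import re

# One hard-coded snippet per approved tracker; only the ID is variable.
TRACKER_TEMPLATES = {
    'analytics': '<script>ga("create", "UA-%s", "auto");</script>',
}

def render_tracker(tracker, user_id):
    # Reject anything that is not a plausible account ID before it can
    # reach the rendered page.
    if not re.match(r'^\d{4,10}-\d{1,4}$', user_id):
        raise ValueError('malformed tracker ID: %r' % user_id)
    return TRACKER_TEMPLATES[tracker] % user_id

print(render_tracker('analytics', '12345678-1'))
# -> <script>ga("create", "UA-12345678-1", "auto");</script>
```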
1
0
0
In the administration interface of the site I am building, I provide an area for the webmaster to place tracking codes from external analytics tools. Essentially, these codes must be included as-is, but my concern is that any typo could render the page useless, mess up the HTML, etc. Is it possible (at least to some extent) to clean up/validate these codes to ensure the HTML won't be corrupted? I'm using Python/Django, but I guess the Django part is somewhat irrelevant for this topic. Regards
Is it possible to validate/lint/bleach a piece of code given by analytics/tracking sites like Google Analytics or Piwik
0.197375
0
0
69
10,196,305
2012-04-17T17:47:00.000
0
0
0
0
python,fonts,styles,widget,ttk
10,249,969
2
false
0
1
I believe the code in this area is buggy and will open a ticket. Using 'TLabelframe.Label' (note the uppercase 'L' in 'Label') works. 'TButton.label' and 'TButton.Label' don't work, but just 'TButton' does; 'TCheckbutton' is the same. I was unable to change the fonts for 'TEntry' with any combination, including adding 'textarea'.
1
1
0
On OS X, ttk.Style().configure('TLabelframe.label', font='helvetica 14 bold') works to change the font used by the ttk.LabelFrame widget. On Windows, ttk.Style().configure('TLabelframe.label', font='arial 14 bold') has no effect other than returning the same font info to ttk.Style().lookup('TLabelframe.label','font'). I've tried different font names and formats, creating a derived style, using TkDefaultFont and just changing the size, and different widgets (TButton.label, TCheckbutton.label). So far, no matter what I've tried, it always appears to use TkDefaultFont in the default size. Changing the font setting in python27/tcl/tk8.5/ttk/xpTheme.tcl (the default theme on windows) does change the font being displayed. Removing the -font TkDefaultFont setting from the theme settings does not change what is displayed. Any suggestions as to how this actually works? Edit: I hadn't tried changing the font for the Label widget before, and that one actually works.
How to changes fonts using ttk themed widgets in windows
0
0
0
1,687
10,196,716
2012-04-17T18:16:00.000
0
0
0
0
python,pyqt4
10,201,627
1
false
0
1
After inserting the row, call QTableWidget.scrollToBottom() to show that last row.
1
0
0
I am inserting some data into a QTableWidget after some time interval. To do this I clear all the contents of the table and then insert the data, but the table shows data from the first row. I want the table to always show the last row, which will feel like a real update of the table. How do I do this?
Pyqt table widget updation
0
0
0
303
10,196,803
2012-04-17T18:22:00.000
1
0
0
1
python,windows
10,197,024
2
false
0
0
For a lot of automation tasks like this that are not confined to the local machine, use Fabric.
1
3
0
I am using shutil.copy() to transfer files from one server to another server on a network, both Windows. I have used shutil and os modules for lot of automation tasks but confined to local machine. Are there better approaches to transfer file (I mean in terms of performance) from one server to another?
Transferring files between Windows Servers using shutil copy/move
0.099668
0
1
1,457
10,198,359
2012-04-17T20:08:00.000
0
0
0
0
python,http,apache2,webserver,wsgi
10,201,507
2
false
1
0
The 'reload' and 'graceful' would have the same effect as far as reloading your web application. If you are seeing issues with imports like you describe, it is likely to be an issue in your application code with you having import order dependencies or import cycles. One sees this a lot with people using Django. Suggest you actually post an example of the error you are getting.
1
0
0
When the code for my python WSGI applicaiton changes should I use apache2's reload or graceful restart feature? Currently we use reload, but have noticed that sometimes the application does not load properly and errors pertaining to missing modules are logged to the error files even though the modules have existed for a long time.
Is apache2 reload for .conf changes only or is it allowable to be used when application code changes?
0
0
0
67
10,199,697
2012-04-17T21:49:00.000
1
0
0
0
python,web-services,api
10,332,534
1
true
1
0
Well, your question is a little bit generic, but here are a few pointers/tips: Webfaction allows you to install pretty much anything you want (you need to compile it / or ask the admins to install some CentOS package for you). They provide some default Apache server with mod_wsgi, so you can run web2py, Django or any other wsgi frameworks. Most popular Python web frameworks have available installers in Webfaction (web2py, django...), so I would recommend you to go with one of them. I would also install supervisord to keep your service running after some reboot/crash/problem. I would be glad to help you if you have any specific question...
1
1
0
I have developed a few Python programs that I want to make available online. I am new to web services, and I am not sure what I need to do in order to create a service where somebody makes a request to a URL (for example), and the URL triggers a Python program that displays something in the user's browser, or a set of inputs is given to the program via the browser, and then Python does whatever it is supposed to do. I was playing with the Google App Engine, which runs fine with the tutorial, and was planning to use it because it looks easy, but the problem with GAE is that it does not work well (or does not work at all) with some libraries that I plan to use. I guess what I am trying to do is some sort of API using my WebFaction account. Can anybody point me in the right direction? What choices do I have in WebFaction? What are the easiest tools available? Thank you very much for your help in advance. Cheers
RESTful Web service or API for a Python program in WebFaction
1.2
0
1
597
10,199,963
2012-04-17T22:12:00.000
7
0
0
1
python,google-app-engine,memory-management,python-2.7,task-queue
10,200,635
1
false
1
0
There's not currently any way to advise the App Engine infrastructure about this. You could have your tasks return a non-200 status code if they shouldn't run now, in which case they'll be automatically retried (possibly on another instance), but that could lead to a lot of churn. Backends are probably your best option. If you set up dynamic backends, they'll only be spun up as required for task queue traffic. You can send tasks to a backend by specifying the URL of the backend as the 'target' argument. You can gain even more control over task execution by using pull queues. Then, you can spin up backends as you choose (or use push queue tasks, for that matter), and have the instances pull tasks off the pull queue in whatever manner suits.
1
1
0
I have a Google App Engine app that periodically processes bursts of memory-intensive long-running tasks. I'm using the taskqueue API on the Python 2.7 run-time in thread-safe mode, so each of my instances is handling multiple tasks concurrently. As a result, I frequently get these errors: Exceeded soft private memory limit with 137.496 MB after servicing 8 requests total After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application. As far as I can tell, each instance is taking on 8 tasks each and eventually hitting the soft memory limit. The tasks start off using very low amounts of memory but eventually grow to about 15-20MB. Is there any way to tell App Engine to assign no more than 5 requests to an instance? Or tell App Engine that the task is expected to use 20MB of memory over 10 minutes and to adjust accordingly? I'd prefer not to use the backend APIs since I want the number of instances handling tasks to automatically scale, but if that's the only way, I'd like to know how to structure that.
Restricting the number of task requests per App Engine instance
1
0
0
228
10,200,594
2012-04-17T23:19:00.000
1
0
1
0
python,plugins,pyqt,contextmenu,pyqt4
10,543,269
2
true
0
0
Avaris' comments basically answered this question the best. I can't select it, though, because it's comments, rather than an answer. So I'm summarizing that answer here, so this Q can be answered: "If you are given a context menu (QMenu to be specific) you can access it." - Avaris "You can't do this without modifying that file [the one containing the context menu creation code]. It assumes it has given a list of QActions (see line 301) and it wouldn't be expecting QMenu. Though, if you can get to the plugin_menu reference in your plugin, that's a different story." - Avaris So either you can access to manipulate the menu in the same file (and same place in that file) where the menu creation/definition code is, or you can gain access to do so via that file (in the same place) exposing the menu via an API. On the flip side, if you don't have such API access, and/or modify that file/function, then there's no way to do this.
1
3
0
Note: I have very little python and PyQt experience... Given a context menu already created, I'm looking for a functional example of how I would gain access to that context menu, so it can be extended for a plugin which is doing the python equivalent of a javascript greasemonkey script. Then I'm also looking for a functional example of how I could add a submenu to that context menu. Thanks!
PyQt: Basic example of accessing and adding a submenu to an existing context menu?
1.2
0
0
3,646
10,203,839
2012-04-18T06:17:00.000
0
0
0
0
python,django,apache,mod-wsgi,mod-python
10,205,585
3
false
1
0
There is no preferred version of mod_python. It's deprecated. Don't use it.
3
1
0
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. From reading on the internet, I found that it is possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
using django based mod_wsgi and raw python based mod_python together on same apache
0
0
0
209
10,203,839
2012-04-18T06:17:00.000
1
0
0
0
python,django,apache,mod-wsgi,mod-python
10,204,865
3
true
1
0
Believe it or not, I have the exact same setup. The simplest way to handle it is to partition the applications under VirtualHosts. If you can do this, then it's all super easy. You just have a VirtualHost entry for each application. If you need to run them under HTTP/S, then you may run into problems. Apache can only have one VirtualHost for all HTTP/S sites on the same server. We are running the following versions on our main production machine: Apache/2.2.14 (Ubuntu) mod_python/3.3.1 Python/2.6.5 mod_ssl/2.2.14 OpenSSL/0.9.8k mod_wsgi/2.8
3
1
0
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. From reading on the internet, I found that it is possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
using django based mod_wsgi and raw python based mod_python together on same apache
1.2
0
0
209
10,203,839
2012-04-18T06:17:00.000
0
0
0
0
python,django,apache,mod-wsgi,mod-python
10,204,877
3
false
1
0
Django runs best and is recommended to run in production using mod_wsgi IF you are using Apache. uwsgi is better if you are using nginx (I find nginx far better than Apache personally). You can run it whatever way you want, but that's the best way. You can run mod_python FCGI or CGI processes at the same time as other mod_wsgi apps and use Apache as a reverse proxy (or sit nginx in front of Apache as a reverse proxy). You can divert traffic to the relevant apps this way.
3
1
0
I am building a web application in Django 1.4 which I have to deploy on Apache using mod_wsgi. The problem is that there is already a raw Python web application running on it using mod_python. From reading on the internet, I found that it is possible to use both applications. My question is: what combination of versions (of course more recent versions are preferred) of Python, mod_python, mod_wsgi, Apache and Django are compatible? Thanks in advance
using django based mod_wsgi and raw python based mod_python together on same apache
0
0
0
209
10,204,230
2012-04-18T06:46:00.000
1
1
0
0
python,websocket,socket.io,gevent
11,271,531
3
false
0
0
Which browser do you use? I saw this behavior with IE; both Mozilla and Chrome were fine. There were issues with the flashsocket protocol which I have fixed, so IE should work, but the jQuery UI does not work, and that is the issue. I don't know enough JS to fix it.
1
4
0
All the forks of gevent-socketio in bitbucket and github have examples/chat.py that do not work. Can anyone find me a working example of gevent-socketio?
Do anyone have a working example of gevent-socketio?
0.066568
0
1
7,663
10,204,521
2012-04-18T07:05:00.000
1
0
0
1
python,google-app-engine
10,212,501
1
true
1
0
Since you are just getting started I assume you don't care much about what is in your local datastore. Therefore, when starting your app, pass the --clear_datastore to dev_appserver.py. What is happening? As daemonfire300 said, you are having conflicting application IDs here. The app you are trying to run has the ID "sample-app". Your datastore holds data for "template-builder". The easiest way to deal with it is clearing the datastore (as described above). If you indeed want to keep both data, pass --default_partition=dev~sample-app to dev_appserver.py (or the other way around, depending on which app ID you want to use).
1
1
0
I am new to Python and Google App Engine. I have installed an existing Python application on localhost, and it is running fine for the static pages. But, when I try to open a page which is fetching data, it is showing an error message: BadRequestError: app "dev~sample-app" cannot access app "dev~template-builder"'s data template-builder is the name of my online application. I think there is some problem with accessing the Google App Engine data on localhost. What should I do to get this to work?
Google app engine database access on localhost
1.2
0
0
577
10,205,990
2012-04-18T08:52:00.000
3
0
0
0
python,sql,htsql
10,210,348
1
false
0
0
If you want TAB as a delimiter, use tsv format (e.g. /query/:tsv instead of /query/:csv). There is no way to specify the encoding other than UTF-8. You can reencode the output manually on the client.
1
1
1
I would like to know if somebody knows a way to customize the csv output in HTSQL, especially the delimiter and the encoding. I would like to avoid iterating over each result, and instead find a way through configuration and/or extensions. Thanks in advance. Anthony
Customizing csv output in htsql
0.53705
1
0
170
10,207,087
2012-04-18T10:00:00.000
0
0
0
0
python,django,apache,mod-wsgi,mod-python
10,222,042
3
false
1
0
Conceptually: store a cookie using your raw python web page that you process in a "welcome" view or custom middleware class in Django, and insert them into the sessions db. This is basically what hungnv suggests. The most ridiculous way to do this would be to figure out how Django deals with sessions and session cookies, insert the correct row into Django's session database from your raw python app, and then custom-set the session cookie using Django's auth functions.
1
0
0
I have a web application written in raw Python and hosted on Apache using mod_python. I am building another web application which is Django-based and will be hosted on the same server using mod_wsgi. Now, the scenario is such that the user will log in from the web page using mod_python, and a link will send him to my application using mod_wsgi. My question is: how can I maintain the session? I need the same authentication to work for my application. Thanks in advance.
maintaining user authentication if i have some web pages on mod_python and some on mod_wsgi
0
0
0
332
10,208,147
2012-04-18T11:10:00.000
0
0
0
0
python,postgresql,openerp
10,208,766
3
false
1
0
OpenERP uses PostgreSQL as its back-end database. PostgreSQL can be managed with pgAdmin III (a Postgres GUI); you can write SQL queries there and add or delete records. However, it is not advisable to insert or remove data directly in the database!
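For completeness, the usual programmatic route into OpenERP is its XML-RPC interface rather than raw SQL. This sketch assumes a server on localhost:8069, a database "mydb", and admin credentials, all placeholders:

```python
# Create an OpenERP record through XML-RPC instead of raw SQL.
from xmlrpc.client import ServerProxy  # 'import xmlrpclib' on Python 2 / OpenERP 6 era

def build_partner_values(name):
    """Assemble the field dict for a res.partner record."""
    return {"name": name}

if __name__ == "__main__":
    url = "http://localhost:8069/xmlrpc/"          # placeholder host/port
    common = ServerProxy(url + "common")
    uid = common.login("mydb", "admin", "admin")   # db, user, password
    models = ServerProxy(url + "object")
    partner_id = models.execute("mydb", uid, "admin",
                                "res.partner", "create",
                                build_partner_values("Test Partner"))
```

Going through the server like this keeps the ORM's validation and workflow logic in the loop, which direct SQL bypasses.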
2
2
0
I am new to OpenERP and have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do this (the files for the SQL code)?
OpenERP: insert Data code
0
1
0
794
10,208,147
2012-04-18T11:10:00.000
0
0
0
0
python,postgresql,openerp
10,225,346
3
false
1
0
Adding columns in the .py files of the corresponding modules you want to change will add columns to the database tables (visible in pgAdmin III); likewise, defining classes creates tables. When the fields are displayed in the XML view files and values are entered through the interface, those values are stored in the corresponding tables in the database.
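As an illustrative sketch (this code only runs inside an OpenERP 6 server process, and the module and field names are made up): defining a class creates a table, and each entry in `_columns` becomes a column in PostgreSQL.

```python
from osv import osv, fields  # OpenERP 6 server-side import, not standalone Python

class library_book(osv.osv):
    _name = "library.book"  # becomes table "library_book" in PostgreSQL
    _columns = {
        "name": fields.char("Title", size=128, required=True),
        "isbn": fields.char("ISBN", size=16),
    }

library_book()  # registers the model with the ORM
```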
2
2
0
I am new to OpenERP and have installed OpenERP v6. I want to know how I can insert data into the database. Which files do I have to modify to do this (the files for the SQL code)?
OpenERP: insert Data code
0
1
0
794
10,208,960
2012-04-18T12:02:00.000
2
1
1
0
c#,.net,python,interop,ironpython
10,210,229
2
true
0
1
I think you misunderstand Python. It's an interpreted[1] language. You just provide the text source files and the interpreter will execute them. There is a difference between the language Python and the implementations CPython, IronPython, Jython, PyPy, what have you. Each of them attempts to implement the language Python as accurately as possible, while also adding implementation-specific functionality. This is just like how, say, the C# compiler was written in C++. For example, any (pure) Python file can be executed by the IronPython interpreter. But if you know that you're going to use IronPython, you can use the special IronPython features that let you into the .NET library. Now, most Python doesn't use any of the implementation-specific functionality, so it doesn't matter what you use to run it. Some Python does, though. [1] Well, it's compiled into .pyc files... but then "compile" isn't really a well-defined term anyway. Why does this matter to you? Well, you have a bunch of Python source code that you want to use with the .NET framework. If that code doesn't use any of the CPython-specific features -- such as using C extension modules -- then you can just run it in IronPython.
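As an illustration of that last point, a pure-Python function runs unchanged under IronPython, while the IronPython-only `clr` module reaches into .NET (so this snippet runs under IronPython, not CPython):

```python
import clr  # IronPython-specific; does not exist in CPython
clr.AddReference("System.Windows.Forms")
from System.Windows.Forms import MessageBox

def greet(name):
    # plain Python like this runs on any implementation...
    return "Hello, %s!" % name

# ...but this call uses the .NET framework:
MessageBox.Show(greet("IronPython"))
```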
2
3
0
I am working on the design of an application. The core should be written in C#, but I also want to use some already finished CPython modules (unmanaged). So I am interested in the interoperability (calling a CPython method from C# and calling C# from CPython), and in whether there are problems because C# runs within the .NET runtime (managed) while CPython runs directly, unmanaged. I have already investigated this issue with Google and came up with these options: Use IronPython via the DLR + "CPython extension" + maybe "IronClad" and call the CPython modules from IronPython, and vice versa -> are these modules executed managed or unmanaged? Are there any problems if I want to use C# classes and methods from CPython? Use "Python for .NET" -> the same question as above. What do you think, which way would be better? Or do you have another solution? And the last, but maybe most important, question: did I understand the above-mentioned points right, or did I mess something up? Many thanks in advance!
C# .NET interoperability with managed Python (CPython) -> any problems?
1.2
0
0
1,191
10,208,960
2012-04-18T12:02:00.000
1
1
1
0
c#,.net,python,interop,ironpython
14,318,714
2
false
0
1
Expose your Python code via COM and call that from C#. I've used this approach (in both directions) many times.
2
3
0
I am working on the design of an application. The core should be written in C#, but I also want to use some already finished CPython modules (unmanaged). So I am interested in the interoperability (calling a CPython method from C# and calling C# from CPython), and in whether there are problems because C# runs within the .NET runtime (managed) while CPython runs directly, unmanaged. I have already investigated this issue with Google and came up with these options: Use IronPython via the DLR + "CPython extension" + maybe "IronClad" and call the CPython modules from IronPython, and vice versa -> are these modules executed managed or unmanaged? Are there any problems if I want to use C# classes and methods from CPython? Use "Python for .NET" -> the same question as above. What do you think, which way would be better? Or do you have another solution? And the last, but maybe most important, question: did I understand the above-mentioned points right, or did I mess something up? Many thanks in advance!
C# .NET interoperability with managed Python (CPython) -> any problems?
0.099668
0
0
1,191