Dataset schema (column: dtype, observed range or string length):

Q_Id: int64, 2.93k to 49.7M
CreationDate: string, lengths 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, lengths 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, lengths 25 to 6.53k
Title: string, lengths 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
7,033,542
2011-08-11T22:11:00.000
1
0
1
0
1
python,random,numbers,generator
0
7,033,665
0
4
0
false
0
0
If you need to maintain the file (which I think you do, in order to add new numbers), I would suggest you forget about using a plain text file and use SQLite or any other embedded DB that is backed by a file, as you probably don't want to load all the numbers into memory. The "feature" (or better said, data structure) you want from SQLite is a B-tree, which lets you retrieve the numbers fast. I'm saying this because you could also find a library that implements B-trees, in which case you wouldn't need SQLite.
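A minimal sketch of the SQLite approach (the table and column names here are illustrative assumptions, and it also covers the side question of splitting a 10-digit string into two-digit numbers):

```python
import random
import sqlite3

# Keep previously used numbers in an SQLite table; the PRIMARY KEY gives
# you a B-tree index, so membership checks stay fast without loading
# everything into memory. Use a file path instead of :memory: to persist.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS used (n INTEGER PRIMARY KEY)")

def random_unused(lo, hi):
    """Draw random integers until one is not already in the table."""
    while True:
        n = random.randint(lo, hi)
        hit = conn.execute("SELECT 1 FROM used WHERE n = ?", (n,)).fetchone()
        if hit is None:
            conn.execute("INSERT INTO used (n) VALUES (?)", (n,))
            conn.commit()
            return n

def split_pairs(s):
    """Split a 10-digit string like '0102030405' into five 2-digit numbers."""
    return [int(s[i:i + 2]) for i in range(0, len(s), 2)]
```

In a real setup you would first bulk-insert the numbers from the TXT file into the table, then call `random_unused` as needed.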
1
1
0
0
How do i generate random numbers but have the numbers avoid numbers already used. I have a TXT file with thousands of sets of numbers and i need to generate a series of random numbers while avoiding these. IE, TXT - 0102030405 my random number needs to avoid this number. on a side note, how can i split up the TXT 10 digit number into 5, two digit numbers? then how can i generate random numbers based off of that.
How do generate random numbers, while avoiding numbers already used
0
0.049958
1
0
0
613
7,033,612
2011-08-11T22:19:00.000
0
1
0
0
0
c++,python,profiling
0
7,033,696
0
2
0
false
0
1
The usual technique for profiling already-existing functions that we use in Lua a lot is to overwrite the function with your own version that will start timing, call the original function, and stop timing, returning the value that the original function returned.
1
2
0
0
I'm implementing a profiler in an application and I'm a little flummoxed about how to implement Python profiling such that the results can be displayed in my existing tool. The application allows for Python scripting via communication with the python interpreter. I was wondering if anyone has ideas on how to profile Python functions from C++ Thank you for your suggestions :)
How can I "hook into" Python from C++ when it executes a function? My goal is to profile
0
0
1
0
0
447
7,037,269
2011-08-12T08:10:00.000
1
0
0
0
0
python,google-app-engine,datastore
0
7,041,161
0
2
0
true
1
0
You cannot filter for property non-existence. Every query must be satisfied by an index, and there's no "negative index" of entities that lack a given property. Generally, you'll need to iterate over all entities, and just ignore the ones that already have the property.
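A framework-free sketch of the backfill idea (entities are modeled as plain dicts here; real Datastore code would iterate a query, ideally with cursors, and `put()` the changed entities back):

```python
def backfill(entities, field, default):
    """Add `field` with `default` to every entity that lacks it.
    Returns how many entities were updated."""
    updated = 0
    for entity in entities:
        if field not in entity:
            entity[field] = default
            updated += 1
    return updated
```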
1
1
0
0
I have updated my model in Datastore so now it has an additional field. Now I have entities with and without that field but I need to add this field to all entities that don't yet have it. Idea is to get entities in a function without that field and add it. So, I wonder how I can filter such entities in Datastore requests?
Check if a field is present in an entity
0
1.2
1
0
0
487
7,039,343
2011-08-12T11:20:00.000
3
0
0
0
0
python,html,logging,format
0
7,039,432
0
2
0
false
1
0
I would not recommend storing logs as HTML: having logs easily processable downstream is a very important and useful feature, HTML is hard to parse, and it is also verbose - and logs get large fast :-) However, if you really want to, you can write your own formatter that will output HTML - I am not aware of one already in existence, I suspect for the reasons above.
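A minimal sketch of such a formatter (it only emits table rows; wrapping them in `<table>`/`</table>` and writing a proper HTML header is left out, and the `StringIO` target is just for demonstration):

```python
import html
import io
import logging

class HTMLFormatter(logging.Formatter):
    """Render each log record as one HTML table row."""
    def format(self, record):
        msg = html.escape(record.getMessage())
        return ("<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
                % (self.formatTime(record), record.levelname, msg))

buf = io.StringIO()  # in practice: logging.FileHandler("log.html")
handler = logging.StreamHandler(buf)
handler.setFormatter(HTMLFormatter())
log = logging.getLogger("html_demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("hello <world>")
```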
1
6
0
0
I read in wikipedia that python logging module was inspired by log4j .In log4j ,there is an HTMLLayout with which one can create the log file as html.Is there any such facility in python logging? Or do anyone know how I can format the log output into an html file ?
python logging how to create logfile as html
0
0.291313
1
0
0
7,716
7,049,637
2011-08-13T09:33:00.000
9
0
1
0
0
python,oop
0
7,049,703
0
2
0
true
0
0
Any Python project of any size will become unmanageable without objects. Is it possible to use modules as your objects instead of classes? Yes, if you really want to. (In Python, a module is what you call a single file.) However, even if you aren't creating user-defined classes, you still need to learn how to use objects. Why? Because everything is an object in Python. You need to at least know how to use objects to understand how to use the built in types and functions in Python. In Python, dictionaries are very important to how things are implemented internally, so your intuition that dictionaries would be an alternative is good. But dictionaries are objects too, so you'd still have to learn to use objects, without getting the benefit of their ease of use for many things vs. a plain dictionary. My advice? Do a Python tutorial (or three). The whole thing. Even the parts on networking, or whatever else you don't think you'll be using. You'll learn things about the language that you'll end up using later in ways you'd never anticipate. Do you need to learn about metaclasses? No. Multiprocessing? No. But you need to learn all of the core language features, and classes and object oriented programming are the core language feature. It's so central to nearly all modern languages that people don't even mention it any more.
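To make the dict-vs-class point concrete for the tile grid described in the question, here is a tiny sketch (the attribute names are just examples): both versions are "objects", and the class buys you readable attribute access.

```python
# The grid itself can be a dict keyed by (x, y) coordinates. Each tile's
# values can be another dict, or a small class that names the same data.
class Tile:
    def __init__(self, kind, hardness, life):
        self.kind = kind
        self.hardness = hardness
        self.life = life

grid = {}
grid[(2, 3)] = Tile("pine", hardness=5, life=100)

# Equivalent dict-only version:
grid_d = {(2, 3): {"kind": "pine", "hardness": 5, "life": 100}}
```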
1
0
0
0
OK to be short =) I'm noob yes. -.- but I'm convinced i don't need to learn everything.. and that a specific field of knowledge suited to my project may save me time..? I'm going to build a game that i have designed but not begun constructing. I'm going to learn everything i need to to get my project finished. however to save time... I'm using python the game needs only to reference locations on a grid with values: so a green 'pine tree' square = *information *values = tree type / tree's hardness / tree's life ect... I am perhaps wrong in the assumption there are no objects needed? should i focus all my time on learning how to use the dictionary's function.. what is the most efficient coding for a grid based tile game that simulates change per turn. So that; a blue square spreads according to sets of rules and according to the adjacent tiles at the end of a turn. And these tiles have values attached...? even just the name of the best feasible technique... i can learn how to do it once i know what i need to learn...
Do I need to learn about objects, or can I save time and just learn dictionaries?
0
1.2
1
0
0
627
7,052,169
2011-08-13T17:37:00.000
2
1
0
0
0
java,c++,python,audio,signal-processing
0
7,052,252
0
2
0
false
1
0
I'd start by computing the FFT spectrogram of both the haystack and needle files (so to speak). Then you could try and (fuzzily) match the spectrograms - if you format them as images, you could even use off-the-shelf algorithms for that. Not sure if that's the canonical or optimal way, but I feel like it should work.
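A deliberately simplified stand-in for the matching step (it slides the needle over raw samples and scores with summed squared differences; a real matcher would compare FFT spectrogram frames instead, which is omitted here to keep the sketch dependency-free):

```python
def best_offset(haystack, needle):
    """Return the offset where `needle` best matches `haystack`,
    scored by summed squared differences (lower is better)."""
    best, best_score = 0, float("inf")
    for off in range(len(haystack) - len(needle) + 1):
        score = sum((haystack[off + i] - needle[i]) ** 2
                    for i in range(len(needle)))
        if score < best_score:
            best, best_score = off, score
    return best
```

The offset divided by the sample rate would give the timestamp.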
1
2
0
0
I want to be able to identify an audio sample (that is provided by the user) in an audio file I've got (mp3). The mp3 file is a radio stream that I've kept for testing purposes, and I have the pre-roll of the show. I want to identify it in the file and get the timestamp where it's playing in the file. Note: the solution can be in any of the following programming languages: Java, Python or C++. I don't know how to analyze the audio file, and any reference about this subject will help.
Identify audio sample in a file
0
0.197375
1
0
1
573
7,053,905
2011-08-13T23:19:00.000
1
0
1
0
0
java,c++,python,communication
0
7,053,927
0
3
0
false
0
0
Sockets, shared memory, events / signals, pipes, semaphores, message queues, mailslots. Just search the Internet for either.
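To make the first option concrete, a minimal localhost socket example: one process listens and echoes a message back upper-cased, another connects and asks. (Port 0 lets the OS pick a free port; a thread stands in for the second program.)

```python
import socket
import threading

def serve_once(host="127.0.0.1"):
    """Accept one connection, echo one message back upper-cased."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))               # port 0: OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def _run():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(data.upper())
        conn.close()
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return port

def ask(port, message):
    """Client side: connect, send, receive the reply."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    return reply
```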
1
6
0
0
I am very new to programming, and have had no formal training in it before so please bear with me if this is a vague question. I was just curious: how do different programs on the same computer communicate with each other? From my programming experience I believe it can be achieved by socket programming? Thanks
Communicating between applications?
0
0.066568
1
0
1
650
7,064,564
2011-08-15T11:51:00.000
1
0
0
0
0
python,django
0
7,064,785
0
2
0
false
1
0
Could you post a link to that piece of documentation, please? In Django you configure, in settings.py, the search path for templates (through the TEMPLATE_DIRS variable). Then, inside a view, you render a template naming its file relative to one of the paths included in TEMPLATE_DIRS. That way, whenever you move your template dir you just need to modify your settings.py. As for static files, like CSS docs, Django does not need to know anything about them (unless you are serving static files through Django itself, which is discouraged by Django's documentation): you only need to tell your web server where to find them.
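A common settings.py sketch that keeps TEMPLATE_DIRS absolute yet rename-proof, by anchoring it to the settings file's own location (the `templates` folder name is an assumption):

```python
import os

# Compute the project root from this file's location, so renaming or
# moving the project folder does not break the absolute template path.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

TEMPLATE_DIRS = (
    os.path.join(PROJECT_ROOT, "templates"),
)
```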
1
0
0
0
In django, the documentation asks to use the absolute paths and not the relative paths. Then, how do they manage portability ? If I have my template in the project folder then, even a rename of the folder will cause breakage.. ! Then what is the reason behind this practice ? Please explain ?
Why absolute paths to templates and css in django ? (Isn't that a bad practice ?)
0
0.099668
1
0
0
612
7,065,283
2011-08-15T13:08:00.000
1
0
0
1
0
python,django,postgresql,comet,gevent
0
11,276,439
0
2
0
false
1
0
Instead of Apache + X-Sendfile you could use Nginx + X-Accel-Redirect. That way you can run a gevent/wsgi/django server behind Nginx with views that provide long-polling. No need for a separate websockets server. I've used both Apache + X-Sendfile and Nginx + X-Accel-Redirect to serve (access-protected) content on Webfaction without any problems.
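The Django side of the X-Accel-Redirect pattern boils down to setting one response header; a framework-free sketch of just that piece (the `/protected/` prefix is an assumption and must match an `internal` location block in nginx.conf):

```python
def protected_download_headers(internal_path):
    """Headers a Django view would set so Nginx serves the file itself
    instead of streaming it through the Python process."""
    return {
        "X-Accel-Redirect": "/protected/" + internal_path.lstrip("/"),
        "Content-Disposition": "attachment",
    }
```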
1
5
0
0
I have been working with Django for some time now and have written several apps on a setup that uses Apache 2, mod_wsgi and a PostgreSQL database on Ubuntu. I have an app that uses xsendfile to serve files from Apache via a Django view, and also allows users to upload files via a form as well. All this is working great, but I now want to ramp up the features (and the complexity, I am sure) by allowing users to chat and to see when new files have been uploaded without refreshing their browser. As I want this to be scalable, I don't want to poll continually with AJAX as this is going to get very heavy with large numbers of users. I have read more posts, sites and blogs than I can count on integrating comet functionality into a Django app, but there are so many different opinions out there on how to do this that I am now completely confused. Should I be using orbited, gevent, iosocket? Where does Tornado fit into this debate? I want the messages to also be stored on the database, so do I need any special configuration to prevent my application blocking when writing to the database? Will running a chat server with Django have any impact on my ability to serve files from Apache?
Should I use orbited or gevent for integrating comet functionality into a django app
0
0.099668
1
0
0
1,434
7,066,395
2011-08-15T14:50:00.000
1
0
1
1
1
python,centos
0
7,066,558
0
2
0
true
0
0
Try specifying the full path to the python executable (i.e. /opt/python27/python) rather than using a bare python command. Alternatively, place /opt/python27/ on your PATH earlier than /usr/local/bin (where the python command is presumably already present, you can check with which python).
2
2
0
0
I recently attempted to upgrade our Python install on a CentOS server from 2.4.3 to 2.7, however it lists 2.4.3 as the newest stable release. This is a problem because I have a Python program that requires at least 2.7 to run properly. After contacting support they installed Python 2.7 in a separate directory, however I'm not sure how to access this version. Anytime I try to run the python program it uses the 2.4.3 version. I have looked into changing the PythonHome variable, but can't get it to work correctly. Is there anything I can do via the command line or inside the program itself to specify which Python version I want to use?
Using Non-Standard Python Install
0
1.2
1
0
0
608
7,066,395
2011-08-15T14:50:00.000
2
0
1
1
1
python,centos
0
7,066,607
0
2
0
false
0
0
I'd suggest you ask your support team to build a separate RPM for Python 2.7 and have it installed in a separate location, not conflicting with the OS version. I've done this before; it was great to have a consistent Python released across my RHEL 3, 4, and 5 systems. Next, I'd suggest you use the following for your sh-bang (first line): "#!/bin/env python2.7", and then ensure your PATH includes the supplemental Python install path. This way, your script stays the same as you run it on your workstation, with its own unique path to python2.7, and in the production environment as well.
2
2
0
0
I recently attempted to upgrade our Python install on a CentOS server from 2.4.3 to 2.7, however it lists 2.4.3 as the newest stable release. This is a problem because I have a Python program that requires at least 2.7 to run properly. After contacting support they installed Python 2.7 in a separate directory, however I'm not sure how to access this version. Anytime I try to run the python program it uses the 2.4.3 version. I have looked into changing the PythonHome variable, but can't get it to work correctly. Is there anything I can do via the command line or inside the program itself to specify which Python version I want to use?
Using Non-Standard Python Install
0
0.197375
1
0
0
608
7,067,726
2011-08-15T16:30:00.000
7
0
1
0
0
python,linked-list
0
7,067,801
0
2
1
false
0
0
This sounds like a perfect use for a dictionary.
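A sketch of the dictionary approach for exactly the workflow described: append to an existing group, or create the group on first sight (the group keys here are illustrative; in practice the key would come from your similarity check):

```python
def add_to_group(groups, key, item):
    """Append `item` to the group for `key`, creating the group when it
    does not exist yet -- no linked list needed."""
    groups.setdefault(key, []).append(item)

groups = {}
add_to_group(groups, "small", 1)
add_to_group(groups, "small", 2)
add_to_group(groups, "large", 900)
```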
1
0
1
0
There are a huge number of data points, in various groups. I want to check whether new data fits in any group, and if it does, I want to put that data into that group. If a datum doesn't fit any of the groups, I want to create a new group. So, do I want to use a linked list for this purpose, or is there another way of doing it? P.S. I have a way to check the similarity between data and a group representative (let's not go into that in detail for now), but I don't know how to add the data to a group (each group may be a list) or create a new one if required. I guess what I need is a linked list implementation in Python, isn't it?
linked list in python
1
1
1
0
0
468
7,082,529
2011-08-16T17:42:00.000
0
0
1
0
1
python,loops,wxpython,infinite
0
7,083,015
0
3
0
false
0
1
Have you considered having wxPython invoke an event handler of yours periodically, and perform the background processing in this? This depends on you being able to divide your work into discrete pieces, of course. Be aware that your background processing would have to be non-blocking so control would return to wxPython in a timely manner, to allow responsive GUI processing. Not sure what's the idiomatic way to implement such background processing in wxPython, but if I recall correctly the technique in (Py)Qt was to use a timer.
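The question also mentions threading, so here is a GUI-framework-free sketch of that route: a worker thread publishes results onto a thread-safe queue, and the GUI thread drains the queue periodically (e.g. from a wx.Timer handler). The worker never touches GUI objects directly, which is the key constraint in wxPython.

```python
import queue
import threading
import time

results = queue.Queue()

def collector(stop_event):
    """Background loop: do one unit of work, publish it, repeat."""
    n = 0
    while not stop_event.is_set():
        n += 1
        results.put(n)          # never touch GUI objects from this thread
        time.sleep(0.01)

def drain():
    """Called periodically from the GUI thread (e.g. by a wx.Timer)."""
    out = []
    while not results.empty():
        out.append(results.get())
    return out
```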
1
3
0
0
I'm trying to write a Python program with a wxPython GUI. The program must collect some information in the background (an infinite loop), but the GUI should stay active during this time. For example, if I click on some button, some variable or other information must change, and on the next cycle this variable should be used instead of the old one. But I don't know how to do this. I think I must use threading, but I don't understand how to use it. Can anyone suggest how to solve this problem? Thanks in advance!
Python: Infinite loop and GUI
0
0
1
0
0
4,727
7,088,175
2011-08-17T05:01:00.000
0
0
0
1
1
python,linux,pipe
0
7,088,283
0
2
0
false
0
0
First of all, for what you're doing, it should be better to generate the string using python directly. Anyway, when using subprocess, the correct way to pipe data from a process to another is by redirecting stdout and/or stderr to a subprocess.PIPE, and feed the new process' stdin with the previous process' stdout.
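Following the first suggestion, the whole shell pipeline collapses into a few lines of pure Python, avoiding the broken-pipe issue entirely:

```python
import random
import string

def random_string(n=30):
    """Generate a random alphanumeric string directly in Python instead
    of shelling out to strings | grep | head | tr."""
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(n))
```

(For secrets you would prefer the `random.SystemRandom` class, which reads from the OS entropy source much like /dev/urandom.)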
1
4
0
0
I'm trying to generate a random string using this command: strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; Works fine, but when I try to do subprocess.call(cmd,shell=True) it just gets stuck on the strings /dev/urandom command and spams my screen with grep: writing output: Broken pipe What's causing this and how do I fix it?
Getting output of system commands that use pipes (Python)
0
0
1
0
0
250
7,089,620
2011-08-17T08:09:00.000
1
0
1
0
0
python,compilation,cmake,vtk
0
28,362,827
0
4
0
false
0
1
You can use VTK with Python by just installing the Python(x,y) framework; it is a Python distribution with many, many libraries included - VTK, ITK, Qt and many others. It is also very well documented and has many examples, all in Python. I recommend it to you; I have worked a lot with it, and it is very amazing. Just give it a try. All you have to do is select VTK among the tools in the installer wizard, and it will be installed.
1
6
0
0
I want to use VTK together with Python on a Windows system. It seems that I cannot use the windows installer but "have to compile VTK from source code using CMake and a native build system". So far I have installed CMake. But now I wonder how to proceed? It seems that I need MS Visual Studio to create the project files?! But I don't have Visual Studio. So what can I do?
VTK / Python / compile
0
0.049958
1
0
0
1,416
7,114,946
2011-08-18T22:32:00.000
0
0
1
0
0
python,image,generator,python-imaging-library
0
7,115,394
0
1
0
false
0
0
If you want sub-pixel positioning, one possible-if-inefficient solution would be to scale everything up by a factor of, say, 10, rounding (original_floating_point_coordinates * 10) to integers, then resize the whole image back down at the end. Doesn't help with angles, I suppose.
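The coordinate-scaling half of that trick is just arithmetic; a tiny helper sketch (the factor of 10 is the example value from the answer - after drawing you would resize the whole image back down by the same factor, e.g. with PIL's `Image.resize`):

```python
SCALE = 10  # work at 10x resolution, then shrink the image back down

def up(coords, scale=SCALE):
    """Scale float drawing coordinates to the supersampled integer grid
    that ImageDraw will accept."""
    return [int(round(c * scale)) for c in coords]
```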
1
1
0
0
Is there a way to set PIL and by extension Imagedraw to allow float values for such commands as Arc and Ellipse? I am running into major problems and cannot do what I need to do because of the seeming requirement that angles and bounding box position specifiers must be integers, and I cannot use a different package, nor is approximating everything with short straight lines a viable alternative either.
Using Float values in Imagedraw
0
0
1
0
0
197
7,116,553
2011-08-19T03:12:00.000
0
1
0
1
0
python,ssh,twisted
0
8,927,201
0
3
0
false
0
0
A cheap and possibly very dangerous hack is to set your app as the default shell for a particular user. You need to be very careful, though (suggestion: chroot it to hell and back), as it might be possible to break out of the app and into the server.
1
5
0
0
I'm in the process of writing an application with an Urwid front-end and a MongoDB back-end in Python. The ultimate goal is to be able to serve the application over SSH. The application has its own authentication/identity system. I'm not concerned about the overhead of launching a new process for each user; the expected number of concurrent users is low. Since the client does not recall any state information and it is instead all stored in the DB, I'm not concerned about sessions as such, except for authentication purposes. I was wondering if there are any methods of serving the application as is, without having to roll my own socket-server code or re-code the app using Twisted. I honestly don't know how Urwid and Twisted play together. I see that Urwid has a TwistedEventLoop method which purports to use the Twisted reactor, but I cannot find any example code running an Urwid application over a Twisted connection. Examples would be appreciated, even simple ones. I've also looked at ZeroMQ, but that seems even more inscrutable than Twisted. In short, I have explored a number of different libraries which purport to serve applications over TCP, most of them by telnet, and nearly all of them focusing on HTTP. Worst case scenario, I expect that I may create an extremely locked down user as a global login and use chrooted SSH sessions. That way each user gets their own chroot/process/client. Yes, I know that's probably a "Very Bad Idea(tm)". But I had to throw it out there as a possibility. I appreciate any constructive feedback. Insults, chides, and arrogance will be scowled at, printed out and spat upon. -CH
Howto Serve Python CLI Application Over SSH
0
0
1
0
0
643
7,119,630
2011-08-19T09:31:00.000
2
0
0
1
0
python,filesystems,filepath
0
7,119,780
0
3
0
false
0
0
As df itself opens and parses /etc/mtab, you could either go this way and parse this file as well (an alternative would be /proc/mounts), or you indeed parse the df output.
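A sketch of the parsing route, written so the mounts text is passed in as a string (in practice you would read it from /etc/mtab or /proc/mounts); it returns the longest mount point that is a prefix of the path:

```python
def mount_of(path, mounts_text):
    """Given the contents of /proc/mounts (or /etc/mtab), return the
    longest mount point that contains `path`."""
    best = "/"
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        mp = parts[1]
        if path == mp or path.startswith(mp.rstrip("/") + "/"):
            if len(mp) > len(best):
                best = mp
    return best
```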
2
3
0
0
In Python, given a directory or file path like /usr/local, I need to get the file system where it is available. On some systems it could be / (root) itself, and on some others it could be /usr. I tried os.statvfs; it doesn't help. Do I have to run the df command with the path name and extract the file system from the output? Is there a better solution? It's for Linux/Unix platforms only. Thanks
In Python, how can I get the file system of a given file path
0
0.132549
1
0
0
5,990
7,119,630
2011-08-19T09:31:00.000
4
0
0
1
0
python,filesystems,filepath
0
7,119,758
0
3
0
false
0
0
Use os.stat to obtain device number of the file/directory in question (st_dev field), and then iterate through system mounts (/etc/mtab or /proc/mounts), comparing st_dev of each mount point with this number.
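A variant of the st_dev idea that avoids parsing the mount table at all: walk up the directory tree while the device number stays the same; the first directory whose parent lives on a different device is the mount point (POSIX-only sketch):

```python
import os

def find_mount_point(path):
    """Climb toward / while st_dev is unchanged; return the directory
    whose parent is on a different device (i.e. the mount point)."""
    path = os.path.abspath(path)
    dev = os.stat(path).st_dev
    while True:
        parent = os.path.dirname(path)
        if parent == path or os.stat(parent).st_dev != dev:
            return path
        path = parent
```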
2
3
0
0
In Python, given a directory or file path like /usr/local, I need to get the file system where it is available. On some systems it could be / (root) itself, and on some others it could be /usr. I tried os.statvfs; it doesn't help. Do I have to run the df command with the path name and extract the file system from the output? Is there a better solution? It's for Linux/Unix platforms only. Thanks
In Python, how can I get the file system of a given file path
0
0.26052
1
0
0
5,990
7,123,387
2011-08-19T14:51:00.000
4
0
0
0
0
python,scrapy,web-crawler,pipeline
0
7,135,102
0
3
0
false
1
0
It's a perfect tool for the job. The way Scrapy works is that you have spiders that transform web pages into structured data (items). Pipelines are postprocessors, but they use the same asynchronous infrastructure as spiders, so it's perfect for fetching media files. In your case, you'd first extract the location of the PDFs in the spider, fetch them in a pipeline, and have another pipeline to save the items.
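To show the shape of such a pipeline without importing Scrapy, here is a framework-free sketch (a real implementation would subclass one of Scrapy's pipeline/media-pipeline classes; the item keys `relative_path` and `body` are assumptions for this example):

```python
import os

class SavePdfPipeline:
    """Scrapy-style pipeline sketch: process_item receives an item with
    the PDF bytes and a site-relative path, writes the file under the
    configured root, and returns the item onward to the next pipeline."""
    def __init__(self, root):
        self.root = root

    def process_item(self, item, spider=None):
        dest = os.path.join(self.root, item["relative_path"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as f:
            f.write(item["body"])
        return item
```

Using the site-relative path when writing reproduces the directory layout of the site being scraped.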
1
17
0
0
I need to save a file (.pdf) but I'm unsure how to do it. I need to save .pdfs and store them in such a way that they are organized in directories, much like they are stored on the site I'm scraping them off. From what I can gather, I need to make a pipeline, but from what I understand pipelines save "Items", and "items" are just basic data like strings/numbers. Is saving files a proper use of pipelines, or should I save the file in the spider instead?
Should I create pipeline to save files with scrapy?
0
0.26052
1
0
0
16,042
7,129,285
2011-08-20T02:44:00.000
5
0
1
0
0
python,return,output
0
58,644,500
0
13
0
false
0
0
I think a really simple answer might be useful here: return makes the value (a variable, often) available for use by the caller (for example, to be stored by a function that the function using return is within). Without return, your value or variable wouldn't be available for the caller to store/re-use. print prints to the screen, but does not make the value or variable available for use by the caller. (Fully admitting that the more thorough answers are more accurate.)
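The difference in four lines:

```python
def double_return(x):
    return x * 2        # hands the value back to the caller

def double_print(x):
    print(x * 2)        # only shows it; the caller gets None

a = double_return(5)    # a is 10, usable in later expressions
b = double_print(5)     # prints 10, but b is None
```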
1
94
0
0
What is the simple basic explanation of what the return statement is, how to use it in Python? And what is the difference between it and the print statement?
What is the purpose of the return statement? How is it different from printing?
1
0.076772
1
0
0
683,021
7,131,834
2011-08-20T12:48:00.000
1
0
0
1
0
java,python,api,google-app-engine
0
7,142,758
0
2
0
false
1
0
No, but you can get a very close estimate of this by adding up the length of the request headers and body for incoming requests, and the response body and headers for responses.
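A sketch of that estimate for one message: sum the header names, values, and separators, plus the body (the `': '` + CRLF overhead of 4 bytes per header is the usual HTTP/1.1 wire format; the request line and status line are left out of this rough count):

```python
def message_size(headers, body=b""):
    """Rough per-message byte count: header names, values, and
    separators, plus the body length."""
    total = len(body)
    for name, value in headers.items():
        total += len(name) + len(value) + 4   # ': ' plus CRLF
    return total
```

Summing this over incoming requests and outgoing responses approximates bandwidth usage.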
1
0
0
0
I want to know if Google App Engine supports using the google.appengine.api.quota package to get bandwidth usage, not CPU usage. If so, how do I get it with Python or Java and print it on a webpage?
How to get bandwidth quota usage with Google app engine api?
0
0.099668
1
0
0
475
7,138,229
2011-08-21T12:23:00.000
4
0
1
1
0
python,virtualenv
0
9,660,148
0
2
0
false
0
0
Virtualenv lets you specify a python binary to use instead of the default. On your machine, python probably maps to /usr/bin/python, which will be a symlink to /usr/bin/python2.6. If you've got Python 2.5 installed, it will be /usr/bin/python2.5 You can create a virtualenv called envname with virtualenv -p /usr/bin/python2.5 envname
1
2
0
0
I am trying to get started with Google App Engine. I have python 2.6 installed in my virtual environment which I wanted to use. But Google App Engine supports python2.5. So I want to set up another python virtual environment with python 2.5. Can you help me how to do exactly that?
Set up a python virtualenv with a specific version of Python
0
0.379949
1
0
0
2,418
7,144,011
2011-08-22T06:52:00.000
1
0
0
0
0
python,apache,webserver,mod-wsgi
1
7,145,199
0
1
0
true
1
0
You can't. It is a limitation of the API defined by the WSGI specification. So it has nothing to do with Apache or mod_wsgi, really, as you will have the same issue with any WSGI server if you follow the WSGI specification. If you search through the mod_wsgi mailing list on Google Groups, you will find a number of discussions about this sort of problem in the past.
1
0
0
0
I'm trying to build a web server using Apache as the HTTP server and mod_wsgi + Python as the logic handler. The server is supposed to handle long requests without returning, meaning I want to keep writing stuff into the request. The problem is, when the link is broken, the socket is in a CLOSE_WAIT status and Apache will NOT notify my Python program, which means I have to write something to get an exception that says the link is broken, but those messages are lost and can't be restored. I tried to get the socket status before writing, through /proc/net/tcp, but that could not prevent a quick connect/break of the connection. Anybody have any ideas? Please help; many thanks in advance!
Apache server with mod_wsgi + Python as backend: how can I be notified of my connection status?
0
1.2
1
1
0
393
7,149,287
2011-08-22T14:50:00.000
0
0
1
0
0
python,multiprocessing
0
7,161,916
0
1
0
false
0
0
I use a SQL table to perform such a thing, because my users may launch dozens of tasks all together if I do not limit them. When a new file shows up, a daemon writes its name in the table (with all sorts of other information such as size, date, time, user, ...). Another daemon then reads the table, gets the first not-yet-executed task, does it, and marks it as executed. When nothing is found to be done, it just waits for another minute or so. This table is also a log of the jobs performed and may carry results too. And you can get averages from it.
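A minimal sketch of that table-as-queue idea with the stdlib's sqlite3 (table and column names are illustrative; a file path instead of `:memory:` makes it a durable log shared between the two daemons):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    filename TEXT,
    done INTEGER DEFAULT 0)""")

def enqueue(filename):
    """The watcher daemon calls this when a new file shows up."""
    conn.execute("INSERT INTO jobs (filename) VALUES (?)", (filename,))
    conn.commit()

def next_job():
    """The worker daemon calls this: fetch the oldest unfinished job
    and mark it done; return None when the queue is empty."""
    row = conn.execute(
        "SELECT id, filename FROM jobs WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET done = 1 WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]
```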
1
2
0
0
My question might be quite vague; I am looking for a reasonable approach to perform my task. I have developed a webpage where the user uploads a file used as the input file for my code, developed in Python. When the input file is submitted through the webpage, it is saved in a temporary folder, from where a daemon copies it to another location. What I want is to look for files in the folder regularly (I can write a daemon). If it finds more than one file, it runs the code as separate jobs with the input files found in the directory, limiting to a max of 5 processes running at the same time, and when one process finishes it starts the next if there are files in the folder (in chronological order). I am aware of multiprocessing in Python, but I don't know how to implement it to achieve what I want, or whether I should go for something like Xgrid to manage my jobs. The code usually takes a few hours to a few days to finish one job, but the jobs are independent of each other.
multiprocessing in python
0
0
1
0
0
1,157
7,152,441
2011-08-22T19:20:00.000
7
0
1
0
0
python,comparison,boolean,readability
0
7,152,501
0
5
0
false
0
0
My answer is simple, as it applies to most coding problems: Don't try to write something that just works. Try to express your intent as clearly as possible. If you want to check if a value is false, use if not value. If you want to check for None, write it down. It always depends on the situation and your judgement. You should not try to find rules which can be applied without thinking. If you find those rules, it's a job for a computer, not for a human! ;-)
3
65
0
0
I've always coded in the style of if not value, however, a few guides have brought to my attention that while this style works, it seems to have 2 potential problems: It's not completely readable; if value is None is surely more understandable. This can have implications later (and cause subtle bugs), since things like [] and 0 will evaluate to False as well. I am also starting to apply this idea to other comparisons, such as: if not value vs if value is False if not value vs if value is [] And so goes the list... The question is, how far do you go with the principle? Where to draw the line, while keeping your code safe? Should I always use the if value is None style no matter what?
Python: if not val, vs if val is None
0
1
1
0
0
40,121
7,152,441
2011-08-22T19:20:00.000
34
0
1
0
0
python,comparison,boolean,readability
0
7,152,491
0
5
0
false
0
0
No. If you want to run code when the value is false but isn't None, this would fail horribly. Use is None if you're checking for identity with the None object. Use not value if you just want the value to be False.
3
65
0
0
I've always coded in the style of if not value, however, a few guides have brought to my attention that while this style works, it seems to have 2 potential problems: It's not completely readable; if value is None is surely more understandable. This can have implications later (and cause subtle bugs), since things like [] and 0 will evaluate to False as well. I am also starting to apply this idea to other comparisons, such as: if not value vs if value is False if not value vs if value is [] And so goes the list... The question is, how far do you go with the principle? Where to draw the line, while keeping your code safe? Should I always use the if value is None style no matter what?
Python: if not val, vs if val is None
0
1
1
0
0
40,121
7,152,441
2011-08-22T19:20:00.000
5
0
1
0
0
python,comparison,boolean,readability
0
7,153,236
0
5
0
false
0
0
Your use of the is operator is a little problematic. if value is [] will always be false, for example, because no two active lists have the same identity. It works great with None because None is a singleton (all references to None are the same object) but for other comparisons, use ==. However, if value and if not value are perfectly readable and useful. IMHO there's no need to be more specific, unless you need to treat various types of truthy or falsy values differently, as, for example, distinguishing between 0 and None.
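A small demonstration of both points, identity vs. equality and truthiness lumping several "empty" values together:

```python
# `is` compares identity, `==` compares contents:
a = []
same_contents = (a == [])   # True: two lists with equal values
same_object = (a is [])     # False: two distinct list objects

# Truthiness treats several different "empty" values alike:
falsy = [x for x in (None, 0, "", [], False) if not x]
```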
3
65
0
0
I've always coded in the style of if not value, however, a few guides have brought to my attention that while this style works, it seems to have 2 potential problems: It's not completely readable; if value is None is surely more understandable. This can have implications later (and cause subtle bugs), since things like [] and 0 will evaluate to False as well. I am also starting to apply this idea to other comparisons, such as: if not value vs if value is False if not value vs if value is [] And so goes the list... The question is, how far do you go with the principle? Where to draw the line, while keeping your code safe? Should I always use the if value is None style no matter what?
Python: if not val, vs if val is None
0
0.197375
1
0
0
40,121
7,158,635
2011-08-23T09:10:00.000
0
0
0
0
1
javascript,python,html,javascript-engine
0
7,158,707
0
4
0
false
1
0
This would be very hard to accomplish without external libraries. You'd need an HTML parser to start with, so you can actually make sense of the HTML. Then you'd need a JavaScript parser/lexer/engine so you could do the actual calculations. I guess it would be possible to implement this in Python, but I'd recommend looking for an open source project which has already implemented this. You'd then have to parse/lex/interpret the JavaScript and pass the result back to Python. All in all, I'd say it's easier to just port the JavaScript calculation to Python, but that's just me.
1
2
0
0
I have a Python script that needs to call an HTML file (i.e. a web page) stored locally on the computer. The HTML file does some calculations (jQuery, JavaScript and so on) and should pass the result back to the Python script. I don't want to change the setup (Python script calls HTML file and the result is passed back to the Python script), so please don't ask why. Could anyone tell me how to solve this? How can I pass the result from the HTML file to the calling Python function? This has troubled me for 2 weeks. Thanks!
How can my HTML file pass JavaScript results back to a Python script that calls it?
0
0
1
0
0
367
7,164,843
2011-08-23T17:08:00.000
-1
0
1
1
1
python
0
16,995,242
0
5
0
false
0
0
You can use the following API to detect whether the current platform is 32-bit or 64-bit: platform.architecture()[0] returns '64bit' on a 64-bit build (and '32bit' on a 32-bit one).
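Cleaned up, the call looks like this; note (as the question points out) that on Windows it reports the build of the interpreter, not necessarily the OS. Checking the pointer size is a common cross-check:

```python
import platform
import struct

# platform.architecture() reports the build of the *interpreter*,
# which may be 32-bit even on a 64-bit OS.
bits, linkage = platform.architecture()
print(bits)  # '32bit' or '64bit'

# Pointer size is another way to check the interpreter's word size:
# 4 bytes -> 32-bit build, 8 bytes -> 64-bit build.
word_size = struct.calcsize("P") * 8
print(word_size)
```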
1
12
0
0
I'm running python 2.6 on Linux, Mac OS, and Windows, and need to determine whether the kernel is running in 32-bit or 64-bit mode. Is there an easy way to do this? I've looked at platform.machine(), but this doesn't work properly on Windows. I've also looked at platform.architecture(), and this doesn't work when running 32-bit python on 64-bit Windows. Note: It looks like python 2.7 has a fix that makes platform.architecture() work correctly. Unfortunately, I need to use python 2.6 (at least for now). (edit: From talking to folks off-line, it sounds like there probably isn't a robust python-only way to make this determination without resorting to evil hacks. I'm just curious what evil hacks folks have used in their projects that use python 2.6. For example, on Windows it may be necessary to look at the PROCESSOR_ARCHITEW6432 environment variable and check for AMD64)
In Python, how do you determine whether the kernel is running in 32-bit or 64-bit mode?
0
-0.039979
1
0
0
8,553
7,169,845
2011-08-24T02:34:00.000
6
0
1
0
1
python,windows,networking
0
51,795,325
0
3
0
false
0
0
I had the same issue as the OP, but none of the current answers solved it, so to add a slightly different answer that did work for me: running Python 3.6.5 on a Windows machine, I used the format r"\\DriveName\then\file\path\txt.md". The combination of double backslashes (from reading @Johnsyweb's UNC link) and adding the r prefix in front, as recommended, solved my issue, which was similar to the OP's.
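As a sketch (the server and share names below are placeholders, not from the answer): a raw string and a doubly-escaped string spell the same UNC path.

```python
# The same UNC path written two ways: escaped backslashes vs. a raw string.
unc_plain = "\\\\ServerName\\share\\file.txt"
unc_raw = r"\\ServerName\share\file.txt"
print(unc_plain == unc_raw)

# On Windows you could then open the share directly (hypothetical path):
# with open(unc_raw) as f:
#     data = f.read()
```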
1
73
0
0
I have a file that I would like to copy from a shared folder which is in a shared folder on a different system, but on the same network. How can I access the folder/file? The usual open() method does not seem to work?
Using Python, how can I access a shared folder on windows network?
0
1
1
0
0
187,115
7,182,165
2011-08-24T20:56:00.000
6
0
0
0
0
python,django,command-line
0
7,182,225
0
3
0
true
1
0
If you are using a recent version of Django, the manage.py file should be "executable" by default. Please note, you cannot just type manage.py somecommand into the terminal, as manage.py is not on the PATH; you will have to type ./ before it to run it from the current directory, i.e. ./manage.py somecommand. If that does not work, make sure the manage.py file has: #!/usr/bin/env python as its first line, and make sure it is executable: chmod +x manage.py
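The two steps (shebang line plus execute bit) can also be sketched from Python; the temporary file here is just for illustration:

```python
import os
import stat
import tempfile

# Write a tiny script with the shebang line described above...
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("#!/usr/bin/env python\nprint('hello')\n")

# ...then add the execute bits: the Python equivalent of `chmod +x`.
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print(os.access(path, os.X_OK))
```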
1
2
0
0
Why is it that I have to run python manage.py somecommand and others simply run manage.py somecommand? I'm on OSX 10.6. Is this because there is a pre-set way to enable .py files to automatically run as Python scripts, and I've somehow disabled the functionality, or is that something that you explicitly enable?
Django manage.py question
1
1.2
1
0
0
802
7,192,763
2011-08-25T15:07:00.000
0
1
1
0
0
ironpython,monodevelop
0
7,242,146
0
1
0
false
0
0
The Python add-in is for CPython, not IronPython, so it doesn't support assembly references.
1
0
0
0
I use MonoDevelop 2.4 on Ubuntu 10.10, with monodevelop-python and IronPython. When I create an empty Python project, MonoDevelop doesn't show the reference node. How can I add a new reference file?
MonoDevelop doesn't show reference node
0
0
1
0
0
124
7,200,745
2011-08-26T06:22:00.000
0
1
0
0
0
python,mysql,sqlite,web-hosting
0
7,200,805
0
1
0
false
0
0
This very much depends on how much control your webhost gives you over the Python environment. For normal shared hosting, the runtime Python environment is fixed.
1
1
0
0
My webhost, where our website runs, uses Python 2.4, and the website uses the sqlite3 Python module (which is only part of Python 2.5 and up). This means I can't use the sqlite3 module because it's not part of Python 2.4. Is there a way for me to upload the Python sqlite3 module myself and just import/reference that in my script? Do you know how I would do this? Usually I would just install Python 2.5 in my webhost home directory, but this webhost won't allow me to do that. Is there any way I can just upload and import a specific module? Coming from C++ it seems this must be possible, right? In C++ I spend my whole life writing libraries and just importing specific parts of them: specific classes of a namespace, and so on.
Importing a specific module thats not part of my current Python 2.X version
1
0
1
0
0
52
7,211,204
2011-08-26T22:36:00.000
3
0
0
0
0
c#,java,python,sql,database
0
7,211,297
0
1
1
true
0
0
A full-on database engine is a pretty serious undertaking. You're not going to sit down and have a complete engine next week, so I'd have thought you would want to write the SQL parser piecemeal: adding features to the parser as the features are supported in the engine. I'm guessing this is just something fun to do, rather than something you want working ASAP. Given that, I'd have thought writing an SQL parser is one of the best bits of the project! I've done lots of work with flat file database engines, because the response times required for queries don't allow a RDBMS. One of the most enjoyable bits has been adding support for SQL fragments in e.g. the UI, where response time isn't quite as vital. The implementation I work on is plain old C, but in fact from what I've seen, most relational databases are still written primarily in C. And there is something satisfying about writing these things in a really low level language :)
1
1
0
0
As a personal project, I have been developing my own database software in C#. Many current database systems can use SQL commands for queries. Is there anyone here that could point me in the right direction of implementing such a system in a database software written completely from scratch? For example a user familiar with SQL could enter a statement as a string into an application, that statement will be analyzed by my application and the proper query will be run. Does anyone have any experience with something like that here? This is probably a very unusual questions haha. Basically what I am asking, are there any tools available out there that can dissect SQL statements or will I have to write my own from scratch for that? Thanks in advance for any help! (I may transfer some of my stuff to Python and Java, so any potential answers need not be limited to C#) ALSO: I am not using any current SQL database or anything like that, my system is completely from scratch, I hope my question makes sense. Basically I want my application to be able to interface with programs that send SQL commands.
C# custom database engine, how to implement SQL
1
1.2
1
1
0
2,117
7,219,541
2011-08-28T07:12:00.000
8
0
1
0
0
javascript,python,syntax
0
7,219,710
0
7
0
false
0
0
Aside from the syntactical issues, it is partly cultural. In Python culture any extraneous characters are anathema, and those that are not whitespace or alphanumeric doubly so. So things like leading $ signs, semicolons, and curly braces are not liked. What you do in your code, though, is up to you, but to really understand a language it is not enough just to learn the syntax.
3
61
0
1
Python and JavaScript both allow developers to use or to omit semicolons. However, I've often seen it suggested (in books and blogs) that I should not use semicolons in Python, while I should always use them in JavaScript. Is there a technical difference between how the languages use semicolons or is this just a cultural difference?
What is the difference between semicolons in JavaScript and in Python?
0
1
1
0
0
6,980
7,219,541
2011-08-28T07:12:00.000
8
0
1
0
0
javascript,python,syntax
0
7,219,723
0
7
0
false
0
0
JavaScript is designed to "look like C", so semicolons are part of the culture. Python syntax is different enough to not make programmers feel uncomfortable if the semicolons are "missing".
3
61
0
1
Python and JavaScript both allow developers to use or to omit semicolons. However, I've often seen it suggested (in books and blogs) that I should not use semicolons in Python, while I should always use them in JavaScript. Is there a technical difference between how the languages use semicolons or is this just a cultural difference?
What is the difference between semicolons in JavaScript and in Python?
0
1
1
0
0
6,980
7,219,541
2011-08-28T07:12:00.000
7
0
1
0
0
javascript,python,syntax
0
7,219,731
0
7
0
false
0
0
The answer to why you don't see them in Python code is: no one needs them, and the code looks cleaner without them. Generally speaking, semicolons are just a tradition. Many newer languages have simply dropped them for good (take Python, Ruby, Scala, Go, Groovy, and Io, for example). Programmers don't need them, and neither do compilers. If a language lets you not type an extra character you never needed, you will want to take advantage of that, won't you? It's just that JavaScript's attempt to drop them wasn't very successful, and many prefer the convention of always using them, because that makes the code less ambiguous.
3
61
0
1
Python and JavaScript both allow developers to use or to omit semicolons. However, I've often seen it suggested (in books and blogs) that I should not use semicolons in Python, while I should always use them in JavaScript. Is there a technical difference between how the languages use semicolons or is this just a cultural difference?
What is the difference between semicolons in JavaScript and in Python?
0
1
1
0
0
6,980
7,225,684
2011-08-29T03:04:00.000
1
0
1
0
0
php,python,multiprocess
0
7,225,770
0
1
0
false
0
0
One way to do this would be through web services. You could have a PHP process sitting on a web server, running and calling a Python web service (using CherryPy) multiple times. You would pass variables using a standard object notation such as JSON, which can be efficiently encoded/decoded by both PHP and Python. You could also do the reverse of this, with a Python service making multiple calls to a PHP service. In both cases, since you're limited to a certain number of concurrent connections, you don't need to worry about handling the threads yourself.
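The JSON half of this suggestion is straightforward on the Python side with the stdlib json module; in a real setup these strings would travel over HTTP between the PHP caller and the Python (e.g. CherryPy) service. The field names here are made up for illustration:

```python
import json

# What the PHP side might send (produced by PHP's json_encode):
request_body = '{"task": "resize", "width": 800, "height": 600}'

# The Python service decodes it, does its work, and encodes a reply
# that PHP can read back with json_decode.
params = json.loads(request_body)
result = {"ok": True, "area": params["width"] * params["height"]}
response_body = json.dumps(result)
print(response_body)
```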
1
0
0
0
I asked a question before about how to do multiprocessing in Python (although I did not use the word multiprocessing in the former question, because I was not sure I was asking about that yet). Now I am wondering how to do multiprocessing when you have more than one language involved, particularly a PHP script that calls a Python script. In that case, how can both the PHP script and the Python script multiprocess and communicate with each other (sending variables back and forth) as they do?
Multiprocessing between different languages such as PHP and Python
0
0.197375
1
0
0
335
7,233,631
2011-08-29T17:33:00.000
0
0
0
0
0
python,firefox,selenium
0
7,267,936
0
2
0
false
0
0
Open Firefox with this profile (with profile manager), go to Firefox preferences, turn off updates - this works for me.
1
4
0
0
When I use selenium webdriver with a custom firefox profile I get the firefox Add-ons pop up showing 2 extensions: Firefx WebDriver 2.5.0 and Ubuntu Firefox Modifications 0.9rc2. How can I get rid of this popup? I looked in the server jar to see if I could find the extensions, no luck. Looked online for the extensions, no luck. When I run the code without using the custom profile there is no popup.
Using Selenium webdriver with a custom firefox profile - how to get rid of addins popup
0
0
1
0
1
1,461
7,250,126
2011-08-30T21:37:00.000
0
0
0
1
0
python
0
7,250,363
0
4
0
false
0
0
You will receive the process ID of the newly created process when you create it. At least, you will if you used fork() (Unix), posix_spawn(), CreateProcess() (Win32), or probably any other reasonable mechanism to create it. If you invoke the "python" binary, the Python PID will be the PID of the binary you invoke; it's not going to create another subprocess for itself (unless your Python code does that).
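For the common case where the main program launches the interpreter itself, subprocess hands you the child's PID directly; a quick sketch:

```python
import subprocess
import sys

# Launch a child interpreter (standing in for `python test.py`) and
# have it report its own PID so we can compare the two views.
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    stdout=subprocess.PIPE,
)
out, _ = child.communicate()          # waits for the child to exit
reported = int(out.decode().strip())  # the PID the child saw via os.getpid()
print(child.pid == reported)          # Popen.pid is that same process ID
```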
2
18
0
0
I am on Windows. Suppose I have a main Python program that calls the Python interpreter on the command line to execute another Python script, say test.py, so test.py is executed as a new process. How can I find the process ID for this process in Python? Update: to be more specific, we have os.getpid() in the os module; it returns the current process ID. But if I have a main program that runs the Python interpreter to run another script, how can I get the process ID for that executing script?
getting ProcessId within Python code
0
0
1
0
0
36,362
7,250,126
2011-08-30T21:37:00.000
0
0
0
1
0
python
0
16,131,293
0
4
0
false
0
0
Another option is for the process you execute to set a console window title for itself. The searching process can then enumerate all windows, find the relevant window handle by name, and use the handle to find the PID. This works on Windows using ctypes.
2
18
0
0
I am on Windows. Suppose I have a main Python program that calls the Python interpreter on the command line to execute another Python script, say test.py, so test.py is executed as a new process. How can I find the process ID for this process in Python? Update: to be more specific, we have os.getpid() in the os module; it returns the current process ID. But if I have a main program that runs the Python interpreter to run another script, how can I get the process ID for that executing script?
getting ProcessId within Python code
0
0
1
0
0
36,362
7,261,451
2011-08-31T18:12:00.000
3
1
0
1
1
python,mercurial,eclipse-plugin,osx-lion
1
7,278,773
0
2
0
false
1
0
Nobody answered me, but I figured out the answer. Maybe it will help someone. I finally realized that since 'hg -y debuginstall' at the command line was giving me the same error message, it wasn't an Eclipse problem at all (duh). Reinstalling a newer version of Mercurial solved the problem.
2
1
0
0
I'm on Mac OS X 10.7.1 (Lion). I just downloaded a fresh copy of Eclipse IDE for Java EE Developers, and installed the Mercurial plugin. I get the following error message: abort: couldn't find mercurial libraries in [...assorted Python directories...]. I do have Python 2.6.1 and 3.2.1 installed. I also have a directory System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7, which is on the list of places it looked for the Mercurial libraries. hg -y debuginstall gives me the same message. What are these libraries named, where is Eclipse likely to have put them when I installed the plugin, and how do I tell Eclipse where they are (or where should I move them to)? Thanks, Dave Full error message follows: abort: couldn't find mercurial libraries in [/usr/platlib/Library/Python/2.6/site-packages /usr/local/bin /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7 /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC /Library/Python/2.7/site-packages] (check your install and PYTHONPATH)
Mercurial plugin for Eclipse can't find Python--how to fix?
0
0.291313
1
0
0
1,116
7,261,451
2011-08-31T18:12:00.000
0
1
0
1
1
python,mercurial,eclipse-plugin,osx-lion
1
12,130,976
0
2
0
false
1
0
I had two installations of Mercurial on my Mac: one installed directly and another via MacPorts. Removing the direct installation solved the problem: 1) remove the direct installation using easy_install -m mercurial; 2) update the "Mercurial Executable" path to "/opt/local/bin/hg" under Eclipse > Preferences > Team > Mercurial; 3) restart Eclipse.
2
1
0
0
I'm on Mac OS X 10.7.1 (Lion). I just downloaded a fresh copy of Eclipse IDE for Java EE Developers, and installed the Mercurial plugin. I get the following error message: abort: couldn't find mercurial libraries in [...assorted Python directories...]. I do have Python 2.6.1 and 3.2.1 installed. I also have a directory System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7, which is on the list of places it looked for the Mercurial libraries. hg -y debuginstall gives me the same message. What are these libraries named, where is Eclipse likely to have put them when I installed the plugin, and how do I tell Eclipse where they are (or where should I move them to)? Thanks, Dave Full error message follows: abort: couldn't find mercurial libraries in [/usr/platlib/Library/Python/2.6/site-packages /usr/local/bin /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7 /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC /Library/Python/2.7/site-packages] (check your install and PYTHONPATH)
Mercurial plugin for Eclipse can't find Python--how to fix?
0
0
1
0
0
1,116
7,268,426
2011-09-01T09:26:00.000
1
0
1
0
0
python,eclipse,pydev
0
7,268,461
0
2
0
false
0
0
Did you try Rename? It seems to work in other language plugins. (Make sure to right-click on the module.)
2
1
0
0
I'm using Eclipse + PyDev. I want to refactor and change the name of my module. I clicked on my module in the package explorer, but there is no refactoring option (other than 'Rename') in the context menu. Similarly, the Refactoring entry in the top navigation menu is greyed out. So how do I change my module name and have it reflected across my project?
eclipse+pydev how to rename module?
0
0.099668
1
0
0
1,767
7,268,426
2011-09-01T09:26:00.000
2
0
1
0
0
python,eclipse,pydev
0
7,272,489
0
2
0
true
0
0
This feature is still not properly integrated in the pydev package explorer, so, for now, you have to find a place that uses the module you want in the code and rename that reference in the editor (you don't need to report that as a bug as that's already known and should be fixed soon).
2
1
0
0
I'm using Eclipse + PyDev. I want to refactor and change the name of my module. I clicked on my module in the package explorer, but there is no refactoring option (other than 'Rename') in the context menu. Similarly, the Refactoring entry in the top navigation menu is greyed out. So how do I change my module name and have it reflected across my project?
eclipse+pydev how to rename module?
0
1.2
1
0
0
1,767
7,279,761
2011-09-02T06:05:00.000
7
0
0
0
0
python,sql,performance
0
7,279,821
0
5
0
false
0
0
Let the DB figure out how best to retrieve the information that you want, else you'll have to duplicate the functionality of the RDBMS in your code, and that will be way more complex than your SQL queries. Plus, you'll waste time transferring all that unneeded information from the DB to your app, so that you can filter and process it in code. All this is true because you say you're dealing with large data.
3
9
0
0
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? Would it be that bad for performance? Note, results are huge too, I am talking about an ERP in production developed by other people.
Should I use complex SQL queries or process results in the application?
0
1
1
1
0
3,111
7,279,761
2011-09-02T06:05:00.000
3
0
0
0
0
python,sql,performance
0
7,280,826
0
5
0
true
0
0
I would keep the business logic in the application, as much as possible. Complex business logic in queries is difficult to maintain ("when I finish understanding one I have already forgotten how it all started"). Complex logic in stored procedures is OK, but with a typical Python application you would want your business logic in Python. Now, the database is way better at handling data than your application code, so if your logic involves huge amounts of data, you may get better performance with the logic in the database. But this will be for complex reports, bookkeeping operations and such that operate on a large volume of data. You may want to use stored procedures, or systems that specialize in such operations (a data warehouse for reports), for these types of operations. Normal OLTP operations do not involve much data. The database may be huge, but the data required for a typical transaction will be (typically) a very small part of it. Querying this in a large database may cause performance issues, but you can optimize it in several ways (indexes, full-text searches, redundancy, summary tables... it depends on your actual problem). Every rule has exceptions, but as a general guideline, try to have your business logic in your application code: stored procedures for complex logic, and a separate data warehouse or a set of procedures for reporting.
3
9
0
0
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? Would it be that bad for performance? Note, results are huge too, I am talking about an ERP in production developed by other people.
Should I use complex SQL queries or process results in the application?
0
1.2
1
1
0
3,111
7,279,761
2011-09-02T06:05:00.000
1
0
0
0
0
python,sql,performance
0
7,282,367
0
5
0
false
0
0
@Nivas is generally correct. These are pretty common patterns: Division of labour - the DBAs have to return all the data the business needs, but they only have a database to work with. The developers could work with the DBAs to do it better, but departmental responsibilities make it nearly impossible, so SQL that does more than retrieve data gets used. Lack of smaller functions - could the massive query be broken down into smaller stages, using working tables? Yes, but I have known environments where a new table needs reams of approvals, so a heavy query just gets written. So, in general, getting data out of the database is down to the database. But if a SQL query is too long it's going to be hard for the RDBMS to optimise, and it probably means the query spans data, business logic and even presentation in one go. I would suggest a saner approach is usually to separate out the "get me the data" portions into stored procedures or other controllable queries that populate staging tables. Then the business logic can be written in a scripting language sitting above and controlling the stored procedures, and presentation is left elsewhere. In essence, solutions like Cognos try to do this anyway. But if you are looking at an ERP in production, the constraints and the solutions above probably already exist - are you talking to the right people?
3
9
0
0
I am dealing with an application with huge SQL queries. They are so complex that when I finish understanding one I have already forgotten how it all started. I was wondering if it will be a good practice to pull more data from database and make the final query in my code, let's say, with Python. Am I nuts? Would it be that bad for performance? Note, results are huge too, I am talking about an ERP in production developed by other people.
Should I use complex SQL queries or process results in the application?
0
0.039979
1
1
0
3,111
7,285,135
2011-09-02T14:43:00.000
0
0
0
0
1
python,cx-oracle
0
7,530,424
0
1
0
true
0
0
Do you have an Oracle support contract? If so, I would file an SR, upload the trace to Oracle, and have them tell you what it is complaining about. Those code calls are deep in their codebase, from the looks of it.
1
1
0
0
My python application is dying, this oracle trace file is being generated. I am using cx_Oracle, how do I go about using this trace file to resolve this crash? ora_18225_139690296567552.trc kpedbg_dmp_stack()+360<-kpeDbgCrash()+192<-kpureq2()+3194<-OCIStmtPrepare2()+157<-Cursor_InternalPrepare()+298<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010<-0000000000EA3010
I have an Oracle Stack trace file Python cx_Oracle
0
1.2
1
1
0
653
7,293,290
2011-09-03T13:09:00.000
0
0
0
0
0
python,apache,memory-management,mod-wsgi
0
7,293,404
0
2
0
false
1
0
All Python globals are created when the module is imported. When the module is re-imported, the same globals are used. Python web servers typically do not use threading but pre-forked processes, so there are no threading issues with Apache. The lifecycle of Python processes under Apache depends: Apache has settings for how many child processes are spawned, kept in reserve, and killed. This means that you can use globals in Python processes for caching (an in-process cache), but the process may terminate after any request, so you cannot put any persistent data in the globals. The process does not necessarily need to terminate, though, and in this regard Python is much more efficient than PHP (the source code is not parsed for every request - but you need to have the server in reload mode to pick up source code changes during development). Since globals are per-process and there can be N processes, the processes share "web server global" state using mechanisms like memcached. Usually Python globals only contain: setting variables set during process initialization, and cached data (session/user neutral).
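The in-process cache the answer mentions is usually just a module-level global; a hypothetical sketch (all names made up):

```python
# A module-level dict survives between requests served by the same
# worker process, so it works as an in-process cache. The process may
# be recycled at any time, so nothing persistent belongs here.
_CACHE = {}

def get_expensive_value(key, compute):
    """Return a cached value, computing and storing it on first use."""
    if key not in _CACHE:
        _CACHE[key] = compute()
    return _CACHE[key]

calls = []
def compute():
    calls.append(1)  # track how often the "expensive" work really runs
    return 42

print(get_expensive_value("answer", compute))  # computed on first call
print(get_expensive_value("answer", compute))  # served from the cache
```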
1
4
0
0
In a regular application (like on Windows), when objects/variables are created on a global level it is available to the entire program during the entire duration the program is running. In a web application written in PHP for instance, all variables/objects are destroyed at the end of the script so everything has to be written to the database. a) So what about python running under apache/modwsgi? How does that work in regards to the memory? b) How do you create objects that persist between web page requests and how do you ensure there isn't threading issues in apache/modwsgi?
Memory model for apache/modwsgi application in python?
0
0
1
1
0
193
7,296,987
2011-09-04T02:02:00.000
1
0
1
0
0
python,transition,pygame,blit
0
7,300,851
0
3
0
false
0
1
Here, what is wanted is to share variables between two different applications: two different scripts with event loops, blits, etc. By definition, they would have to be in different processes (if they must run at the same time). There are two major ways of doing this: 1) a client-server architecture (like a game server would be; the server and client can both run on the same machine); 2) multiprocessing, with two processes running on the same machine and communicating/synchronizing the variables through Pipe, Queue, Event, etc. I understand that you're trying to do a kind of variable profiling of your game? If it is used to debug or test your game, I consider that you would need a lot of code to gain little useful information (because the game might run too fast for you to analyse the variables). You have these alternatives: 1) import pdb; pdb.set_trace() will stop the process at the line where you called the function, and in the terminal you can see the variables' values; 2) you can use Eclipse (with PyDev), a very good line-by-line debugger; 3) unittest and mock are something you should start to use, because you can easily see when you break some old code (with unittest) and/or test new code. I hope it helps you :)
2
1
0
0
Sorry if this question is incredibly easy to answer, or I sound like an idiot. I'm wondering how I would execute a script in one file (pygame event loop, blits, etc.), then switch to another file, SelectWorld.py, which has its own event loop, blits, and so on. If I just call its main function, does it create any slowdown because I still have the original file open, or am I fine just doing that? SelectWorld.transition() sort of thing. Thanks in advance.
Python/Pygame transitioning from one file to another
0
0.066568
1
0
0
790
7,296,987
2011-09-04T02:02:00.000
0
0
1
0
0
python,transition,pygame,blit
0
10,609,284
0
3
0
true
0
1
Turns out the answer to this was painfully simple, and I asked this back when I was just learning Python. There is no speed downgrade from just calling a function from another file and letting it do all the work. Thanks for all the answers, guys.
2
1
0
0
Sorry if this question is incredibly easy to answer, or I sound like an idiot. I'm wondering how I would execute a script in one file (pygame event loop, blits, etc.), then switch to another file, SelectWorld.py, which has its own event loop, blits, and so on. If I just call its main function, does it create any slowdown because I still have the original file open, or am I fine just doing that? SelectWorld.transition() sort of thing. Thanks in advance.
Python/Pygame transitioning from one file to another
0
1.2
1
0
0
790
7,303,309
2011-09-05T02:02:00.000
1
0
0
0
0
javascript,python,http,foursquare
0
7,354,603
0
2
0
false
0
0
The current API limits results to 50. You should try altering your coordinates to be more precise so that your venue isn't missed. Pagination would be nice, but 50 is a lot of venues for a search.
1
1
0
0
I am trying to get some locations in New York using FourSquare API using the following API call: https://api.foursquare.com/v2/venues/search?ll=40.7,-74&limit=50 What I don't understand is that if the call imposes a limit of 50 search results (which is the maximum), how can I get more locations? When using Facebook API, the results being returned were random so I could issue multiple calls to get more results but FourSquare seems to be returning the same result set. Is there a good way to get more locations? EDIT: Ok. So there was a comment saying that I could be breaking a contractual agreement and I am not sure why this would be the case. I would gladly accept a reasoning for this. My doubt is this: Let us say that hypothetically, the location I am searching for is not in the 50 results returned? In that case, shouldn't there be a pagination mechanism somewhere?
How do I get more locations?
1
0.099668
1
0
1
2,222
7,313,761
2011-09-06T00:08:00.000
-9
0
1
0
0
python,asynchronous,twisted,addition,arithmetic-expressions
0
20,604,305
0
3
0
false
0
0
Good question, and Twisted (or Python) should have a way to at least spawn "a + b" off to several cores (on my 8-core i7). Unfortunately the Python GIL prevents this from happening, meaning that you will have to wait, not only for the CPU-bound task, but for one core doing the job while the seven other cores do nothing. Note: maybe a better example would be "a() + b()", or even "fact(sqrt(a()**b()))" etc., but the important fact is that the above operation will lock one core, and the GIL pretty much prevents Python from doing anything else during that operation, which could take several ms...
1
48
0
0
I have two integers in my program; let's call them "a" and "b". I would like to add them together and get another integer as a result. These are regular Python int objects. I'm wondering; how do I add them together with Twisted? Is there a special performAsynchronousAddition function somewhere? Do I need a Deferred? What about the reactor? Is the reactor involved?
How do I add two integers together with Twisted?
0
-1
1
0
0
5,861
7,334,587
2011-09-07T13:23:00.000
3
1
0
1
0
php,python,process,centos,process-management
0
7,334,651
0
1
0
false
0
0
"Is there any way to keep these processes running in such a way that all the variables are saved and I can restart the script from where it stopped?" Yes. It's called creating a "checkpoint" or "memento". "I know I can program this" Good. Get started. Each problem is unique, so you have to create, save, and reload the mementos yourself. "but would prefer a generalised utility which could just keep these things running so that the script completed even if there were trivial errors" It doesn't generalize well. Not all variables can be saved. Only you know what's required to restart your process in a meaningful way. "Perhaps I need some sort of process-management tool?" Not really. "trivial errors e.g. string encoding issues" Usually, we find these by unit testing. That saves a lot of programming to work around the error. An ounce of prevention is worth a pound of silly work-arounds. "sometimes because the process seems to get killed by the server" What? You'd better find out why. An ounce of prevention is worth a pound of silly work-arounds.
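A checkpoint can be as little as pickling a progress dict and reloading it on restart. A minimal sketch (the file name and state layout are invented for illustration; the temp-file-plus-rename trick keeps a crash mid-write from corrupting the last good checkpoint):

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    # Write to a temp file first, then rename: os.replace is atomic,
    # so a crash mid-write never corrupts the last good checkpoint.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path, default):
    # Return the last saved state, or `default` on the first run.
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

ckpt = os.path.join(tempfile.mkdtemp(), "job.ckpt")
state = load_checkpoint(ckpt, {"next_line": 0})
state["next_line"] = 12345          # ... process lines, update progress ...
save_checkpoint(ckpt, state)

# after a crash and restart, the script resumes from the saved position
restored = load_checkpoint(ckpt, {"next_line": 0})
```

On restart, the script would seek past `restored["next_line"]` lines instead of starting over; what goes into the state dict is exactly the part that doesn't generalize.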
1
1
0
0
I need to run a bunch of long running processes on a CENTOS server. If I leave the processes (python/php scripts) to run sometimes the processes will stop running because of trivial errors eg. string encoding issues or sometimes because the process seems to get killed by the server. I try to use nohup and fire the jobs from the crontab Is there any way to keep these processes running in such a way that all the variables are saved and I can restart the script from where it stopped? I know I can program this into the code but would prefer a generalised utility which could just keep these things running so that the script completed even if there were trivial errors. Perhaps I need some sort of process-management tool? Many thanks for any suggestions
running really long scripts - how to keep them running and start them again if they fail?
0
0.53705
1
0
0
137
7,335,957
2011-09-07T14:53:00.000
0
0
1
0
0
python
0
7,336,599
0
2
0
false
0
0
You can do this by opening cmd.exe and typing "C:\Python32\python" there. The path depends on your Python version; mine is 3.2.
1
0
0
0
Is there a simple module that lets you paste input in Python? Asking someone to type letter by letter is kinda harsh. By default a .py file is opened with python.exe if it is installed, and this does not allow "rightclick+paste" in the console. So how can I make this happen with Python? I think this would be the more exact question.
Python raw_input() unable to paste in windows?
0
0
1
0
0
1,567
7,346,079
2011-09-08T09:44:00.000
0
0
0
0
0
python,sqlite
0
34,628,302
0
3
0
false
0
0
To follow up on Thilo's answer, as a data point, I have a sqlite table with 2.3 million rows. Using select count(*) from table, it took over 3 seconds to count the rows. I also tried using SELECT rowid FROM table (thinking that rowid is a default primary indexed key), but that was no faster. Then I made an index on one of the fields in the database (just an arbitrary field, but I chose an integer field because I knew from past experience that indexes on short fields can be very fast, I think because the index stores a copy of the value in the index itself). SELECT my_short_field FROM table brought the time down to less than a second.
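A self-contained sketch of the same setup, using an in-memory database and made-up table/column names for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val INTEGER NOT NULL)")
conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(1000)))

# An index on a short NOT NULL column gives COUNT(*) a compact b-tree
# to scan instead of the full table.
conn.execute("CREATE INDEX idx_t_val ON t (val)")

(count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
```

On a table this small the difference is invisible; the index only pays off at the millions-of-rows scale described above.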
3
2
0
0
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
Fast number of rows in Sqlite
0
0
1
1
0
3,251
7,346,079
2011-09-08T09:44:00.000
1
0
0
0
0
python,sqlite
0
7,346,136
0
3
0
false
0
0
Do you have any kind of index on a not-null column (for example a primary key)? If yes, the index can be scanned (which hopefully does not take that long). If not, a full table scan is the only way to count all rows.
3
2
0
0
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
Fast number of rows in Sqlite
0
0.066568
1
1
0
3,251
7,346,079
2011-09-08T09:44:00.000
1
0
0
0
0
python,sqlite
0
7,346,821
0
3
0
false
0
0
Another way to get the number of rows in a table is to use a trigger that stores the current row count in another table (each insert operation increments a counter). This makes inserting a new record a little slower, but you can get the number of rows immediately.
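A minimal sketch of the trigger approach with SQLite (table and trigger names invented for illustration; a delete trigger is added so the counter stays correct both ways):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (val INTEGER);
    CREATE TABLE row_count (n INTEGER);
    INSERT INTO row_count VALUES (0);

    -- keep the counter in sync on every insert and delete
    CREATE TRIGGER t_ins AFTER INSERT ON t
    BEGIN UPDATE row_count SET n = n + 1; END;
    CREATE TRIGGER t_del AFTER DELETE ON t
    BEGIN UPDATE row_count SET n = n - 1; END;
""")

conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(50)))
conn.execute("DELETE FROM t WHERE val < 10")

# reading the count is now a single-row lookup, not a table scan
(n,) = conn.execute("SELECT n FROM row_count").fetchone()
```

SQLite triggers fire per affected row, so the bulk DELETE above decrements the counter ten times.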
3
2
0
0
I have a single table in an Sqlite DB, with many rows. I need to get the number of rows (total count of items in the table). I tried select count(*) from table, but that seems to access each row and is super slow. I also tried select max(rowid) from table. That's fast, but not really safe -- ids can be re-used, table can be empty etc. It's more of a hack. Any ideas on how to find the table size quickly and cleanly? Using Python 2.5's sqlite3 version 2.3.2, which uses Sqlite engine 3.4.0.
Fast number of rows in Sqlite
0
0.066568
1
1
0
3,251
7,351,744
2011-09-08T16:56:00.000
122
0
1
0
0
python,string
0
7,351,789
0
3
0
true
0
0
Use rpartition(s). It does exactly that. You can also use rsplit(s, 1).
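Applied to the question's example string, both methods give the two pieces directly:

```python
s = "a b c,d,e,f"

# rpartition splits on the LAST occurrence and also returns the separator
head, sep, tail = s.rpartition(",")

# rsplit with maxsplit=1 gives just the two pieces
left, right = s.rsplit(",", 1)
```

The difference: if the separator is absent, rpartition returns ("", "", s) while rsplit returns the whole string as a one-element list.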
1
103
0
0
I would like to know if there is any built-in function in Python to break a string into 2 parts, based on the last occurrence of a separator. For example, consider the string "a b c,d,e,f"; after the split over the separator ",", I want the output as "a b c,d,e" and "f". I know how to manipulate the string to get the desired output, but I want to know if there is any built-in function in Python.
split string in to 2 based on last occurrence of a separator
0
1.2
1
0
0
62,919
7,358,224
2011-09-09T07:05:00.000
0
0
0
0
0
python
0
7,358,989
0
2
0
false
0
0
First I'd check if the browser has some kind of command line argument which could print such information. I only checked Opera and it doesn't have one. What you could do is parse the session file. I'd bet that every browser stores a list of opened tabs/windows on disk (so it can recover after a crash). Opera has this information in ~/.opera/sessions/autosave.win. It's a pretty straightforward text file. Find other browsers' session files in .mozilla, .google, etc., or if you are on Windows in the /user/ directories. There might be commands to ask a running instance for its working directory (as you can specify it on startup and it doesn't have to be the default one). That's the way I'd go. Might be the wrong one.
1
1
0
0
I would like to ask how I can get a list of URLs which are opened in my web browser, for example in Firefox. I need it in Python. Thanks
Get url from webbrowser in python
0
0
1
0
1
1,872
7,368,119
2011-09-09T22:29:00.000
0
0
1
0
0
python,functional-programming
0
7,391,685
0
3
0
false
0
0
It's not a literal answer to your question, but I'd recommend to your friend to practice in Javascript instead of python. With python you can do some functional programming, but most projects don't need to do much if any. Javascript really requires doing this, and is at least as common/useful of a language these days. You'll find a lot more useful educational material on closures in javascript than python.
1
7
0
1
I recommended to a friend to learn some functional programming using Python to expand his knowledge and overcome programmer's fatigue. I chose Python because that way there's a good chance he'll be able to use the new knowledge in practical daily work. I tried to find him some tutorials, and found a lot of guides - diving deep into how to use map, reduce, filter, etc., but don't provide exercises where he can learn while coding. Where can I find a tutorial that uses functional python to solve problems while teaching? An optimal answer for me would be homework from a functional programming course, that needs to be written in Python. Such a thing is probably rare because an academic course will usually prefer a purer functional language for such work.
What good homework style tutorials are recommended for learning functional programming in Python?
0
0
1
0
0
539
7,371,442
2011-09-10T11:39:00.000
1
1
0
0
0
python,sockets,proxy
0
7,378,232
0
1
0
false
0
0
If it's a HTTP traffic, you can scan for headers like X-Forwarded-For. But whatever you do it will always be only a heuristic.
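A rough sketch of that heuristic (the header list is illustrative, not exhaustive, and remember such headers are trivially spoofed or omitted by anonymizing proxies):

```python
# Common headers that forward proxies tend to add; purely heuristic.
PROXY_HEADERS = ("x-forwarded-for", "via", "forwarded", "x-proxy-id")

def looks_like_proxy(headers):
    """Return True if any typical proxy header is present (case-insensitive)."""
    lowered = {name.lower() for name in headers}
    return any(h in lowered for h in PROXY_HEADERS)
```

In a web app you would pass in the request's header mapping; a missing header proves nothing, which is exactly why this can only ever be a heuristic.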
1
2
0
0
Is there any way to find out, in Python, whether an IP address connecting to the server is a proxy? I tried to scan the most common ports, but I don't want to ban all IPs with port 80 open, because that doesn't have to be a proxy. Is there any way to do it in Python? I would prefer that over using some external/paid services.
Python - Determine if ip is proxy or not
1
0.197375
1
0
1
328
7,375,572
2011-09-11T00:33:00.000
2
0
0
0
1
python,postgresql,psycopg2
1
7,378,101
0
2
0
true
0
0
Can you paste in the data from the row that's causing the problem? At a guess I'd say it's a badly formatted date entry, but hard to say. (Can't comment, so has to be in a answer...)
2
0
0
0
I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows. somerows = cursorToFetchData.fetchmany(30000) psycopg2.DataError: invalid value "LÃ" for "DD" DETAIL: Value must be an integer. My problem is that I have no column named "DD", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. I do not understand how Psycopg2 can have any requirements about the datatype while fetching rows.
Cryptic Psycopg2 error message
0
1.2
1
1
0
176
7,375,572
2011-09-11T00:33:00.000
1
0
0
0
1
python,postgresql,psycopg2
1
40,247,155
0
2
0
false
0
0
This is not a psycopg error, it is a postgres error. After the error is raised, take a look at cur.query to see the query generated. Copy and paste it into psql and you'll see the same error. Then debug it from there.
2
0
0
0
I am using Psycopg2 with PostgreSQL 8.4. While reading from a huge table, I suddenly get this cryptic error at the following line of code, after this same line of code has successfully fetched a few hundred thousand rows. somerows = cursorToFetchData.fetchmany(30000) psycopg2.DataError: invalid value "LÃ" for "DD" DETAIL: Value must be an integer. My problem is that I have no column named "DD", and about 300 columns in that table (I know 300 columns is a design flaw). I would appreciate a hint about the meaning of this error message, or how to figure out where the problem lies. I do not understand how Psycopg2 can have any requirements about the datatype while fetching rows.
Cryptic Psycopg2 error message
0
0.099668
1
1
0
176
7,378,398
2011-09-11T13:11:00.000
1
0
0
0
1
python,django
0
7,378,820
0
4
0
false
1
0
I would also browse the documentation for Paste and read a bit from Ian Bicking. He lays out the conceptual blocks, so to speak, quite well, and has blogged the lessons learned as it was developed. The Web2py docs too, but yes, as UKU said: WSGI is a modern requirement.
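As a concrete starting point, here is a minimal WSGI application, the interface every modern Python framework (Django included) ultimately builds on. The request dict and the stub callback below just simulate what a real server passes in:

```python
def app(environ, start_response):
    # The entire server/framework contract: take a request environment dict,
    # report status and headers via the callback, return an iterable of bytes.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from bare WSGI\n"]

# Exercise it the way a server would, without opening a socket.
captured = {}

def fake_start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

body = b"".join(app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"},
                    fake_start_response))
```

Routing, templating, and ORMs are all layers that eventually dispatch to a callable shaped like this one, which is why understanding WSGI first makes the rest of a framework's design fall into place.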
1
14
0
0
I'm just wondering what knowledge or techniques are needed to make a web framework like Django, so that the web framework can serve as cloud computing (a website can be scaled horizontally by sending some work that must be done to other servers) when needed, and can be used to build a website fast, like Django, if a developer wants to build just a simple website. Sorry, my English is very awkward because I'm Korean. Just give me some approaches or instructions about what techniques are needed to build a web framework, or what I have to do or learn. Thanks a lot.
how to make web framework based on Python like django?
0
0.049958
1
0
0
12,556
7,381,258
2011-09-11T21:08:00.000
2
0
0
0
1
python,memory,io
0
7,381,424
0
11
1
false
0
0
Two ideas: Use numpy arrays to represent vectors. They are much more memory-efficient, at the cost that they will force elements of the vector to be of the same type (all ints or all doubles...). Do multiple passes, each with a different set of vectors. That is, choose first 1M vectors and do only the calculations involving them (you said they are independent, so I assume this is viable). Then another pass over all the data with second 1M vectors. It seems you're on the edge of what you can do with your hardware. It would help if you could describe what hardware (mostly, RAM) is available to you for this task. If there are 100k vectors, each of them with 1M ints, this gives ~370GB. If multiple passes method is viable and you've got a machine with 16GB RAM, then it is about ~25 passes -- should be easy to parallelize if you've got a cluster.
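numpy ndarrays are the usual choice here; as a dependency-free illustration of the same point (contiguous raw storage versus a list of boxed int objects), the stdlib array module shows the size difference:

```python
import sys
from array import array

n = 100_000
as_list = list(range(n))
as_array = array("l", range(n))   # one machine word per element, no per-int object

# Rough estimate: the list's pointer array plus each referenced int object.
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)
```

On CPython the boxed version is several times larger, which is exactly the gap numpy closes (while also forcing one element type, as noted above).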
4
17
1
0
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line) Process the data on each line. From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors. These vectors must all be saved to disk in some format or other. Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4) Parenthesis: Technical Details In case the actual procedure for building vectors affects the solution: For each line in the corpus, one or more vectors must have its basis weights updated. If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). Vectors do not depend on each other, nor does it matter which order the corpus lines are read in. Attempted Solutions There are three extrema when it comes to how to do this: I could build all the vectors in memory. Then write them to disk. I could build all the vectors directly on the disk, using shelf of pickle or some such library. I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector. All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons. 
Goals A good solution would involve: Building as much as possible in memory. Once memory is full, dump everything to disk. If bits are needed from disk again, recover them back into memory to add stuff to those vectors. Go back to 1 until all vectors are built. The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing. Question Does anyone know how to go about solving this sort of problem? Is Python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it? Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way. Additional Details The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus. Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does Python offer any mechanism to determine how much RAM is available?
Minimising reading from and writing to disk in Python for a memory-heavy operation
1
0.036348
1
0
0
3,167
7,381,258
2011-09-11T21:08:00.000
1
0
0
0
1
python,memory,io
0
7,381,462
0
11
1
false
0
0
Use a database. That problem seems large enough that language choice (Python, Perl, Java, etc) won't make a difference. If each dimension of the vector is a column in the table, adding some indexes is probably a good idea. In any case this is a lot of data and won't process terribly quickly.
4
17
1
0
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line) Process the data on each line. From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors. These vectors must all be saved to disk in some format or other. Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4) Parenthesis: Technical Details In case the actual procedure for building vectors affects the solution: For each line in the corpus, one or more vectors must have its basis weights updated. If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). Vectors do not depend on each other, nor does it matter which order the corpus lines are read in. Attempted Solutions There are three extrema when it comes to how to do this: I could build all the vectors in memory. Then write them to disk. I could build all the vectors directly on the disk, using shelf of pickle or some such library. I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector. All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons. 
Goals A good solution would involve: Building as much as possible in memory. Once memory is full, dump everything to disk. If bits are needed from disk again, recover them back into memory to add stuff to those vectors. Go back to 1 until all vectors are built. The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing. Question Does anyone know how to go about solving this sort of problem? Is Python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it? Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way. Additional Details The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus. Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does Python offer any mechanism to determine how much RAM is available?
Minimising reading from and writing to disk in Python for a memory-heavy operation
1
0.01818
1
0
0
3,167
7,381,258
2011-09-11T21:08:00.000
0
0
0
0
1
python,memory,io
0
7,433,853
0
11
1
false
0
0
Split the corpus evenly in size between parallel jobs (one per core) - process in parallel, ignoring any incomplete line (or, if you cannot tell whether it is incomplete, ignore the first and last line that each job processes). That's the map part. Use one job to merge the 20+ sets of vectors from each of the earlier jobs - that's the reduce step. You stand to lose information from 2*N lines, where N is the number of parallel processes, but you gain by not adding complicated logic to try and capture these lines for processing.
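The even split can be sketched as byte-offset ranges (the helper name is made up; a real version would then have each worker seek to its start offset and apply the skip-partial-lines rule above):

```python
def byte_ranges(total_size, n_jobs):
    """Partition [0, total_size) into n_jobs contiguous, non-overlapping ranges."""
    step = total_size // n_jobs
    ranges, start = [], 0
    for i in range(n_jobs):
        # the last job absorbs the remainder so no bytes are dropped
        end = total_size if i == n_jobs - 1 else start + step
        ranges.append((start, end))
        start = end
    return ranges
```

For a 30 GB corpus on 20 cores this is 20 calls to seek() rather than one process pre-reading the whole file to hand out lines.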
4
17
1
0
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line) Process the data on each line. From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors. These vectors must all be saved to disk in some format or other. Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4) Parenthesis: Technical Details In case the actual procedure for building vectors affects the solution: For each line in the corpus, one or more vectors must have its basis weights updated. If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). Vectors do not depend on each other, nor does it matter which order the corpus lines are read in. Attempted Solutions There are three extrema when it comes to how to do this: I could build all the vectors in memory. Then write them to disk. I could build all the vectors directly on the disk, using shelf of pickle or some such library. I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector. All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons. 
Goals A good solution would involve: Building as much as possible in memory. Once memory is full, dump everything to disk. If bits are needed from disk again, recover them back into memory to add stuff to those vectors. Go back to 1 until all vectors are built. The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing. Question Does anyone know how to go about solving this sort of problem? Is Python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it? Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way. Additional Details The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus. Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does Python offer any mechanism to determine how much RAM is available?
Minimising reading from and writing to disk in Python for a memory-heavy operation
1
0
1
0
0
3,167
7,381,258
2011-09-11T21:08:00.000
0
0
0
0
1
python,memory,io
0
7,381,527
0
11
1
false
0
0
From another comment I infer that your corpus fits into memory, and you have some cores to throw at the problem, so I would try this: Find a method to have your corpus in memory. This might be a sort of RAM disk with a file system, or a database. No idea which one is best for you. Have a smallish shell script monitor RAM usage, and spawn another process of the following every second, as long as there is x memory left (or, if you want to make things a bit more complex, y I/O bandwidth to disk): iterate through the corpus and build and write some vectors. In the end you can collect and combine all vectors, if needed (this would be the reduce part).
4
17
1
0
Background I am working on a fairly computationally intensive project for a computational linguistics project, but the problem I have is quite general and hence I expect that a solution would be interesting to others as well. Requirements The key aspect of this particular program I must write is that it must: Read through a large corpus (between 5G and 30G, and potentially larger stuff down the line) Process the data on each line. From this processed data, construct a large number of vectors (dimensionality of some of these vectors is > 4,000,000). Typically it is building hundreds of thousands of such vectors. These vectors must all be saved to disk in some format or other. Steps 1 and 2 are not hard to do efficiently: just use generators and have a data-analysis pipeline. The big problem is operation 3 (and by connection 4) Parenthesis: Technical Details In case the actual procedure for building vectors affects the solution: For each line in the corpus, one or more vectors must have its basis weights updated. If you think of them in terms of python lists, each line, when processed, updates one or more lists (creating them if needed) by incrementing the values of these lists at one or more indices by a value (which may differ based on the index). Vectors do not depend on each other, nor does it matter which order the corpus lines are read in. Attempted Solutions There are three extrema when it comes to how to do this: I could build all the vectors in memory. Then write them to disk. I could build all the vectors directly on the disk, using shelf of pickle or some such library. I could build the vectors in memory one at a time and writing it to disk, passing through the corpus once per vector. All these options are fairly intractable. 1 just uses up all the system memory, and it panics and slows to a crawl. 2 is way too slow as IO operations aren't fast. 3 is possibly even slower than 2 for the same reasons. 
Goals A good solution would involve: Building as much as possible in memory. Once memory is full, dump everything to disk. If bits are needed from disk again, recover them back into memory to add stuff to those vectors. Go back to 1 until all vectors are built. The problem is that I'm not really sure how to go about this. It seems somewhat unpythonic to worry about system attributes such as RAM, but I don't see how this sort of problem can be optimally solved without taking this into account. As a result, I don't really know how to get started on this sort of thing. Question Does anyone know how to go about solving this sort of problem? Is Python simply not the right language for this sort of thing? Or is there a simple solution to maximise how much is done from memory (within reason) while minimising how many times data must be read from the disk, or written to it? Many thanks for your attention. I look forward to seeing what the bright minds of stackoverflow can throw my way. Additional Details The sort of machine this problem is run on usually has 20+ cores and ~70G of RAM. The problem can be parallelised (à la MapReduce) in that separate vectors for one entity can be built from segments of the corpus and then added to obtain the vector that would have been built from the whole corpus. Part of the question involves determining a limit on how much can be built in memory before disk-writes need to occur. Does Python offer any mechanism to determine how much RAM is available?
Minimising reading from and writing to disk in Python for a memory-heavy operation
1
0
1
0
0
3,167
7,385,037
2011-09-12T08:18:00.000
0
0
1
0
0
python,class,logging
0
7,389,220
0
5
0
false
0
0
I personally just tend to name my loggers after classes, as it makes it much easier to track down where a particular message came from. So you can have a root logger named "top", and for the module "a" and class "testclass", I name my logger "top.a.testclass". I don't see the need to otherwise retrieve the class name, since the log message should give you all the information you need. Regarding @ed's response above: it feels very unpythonic to me, and it is not something I would be comfortable using in production code.
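A minimal sketch of that naming convention (class and attribute names invented for illustration): building the dotted name from the module plus the class gives the "top.a.testclass" hierarchy automatically, and logging's dotted-name propagation then works for free.

```python
import logging

class TestClass:
    def __init__(self):
        # build the dotted logger name from the module and the class itself,
        # so every message is traceable to its origin
        self.logger = logging.getLogger(
            f"{__name__}.{type(self).__name__}"
        )

obj = TestClass()
```

Because `type(self).__name__` is used, subclasses get their own logger name without any extra code.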
1
24
0
0
If I want the function name I can simply include %(funcName)s in the Formatter. But how do I get the name of the class containing the logging call instead? I've gone through the documentation for logging, but I can't find any mentioning of it.
How do I get the name of the class containing a logging call in Python?
0
0
1
0
0
30,677
7,389,417
2011-09-12T14:25:00.000
0
0
0
0
0
python,user-interface,wxpython,wxwidgets
0
7,389,557
0
1
0
false
0
1
I suggest that you create three panels, side by side. When one of the panels is resized by the user, you will have to adjust the size of the other panels to compensate - so that there are no gaps or overlaps. You can do this by handling the resize event, probably in the parent window of the three panels. Another way, which requires you to write less code, would be to use wxGrid with one row and three columns and zero-width labels for columns and rows. You will lose the flexibility of panels, but wxGrid will look after the resizing of the column widths for you.
1
0
0
0
I'm trying to figure out how I can have a 3-column layout where the (smaller) left and right columns are resizable, with a draggable separator on each side of the center/main area. I've tried using splitwindow, but that seems to only split into two parts. I hope someone can give me pointers on how it can be done.
WxWidget/WxPython; 3 column resizable layout
0
0
1
0
0
383
7,390,016
2011-09-12T15:08:00.000
2
0
1
0
0
python
0
7,390,053
0
4
0
false
0
0
Don't you have readline enabled? You can look through your interpreter history to find what you want. I think it's easier than digging through globals() or dir().
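A sketch of that history walk, assuming the readline module is available (as it usually is in a Unix CPython shell); the history entry here is added just to simulate a past interactive session:

```python
import readline

# Simulate an entry that would already be in a real shell's history
readline.add_history("important_list = [1, 2, 3]")

# Walk the history backwards looking for assignment statements
found = None
for i in range(readline.get_current_history_length(), 0, -1):
    item = readline.get_history_item(i)   # history indices are 1-based
    if item and "=" in item:
        found = item
        break
print(found)
```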
2
3
0
0
I work in a Python shell, and some weeks ago I defined a variable which refers to a very important list. The shell has stayed open the whole time, but I have forgotten the name. How do I get a list of all global names I have ever defined?
Python - how to get a list of all global names I have ever defined?
0
0.099668
1
0
0
1,597
7,390,016
2011-09-12T15:08:00.000
2
0
1
0
0
python
0
7,390,036
0
4
0
false
0
0
You can examine globals(), which shows all the module-level variables, or locals(), which is the local scope. In the prompt, these are the same. Also, vars() shows all the names available to you, no matter where you are.
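For example, filtering the double-underscore names out of globals() leaves just the names you defined yourself (my_important_list here stands in for the forgotten variable):

```python
# Pretend this was defined weeks ago in the still-open shell
my_important_list = [1, 2, 3]

# Everything defined at top level, minus Python's own __dunder__ names
user_names = [name for name in globals() if not name.startswith("__")]
print(user_names)
```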
2
3
0
0
I work in a Python shell, and some weeks ago I defined a variable which refers to a very important list. The shell has stayed open the whole time, but I have forgotten the name. How do I get a list of all global names I have ever defined?
Python - how to get a list of all global names I have ever defined?
0
0.099668
1
0
0
1,597
7,392,676
2011-09-12T19:05:00.000
0
1
0
0
1
c#,python,web-services,rpc
0
7,392,759
0
1
0
false
0
0
As John wrote, you're quite late if it's urgent, and your description is quite vague. There are 1001 RPC techniques and the choice depends on the details. But taking into account that you seem to just exchange some XML data, you probably don't need a full RPC implementation. You can write an HTTP server in Python with just a few lines of code. If it needs to be a bit more stable and long-running, have a look at Twisted. Then just use plain HTTP and the WebClient class. Not a perfect solution, but it worked out quite well for me more than once. And you said it's urgent! ;-)
1
0
0
0
I am basically new to this kind of work. I am programming my application in C# in VS2010. I have a crystal report that is working fine; it basically gets populated with some XML data. That XML data is coming from another application, written in Python, on another machine. That Python script generates some data and puts it on a memory stream. I basically have to read that memory stream and write my XML, which is used to populate my crystal report. My supervisor wants me to use a remote procedure call. I have never done any remote procedure calling, but as far as I have researched and understood, I mainly have to develop a web or WCF service, I guess. I don't know how I should do it. We are planning to use the HTTP protocol. So, this is how it is supposed to work: I give them the URL of my service, they call that service, and my service should try to read the data they put on the memory stream. After reading the data I should use part of it to write my XML, and this XML is used to populate my crystal report. The other part of the data (other than the data used to write the XML) should be sent to a database on the SQL Server. This is my complete problem definition. I need ideas and links that will help me in solving this problem.
Remote procedure call in C#
0
0
1
0
1
779
7,398,343
2011-09-13T07:34:00.000
0
1
0
0
0
python,automation,citrix,packet-injection
0
10,010,649
0
1
0
false
0
0
After a lot of research: it can't be done. Some manipulation is possible with the ICA COM object, like changing window focus.
1
0
0
0
Most of my job is on a Citrix ICA app. I work in a Windows environment. Among other things, I have to print 300 reports from my app weekly, and I am trying to automate this task. I was using a screenshot automation tool called Sikuli, but it is not portable from station to station. I thought I might be able to inject packets and send the commands on that level, but I was not able to read the packets I captured with Wireshark or do anything sensible with them. I have experience with Python, and if I get pointed in the right direction, I am pretty sure I can pull something off. Does anyone have any ideas on how to do this (I am leaning towards packet injection at the moment, but am open to ideas)? Thanks for the help, Sam
citrix GUI automation or packet injection?
0
0
1
0
1
978
7,406,102
2011-09-13T17:40:00.000
0
0
1
0
0
python,cocoa,filenames,pyobjc
0
71,761,675
0
13
0
false
0
0
Extra note for all the other answers: add a hash of the original string to the end of the filename. It will prevent collisions in case your conversion produces the same filename from different strings.
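A sketch of that idea (whitelist plus hash suffix; the allowed-character set, hash algorithm, and hash length are arbitrary choices):

```python
import hashlib
import re

def safe_filename(s, maxlen=80):
    """Keep only whitelisted characters, then append a short hash of
    the original string so distinct inputs cannot collide."""
    cleaned = re.sub(r"[^A-Za-z0-9 ._-]", "_", s)[:maxlen]
    digest = hashlib.sha1(s.encode("utf-8")).hexdigest()[:8]
    return "%s_%s" % (cleaned, digest)

print(safe_filename("bad/name:with*chars"))
```

Two strings that clean to the same text ("a/b" and "a:b", say) still get different filenames, because their hashes differ.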
1
64
0
0
I want to create a sane/safe filename (i.e. somewhat readable, no "strange" characters, etc.) from some random Unicode string (which might contain just anything). (It doesn't matter for me whether the function is Cocoa, ObjC, Python, etc.) Of course, there might be infinite many characters which might be strange. Thus, it is not really a solution to have a blacklist and to add more and more to that list over the time. I could have a whitelist. However, I don't really know how to define it. [a-zA-Z0-9 .] is a start but I also want to accept unicode chars which can be displayed in a normal way.
Create (sane/safe) filename from any (unsafe) string
0
0
1
0
0
48,106
7,408,089
2011-09-13T20:30:00.000
0
0
1
0
0
python,binary
0
7,408,259
0
4
0
false
0
0
Use open() with 'rb' as the mode flag; that reads the file in binary mode.
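A sketch of the full round trip (Python 3, where iterating over a bytes object yields integers):

```python
def bytes_to_bits(data):
    # Each byte becomes its 8-digit binary representation
    return "".join(format(b, "08b") for b in data)

def file_to_bits(path):
    # 'rb' is the important part: read raw bytes, not decoded text
    with open(path, "rb") as f:
        return bytes_to_bits(f.read())

print(bytes_to_bits(b"\x00\xff"))  # 0000000011111111
```

Calling file_to_bits("program.exe") would give the whole executable as one long string of 1s and 0s.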
1
0
0
1
I seem to be able to find information on how to do this in C#, but not on how to perform the same operation in Python. Any advice or suggestions would be appreciated. Thank you very much.
In Python, how do I convert a .exe file to a string of 1s and 0s?
0
0
1
0
0
1,453
7,412,941
2011-09-14T07:51:00.000
0
0
1
0
0
python
0
7,414,084
0
3
0
false
0
0
It might be a good idea to use a cron job. You can edit the cron table with crontab -e and add a line like this (called every 20 minutes): */20 * * * * /usr/bin/python /home/toto/my_script.py
3
0
0
0
What is a good way to call a function at a datetime in Python? There are 3 ways that I know of: "Are we there yet?!" (check if the date has passed, at an interval (time.sleep(interval))). This is obviously bad. It can only be precise if the interval is low, which becomes inefficient. Sleep off the difference (time.sleep(timedelta.seconds)). This is better, but I don't like the idea of putting a thread to sleep for an insanely long time, e.g. 6 months if such is the date. Hybrid between the two above; sleep off the difference if the difference is below the interval. If above, sleep for an interval to prevent long sleeps. I think this is the best of all three when you consider long sleeps, but the interval seems bad anyway. Are there any more ways you can think of? Is there anything in the standard library that can help me call a function at a datetime behind the scenes? EDIT: I'm asking this because I've actually developed my own Cron implementation in Python. The only problem is that I can't decide how my code should wait for the next occurrence. One of the differences between my implementation and the original Cron is support for seconds, so simply sleeping for the minimum possible interval (1 second in my case) is too inefficient. I realize now that this question could perhaps be changed to "how does Cron do this?", i.e. "how does Cron check if any date has passed? Does it run constantly, or is a process run each minute?". I believe the latter is the case, which, again, is inefficient if the interval is 1 second. Another difference is that my code reads the crontab once and calculates the exact date (datetime object) of the next occurrence from the pattern, while Cron, I assume, simply checks each minute whether any pattern from the crontab matches the current time. I'll stick to the "hybrid" way if there's no other way to do this.
Calling a function at datetime in Python
1
0
1
0
0
199
7,412,941
2011-09-14T07:51:00.000
0
0
1
0
0
python
0
7,413,538
0
3
0
false
0
0
I use the gobject main loop, but any library with an event loop should have this ability.
3
0
0
0
What is a good way to call a function at a datetime in Python? There are 3 ways that I know of: "Are we there yet?!" (check if the date has passed, at an interval (time.sleep(interval))). This is obviously bad. It can only be precise if the interval is low, which becomes inefficient. Sleep off the difference (time.sleep(timedelta.seconds)). This is better, but I don't like the idea of putting a thread to sleep for an insanely long time, e.g. 6 months if such is the date. Hybrid between the two above; sleep off the difference if the difference is below the interval. If above, sleep for an interval to prevent long sleeps. I think this is the best of all three when you consider long sleeps, but the interval seems bad anyway. Are there any more ways you can think of? Is there anything in the standard library that can help me call a function at a datetime behind the scenes? EDIT: I'm asking this because I've actually developed my own Cron implementation in Python. The only problem is that I can't decide how my code should wait for the next occurrence. One of the differences between my implementation and the original Cron is support for seconds, so simply sleeping for the minimum possible interval (1 second in my case) is too inefficient. I realize now that this question could perhaps be changed to "how does Cron do this?", i.e. "how does Cron check if any date has passed? Does it run constantly, or is a process run each minute?". I believe the latter is the case, which, again, is inefficient if the interval is 1 second. Another difference is that my code reads the crontab once and calculates the exact date (datetime object) of the next occurrence from the pattern, while Cron, I assume, simply checks each minute whether any pattern from the crontab matches the current time. I'll stick to the "hybrid" way if there's no other way to do this.
Calling a function at datetime in Python
1
0
1
0
0
199
7,412,941
2011-09-14T07:51:00.000
1
0
1
0
0
python
0
7,413,202
0
3
0
false
0
0
If this is something that might be six months out, as you said, a cron job is probably more suitable than keeping a Python program running the whole time.
3
0
0
0
What is a good way to call a function at a datetime in Python? There are 3 ways that I know of: "Are we there yet?!" (check if the date has passed, at an interval (time.sleep(interval))). This is obviously bad. It can only be precise if the interval is low, which becomes inefficient. Sleep off the difference (time.sleep(timedelta.seconds)). This is better, but I don't like the idea of putting a thread to sleep for an insanely long time, e.g. 6 months if such is the date. Hybrid between the two above; sleep off the difference if the difference is below the interval. If above, sleep for an interval to prevent long sleeps. I think this is the best of all three when you consider long sleeps, but the interval seems bad anyway. Are there any more ways you can think of? Is there anything in the standard library that can help me call a function at a datetime behind the scenes? EDIT: I'm asking this because I've actually developed my own Cron implementation in Python. The only problem is that I can't decide how my code should wait for the next occurrence. One of the differences between my implementation and the original Cron is support for seconds, so simply sleeping for the minimum possible interval (1 second in my case) is too inefficient. I realize now that this question could perhaps be changed to "how does Cron do this?", i.e. "how does Cron check if any date has passed? Does it run constantly, or is a process run each minute?". I believe the latter is the case, which, again, is inefficient if the interval is 1 second. Another difference is that my code reads the crontab once and calculates the exact date (datetime object) of the next occurrence from the pattern, while Cron, I assume, simply checks each minute whether any pattern from the crontab matches the current time. I'll stick to the "hybrid" way if there's no other way to do this.
Calling a function at datetime in Python
1
0.066568
1
0
0
199
7,421,082
2011-09-14T18:20:00.000
3
0
1
0
0
python,pyramid,paster
0
7,421,270
0
2
0
false
0
0
It is confusing at first but your code really doesn't need to be in your virtual environment directory at all. Actually it's better not to put your code inside your environment, as you might want to use different environments with the same code, for example to test your code with different versions of Python or different versions of a library. virtualenvwrapper does put all your environments in a single place. virtualenvwrapper is a convenient tool on top of virtualenv but you don't need it to put your code and your environments in different places. Maybe you should get a bit more comfortable with virtualenv itself before starting to use virtualenvwrapper. You should let paster create a directory with the project name. This is the directory that you will commit in version control (eg. git, mercurial...). You don't want to commit the directory containing the virtual environment.
1
0
0
0
I'm new to pyramid and paster, just reading the docs for now. I use virtualenv and inside the virtualenv dir I want to start a pyramid project. The problem is that I would like for paster to not create a dir with the project name, and instead put all the scaffold files on the current dir (the venv root). I thought about just not using paster but I still wouldn't know how to point to my app on development.ini "use" option. I could also have my virtualenv on an entirely different place of my filesystem, but that seems weird to me (maybe virtualenvwrapper could make it easier). Any other way to do this?
How do I create a project without the project folder?
0
0.291313
1
0
0
286
7,434,837
2011-09-15T17:12:00.000
0
1
1
0
0
python,packaging,setup.py
0
7,435,249
0
2
0
false
0
0
I may not have understood the problem correctly. For any additional dependencies, you mention them in setup.py as install_requires=['module1 >= 1.3', 'module2 >= 1.8.2']. When you use setuptools, easy_install or pip, these external dependencies will get installed during setup, if required. They should also be available in a package repository for download.
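A sketch of what that setup.py fragment might look like (the package name "mylib" and the dependency names/versions are placeholders carried over from the example above):

```python
# setup.py -- hypothetical names throughout
from setuptools import setup, find_packages

setup(
    name="mylib",
    version="0.1",
    packages=find_packages(),
    # pip/easy_install resolve these from PyPI at install time
    install_requires=[
        "module1 >= 1.3",
        "module2 >= 1.8.2",
    ],
)
```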
1
0
0
0
I am about to build a new python lib and I was seeking information concerning packaging in Python. I understand that "setup.py" is the script that controls everything. I wonder how to deal with it when there are external libraries in svn for instance. How to download automatically a given version from the repository using "setup.py" ?
setup.py and source control repository
0
0
1
0
0
742
7,437,147
2011-09-15T20:36:00.000
2
0
0
0
0
python,http-headers
0
7,437,186
0
1
0
true
0
0
In the past, to accomplish this, I would read a portion of the socket data into memory, and then read from that buffer until a "\r\n\r\n" sequence was encountered (you could use a state machine to do this, or simply the string find() function). Once you reach that sequence you know all of the headers have been read, and you can do some parsing of the headers and then read the entire content length. You may need to be prepared to read a response that does not include a Content-Length header, since not all responses contain it. If you run out of buffer before seeing that sequence, simply read more data from the socket into your buffer and continue processing. I can post a C# example if you would like to look at it.
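A sketch of that buffering approach in Python (the asker's language), using a tiny hard-coded response in place of real socket data:

```python
def split_response(buf):
    """Split raw HTTP response bytes into (header_text, leftover_body)."""
    end = buf.find(b"\r\n\r\n")
    if end == -1:
        return None, buf           # headers not complete yet: recv() more
    return buf[:end].decode("iso-8859-1"), buf[end + 4:]

def content_length(header_text):
    # Scan header lines (skipping the status line) for Content-Length
    for line in header_text.split("\r\n")[1:]:
        name, _, value = line.partition(":")
        if name.lower() == "content-length":
            return int(value.strip())
    return None                    # not all responses carry the header

raw = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
head, body = split_response(raw)
print(content_length(head), body)
```

In real code, `body` may hold only part of the content, so you keep calling recv() until you have Content-Length bytes in total.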
1
0
0
0
I'm trying to code my own Python 3 HTTP library to learn more about sockets and the HTTP protocol. My question is: if I do a recv(bytesToRead) on my socket, how can I get only the header, and then, with the Content-Length information, continue receiving the page content? Isn't that the purpose of the Content-Length header? Thanks in advance
Http protocol, Content-Length, get page content Python
0
1.2
1
0
1
739
7,441,726
2011-09-16T07:57:00.000
1
1
0
0
0
python,module,loadmodule
0
7,442,171
0
5
0
false
0
0
Just an idea, and I'm not sure that it will work: you could write a module that contains a wrapper for __builtin__.__import__. This wrapper would save a reference to the old __import__ and then assign a function to __builtin__.__import__ that does the following: whenever called, get the current stack trace and work out the calling function (maybe the information in the globals parameter to __import__ is enough); get the module of that calling function and store the name of this module and what will get imported; then redirect the call to the real __import__. After you have done this you can call your application with python -m magic_module yourapp.py. The magic module must store the information somewhere where you can retrieve it later.
1
2
0
0
How does one get (find the location of) the dynamically imported modules from a Python script? Python, from my understanding, can dynamically (at run time) load modules, be it using __import__(module_name), using exec "from x import y", or using imp.find_module("module_name") and then imp.load_module(param1, param2, param3, param4). Knowing that, I want to get all the dependencies for a Python file. This would include getting (or at least trying to get) the dynamically loaded modules, those loaded either by using hard-coded string objects or via strings returned by a function/method. For a normal import module_name and from x import y you can either do a manual scan of the code or use modulefinder. So if I want to copy one Python script and all its dependencies (including the custom dynamically loaded modules), how should I do that?
how do you statically find dynamically loaded modules
0
0.039979
1
0
0
303
7,449,756
2011-09-16T19:59:00.000
0
0
0
1
0
python,input,streaming,hadoop,filesplitting
0
24,434,211
0
3
0
false
0
0
The new ENV_VARIABLE for Hadoop 2.x is MAPREDUCE_MAP_INPUT_FILE
2
7
0
0
I am able to find the name of the input file in a mapper class using FileSplit when writing the program in Java. Is there a corresponding way to do this when I write a program in Python (using streaming)? I found the following in the Hadoop streaming document on Apache: See Configured Parameters. During the execution of a streaming job, the names of the "mapred" parameters are transformed. The dots ( . ) become underscores ( _ ). For example, mapred.job.id becomes mapred_job_id and mapred.jar becomes mapred_jar. In your code, use the parameter names with the underscores. But I still can't understand how to make use of this inside my mapper. Any help is highly appreciated. Thanks
Get input file name in streaming hadoop program
1
0
1
0
0
9,150
7,449,756
2011-09-16T19:59:00.000
6
0
0
1
0
python,input,streaming,hadoop,filesplitting
0
24,906,345
0
3
0
false
0
0
By parsing the mapreduce_map_input_file (new) or map_input_file (deprecated) environment variable, you will get the map input file name. Note: the two environment variable names are case-sensitive; all letters are lower-case.
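Inside a streaming mapper this is a one-liner with os.environ; a sketch (the HDFS path is simulated here since the snippet is not running under Hadoop):

```python
import os

def current_input_file():
    # Hadoop 2.x exports mapreduce_map_input_file; older releases
    # exported map_input_file. Both names are all lower-case.
    return (os.environ.get("mapreduce_map_input_file")
            or os.environ.get("map_input_file"))

# Simulated, since we are not actually inside a streaming job:
os.environ["mapreduce_map_input_file"] = "hdfs://nn/data/part-00000"
print(current_input_file())
```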
2
7
0
0
I am able to find the name of the input file in a mapper class using FileSplit when writing the program in Java. Is there a corresponding way to do this when I write a program in Python (using streaming)? I found the following in the Hadoop streaming document on Apache: See Configured Parameters. During the execution of a streaming job, the names of the "mapred" parameters are transformed. The dots ( . ) become underscores ( _ ). For example, mapred.job.id becomes mapred_job_id and mapred.jar becomes mapred_jar. In your code, use the parameter names with the underscores. But I still can't understand how to make use of this inside my mapper. Any help is highly appreciated. Thanks
Get input file name in streaming hadoop program
1
1
1
0
0
9,150
7,474,887
2011-09-19T17:31:00.000
1
0
0
1
1
python,macos,netbeans
0
8,114,362
0
6
0
false
1
0
I was having a problem with NetBeans 7 not starting. NetBeans had first errored out with no error message, and then it wouldn't start or give me an error. I looked in the .netbeans directory in my user directory and attempted to delete the 'lock' file in that directory. When I first tried to delete it, it said it was in use, so with Task Manager I went to the Processes tab and found NetBeans. I killed that task, and then I was able to delete 'lock'. After that, NetBeans started.
5
1
0
0
I went to Tools > Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option. But instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install. I am using Mac OS X. I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
Netbeans 7 not starting up after python plugin installation
0
0.033321
1
0
0
2,129
7,474,887
2011-09-19T17:31:00.000
0
0
0
1
1
python,macos,netbeans
0
19,486,580
0
6
0
false
1
0
I had the same problem after installing the Python plugin. To solve it, I deleted the file org-openide-awt.jar from C:\Users\MYUSERNAME\.netbeans\7.0\modules. Regards, Martín. PS: I'm using NetBeans 7.0.1 and Windows 7 64-bit.
5
1
0
0
I went to Tools > Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option. But instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install. I am using Mac OS X. I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
Netbeans 7 not starting up after python plugin installation
0
0
1
0
0
2,129
7,474,887
2011-09-19T17:31:00.000
1
0
0
1
1
python,macos,netbeans
0
7,478,399
0
6
0
false
1
0
I had the same issue. Netbeans would die before opening at all. I could not fix it, and had to revert back to 6.9.1.
5
1
0
0
I went to Tools > Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option. But instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install. I am using Mac OS X. I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
Netbeans 7 not starting up after python plugin installation
0
0.033321
1
0
0
2,129
7,474,887
2011-09-19T17:31:00.000
1
0
0
1
1
python,macos,netbeans
0
8,088,729
0
6
0
false
1
0
I had the same problem, but with Windows 7. I deleted the .netbeans directory located in my home folder. That fixed my problem, hope it fixes yours.
5
1
0
0
I went to Tools > Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option. But instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install. I am using Mac OS X. I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
Netbeans 7 not starting up after python plugin installation
0
0.033321
1
0
0
2,129
7,474,887
2011-09-19T17:31:00.000
-1
0
0
1
1
python,macos,netbeans
0
7,475,990
0
6
0
false
1
0
I know I'm not answering your question directly, but I too was considering installing the Python plugin in Netbeans 7 but saw that it was still in Beta. I use WingIDE from wingware for Python development. I'm a Python newbie but I'm told by the pros that Wing is the best IDE for Python. The "101" version is free and works very well. The licensed versions include more options such as version control integration and Django features.
5
1
0
0
I went to Tools > Plugins, then chose to install the three Python items that show up. After installation, I chose the restart NetBeans option. But instead of restarting, NetBeans just closed, and now it is not opening. Any ideas how to fix this? I normally develop Java on my NetBeans 7 install. I am using Mac OS X. I see there are no takers, so let me ask this: is there a way to revert to before the new plugin install?
Netbeans 7 not starting up after python plugin installation
0
-0.033321
1
0
0
2,129
7,482,493
2011-09-20T08:53:00.000
1
0
1
0
0
python,scripting
0
7,483,796
0
3
0
false
0
0
Assuming CPython, yes you have 'n' different interpreters running but (at least on operating systems like Windows, UNIX, and Linux) the interpreter code itself is shared. The data areas (which includes your Python code, depending on the implementation) will be unique to each process. Any modules written in C that produce a .dll or .so (shared object) will also share the code areas between processes, but have their own data areas.
3
1
0
0
I'm wondering how many Python interpreters would be executed for distinct Python apps. Say I have 6 different Python apps up and running; does that mean there are 6 different Python interpreters running, one for each of them?
Python Apps and Python Interpreter?
1
0.066568
1
0
0
117
7,482,493
2011-09-20T08:53:00.000
1
0
1
0
0
python,scripting
0
7,482,628
0
3
0
false
0
0
Yes, each Python script is launched by a separate Python interpreter process (unless your applications are in fact a single, multi-threaded application, of course ;) ).
3
1
0
0
I'm wondering how many Python interpreters would be executed for distinct Python apps. Say I have 6 different Python apps up and running; does that mean there are 6 different Python interpreters running, one for each of them?
Python Apps and Python Interpreter?
1
0.066568
1
0
0
117
7,482,493
2011-09-20T08:53:00.000
5
0
1
0
0
python,scripting
0
7,482,633
0
3
0
true
0
0
When executing a Python script, you have one interpreter running per process. If your application executes in a single process, you have one interpreter for each instance of your application. If your application launches multiple processes, you get an additional interpreter for each process launched. If your application uses threads, the interpreter is shared between the threads belonging to the same process.
3
1
0
0
I'm wondering how many Python interpreters would be executed for distinct Python apps. Say I have 6 different Python apps up and running; does that mean there are 6 different Python interpreters running, one for each of them?
Python Apps and Python Interpreter?
1
1.2
1
0
0
117
7,484,924
2011-09-20T12:11:00.000
3
0
0
0
1
python,pyglet
0
7,485,069
0
3
0
true
0
1
I haven't used Pyglet yet, but it's not a GUI library; it doesn't have to have widgets like buttons, containers, etc. It's a multimedia library like Pygame: it draws stuff on screen, plays sounds, and has some helper functions. If you want to draw a button on screen, you should first draw a rectangle, print some text in it, and then listen for mouse clicks to know whether the click landed on this rectangle. See PyQt, PyGTK, or wxPython for some examples of GUI libraries.
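The click-detection part of a hand-rolled button is plain geometry and doesn't depend on pyglet at all; a sketch (the coordinates are made up):

```python
def hit_test(x, y, bx, by, bw, bh):
    """True if point (x, y) falls inside the button rectangle whose
    lower-left corner is (bx, by) with width bw and height bh."""
    return bx <= x < bx + bw and by <= y < by + bh

# e.g. inside an on_mouse_press handler you would call:
print(hit_test(15, 12, 10, 10, 80, 25))   # click inside the button
```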
1
3
0
0
I'm checking out pyglet, but, funnily enough, I can't find how to make a simple button! So what is the standard way to create a standard button? Is there a standard way to create a message box? Open/save dialogs? Or am I missing the point of pyglet? Isn't it yet another GUI toolkit for creating (among other things) forms, windows, buttons, text, standard widgets, etc.? I'm using Python 2.x on a Windows PC, if that matters.
where is the button widget in pyglet ?
1
1.2
1
0
0
6,713
7,491,507
2011-09-20T20:41:00.000
1
0
0
0
0
python,django
1
7,498,096
0
4
0
false
1
0
I'd also add that there is a set of default templates somewhere that makes using django-registration vastly easier. I think they were on ubernostrum's Google Code the last time I needed them.
2
3
0
0
I have a directory for a Django project on my localhost, /MyDjangoList/. In this folder I have a Django application called PK. I downloaded django-registration and unzipped the folder into /MyDjangoList/. I went into the terminal, went to the django-registration folder, and ran python setup.py install. It did a bunch of things, then spat out the following: error: could not create '/usr/local/lib/python2.7/dist-packages/registration': Permission denied. The install file says I can just put it into the same folder as my project, so do I even need to install this? If so, how do I properly install it?
How do I properly install django registration on localhost?
0
0.049958
1
0
0
1,876
7,491,507
2011-09-20T20:41:00.000
1
0
0
0
0
python,django
1
7,491,553
0
4
0
false
1
0
Do you need more permissions? As in, do you need to run: sudo python setup.py install
2
3
0
0
I have a directory for a Django project on my localhost, /MyDjangoList/. In this folder I have a Django application called PK. I downloaded django-registration and unzipped the folder into /MyDjangoList/. I went into the terminal, went to the django-registration folder, and ran python setup.py install. It did a bunch of things, then spat out the following: error: could not create '/usr/local/lib/python2.7/dist-packages/registration': Permission denied. The install file says I can just put it into the same folder as my project, so do I even need to install this? If so, how do I properly install it?
How do I properly install django registration on localhost?
0
0.049958
1
0
0
1,876
7,491,777
2011-09-20T21:06:00.000
3
0
0
0
0
python,tkinter
0
7,491,885
0
1
0
true
0
1
You'll have to use separate threads or processes. Tkinter uses a single thread to process display updates, and the same thread is used for event callbacks. If your event handler blocks, then no Tkinter code will execute until it completes. If you have the Tkinter thread (the one that calls Tk.mainloop) and another thread for the rest of your application, then the event handlers running within the Tkinter thread can simply pass messages (possibly using Queue.Queue) to your application's event handler.
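A sketch of that worker-thread-plus-queue pattern (Python 3 names; Queue.Queue is the Python 2 spelling). The Tkinter wiring is shown as a function but not executed, so the sketch runs without a display:

```python
import queue
import threading

results = queue.Queue()
stop_flag = threading.Event()   # a Stop button's callback would set this

def long_computation():
    # The while-loop work lives here, never in the Tkinter thread
    total = 0
    for i in range(1000):
        if stop_flag.is_set():
            break
        total += i
    results.put(total)          # hand the result back to the GUI thread

def poll_queue(root, label):
    """Runs in the Tkinter thread via root.after(); never blocks."""
    try:
        label.config(text=str(results.get_nowait()))
    except queue.Empty:
        root.after(100, poll_queue, root, label)

worker = threading.Thread(target=long_computation)
worker.start()
worker.join()
value = results.get_nowait()
print(value)
```

In the real GUI you would call root.after(100, poll_queue, root, label) once after starting the worker, instead of join(), so mainloop() keeps running.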
1
1
0
0
I am making some programs which include while loops (to illustrate some number calculations), and when I use Tkinter for the GUI, the program window freezes until the loop finishes. I want to add a stop button, and I want the window not to freeze. How can I do these two things? Thank you
Tkinter is freezing while the loop is processing, how can i prevent it?
0
1.2
1
0
0
1,117