Dataset schema (column name, dtype, and observed min/max, or class count for booleans):

| Column | Dtype | Min | Max |
|---|---|---|---|
| Q_Id | int64 | 2.93k | 49.7M |
| CreationDate | stringlengths | 23 | 23 |
| Users Score | int64 | -10 | 437 |
| Other | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| DISCREPANCY | int64 | 0 | 1 |
| Tags | stringlengths | 6 | 90 |
| ERRORS | int64 | 0 | 1 |
| A_Id | int64 | 2.98k | 72.5M |
| API_CHANGE | int64 | 0 | 1 |
| AnswerCount | int64 | 1 | 42 |
| REVIEW | int64 | 0 | 1 |
| is_accepted | bool | 2 classes | |
| Web Development | int64 | 0 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Answer | stringlengths | 15 | 5.1k |
| Available Count | int64 | 1 | 17 |
| Q_Score | int64 | 0 | 3.67k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| DOCUMENTATION | int64 | 0 | 1 |
| Question | stringlengths | 25 | 6.53k |
| Title | stringlengths | 11 | 148 |
| CONCEPTUAL | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| API_USAGE | int64 | 1 | 1 |
| Database and SQL | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| ViewCount | int64 | 15 | 3.72M |

The records below follow this column order, one field per line.
7,492,855
2011-09-20T23:07:00.000
2
0
1
1
0
python,virtualenv
0
9,071,047
0
6
0
false
0
0
This may not be an answer as such, but it might still be useful in other contexts. Have you tried running bin/activate_this.py from your Python virtualenv? The comment in this file of my virtualenv reads: "By using execfile(this_file, dict(__file__=this_file)) you will activate this virtualenv environment. This can be used when you must use an existing Python interpreter, not the virtualenv bin/python." You should achieve the desired result if you execute the runtime equivalent of the above code.
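For concreteness, a minimal sketch of executing that file from an already-running interpreter (the virtualenv path is a placeholder; the exec form is the Python 3 equivalent of the execfile idiom quoted above):

```python
# Placeholder path -- point this at your virtualenv's activate_this.py.
activate_this = "/path/to/virtualenv/bin/activate_this.py"

# Python 2 idiom quoted from the file's own comment:
#   execfile(activate_this, dict(__file__=activate_this))

# Python 3 equivalent, since execfile() was removed:
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))
```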
2
15
0
0
I make heavy use of virtualenv to isolate my development environments from the system-wide Python installation. Typical work-flow for using a virtualenv involves running source /path/to/virtualenv/bin/activate to set the environment variables that Python requires to execute an isolated runtime. Making sure my Python executables use the current active virtualenv is as simple as setting the shebang to #!/usr/bin/env python Lately, though, I've been writing some C code that embeds the Python runtime. What I can't seem to figure out is how to get the embedded runtime to use the current active virtualenv. Anybody got a good example to share?
Getting an embedded Python runtime to use the current active virtualenv
0
0.066568
1
0
0
4,477
7,492,855
2011-09-20T23:07:00.000
0
0
1
1
0
python,virtualenv
0
31,758,278
0
6
0
false
0
0
You can check the environment variable VIRTUAL_ENV to get the current env's location.
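A minimal sketch of that check:

```python
import os

venv = os.environ.get("VIRTUAL_ENV")  # unset when no virtualenv is active
if venv:
    print("Active virtualenv:", venv)
else:
    print("No virtualenv is active")
```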
2
15
0
0
I make heavy use of virtualenv to isolate my development environments from the system-wide Python installation. Typical work-flow for using a virtualenv involves running source /path/to/virtualenv/bin/activate to set the environment variables that Python requires to execute an isolated runtime. Making sure my Python executables use the current active virtualenv is as simple as setting the shebang to #!/usr/bin/env python Lately, though, I've been writing some C code that embeds the Python runtime. What I can't seem to figure out is how to get the embedded runtime to use the current active virtualenv. Anybody got a good example to share?
Getting an embedded Python runtime to use the current active virtualenv
0
0
1
0
0
4,477
7,503,149
2011-09-21T16:18:00.000
1
0
0
0
0
python
1
7,503,299
0
1
0
true
1
0
class Check_jira ends on line 25 and has only one method. Then you have an if block, and CheckForJiraIssueRecord is just a function defined in this block (that is, the function is only defined if __name__ == '__main__'). Just move the if block outside, after the whole class definition.
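A stripped-down sketch of the layout this answer describes, with the method inside the class and the if block after the class definition (the JIRA lookup itself is elided):

```python
class Check_jira:
    def check_for_jira_issue_record(self, issue_id):
        # Real JIRA lookup elided; return whether issue_id exists.
        return True

    def verify_commit_text(self, tags):
        # Call the sibling method through self, not through the class name.
        return all(self.check_for_jira_issue_record(t) for t in tags)

# The entry point sits outside, after the whole class definition.
if __name__ == '__main__':
    p = Check_jira()
    print(p.verify_commit_text(["QA-656"]))
```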
1
0
0
0
Here is the code: 1 #!/usr/bin/env python 2 3 import re, os, sys, jira, subprocess 4 5 class Check_jira: 6 7 def verify_commit_text(self, tags): 8 for line in tags: 9 if re.match('^NO-TIK',line): 10 return True 11 elif re.match('^NO-REVIEW', line): 12 return True 13 elif re.match(r'[a-zA-Z]+-\d+', line): 14 # Validate the JIRA ID 15 m = re.search("([a-zA-Z]+-\d+)",line) 16 if m: 17 my_args = m.group(1) 18 result = Check_jira.CheckForJiraIssueRecord(my_args) 19 if result == False: 20 util.warn("%s does not exist"%my_args) 21 else: 22 return True 23 return True 24 else: 25 return False 26 if __name__ == '__main__': 27 p = Check_jira() 28 commit_text_verified = p.verify_commit_text(os.popen('hg tip --template "{desc}"')) 29 30 if (commit_text_verified): 31 sys.exit(0) 32 else: 33 print >> sys.stderr, ('[obey the rules!]') 34 sys.exit(1); 35 def CheckForJiraIssueRecord(object): 36 37 sys.stdout = os.devnull 38 sys.stderr = os.devnull 39 40 41 try: 42 com = jira.Commands() 43 logger = jira.setupLogging() 44 jira_env = {'home':os.environ['HOME']} 45 command_cat= "cat" 46 command_logout= "logout" 47 #my_args = ["QA-656"] 48 server = "http://jira.myserver.com:8080/rpc/soap/jirasoapservice-v2?wsdl" 49 except Exception, e: 50 sys.exit('config error') 51 52 class Options: 53 pass 54 options = Options() 55 56 options.user = 'user' 57 options.password = 'password' 58 59 try: 60 61 jira.soap = jira.Client(server) 62 jira.start_login(options, jira_env, command_cat, com, logger) 63 issue = com.run(command_cat, logger, jira_env, my_args) 64 except Exception, e: 65 print sys.exit('data error') so maybe: 1. if name == 'main': shoudl be at the bottom ? 2. So, i have 2 classes (Check_jira) and (Options) 3. Check_jira has 2 functions verify_commit_text() and CheckForJiraIssueRecord() 4. I pass object as an argument to CheckForJiraIssueRecord since i am passing my_args to it , on its usage. 5. Not sure how to call one function from another function in the same class 6. Error i am getting is : Traceback (most recent call last): File "/home/qa/hook-test/.hg/check_jira.py", line 31, in commit_text_verified = p.verify_commit_text(os.popen('hg tip --template "{desc}"')) File "/home/qa/hook-test/.hg/check_jira.py", line 21, in verify_commit_text result = Check_jira.CheckForJiraIssueRecord(my_args) AttributeError: class Check_jira has no attribute 'CheckForJiraIssueRecord' transaction abort! rollback completed abort: pretxncommit.jira hook exited with status 1
Getting error: AttributeError: class has no attribute ''
1
1.2
1
0
0
1,839
7,528,360
2011-09-23T11:36:00.000
3
1
0
0
0
php,python,dsl,plpgsql
0
7,660,613
0
6
0
false
0
0
How about doing the scripting on the client? That will ensure maximum security and also save server resources. In other words, JavaScript would be your scripting platform. What you do is expose the functionality of your backend as JavaScript functions. Depending on how your app is currently written, that might require backend work or not. Oh, and by the way, you are not limited to JavaScript for the actual language. Google "compile to javascript" and the first hit should be a list of languages you can use.
2
11
0
0
I'm developing a web-based application written in PHP5, which basically is a UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops, and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes. I'm seeking advice on how to approach this development task. Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides, it's not Python).
Embed python/dsl for scripting in a PHP web application
0
0.099668
1
1
0
570
7,528,360
2011-09-23T11:36:00.000
0
1
0
0
0
php,python,dsl,plpgsql
0
7,605,372
0
6
0
false
0
0
You could do it without Python, e.g. by parsing the user input for pre-defined "tags" and returning the result.
2
11
0
0
I'm developing a web-based application written in PHP5, which basically is a UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops, and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes. I'm seeking advice on how to approach this development task. Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides, it's not Python).
Embed python/dsl for scripting in a PHP web application
0
0
1
1
0
570
7,536,732
2011-09-24T03:08:00.000
0
0
0
0
1
python,xmpp,bots,xmpppy
0
7,558,640
0
1
0
false
0
0
Did you try waiting 30 minutes after the network failure? Depending on your network stack's settings, it could take this long to detect. However, if you're not periodically sending on the socket, you may never detect the outage. This is why many XMPP stacks periodically send a single space character, using an algorithm like: set a timer to N seconds; on sending a stanza, reset the timer to N; when the timer fires, send a space.
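A rough sketch of that keepalive algorithm using only the standard library (the send argument is a placeholder for whatever writes to your XMPP connection, and 60 seconds is an assumed value for N):

```python
import threading

class Keepalive(object):
    """Send a single space if nothing has been sent for `interval` seconds."""

    def __init__(self, send, interval=60):
        self._send = send
        self._interval = interval
        self._timer = None
        self.reset()

    def reset(self):
        # Call this every time a stanza is sent.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def _fire(self):
        self._send(" ")  # the whitespace keepalive
        self.reset()
```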
1
0
0
0
I am using the xmpppy library to write an XMPP IM robot. I want to act on disconnects, but I don't know how to detect them. A disconnect could happen if the Jabber server crashes or if I lose my internet connection. I found the callback RegisterDisconnectHandler(self, DisconnectHandler), but it didn't work for a network failure; it only works when I explicitly call the method "disconnect". How do I detect a network failure or server crash?
An IM robot script with xmpppy in Python: how to detect network failure?
0
0
1
0
1
392
7,538,988
2011-09-24T12:25:00.000
10
0
0
1
0
python,zeromq
0
10,846,438
0
4
0
false
0
0
The send won't block if you use ZMQ_NOBLOCK, but if you try closing the socket and context, this step will block the program from exiting. The reason is that the socket waits for a peer so that the outgoing messages are ensured to get queued. To close the socket immediately and flush the outgoing messages from the buffer, use ZMQ_LINGER and set it to 0.
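A sketch of both points with pyzmq (address and message are placeholders):

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUSH)
sock.setsockopt(zmq.LINGER, 0)        # discard unsent messages on close
sock.connect("tcp://localhost:5555")

try:
    sock.send(b"request", zmq.NOBLOCK)  # raises zmq.Again instead of blocking
except zmq.Again:
    print("no peer ready; message was not queued")

sock.close()
ctx.term()  # returns promptly because LINGER is 0
```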
1
77
0
0
I just got started with ZMQ. I am designing an app whose workflow is: one of many clients (who have random PULL addresses) PUSH a request to a server at 5555 the server is forever waiting for client PUSHes. When one comes, a worker process is spawned for that particular request. Yes, worker processes can exist concurrently. When that process completes it's task, it PUSHes the result to the client. I assume that the PUSH/PULL architecture is suited for this. Please correct me on this. But how do I handle these scenarios? the client_receiver.recv() will wait for an infinite time when server fails to respond. the client may send request, but it will fail immediately after, hence a worker process will remain stuck at server_sender.send() forever. So how do I setup something like a timeout in the PUSH/PULL model? EDIT: Thanks user938949's suggestions, I got a working answer and I am sharing it for posterity.
zeromq: how to prevent infinite wait?
0
1
1
0
1
82,000
7,541,397
2011-09-24T19:36:00.000
2
0
1
0
0
python,multithreading,multiprocessing
0
7,541,787
0
2
0
false
0
0
What you're describing is essentially graph traversal. Most graph traversal algorithms (those more sophisticated than depth-first) keep track of two sets of nodes; in your case, the nodes are URLs. The first set is called the "closed set" and represents all of the nodes that have already been visited and processed. If, while you're processing a page, you find a link that happens to be in the closed set, you can ignore it; it's already been handled. The second set is unsurprisingly called the "open set" and includes all of the edges that have been found but not yet processed. The basic mechanism is to start by putting the root node into the open set (the closed set is initially empty; no nodes have been processed yet) and start working. Each worker takes a single node from the open set, copies it to the closed set, processes the node, and adds any nodes it discovers back to the open set (so long as they aren't already in either the open or closed sets). Once the open set is empty (and no workers are still processing nodes), the graph has been completely traversed. Actually implementing this with multiprocessing probably means you'll have a master task that keeps track of the open and closed sets. When a worker in a worker pool indicates that it is ready for work, the master takes care of moving a node from the open set to the closed set and starting up the worker. The workers can then pass all of the nodes they find back to the master, without worrying about whether they are already closed; the master will ignore nodes that are already closed.
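A batch-style sketch of the open/closed-set scheme with a worker pool; the fetch/parse step is a stub, and batching stands in for the finer-grained master/worker handshake described above:

```python
from multiprocessing import Pool

def process(url):
    """Download url and return the links found on the page (stub)."""
    return []  # real fetch/parse logic goes here

if __name__ == "__main__":
    open_set = {"http://example.com/"}  # the root node
    closed_set = set()
    pool = Pool(4)

    while open_set:
        batch = list(open_set)
        open_set.clear()
        closed_set.update(batch)         # mark as visited
        for links in pool.map(process, batch):
            open_set.update(u for u in links if u not in closed_set)

    pool.close()
    pool.join()
```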
1
5
0
0
I am completely new to multiprocessing. I have been reading documentation about multiprocessing module. I read about Pool, Threads, Queues etc. but I am completely lost. What I want to do with multiprocessing is that, convert my humble http downloader, to work with multiple workers. What I am doing at the moment is, download a page, parse to page to get interesting links. Continue until all interesting links are downloaded. Now, I want to implement this with multiprocessing. But I have no idea at the moment, how to organize this work flow. I had two thoughts about this. Firstly, I thought about having two queues. One queue for links that needs to be downloaded, other for links to be parsed. One worker, downloads the pages, and adds them to queue which is for items that needs to be parsed. And other process parses a page, and adds the links it finds interesting to the other queue. Problems I expect from this approach are; first of all, why download one page at a time and parse a page at a time. Moreover, how do one process know that there are items to be added to queue later, after it exhausted all items from queue. Another approach I thought about using is that. Have a function, that can be called with an url as an argument. This function downloads the document and starts parsing it for the links. Every time it encounters an interesting link, it instantly creates a new thread running identical function as itself. The problem I have with this approach is, how do I keep track of all the processes spawned all around, how do I know if there is still processes to running. And also, how do I limit maximum number of processes. So I am completely lost. Can anyone suggest a good strategy, and perhaps show some example codes about how to go with the idea.
Which strategy to use with multiprocessing in python
1
0.197375
1
0
0
2,128
7,551,546
2011-09-26T06:53:00.000
7
0
0
1
0
python,windows,linux,macos
0
7,552,255
0
6
0
true
0
0
Regarding Linux, if all you need is to enumerate devices, you can even skip the pyudev dependency for your project and simply parse the output of the /sbin/udevadm info --export-db command (it does not require root privileges). It will dump all information about present devices and classes, including USB product IDs for USB devices, which should be more than enough to identify your USB-to-serial adapters. Of course, you can also do this with pyudev.
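A sketch of that parsing approach (the SUBSYSTEM=tty filter is an assumption about which records matter for serial adapters):

```python
import subprocess

# Dump the udev database (no root required) and scan it for serial devices.
db = subprocess.check_output(["/sbin/udevadm", "info", "--export-db"])
for record in db.decode("utf-8", "replace").split("\n\n"):
    if "SUBSYSTEM=tty" in record:
        for line in record.splitlines():
            # "N:" lines carry the device node, "E:" lines the properties.
            if line.startswith("N:") or "ID_MODEL=" in line:
                print(line)
```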
2
11
0
0
I have an 2-port signal relay connected to my computer via a USB serial interface. Using the pyserial module I can control these relays with ease. However, this is based on the assumption that I know beforehand which COM-port (or /dev-node) the device is assigned to. For the project I'm doing that's not enough since I don't want to assume that the device always gets assigned to for example COM7 in Windows. I need to be able to identify the device programatically across the possible platforms (Win, Linux, OSX (which I imagine would be similar to the Linux approach)), using python. Perhaps by, as the title suggests, enumerate USB-devices on the system and somehow get more friendly names for them. Windows and Linux being the most important platforms to support. Any help would be greatly appreciated! EDIT: Seems like the pyudev-module would be a good fit for Linux-systems. Has anyone had any experience with that?
Getting friendly device names in python
0
1.2
1
0
0
8,806
7,551,546
2011-09-26T06:53:00.000
0
0
0
1
0
python,windows,linux,macos
0
7,552,386
0
6
0
false
0
0
It would be great if this were possible, but in my experience with commercial equipment using COM ports this is not the case. Most of the time you need to set the COM port manually in the software. This is a mess, especially in Windows (at least XP), which tends to change the numbers of the COM ports in certain cases. Some equipment has an autodiscovery feature in the software that sends a small message to every COM port and waits for the right answer. This of course only works if the instrument implements some kind of identification command. Good luck.
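A sketch of that autodiscovery idea with pyserial; the "*IDN?" probe is an assumed, SCPI-style identification command, and it only works if the instrument answers something like it:

```python
import serial

found = []
for n in range(1, 33):                    # probe COM1..COM32
    name = "COM%d" % n
    try:
        port = serial.Serial(name, 9600, timeout=0.5)
    except serial.SerialException:
        continue                          # port absent or in use
    try:
        port.write(b"*IDN?\r\n")          # assumed identification command
        reply = port.read(64)
        if reply:
            found.append((name, reply))
    finally:
        port.close()

print(found)
```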
2
11
0
0
I have an 2-port signal relay connected to my computer via a USB serial interface. Using the pyserial module I can control these relays with ease. However, this is based on the assumption that I know beforehand which COM-port (or /dev-node) the device is assigned to. For the project I'm doing that's not enough since I don't want to assume that the device always gets assigned to for example COM7 in Windows. I need to be able to identify the device programatically across the possible platforms (Win, Linux, OSX (which I imagine would be similar to the Linux approach)), using python. Perhaps by, as the title suggests, enumerate USB-devices on the system and somehow get more friendly names for them. Windows and Linux being the most important platforms to support. Any help would be greatly appreciated! EDIT: Seems like the pyudev-module would be a good fit for Linux-systems. Has anyone had any experience with that?
Getting friendly device names in python
0
0
1
0
0
8,806
7,581,963
2011-09-28T10:42:00.000
1
0
0
0
1
python,nul
0
7,791,175
0
1
0
true
0
0
I found out the problem was that I was running the code inside PyScripter, and its built-in Python interpreter truncates the output at NUL bytes. So there was no problem with my code; if I run it outside PyScripter, everything works fine. Now running Wing IDE and never looking back :)
1
2
0
0
I'm downloading files over HTTPS, I request the files through urllib2.Request and they come back as a socket._fileobject. I'd ideally like to stream this to file to avoid loading it into memory but I'm not sure how to do this. My problem is if I call .read() on the object it only returns all the data up to the first NUL character and doesn't read the whole file. How can I solve this? The NUL character comes down as \x00 if that's any help, not sure what encoding that is
read() stops after NUL character
0
1.2
1
0
1
406
7,592,008
2011-09-29T03:02:00.000
1
0
1
1
1
python,process,python-idle
0
7,592,072
0
1
0
true
0
0
Have you tried using python.exe instead of pythonw.exe? I'm pretty sure this is the intended default behavior for the windowed Python interpreter (pythonw.exe). If it's a .pyw file, just right-click it, choose "Open With...", and use python.exe.
1
2
0
0
Every time I restart the shell or run a script, an instance of pythonw.exe*32 is created. When I close out of IDLE, these processes don't go away in the task manager. Any ideas on how to fix this? Thanks!
pythonw.exe processes not quitting after running script
0
1.2
1
0
0
2,421
7,603,674
2011-09-29T21:36:00.000
2
0
0
0
0
python,pyramid,beaker
0
7,608,095
0
2
0
true
1
0
Changing the timeout isn't supported by beaker. If you are trying to make a session stick around that long, you should probably just put it into a separate cookie. A common use-case is the "remember me" checkbox on login. This helps you track who the user is, but generally the actual session shouldn't be sticking around that long and gets recreated.
1
3
0
0
I am using Pyramid to create a web application. I am then using pyramid-beaker to interface beaker into Pyramid's session management system. Two values affect the duration of a user's session: the session cookie timeout, and the actual session's lifetime on disk/memcache/rdbms/etc. I currently have the cookie defaulted (via the standard beaker config) to delete when the browser closes. I have the session data set to clear out after 2 hours. This works perfectly. What I need to know is how to override the cookie's timeout and the session timeout to both be 30 days or some other arbitrary value.
How do I override the default session timeout with pyramid + pyramid-beaker + beaker
0
1.2
1
0
0
2,326
7,603,790
2011-09-29T21:50:00.000
5
0
1
0
0
python,database,multithreading,sqlalchemy,multiprocessing
0
7,603,832
0
1
0
false
0
0
The MetaData and its collection of Table objects should be considered a fixed, immutable structure of your application, not unlike your function and class definitions. As you know with forking a child process, all of the module-level structures of your application remain present across process boundaries, and table defs are usually in this category. The Engine however refers to a pool of DBAPI connections which are usually TCP/IP connections and sometimes filehandles. The DBAPI connections themselves are generally not portable over a subprocess boundary, so you would want to either create a new Engine for each subprocess, or use a non-pooled Engine, which means you're using NullPool. You also should not be doing any kind of association of MetaData with Engine, that is "bound" metadata. This practice, while prominent on various outdated tutorials and blog posts, is really not a general purpose thing and I try to de-emphasize this way of working as much as possible. If you're using the ORM, a similar dichotomy of "program structures/active work" exists, where your mapped classes of course are shared between all subprocesses, but you definitely want Session objects to be local to a particular subprocess - these correspond to an actual DBAPI connection as well as plenty of other mutable state which is best kept local to an operation.
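A minimal sketch of that arrangement; the database URL and the per-document work are placeholders, and each worker process builds its own Engine with NullPool after the fork:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool

DB_URL = "postgresql://user:pass@localhost/corpus"  # placeholder URL

def worker(queue):
    # One Engine and Session per subprocess; NullPool avoids sharing
    # pooled DBAPI connections across the process boundary.
    engine = create_engine(DB_URL, poolclass=NullPool)
    Session = sessionmaker(bind=engine)
    session = Session()
    for doc in iter(queue.get, None):  # None is the shutdown sentinel
        # processdocument(doc), session.add(...) etc. would go here
        session.commit()
    session.close()
```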
1
1
0
1
Problem I am writing a program that reads a set of documents from a corpus (each line is a document). Each document is processed using a function processdocument, assigned a unique ID, and then written to a database. Ideally, we want to do this using several processes. The logic is as follows: The main routine creates a new database and sets up some tables. The main routine sets up a group of processes/threads that will run a worker function. The main routine starts all the processes. The main routine reads the corpus, adding documents to a queue. Each process's worker function loops, reading a document from a queue, extracting the information from it using processdocument, and writes the information to a new entry in a table in the database. The worker loops breaks once the queue is empty and an appropriate flag has been set by the main routine (once there are no more documents to add to the queue). Question I'm relatively new to sqlalchemy (and databases in general). I think the code used for setting up the database in the main routine works fine, from what I can tell. Where I'm stuck is I'm not sure exactly what to put into the worker functions for each process to write to the database without clashing with the others. There's nothing particularly complicated going on: each process gets a unique value to assign to an entry from a multiprocessing.Value object, protected by a Lock. I'm just not sure whether what I should be passing to the worker function (aside from the queue), if anything. Do I pass the sqlalchemy.Engine instance I created in the main routine? The Metadata instance? Do I create a new engine for each process? Is there some other canonical way of doing this? Is there something special I need to keep in mind? Additional Comments I'm well aware I could just not bother with the multiprocessing but and do this in a single process, but I will have to write code that has several processes reading for the database later on, so I might as well figure out how to do this now. Thanks in advance for your help!
How to use simple sqlalchemy calls while using thread/multiprocessing
0
0.761594
1
1
0
1,473
7,613,525
2011-09-30T16:47:00.000
1
0
1
1
0
python,msys
0
7,613,945
0
1
0
true
0
0
Find where in the MSYS path libgcc_s_dw2-1.dll is. Find the environment variable in MSYS that has that path in it. Add that environment variable to Windows.
1
4
0
0
I've got a short python script that will eventually edit an input file, run an executable on that input file and read the output from the executable. The problem is, I've compiled the executable through msys, and can only seem to run it from the msys window. I'm wondering if the easiest way to do this is to somehow use os.command in Python to run msys and pipe a command in, or run a script through msys, but I haven't found a way to do this. Has anyone tried this before? How would you pipe a command into msys? Or is there a smarter way to do this that I haven't thought of? Thanks in advance! EDIT: Just realized that this information might help, haha . . . . I'm running Windows, msys 1.0 and Python 2.7
How can I run a program in msys through Python?
1
1.2
1
0
0
2,618
7,623,600
2011-10-01T23:25:00.000
1
0
0
0
0
python,http,httpclient,urllib2,if-modified-since
0
7,623,988
0
3
1
false
0
0
You can add a header called ETag (a hash of your file: md5sum, sha256, etc.) and compare hashes to tell whether two files differ, instead of relying on the last-modified date.
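A sketch of computing such a hash on the client for comparison with a server-sent ETag (the chunked read keeps memory use flat on large files):

```python
import hashlib

def file_hash(path, algo="md5", chunk=65536):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Download only when the server's ETag differs from the local hash, e.g.:
# if server_etag != file_hash("big_file.bin"): download()
```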
2
3
0
0
I have an HTTP server which hosts some large files, and Python clients (GUI apps) which download them. I want the clients to download a file only when needed, but to have an up-to-date file on each run. I thought each client would request the file on each run using the If-Modified-Since HTTP header with the file time of the existing file, if any. Can someone suggest how to do it in Python? Or can someone suggest an alternative, easy way to achieve my goal?
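A sketch of that conditional-GET approach with Python 2's urllib2, matching the question's era (URL and filename are placeholders):

```python
import os
import time
import urllib2

url = "http://example.com/big_file.bin"  # placeholder URL
local = "big_file.bin"

req = urllib2.Request(url)
if os.path.exists(local):
    stamp = time.gmtime(os.path.getmtime(local))
    req.add_header("If-Modified-Since",
                   time.strftime("%a, %d %b %Y %H:%M:%S GMT", stamp))
try:
    resp = urllib2.urlopen(req)
    with open(local, "wb") as f:
        f.write(resp.read())
except urllib2.HTTPError as e:
    if e.code != 304:  # 304 Not Modified: keep the local copy
        raise
```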
Sync local file with HTTP server location (in Python)
0
0.066568
1
0
1
1,447
7,623,600
2011-10-01T23:25:00.000
0
0
0
0
0
python,http,httpclient,urllib2,if-modified-since
0
7,623,922
0
3
1
false
0
0
I'm assuming some things right now, but one solution would be to have a separate HTTP file on the server (check.php) which creates a hash/checksum of each file you're hosting. If a hash differs from that of the local file, then the client will download the file. This means that if the content of a file on the server changes, the client will notice the change, since the checksums will differ. Do an MD5 hash of the file contents, put it in a database or something, and check against it before downloading anything. Your solution would work too, but it requires the server to actually include the "modified" date in the header for the GET request (some server software does not do this). I'd say put up a database table that looks something like: [ID] [File_name] [File_hash], e.g. 0001 | moo.txt | asd124kJKJhj124kjh12j
2
3
0
0
I have an HTTP server which hosts some large files, and Python clients (GUI apps) which download them. I want the clients to download a file only when needed, but to have an up-to-date file on each run. I thought each client would request the file on each run using the If-Modified-Since HTTP header with the file time of the existing file, if any. Can someone suggest how to do it in Python? Or can someone suggest an alternative, easy way to achieve my goal?
Sync local file with HTTP server location (in Python)
0
0
1
0
1
1,447
7,623,795
2011-10-02T00:12:00.000
7
0
1
0
0
python,pydev
0
7,623,803
0
2
0
false
0
0
I don't know Pydev, but in most editors Shift+Tab will do the trick.
2
3
0
0
I am using PyDev for Python development. I am facing an issue when removing indentation for a block of statements. If I have to add indentation, I press SHIFT + down arrow until I reach the end of the block of statements which I want to indent, and then press the TAB key. This is how I add an indent for a block of statements in one step. The issue I am now facing is removing an indent in one step for a block of statements. For example, I have a for loop and a block of statements within that for loop. Now I don't want to have the for loop any more, and I want to remove the indent underlying the for loop's statement block. At present I am going to each line and pressing backspace to remove that indentation. Is there an easy way to do this for the entire statement block?
Removing indent in PYDEV
0
1
1
0
0
2,441
7,623,795
2011-10-02T00:12:00.000
2
0
1
0
0
python,pydev
0
7,626,593
0
2
0
false
0
0
From the pydev.org page: Block indent (and dedent): Tab / Shift-Tab. Smart indent (and dedent): Enter / Backspace.
2
3
0
0
I am using PyDev for Python development. I am facing an issue when removing indentation for a block of statements. If I have to add indentation, I press SHIFT + down arrow until I reach the end of the block of statements which I want to indent, and then press the TAB key. This is how I add an indent for a block of statements in one step. The issue I am now facing is removing an indent in one step for a block of statements. For example, I have a for loop and a block of statements within that for loop. Now I don't want to have the for loop any more, and I want to remove the indent underlying the for loop's statement block. At present I am going to each line and pressing backspace to remove that indentation. Is there an easy way to do this for the entire statement block?
Removing indent in PYDEV
0
0.197375
1
0
0
2,441
7,632,642
2011-10-03T08:27:00.000
0
1
0
1
0
python,windows,python-3.x,serial-port,automated-tests
0
7,712,917
0
3
0
false
0
0
I was also able to solve this using WScript, but pySerial was the preferred solution.
1
2
0
0
I am testing a piece of hardware which hosts an ftp server. I connect to the server in order to configure the hardware in question. My test environment is written in Python 3. To start the ftp server, I need to launch a special proprietary terminal application on my pc. I must use this software as far as I know and I have no help files for it. I do however know how to use it to launch the ftp server and that's all I need it for. When I start this app, I go to the menu and open a dialog where I select the com port/speed the hardware is connected to. I then enter the command to launch the ftp server in a console like window within the application. I am then prompted for the admin code for the hardware, which I enter. When I'm finished configuring the device, I issue a command to restart the hardware's software. In order for me to fully automate my tests, I need to remove the manual starting of this ftp server for each test. As far as I know, I have two options: Windows GUI automation Save the stream of data sent on the com port when using this application. I've tried to find an GUI automater but pywinauto isn't supporting Python 3. Any other options here which I should look at? Any suggestions on how I can monitor the com port in question and save the traffic on it? Thanks, Barry
Control rs232 windows terminal program from python
0
0
1
0
0
1,707
7,638,787
2011-10-03T17:59:00.000
0
1
1
0
0
python,unicode,localization,fonts
0
7,638,836
0
2
0
false
0
0
Use utf-8 text and a font that has glyphs for every possible character defined, like Arial/Verdana in Windows. That bypasses the entire detection problem. One font will handle everything.
2
1
0
0
For a program of mine I have a database full of street name (using GIS stuff) in unicode. The user selects any part of the world he wants to see (using openstreetmap, google maps or whatever) and my program displays every streets selected using a nice font to show their names. As you may know not every font can display non latin characters... and it gives me headaches. I wonder how to tell my program "if this word is written in chinese, then use a chinese font". EDIT: I forgot to mention that I want to use non-standard fonts. Arial, Courier and some other can display non-latin words, but I want to use other fonts (I have a specific font for chinese, another one for japanese, another one for arabic...). I just have to know what font to chose depending of the word I want to write.
How to detect the right font to use depending on the langage
0
0
1
0
0
210
7,638,787
2011-10-03T17:59:00.000
0
1
1
0
0
python,unicode,localization,fonts
0
7,684,888
0
2
0
true
0
0
You need information about the language of the text. And when you decide what fonts you want, you do a mapping from language to font. If you try to do it automatically, it does not work: the fonts for Japanese, Chinese Traditional, and Chinese Simplified look different even for the same character. They might be intelligible, but a native speaker would be able to tell (OK, complain) that the font is wrong. Plus, if you do everything algorithmically, there is no way to consider the aesthetic part (for instance, the fact that you don't like Arial :-)
2
1
0
0
For a program of mine I have a database full of street name (using GIS stuff) in unicode. The user selects any part of the world he wants to see (using openstreetmap, google maps or whatever) and my program displays every streets selected using a nice font to show their names. As you may know not every font can display non latin characters... and it gives me headaches. I wonder how to tell my program "if this word is written in chinese, then use a chinese font". EDIT: I forgot to mention that I want to use non-standard fonts. Arial, Courier and some other can display non-latin words, but I want to use other fonts (I have a specific font for chinese, another one for japanese, another one for arabic...). I just have to know what font to chose depending of the word I want to write.
How to detect the right font to use depending on the langage
0
1.2
1
0
0
210
7,660,059
2011-10-05T10:42:00.000
11
1
0
1
0
python,google-app-engine,oauth,openid,facebook-authentication
0
7,662,946
0
1
0
true
1
0
In my research on this question I found that there are essentially three options:

1. Use Google's authentication mechanisms (including their federated login via OpenID).
Pros: You can easily check who is logged in via the Users service provided with Appengine; Google handles the security, so you can be quite sure it's well tested.
Cons: This can only integrate with third-party OpenID providers; it cannot integrate with facebook/twitter at this time.

2. Use the social authentication mechanisms provided by a known framework such as tipfy or django.
Pros: These can integrate with all of the major social authentication services; they are quite widely used, so they are likely to be quite robust and pretty well tested.
Cons: While they are probably well tested, they may not be maintained; they do come as part of a larger framework which you may have to get comfortable with before deploying your app.

3. Roll your own social authentication.
Pros: You can mix up whatever flavours of OpenID and OAuth tickle your fancy.
Cons: You are most likely to introduce security holes; unless you have a bit of experience working with these technologies, this is likely to be the most time consuming.

Further notes: It's probable that everyone will move to OpenID eventually, and then the standard Google authentication should work everywhere. The first option allows you to point a finger at Google if there is a problem with their authentication; the second option imposes more responsibility on you, but still allows you to say that you use a widely used solution if there is a problem; and the final option puts all the responsibility on you. Most of the issues revolve around session management: in case 1, Google does all of the session management and it is pretty invisible to the developer; in case 2, the session management is handled by the framework; and in the third case, you have to devise your own.
1
6
0
0
[This question is intended as a means to both capture my findings and sanity check them - I'll put up my answer toute suite and see what other answers and comments appear.] I spent a little time trying to get my head around the different social authentication options for (python) Appengine. I was particularly confused by how the authentication mechanisms provided by Google can interact with other social authentication mechanisms. The picture is complicated by the fact that Google has nice integration with third party OpenID providers but some of the biggest social networks are not OpenID providers (eg facebook, twitter). [Note that facebook can use OpenID as a relaying party, but not as a provider]. The question is then the following: what are the different options for social authentication in Appengine and what are the pros and cons of each?
What are the different options for social authentication on Appengine - how do they compare?
1
1.2
1
0
0
552
7,671,348
2011-10-06T07:30:00.000
3
0
1
0
0
python,package,main,demo,convention
0
7,671,369
0
2
0
false
0
0
I've never seen any real convention for this, but I personally put it in a main sentinel within __init__.py so that it can be invoked via python -m somepackage.
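For comparison, a sketch of the variant I'd rely on in modern Pythons, where the runnable entry point lives in a __main__.py inside the package, which python -m somepackage executes (the demo module and run function are hypothetical names):

```python
# somepackage/__main__.py -- executed by: python -m somepackage
from somepackage import demo  # hypothetical demo module

if __name__ == "__main__":
    demo.run()
```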
2
5
0
0
I am coding a new python package to be used by others. To demonstrate how it should be used, I am writing a demo script that executes the main parts of the new package. What is the convention for doing this, so that other will find the script easily? Should it be a separate module (by what name)? Should it be located in the package's root directory? Out of the package? In __init__.py?
Where should I place a Python package's demo script?
0
0.291313
1
0
0
348
7,671,348
2011-10-06T07:30:00.000
4
0
1
0
0
python,package,main,demo,convention
0
7,672,997
0
2
0
false
0
0
Should it be a separate module (by what name)? demo/some_useful_name.py. A demo directory contains demo scripts; similarly, a test directory contains all your unit tests. Should it be located in the package's root directory? No. It's not part of the package; it's a demo. Out of the package? Yes. In __init__.py? Never. A package has two lives: (1) as uninstalled source, (2) in lib/site-packages as installed code. The "source" should include README, setup.py, the demo directory, the test directory, and the package itself. The top-level "source" setup.py should install just the package. The demo and test don't get installed; they get left behind as part of the download.
2
5
0
0
I am coding a new python package to be used by others. To demonstrate how it should be used, I am writing a demo script that executes the main parts of the new package. What is the convention for doing this, so that other will find the script easily? Should it be a separate module (by what name)? Should it be located in the package's root directory? Out of the package? In __init__.py?
Where should I place a Python package's demo script?
0
0.379949
1
0
0
348
7,690,324
2011-10-07T16:47:00.000
0
0
0
0
0
wxpython,widget
0
7,691,411
0
1
0
false
0
1
The wxPython demo has a fairly complicated window in their MDI with SashWindows demo. However, I keep seeing on the wxPython mailing list that MDI in general isn't usually recommended anyway. If I were you, I'd look at wx.lib.agw.aui. It's a pure python implementation of AUI and fixes a lot of the bugs in wx.aui. I know it's not quite the same, but at least it gets active development.
1
0
0
1
I want to put a bunch of widgets into an MDIChildFrame , using wxpython, but i cant find much documentation on how to do so. has anyone created a child frame with alot going on it in so i can take a look at the source code? would be really Helpful Cheers Kemill
Using wx.MDIChildFrame Widgets
0
0
1
0
0
317
7,698,217
2011-10-08T16:32:00.000
1
0
0
0
0
python,gtk,pygtk
0
7,705,370
0
2
0
false
0
1
You might be able to set the pixmap opacity by implementing a custom gtk.CellRenderer that draws the pixmap according to the selection state, and replacing the gtk.IconView's default cell renderer with your own.
1
0
0
0
I want to change the opacity or color of a gtk.IconView select box (I want actually to make the selection more visible). I noticed that the gtk.IconView widget had style properties selection-box-alpha & selection-box-color but only accessible for reading. The set_select_function() method of the gtk.TreeSelection class would have been useful to do what I want but it's used for a gtk.TreeView and I haven't found an equivalent for gtk.IconView So, how can I do to have control over the selection and perform an action when the user select or unselect stuff ? Edit : In fact, change the values of selection-box-alpha and selection-box-color style properties wouldn't be a solution. I don't really want to change the selection box opacity but the "opacity" of the pixbuf (by compositing with a color). So, I need an equivalent method of set_select_function for a gtk.IconView widget.
How to get the control over the selection in a gtk.IconView?
0
0.099668
1
0
0
735
7,712,554
2011-10-10T12:12:00.000
1
1
0
0
0
python,imap
0
7,793,784
0
1
1
true
1
0
The UID is guaranteed to be unique. Store each one locally.
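A sketch with the standard imaplib module; host, credentials, and the persistence of seen UIDs are all placeholders:

```python
import imaplib

already_seen = set()  # load UIDs persisted from earlier runs here

conn = imaplib.IMAP4_SSL("imap.example.com")  # placeholder host
conn.login("user", "password")                # placeholder credentials
conn.select("INBOX")

# UIDs are stable within a mailbox, so storing them between runs lets
# the bot skip messages it has already answered.
typ, data = conn.uid("SEARCH", None, '(FROM "Mark")')
for uid in data[0].split():
    if uid not in already_seen:
        # fetch with conn.uid("FETCH", uid, "(RFC822)"), reply, then:
        already_seen.add(uid)
conn.logout()
```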
1
0
0
0
I am working on an application that is supposed to connect to IMAP account, read through emails and pick out emails sent by lets say "Mark", then it is supposed to respond to mark with an automatic response such as "Got it mate" and then do the same tomorrow, with the only difference that tomorrow it should not respond to the same email. I am not sure how to achieve this the best way, I have thought of storing the processed IDs in a table, or record last check date. But I feel these are not the best CS solutions.
Best way to earmark messages in an IMAP folder?
0
1.2
1
0
0
59
7,716,357
2011-10-10T17:22:00.000
1
0
1
1
1
python,permissions,cygwin
1
16,814,809
0
8
0
false
0
0
This seems to be a late answer, but it may be useful for others. I got the same kind of error when I was trying to run a shell script which used Python. Please check /usr/bin for the existence of python; if it is not found, install it to solve the issue. I come to such a conclusion because the error says "bad interpreter".
2
10
0
0
I'm using Cygwin on Windows 7 to run a bash script that activates a Python script, and I am getting the following error: myscript.script: /cydrive/c/users/mydrive/folder/myscript.py: usr/bin/env: bad interpreter: Permission Denied. I'm a total newbie to programming, so I've looked around a bit, and I think this means Python is mounted on a different directory that I don't have access to. However, based on what I found, I have tried the following things: change something (from user to exec) in the fstab: however, my fstab file is all commented out and only mentions what the defaults are (I don't know how I can change the defaults, and the fstab.d folder is empty); change the #! usr/bin/env python line in the script to the actual location of Python: did not work, same error; add a PYTHONPATH to the environment variables of Windows: same error. I would really appreciate it if someone could help me out with a suggestion!
usr/bin/env: bad interpreter Permission Denied --> how to change the fstab
0
0.024995
1
0
0
33,994
7,716,357
2011-10-10T17:22:00.000
0
0
1
1
1
python,permissions,cygwin
1
26,632,686
0
8
0
false
0
0
You should write your command as python ./example.py, then fix it in your script.
2
10
0
0
I'm using Cygwin on Windows 7 to run a bash script that activates a Python script, and I am getting the following error: myscript.script: /cydrive/c/users/mydrive/folder/myscript.py: usr/bin/env: bad interpreter: Permission Denied. I'm a total newbie to programming, so I've looked around a bit, and I think this means Python is mounted on a different directory that I don't have access to. However, based on what I found, I have tried the following things: change something (from user to exec) in the fstab: however, my fstab file is all commented out and only mentions what the defaults are (I don't know how I can change the defaults, and the fstab.d folder is empty); change the #! usr/bin/env python line in the script to the actual location of Python: did not work, same error; add a PYTHONPATH to the environment variables of Windows: same error. I would really appreciate it if someone could help me out with a suggestion!
usr/bin/env: bad interpreter Permission Denied --> how to change the fstab
0
0
1
0
0
33,994
7,720,932
2011-10-11T02:55:00.000
0
1
0
0
1
python,tcp,speex,pyaudio
0
13,102,430
0
1
0
false
1
0
Did you run ping or ttcp to test network performance between the two hosts? If you have latency spikes, or if some packets are dropped, your approach to sending the voice stream will suffer badly: TCP will wait for the missing packet, report it as lost, wait for a retransmit, etc. You should be using UDP over lossy links, with audio compression that handles missing packets gracefully. Also, in that case you have to timestamp outgoing packets.
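A sketch of sending timestamped, sequence-numbered chunks over UDP (address and chunk contents are placeholders):

```python
import socket
import struct
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("192.0.2.10", 9001)  # placeholder address and port

def send_chunk(seq, data):
    # A 4-byte sequence number plus an 8-byte timestamp lets the receiver
    # detect loss and reordering instead of stalling the way TCP does.
    header = struct.pack("!Id", seq, time.time())
    sock.sendto(header + data, DEST)

send_chunk(0, b"\x00" * 320)  # one dummy audio chunk
```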
1
2
0
0
Hello I am having problems with audio being sent over the network. On my local system with no distance there is no problems but whenever I test on a remote system there is audio but its not the voice input i want its choppy/laggy etc. I believe its in how I am handling the sending of the audio but I have tried now for 4 days and can not find a solution. I will post all relevant code and try and explain it the best I can these are the constant/global values #initilaize Speex speex_enc = speex.Encoder() speex_enc.initialize(speex.SPEEX_MODEID_WB) speex_dec = speex.Decoder() speex_dec.initialize(speex.SPEEX_MODEID_WB) #some constant values chunk = 320 FORMAT = pyaudio.paInt16 CHANNELS = 1 RATE = 44100 I found adjusting the sample rate value would allow for more noise Below is the pyAudio code to initialize the audio device this is also global #initalize PyAudio p = pyaudio.PyAudio() stream = p.open(format = FORMAT, channels = CHANNELS, rate = RATE, input = True, output = True, frames_per_buffer = chunk) This next function is the keypress function which writes the data from the mic and sends it using the client function This is where I believe I am having problems. I believe how I am handling this is the problem because if I press and hold to get audio it loops and sends on each iteration. I am not sure what to do here. (Ideas!!!) def keypress(event): #chunklist = [] #RECORD_SECONDS = 5 if event.keysym == 'Escape': root.destroy() #x = event.char if event.keysym == 'Control_L': #for i in range(0, 44100 / chunk * RECORD_SECONDS): try: #get data from mic data = stream.read(chunk) except IOError as ex: if ex[1] != pyaudio.paInputOverflowed: raise data = '\x00' * chunk encdata = speex_enc.encode(data) #Encode the data. #chunklist.append(encdata) #send audio client(chr(CMD_AUDIO), encrypt_my_audio_message(encdata)) The server code to handle the audio ### Server function ### def server(): PORT = 9001 ### Initialize socket server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server_socket.bind((socket.gethostbyname(socket.gethostname()), PORT)) # socket.gethostbyname(socket.gethostname()) server_socket.listen(5) read_list = [server_socket] ### Start receive loop while True: readable, writable, errored = select.select(read_list, [], []) for s in readable: if s is server_socket: conn, addr = s.accept() read_list.append(conn) print "Connection from ", addr else: msg = conn.recv(2048) if msg: cmd, msg = ord(msg[0]),msg[1:] ## get a text message from GUI if cmd == CMD_MSG: listb1.insert(END, decrypt_my_message(msg).strip() + "\n") listb1.yview(END) ## get an audio message elif cmd == CMD_AUDIO: # make sure length is 16 --- HACK --- if len(msg) % 16 != 0: msg += '\x00' * (16 - len(msg) % 16) #decrypt audio data = decrypt_my_message(msg) decdata = speex_dec.decode(data) #Write the data back out to the speaker stream.write(decdata, chunk) else: s.close() read_list.remove(s) and for completion the binding of the keyboard in Tkinter root.bind_all('', keypress) Any ideas are greatly appreciated how I can make that keypress method work as needed or suggest a better way or maybe I am doing something wrong altogether *cheers Please note I have tested it without the encryption methods also and same thing :-)
Python Audio over Network Problems
0
0
1
0
1
2,438
7,723,399
2011-10-11T08:43:00.000
0
0
0
1
0
python,ipython
0
7,723,501
0
2
0
false
0
0
I think it is not possible to provide the full functionality of IPython, such as auto-completion. Twisted has a Python shell that works well with telnet.
1
1
0
0
I am trying to write a simple telnet server that will expose an IPython shell to the connected client. Does someone know how to do that? The question is really about embedding the IPython shell into the telnet server (I can probably use Twisted for the telnet server part). Thx
Writing a telnet server in Python and embedding IPython as shell
0
0
1
0
0
830
7,727,017
2011-10-11T13:42:00.000
0
0
1
1
0
python,windows,interpreter,ipython
0
7,820,862
0
4
0
true
0
0
Found a solution: python27.exe c:\Python27\Scripts\ipython-script.py
4
0
0
0
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain. But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename. My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
Running IPython after changing the filename of python.exe
0
1.2
1
0
0
550
7,727,017
2011-10-11T13:42:00.000
1
0
1
1
0
python,windows,interpreter,ipython
0
7,727,342
0
4
0
false
0
0
Try to find Python in the Windows registry and change the path to Python. After that, try to reinstall IPython.
4
0
0
0
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain. But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename. My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
Running IPython after changing the filename of python.exe
0
0.049958
1
0
0
550
7,727,017
2011-10-11T13:42:00.000
0
0
1
1
0
python,windows,interpreter,ipython
0
7,727,120
0
4
0
false
0
0
I do not know if there is a config file where you can change this, but you may have to recompile IPython and change the interpreter variables. But why do you need to rename it to python27.exe when it is already in a Python27 folder?
4
0
0
0
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain. But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename. My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
Running IPython after changing the filename of python.exe
0
0
1
0
0
550
7,727,017
2011-10-11T13:42:00.000
0
0
1
1
0
python,windows,interpreter,ipython
0
7,727,290
0
4
0
false
0
0
Instead of renaming python.exe, make sure the path to the Python you want to run comes before the paths to other Pythons.
4
0
0
0
If I rename the python interpreter from C:\Python27\python.exe to C:\Python27\python27.exe and run it, it will not complain. But if I now try to run C:\Python27\Scripts\ipython.exe, it will fail to start because now the python interpreter has a different filename. My question is: how do I configure IPython (ms windows) to start up a python interpreter which has a different filename than python.exe?
Running IPython after changing the filename of python.exe
0
0
1
0
0
550
7,731,324
2011-10-11T19:20:00.000
42
0
1
1
0
python,pydev
0
7,742,529
0
2
0
true
0
0
Ctrl+Shift+G will find all the references to a function in PyDev (F3 will go to the definition of a function).
1
35
0
0
This has probably been asked before but I can't seem to find the answer. I've moved from windows to Linux and started using PyDev (Aptana) recently but what I cannot seem to find is how to find all references to a function.
pydev: find all references to a function
0
1.2
1
0
0
10,065
7,733,364
2011-10-11T22:36:00.000
1
0
1
0
1
python,transparency,python-imaging-library,indexed-image
0
7,733,515
0
2
0
false
0
0
Once you merge the two images, you won't have two colors any more - the colors will combine based on the transparency of each one at every pixel location. Worst case, you will have 256*256=65536 colors, which can't be indexed and wouldn't compress well if you did. I would suggest saving as a PNG and let the lossless compression do its best.
1
5
0
0
I wonder how I could create an image with a transparent background and only 2 indexed colours (red and blue) to minimise the file size. More specifically, I have two black and white images that I want to convert: one to transparent and blue, and the other to transparent and red. Then I want to merge those two images. I could do that with a regular RGBA image, but I really want the colours to be indexed to minimise the file size. Ideally with PIL, but another Python library could also work.
Python PIL: Create indexed color image with transparent background
0
0.099668
1
0
0
6,876
7,733,969
2011-10-12T00:15:00.000
2
0
0
0
0
python,numpy,machine-learning,scipy
0
7,734,072
0
4
0
false
0
0
You can import the random module and call random.random to get a random sample from [0, 1). You can double that and subtract 1 to get a sample from [-1, 1). Draw d values this way and the tuple will be a uniform draw from the cube [-1, 1)^d.
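The same idea in one line of numpy, which may be more convenient for d dimensions:

```python
import numpy as np

d, N = 10, 1000
# N samples drawn uniformly from the cube [-1, 1)^d
points = 2 * np.random.random((N, d)) - 1
# equivalently: points = np.random.uniform(-1.0, 1.0, size=(N, d))
print(points.shape)  # (1000, 10)
```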
1
7
1
0
How can I generate uniformly distributed [-1,1]^d data in Python? E.g. d is a dimension, like 10. I know how to generate uniformly distributed data like np.random.randn(N), but the dimension thing has me confused a lot.
Uniformly distributed data in d dimensions
0
0.099668
1
0
0
13,886
7,747,852
2011-10-12T23:43:00.000
0
0
0
0
0
javascript,python,macos,unix,controls
0
7,747,962
0
3
0
false
1
0
On the client side (the browser), you can do it with the simplest approach: just an HTML form. JavaScript would make it nicer, for validation and for making Ajax calls so the page doesn't have to refresh. But your main focus is handling it on the server. You could receive the form request in the language of your choice. If you are already running Python, you could write a quick CGI Python script; look at the cgi module for Python. You would need to put this into the Apache server on OS X if that's where you will host it. Unfortunately, your question about exactly how to write it is beyond the scope of a simple answer, but google for how to write an HTML form, or look at maybe jQuery to build a quick form that can make Ajax calls easily. Then search for how to use the Python cgi module and receive POST requests.
2
0
0
0
I want to have a "control panel" on a website, and when a button is pressed, I want it to run a command on the server (my computer). The panel is to run different python scripts I wrote (one script for each button), and I want to run the panel on my Mac, my iPod touch, and my wii. The best way I see for this is a website, since they all have browsers. Is there a javascript or something to run a command on my computer whenever the button is pressed? EDIT: I heard AJAX might work for server-based things like this, but I have no idea how to do that. Is there like a 'system' block or something I can use?
Running command with browser
0
0
1
0
1
198
7,747,852
2011-10-12T23:43:00.000
0
0
0
0
0
javascript,python,macos,unix,controls
0
7,747,896
0
3
0
true
1
0
Here are three options. (1) Have each button submit a form with the name of the script in a hidden field; the server will receive the form parameters and can then branch off to run the appropriate script. (2) Have each button hooked to its own unique URL and use JavaScript on the button click to just set window.location to that new URL; your server will receive that URL and can decide which script to run based on the URL. You could even just use a link on the web page with no JavaScript. (3) Use Ajax to issue a unique URL to your server. This is essentially the same (from the server's point of view) as the previous two options; the main difference is that the web browser doesn't change what URL it's pointing to. The Ajax call just directs the server to do something and return some data, which the host web page can then do whatever it wants with.
2
0
0
0
I want to have a "control panel" on a website, and when a button is pressed, I want it to run a command on the server (my computer). The panel is to run different python scripts I wrote (one script for each button), and I want to run the panel on my Mac, my iPod touch, and my wii. The best way I see for this is a website, since they all have browsers. Is there a javascript or something to run a command on my computer whenever the button is pressed? EDIT: I heard AJAX might work for server-based things like this, but I have no idea how to do that. Is there like a 'system' block or something I can use?
Running command with browser
0
1.2
1
0
1
198
7,752,532
2011-10-13T10:09:00.000
1
0
1
0
0
python,ipython
0
7,754,105
0
1
0
true
0
0
If you give the script a .ipy extension, ipython's special syntax (like !ls) should work when you do ipython myscript.ipy.
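For example, a two-line script that only IPython will accept, since the ! shell escape is not valid plain Python:

```python
# myscript.ipy -- run with: ipython myscript.ipy
files = !ls   # IPython shell-escape syntax
print(files)
```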
1
1
0
0
It's so convenient to use shell escapes from the interactive environment in IPython, but is it possible to run a Python script containing shell escapes from IPython?
how can i run a script containing shell escape from ipython?
1
1.2
1
0
0
178
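For concreteness, a tiny hypothetical myscript.ipy (note this is IPython's extended syntax, not plain Python, which is why the .ipy extension matters):

# myscript.ipy -- run with: ipython myscript.ipy
files = !ls            # shell escape, captured as a list of output lines
for name in files:
    print(name.upper())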
7,770,034
2011-10-14T15:37:00.000
-5
0
0
1
0
python,linux
0
7,770,065
0
4
0
false
0
0
You could just parse /etc/passwd; user IDs are stored there.
1
34
0
0
There is os.getuid() which "Returns the current process’s user id.". But how do I find out any given user's id?
in Python and linux how to get given user's id
0
-1
1
0
0
48,007
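A safer route than hand-parsing /etc/passwd is the standard library's pwd module, which reads the same database for you; a minimal sketch (the username is a placeholder):

import pwd

entry = pwd.getpwnam('alice')  # look up a user by name
print(entry.pw_uid)            # that user's numeric id
print(entry.pw_gid)            # their primary group id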
7,784,969
2011-10-16T14:40:00.000
0
0
0
0
1
python,wxpython
0
7,785,429
0
3
0
false
0
0
I am not a wx expert. Could you use wx's native event driven mechanisms? The keypress would certainly have an event. Wx has a socket class wxSocketClient() that could translate the low level socket events (data ready, closed, etc) into a wx event.
1
1
0
0
I want to write an (GUI) application that listens both to keyboard events (client side generated events) and to a network port (server side generated events). I could use some high level advice on how to do this. Some additional info: - I am using the wxPython module for the GUI - I could set the socket in non-blocking mode, but this way I have to keep polling the socket by keeping executing the recv() command. I did this earlier and I can recall that this used considerable resources - I could use the thread module, but since I am not familiar with it, I try to avoid this, but maybe I can't Advice would be appreciated.
Listening to network event and keyboard input at the same time in Python
0
0
1
0
1
803
7,792,013
2011-10-17T09:48:00.000
2
1
1
0
0
c#,python,mono
0
7,792,073
0
2
0
false
0
1
Have you considered IronPython? It's trivial to integrate and since it's working directly with .net the integration works very well.
2
1
0
1
I've decided to try and create a game before I finish studies. Searching around the net, I decided to create the basic game logic in python (for simplicity and quicker development time), and the actual I/O engine in c# (for better performance. specifically, I'm using Mono with the SFML library). After coming to grips with both languages and IDEs, I've gotten stuck on integrating the two, which leads me to three questions (the most important one is the second): a. which module should encapsulate the other? should the python game logic call the c# I/O for input and then update it for output, or should it be the other way around? b. whatever the answer is, how can I do it? I haven't found any specific instructions on porting or integrating scripts or binaries in either language. c. Will the calls between modules be significantly harmful for performance? If they will, should I just develop everything in in one language? Thanks!
Integrating python and c#
0
0.197375
1
0
0
814
7,792,013
2011-10-17T09:48:00.000
2
1
1
0
0
c#,python,mono
0
7,792,064
0
2
0
true
0
1
Honestly, I would say that today C# gives you a lot of the good things from Python. To quote Jon Skeet: Do you know what I really like about dynamic languages such as Python, Ruby, and Groovy? They suck away fluff from your code, leaving just the essence of it—the bits that really do something. Tedious formality gives way to features such as generators, lambda expressions, and list comprehensions. The interesting thing is that few of the features that tend to give dynamic languages their lightweight feel have anything to do with being dynamic. Some do, of course—duck typing, and some of the magic used in Active Record, for example—but statically typed languages don't have to be clumsy and heavyweight. And you can have dynamic typing too. Since that's a new project, I would just use C# here.
2
1
0
1
I've decided to try and create a game before I finish studies. Searching around the net, I decided to create the basic game logic in python (for simplicity and quicker development time), and the actual I/O engine in c# (for better performance. specifically, I'm using Mono with the SFML library). After coming to grips with both languages and IDEs, I've gotten stuck on integrating the two, which leads me to three questions (the most important one is the second): a. which module should encapsulate the other? should the python game logic call the c# I/O for input and then update it for output, or should it be the other way around? b. whatever the answer is, how can I do it? I haven't found any specific instructions on porting or integrating scripts or binaries in either language. c. Will the calls between modules be significantly harmful for performance? If they will, should I just develop everything in in one language? Thanks!
Integrating python and c#
0
1.2
1
0
0
814
7,821,977
2011-10-19T13:24:00.000
0
0
1
0
0
python,kml
0
7,822,039
0
3
0
false
0
0
You can download the KML file in python using urllib. For reading KML, you can use a parser (search for "kml python parser").
1
1
0
0
I'd like to download a KML file and print a particular element of it as a string in Python. Could anyone give me an example of how to do this? Thanks.
KML to string in Python?
0
0
1
0
0
2,981
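A minimal sketch of that flow with the standard library, assuming a KML 2.2 document and that the element you want is <name> (the URL and element are placeholders):

import urllib.request
import xml.etree.ElementTree as ET

KML_NS = '{http://www.opengis.net/kml/2.2}'  # default KML 2.2 namespace

data = urllib.request.urlopen('http://example.com/some.kml').read()
root = ET.fromstring(data)
for name in root.iter(KML_NS + 'name'):  # every <name> element in the file
    print(name.text)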
7,827,859
2011-10-19T20:42:00.000
2
0
0
0
0
python,database,database-design
0
7,827,955
0
4
0
false
0
0
Just a general data modeling concept: you never want to name anything "...NumberOne", "...NumberTwo". Data models designed in this way are very difficult to query. You'll ultimately need to visit each of N tables for 1 to N ingredients. Also, each table in the model would ultimately have the same fields making maintenance a nightmare. Rather, just have one ingredient table that references the "recipe" table. Ultimately, I just realized this doesn't exactly answer the question, but you could implement this solution in Sqlite. I just get worried when good developers start introducing bad patterns into the data model. This comes from a guy who's been on both sides of the coin.
1
0
0
0
I'm very new to Python and I'm trying to write a sort of recipe organizer to get acquainted with the language. Basically, I am unsure how I should be storing the recipes. For now, the information I want to store is: Recipe name Ingredient names Ingredient quantities Preparation I've been thinking about how to do this with the built-in sqlite3, but I know nothing about database architecture, and haven't been able to find a good reference. I suppose one table would contain recipe names and primary keys. Preparation could be in a different table with the primary key as well. Would each ingredient/quantity pair need its own table? In other words, there would be a table for ingredientNumberOne, and each recipe's first ingredient, with the quantity, would go in there. Then each time a recipe comes along with more ingredients than there are tables, a new table would be created. Am I even correct in assuming that sqlite3 is sufficient for this task?
How to store recipe information with Python
0
0.099668
1
1
0
2,639
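To make the single-ingredient-table design concrete, a minimal sqlite3 sketch (table and column names are only illustrative): each ingredient row points at its recipe, so a recipe can have any number of ingredients without new tables.

import sqlite3

conn = sqlite3.connect('recipes.db')
conn.executescript("""
CREATE TABLE IF NOT EXISTS recipe (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    preparation TEXT
);
CREATE TABLE IF NOT EXISTS ingredient (
    id INTEGER PRIMARY KEY,
    recipe_id INTEGER NOT NULL REFERENCES recipe(id),
    name TEXT NOT NULL,
    quantity TEXT
);
""")
cur = conn.execute("INSERT INTO recipe (name, preparation) VALUES (?, ?)",
                   ('Pancakes', 'Mix and fry.'))
rid = cur.lastrowid
conn.executemany(
    "INSERT INTO ingredient (recipe_id, name, quantity) VALUES (?, ?, ?)",
    [(rid, 'flour', '200 g'), (rid, 'milk', '300 ml'), (rid, 'egg', '2')])
conn.commit()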
7,843,786
2011-10-21T00:24:00.000
3
0
1
0
0
python
0
7,843,795
0
4
0
false
0
0
It sounds like you're using Numpy. If so, the shape (38845,) means you have a 1-dimensional array, of size 38845.
3
0
0
0
I am using a package and it is returning me an array. When I print the shape it is (38845,). Just wondering why this ','. I am wondering how to interpret this. Thanks.
How to intepret the shape of the array in Python?
1
0.148885
1
0
0
376
7,843,786
2011-10-21T00:24:00.000
0
0
1
0
0
python
0
7,843,917
0
4
0
false
0
0
Just wondering why this ','. Because (38845) is the same thing as 38845, but a tuple is expected here, not an int (since in general, your array could have multiple dimensions). (38845,) is a 1-tuple.
3
0
0
0
I am using a package and it is returning me an array. When I print the shape it is (38845,). Just wondering why this ','. I am wondering how to interpret this. Thanks.
How to intepret the shape of the array in Python?
1
0
1
0
0
376
7,843,786
2011-10-21T00:24:00.000
8
0
1
0
0
python
0
7,843,805
0
4
0
true
0
0
Python has tuples, which are like lists but of fixed size. A two-element tuple is (a, b); a three-element one is (a, b, c). However, (a) is just a in parentheses. To represent a one-element tuple, Python uses a slightly odd syntax of (a,). So there is only one dimension, and you have a bunch of elements in that one dimension.
3
0
0
0
I am using a package and it is returning me an array. When I print the shape it is (38845,). Just wondering why this ','. I am wondering how to interpret this. Thanks.
How to intepret the shape of the array in Python?
1
1.2
1
0
0
376
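A quick interactive check of the points above, assuming NumPy:

import numpy as np

a = np.zeros(38845)         # 1-D array
print(a.shape)              # (38845,) -- a 1-tuple
print(a.shape == (38845,))  # True; (38845) without the comma is just an int
b = a.reshape(5, 7769)      # same data viewed as 2-D
print(b.shape)              # (5, 7769)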
7,846,355
2011-10-21T07:36:00.000
2
0
0
0
0
python,postgresql,postgis,geodjango
0
7,904,142
0
6
0
false
0
0
I have no experience with GeoDjango, but on PostgreSQL/PostGIS you have the st_distance(..) function. So, you can order your results by st_distance(geom_column, your_coordinates) asc and see what are the nearest rows. If you have plain coordinates (no postgis geometry), you can convert your coordinates to a point with the geometryFromText function. Is that what you were looking for? If not, try to be more explicit.
1
13
0
0
I am using GeoDjango with PostGIS, but I am having trouble working out how to get the record nearest to given coordinates from my Postgres table.
How can I query the nearest record in a given coordinates(latitude and longitude of string type)?
0
0.066568
1
1
0
5,401
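A minimal sketch of the st_distance ordering from Python with psycopg2 (table, column, and connection details are placeholders; assumes geometries stored with SRID 4326):

import psycopg2

conn = psycopg2.connect("dbname=gisdb user=gisuser")
cur = conn.cursor()
cur.execute("""
    SELECT id
    FROM places
    ORDER BY ST_Distance(
        geom,
        ST_SetSRID(ST_MakePoint(%s, %s), 4326)  -- lon, lat
    ) ASC
    LIMIT 1
""", (12.49, 41.89))
print(cur.fetchone())  # id of the nearest record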
7,873,170
2011-10-24T08:50:00.000
1
0
0
1
0
python
0
7,873,222
0
2
0
false
0
0
You're asking a very general question. Perhaps overly general. Generally, unless your application is relatively simple, it's impossible to guarantee that it is going to work on Linux and Mac OS X by only having Windows available. You will have to at least test it on Linux. Mac OS X is rather similar to Linux in many aspects, so you may get off the hook there, although for more complex cases it won't suffice. Python is not much different from other languages in this respect - it makes writing cross platform code easier, but it won't solve all your problems. Luckily, installing Linux on a VM is quick and free. Personally I use VirtualBox with a Ubuntu installation on top. It takes less than an hour to set up such a system from scratch (download Vbox, download an Ubuntu image and install it).
2
1
0
0
I have Windows XP. I have found some Python libraries that only work on Windows XP, so if you have Mac OS, Linux, or Windows 7, you can't use my program because it won't work. How do I make these libraries compatible with those OSes? I can't ask the creator of the libraries, so I have to download the source code and modify it, and I have to make it compatible with these OSes using my XP machine. :D My brother's PC is Windows 7, but I don't have Mac OS or Linux (unless I can use a VM). EDIT: my application is not simple
How to make python modules compatible with different OS?
0
0.099668
1
0
0
133
7,873,170
2011-10-24T08:50:00.000
0
0
0
1
0
python
0
7,873,269
0
2
0
true
0
0
Your question is quite broad: 1) Development and testing: Use VMs, absolutely, they are great for testing on OS you don't natively use, and to have a clean environment for testing (eg. test even windows stuff on a clean windows VM if you can, you might find out you're missing some dependencies that you took for granted on your dev machine). 2) Actual library porting: Depending on the library this may or may not be difficult. Why is this library only working on windows? does it use specific DLLs, via ctypes or swig or some other bindings. If the library is python code (not a C library), is it tied to windows python APIs? There are many things to take into account, if using system specific APIs/libs, can they be faked on other OSs (write small abstraction over them), or does it require a lot more code. You get the gist.
2
1
0
0
I have Windows XP. I have found some Python libraries that only work on Windows XP, so if you have Mac OS, Linux, or Windows 7, you can't use my program because it won't work. How do I make these libraries compatible with those OSes? I can't ask the creator of the libraries, so I have to download the source code and modify it, and I have to make it compatible with these OSes using my XP machine. :D My brother's PC is Windows 7, but I don't have Mac OS or Linux (unless I can use a VM). EDIT: my application is not simple
How to make python modules compatible with different OS?
0
1.2
1
0
0
133
7,876,250
2011-10-24T13:20:00.000
0
1
1
0
0
python
0
7,876,369
0
1
0
false
0
0
Constructor arguments are usually documented in the type docstring, i.e. via the tp_doc slot, so you can do help(type) (or type? in IPython) instead of help(type.__new__) or help(type.__init__).
1
1
0
0
When developing a Python plug-in (in C++), how does one go about setting the documentation for __new__? In particular, given a new type defined by a PyTypeObject structure in the C++, how does one document the arguments which can be passed to the constructor.
Generating constructor documentation for class defined in C++
0
0
1
0
0
119
7,883,962
2011-10-25T02:11:00.000
3
0
1
0
0
python,yield
0
7,884,098
0
4
0
false
0
0
Another use is in a network client. Use 'yield' in a generator function to round-robin through multiple sockets without the complexity of threads. For example, I had a hardware test client that needed to send the R, G, B planes of an image to firmware. The data needed to be sent in lockstep: red, green, blue, red, green, blue. Rather than spawn three threads, I had a generator that read from the file and encoded the buffer. Each buffer was produced by a 'yield buf'. At end of file the function returned, and I had end-of-iteration. My client code looped through the three generator functions, getting buffers until end-of-iteration.
1
43
0
0
I know how yield works. I've seen the permutations example; think of that as just a mathematical curiosity. But what is yield's true power? When should I use it? A simple, good example would be best.
Where to use yield in Python best?
0
0.148885
1
0
0
34,825
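A minimal sketch of that lockstep pattern: each plane is a generator of buffers, and the client round-robins through them; the file names and the send() stub are placeholders.

def send(buf):
    pass  # stand-in for whatever writes a buffer to the firmware

def chunks(path, size=4096):
    """Yield fixed-size buffers from a file until EOF."""
    with open(path, 'rb') as f:
        while True:
            buf = f.read(size)
            if not buf:
                return       # generator ends -> StopIteration for the caller
            yield buf

planes = [chunks('red.bin'), chunks('green.bin'), chunks('blue.bin')]
try:
    while True:
        for gen in planes:   # red, green, blue, red, green, blue...
            send(next(gen))
except StopIteration:
    pass                     # one plane ran out; transfer complete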
7,929,014
2011-10-28T12:06:00.000
1
0
0
0
0
javascript,python,django
0
7,929,052
0
2
0
false
1
0
Most sites that do something like this implement it with a second form where you attach the file. Doing the upload via ajax means you do need to store the file on your server for some amount of time, and then your original form just needs a reference to that file so you know when you're done with it. Then you just need to know when you can delete it.
2
0
0
0
I have a form with "subject", "body" and "file" fields on some page on my Django site. If "subject" and/or "body" parameters exist in GET, I pre-fill them in the form from server side. I want to do the same with "file" field - more exactly, I want if there is an "URL" parameter in request.GET, take the file from this URL and pre-fill the "file" field with it. I've googled and still have no idea how to implement this whether with pure Javascript or with server-side help, and from my experience, it isn't possible (or at least hard to do) in most browsers due to input type="file" nature. Is it in fact possible to implement it in some way?
Pre-fill a form with attached file taken from some URL in Django
1
0.099668
1
0
0
851
7,929,014
2011-10-28T12:06:00.000
1
0
0
0
0
javascript,python,django
0
7,929,043
0
2
0
true
1
0
You can't pre-fill a file field. But I don't think you need to use one at all, since you're getting the file from a URL, not from the user's local machine. Just use a normal text field for the URL, and get the file server-side (eg using urllib) after the form is submitted.
2
0
0
0
I have a form with "subject", "body" and "file" fields on some page on my Django site. If "subject" and/or "body" parameters exist in GET, I pre-fill them in the form from server side. I want to do the same with "file" field - more exactly, I want if there is an "URL" parameter in request.GET, take the file from this URL and pre-fill the "file" field with it. I've googled and still have no idea how to implement this whether with pure Javascript or with server-side help, and from my experience, it isn't possible (or at least hard to do) in most browsers due to input type="file" nature. Is it in fact possible to implement it in some way?
Pre-fill a form with attached file taken from some URL in Django
1
1.2
1
0
0
851
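A rough sketch of the accepted answer's suggestion, with an assumed Document model holding a FileField called attachment (both names are hypothetical):

from urllib.request import urlopen
from django.core.files.base import ContentFile
from myapp.models import Document  # hypothetical model with a FileField 'attachment'

def handle_form(request):
    url = request.POST['file_url']  # ordinary text field, not <input type="file">
    data = urlopen(url).read()      # fetch the remote file server-side
    doc = Document(subject=request.POST.get('subject', ''))
    doc.attachment.save('downloaded.bin', ContentFile(data))  # also saves the model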
7,937,928
2011-10-29T09:04:00.000
3
0
0
0
0
python,sockets,streaming,ipv6,multicast
1
7,939,658
0
1
0
false
0
0
You DO want to use datagrams, as with multicast there are multiple receivers and a stream socket will not work. You need to send your data in small chunks (datagrams) and state in each which part of the stream it is so receivers can detect lost (and reordered) datagrams. Instead of inventing a new mechanism for identifying the parts you are most likely better off encapsulating your data in RTP. If you are going to stream video it might be worth looking into gstreamer which can do both sending and receiving RTP and has python bindings.
1
0
0
0
I need some help in implementing a multicast streaming server over IPv6, preferably in Python. I am able to do so with datagram servers, but since I need to send large amounts of data (images and videos) over the connection, I get an error stating the data is too large to send. Can anyone tell me how to implement a streaming socket with multicast that can both send and receive data? Also, if there is a better way to do this than stream sockets, please tell me. Thank you.
How do I create a multicast stream socket over IPv6 in Python?
0
0.53705
1
0
1
570
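For the receiving side, a minimal standard-library sketch of joining an IPv6 multicast group and reading datagrams (the group and port are arbitrary examples; on some platforms the socket option is spelled IPV6_ADD_MEMBERSHIP rather than IPV6_JOIN_GROUP):

import socket
import struct

GROUP, PORT = 'ff02::1:2:3', 9999  # example link-local multicast group

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', PORT))
# Join the group: packed group address + interface index (0 = default)
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack('@I', 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(1500)  # one datagram per chunk of the stream
    print(len(data), 'bytes from', addr)
    # the sender is just sock.sendto(chunk, (GROUP, PORT)) on an AF_INET6 UDP socket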
7,941,623
2011-10-29T20:52:00.000
0
0
0
0
0
python,database,django,psycopg2
1
7,942,855
0
2
0
false
1
0
Generally, you would create the database externally before trying to hook it up with Django. Is this your private server? If so, there are command-line tools you can use to set up a PostgreSQL user and create a database. If it is a shared hosting situation, you would use CPanel or whatever utility your host provides to do this. For example, when I had shared hosting, I was issued a database user and password by the hosting administrator. Perhaps you were too. Once you have this set up, there are places in your settings.py file to put your username and password credentials, and the name of the database.
2
0
0
0
I'm creating a blog using Django. I'm getting an 'operational error: FATAL: role "[database user]" does not exist'. But I have not created any database yet; all I have done is fill in the database details in settings.py. Do I have to create a database using psycopg2? If so, how do I do it? Is it: import psycopg2 psycopg2.connect("dbname=[name] user=[user]") Thanks in advance.
how do i create a database in psycopg2 and do i need to?
1
0
1
1
0
1,391
7,941,623
2011-10-29T20:52:00.000
0
0
0
0
0
python,database,django,psycopg2
1
7,941,712
0
2
0
false
1
0
Before connecting to the database, you need to create the database, add a user, and set up access for that user. Refer to the installation/configuration guides for Postgres.
2
0
0
0
I'm creating a blog using Django. I'm getting an 'operational error: FATAL: role "[database user]" does not exist'. But I have not created any database yet; all I have done is fill in the database details in settings.py. Do I have to create a database using psycopg2? If so, how do I do it? Is it: import psycopg2 psycopg2.connect("dbname=[name] user=[user]") Thanks in advance.
how do i create a database in psycopg2 and do i need to?
1
0
1
1
0
1,391
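If you do want to create the database from psycopg2 itself, a minimal sketch: CREATE DATABASE cannot run inside a transaction, so switch the connection to autocommit first (role, password, and database names are placeholders).

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT

# Connect to the default maintenance database as an existing role
conn = psycopg2.connect("dbname=postgres user=postgres")
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("CREATE USER bloguser WITH PASSWORD 'secret'")
cur.execute("CREATE DATABASE blogdb OWNER bloguser")
conn.close()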
7,945,669
2011-10-30T15:06:00.000
2
0
0
0
0
python,xml,rss,atom-feed,feedparser
0
8,021,162
0
1
0
true
0
0
I'm the current developer of feedparser. Currently, one of the ways you can get that information is to monkeypatch feedparser._FeedParserMixin (or edit a local copy of feedparser.py). The methods you'll want to modify are: feedparser._FeedParserMixin.unknown_starttag feedparser._FeedParserMixin.unknown_endtag At the top of each method you can insert a callback to a routine of your own that will capture the elements and their attributes as they're encountered by feedparser.
1
2
0
1
I'm trying to use feedparser to retrieve some specific information from feeds, but also retrieve the raw XML of each entry (i.e. <item> elements for RSS and <entry> elements for Atom), and I can't see how to do that. Obviously I could parse the XML by hand, but that's not very elegant, would require separate support for RSS and Atom, and I imagine it could fall out of sync with feedparser for ill-formed feeds. Is there a better way? Thanks!
Retrieving raw XML for items with feedparser
0
1.2
1
0
1
1,115
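A rough sketch of the monkeypatching idea, treating the unknown_starttag signature as an assumption about feedparser's internals (verify against your installed version):

import feedparser

seen = []  # (tag, attrs) pairs in document order

_orig = feedparser._FeedParserMixin.unknown_starttag

def capturing_starttag(self, tag, attrs):
    seen.append((tag, attrs))       # record every element as feedparser meets it
    return _orig(self, tag, attrs)  # then let normal parsing continue

feedparser._FeedParserMixin.unknown_starttag = capturing_starttag

feedparser.parse('http://example.com/feed.xml')  # placeholder feed URL
print(seen[:5])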
7,960,578
2011-10-31T22:33:00.000
3
0
0
0
0
python,boto,amazon-emr
0
9,055,033
0
3
0
true
1
0
If it finishes correctly, it should not terminate with keep_alive=True. That said, it would normally exit on failure, so you want to add terminate_on_failure="CONTINUE" to your add_job_steps arguments.
1
2
0
0
How can I add steps to a waiting Amazon EMR job flow using boto without the job flow terminating once complete? I've created an interactive job flow on Amazon's Elastic Map Reduce and loaded some tables. When I pass in new steps to the job flow using Boto's emr_conn.add_jobflow_steps(...), the job flow terminates after it finishes or fails. I know I can start a job flow with boto using run_jobflow with the keep_alive parameter -- but I'd like to work with flows that are already running.
Boto: how to keep EMR job flow running after completion/failure?
0
1.2
1
0
0
2,242
7,964,869
2011-11-01T09:59:00.000
7
0
0
0
0
python,qt4,qt-designer
0
34,307,822
0
2
0
false
0
1
Right-click on your widget Select "Go to slot..." Select a signal and click OK Your custom slot declaration and definition for that signal will be added to *.cpp and *.h files. Its name will be generated automatically. upd: Sorry, I didn't notice that the question is about Python & QtDesigner itself, I was thinking of the designer mode in QtCreator IDE. However, this still may be useful for someone who is looking for Qt/C++ info, so I leave the answer.
2
22
0
0
I use Qt4 Designer, and I want some code to execute when I click the "yes" button, and some other code to execute when I click "no". How can I do it?
Qt Designer: how to add custom slot and code to a button
0
1
1
0
0
38,462
7,964,869
2011-11-01T09:59:00.000
45
0
0
0
0
python,qt4,qt-designer
0
7,965,081
0
2
0
false
0
1
Click on the Edit Signal/Slots tool. Create a connection for your button. For this, select your button in the designer by clicking it with the left mouse button. Move the mouse to some place in the main window to create a connection with the main window (it looks like a red line with an earth connection). When you release the mouse button, the Configure Connection dialog appears. In this dialog select a signal in the left text control (the sender), for example, pressed(). Then press edit in the right text control (the receiver). A dialog for the Signals/Slots of MainWindow appears. In the slot panel add a new slot (green cross). The text slot1() appears. Double click on it to edit the line and write instead the name of your function doit_when_yes_ispressed(). Accept. Now in the Configure Connection dialog you will see your function in the right text control. Select and Accept. In the designer now you can see the signal and your function in the widget.
2
22
0
0
I use Qt4 Designer, and I want some code to execute when I click the "yes" button, and some other code to execute when I click "no". How can I do it?
Qt Designer: how to add custom slot and code to a button
0
1
1
0
0
38,462
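If you prefer wiring the buttons in code rather than in Designer, a minimal PyQt4 sketch (the button names and handlers are placeholders):

from PyQt4 import QtGui

class MainWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.yes = QtGui.QPushButton('yes', self)
        self.no = QtGui.QPushButton('no', self)
        self.no.move(0, 40)
        self.yes.clicked.connect(self.on_yes)  # new-style signal/slot hookup
        self.no.clicked.connect(self.on_no)

    def on_yes(self):
        print('yes was pressed')

    def on_no(self):
        print('no was pressed')

app = QtGui.QApplication([])
w = MainWindow()
w.show()
app.exec_()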
7,976,269
2011-11-02T05:40:00.000
9
0
1
0
0
python,iterator,boolean,generator
0
7,976,309
0
3
0
false
0
0
Guido doesn't want generators and iterators to behave that way. Objects are true by default. They can be false only if they define __len__ that returns zero or __nonzero__ that returns False (the latter is called __bool__ in Py3.x). You can add one of those methods to a custom iterator, but it doesn't match Guido's intent. He rejected adding __len__ to iterators where the upcoming length is known. That is how we got __length_hint__ instead. So, the only way to tell if an iterator is empty is to call next() on it and see if it raises StopIteration. On ASPN, I believe there are some recipes using this technique for a lookahead wrapper. If a value is fetched, it is saved up for an upcoming next() call.
1
16
0
0
Other empty objects in Python evaluate as False -- how can I get iterators/generators to do so as well?
How can I get generators/iterators to evaluate as False when exhausted?
0
1
1
0
0
5,257
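One way to get that behaviour is a small lookahead wrapper of the kind mentioned above: it caches a single value so truth-testing never loses data. A minimal Python 3 sketch (use __nonzero__ instead of __bool__ on Python 2):

class Peekable:
    """Iterator wrapper that is falsy once the underlying iterator is exhausted."""
    _MISSING = object()

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cache = self._MISSING

    def __iter__(self):
        return self

    def __next__(self):
        if self._cache is not self._MISSING:
            value, self._cache = self._cache, self._MISSING
            return value
        return next(self._it)

    def __bool__(self):
        if self._cache is self._MISSING:
            try:
                self._cache = next(self._it)  # fetch and save for later
            except StopIteration:
                return False
        return True

it = Peekable(x for x in range(2))
while it:
    print(next(it))  # prints 0 then 1, loop exits cleanly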
7,986,900
2011-11-02T20:27:00.000
6
0
0
0
0
python,nlp,machine-learning,nltk
0
7,987,856
0
2
1
true
0
0
You need a huge training set of documents. A small subset of this collection (but still a large set of documents) should represent the given domain. Using nltk, calculate word statistics, taking morphology into account and filtering out stopwords. A good statistic is TF*IDF, which is roughly the number of occurrences of a word in the domain subset divided by the number of documents in the whole collection that contain the word. Keywords are the words with the greatest TF*IDF.
1
5
0
1
I am trying to determine the most popular keywords for a certain class of documents in my collection. Assuming that the domain is "computer science" (which of course includes networking, computer architecture, etc.), what is the best way to preserve these domain-specific keywords from text? I tried using Wordnet but I am not quite sure how best to use it to extract this information. Are there any well-known lists of words that I can use as a whitelist, considering the fact that I am not aware of all domain-specific keywords beforehand? Or are there any good nlp/machine learning techniques to identify domain-specific keywords?
Preserving only domain-specific keywords?
1
1.2
1
0
0
1,008
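A minimal sketch of that TF*IDF scoring, with documents pre-tokenized into word lists (the log in the IDF term is one common variant of the formula):

import math
from collections import Counter

def domain_keywords(domain_docs, all_docs, top_n=20):
    df = Counter()   # document frequency over the whole collection
    for doc in all_docs:
        df.update(set(doc))
    tf = Counter()   # term frequency inside the domain subset
    for doc in domain_docs:
        tf.update(doc)
    n = len(all_docs)
    score = {w: tf[w] * math.log(n / (1.0 + df[w])) for w in tf}
    return sorted(score, key=score.get, reverse=True)[:top_n]

docs = [['tcp', 'socket', 'network'], ['cpu', 'cache', 'socket'], ['cake', 'flour']]
print(domain_keywords(docs[:2], docs))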
7,988,856
2011-11-03T00:02:00.000
0
0
0
1
0
python,eclipse,pydev
0
7,993,203
0
1
0
false
1
0
This was a bug -- and it was fixed in the latest version (PyDev 2.2.4), so, please upgrade to the latest version and that should be working already.
1
1
0
0
I am using the pydev remote debugger feature for my application. When I try to stop the debugger server via the stop button it shows on the console that the server is successfully terminated but it isn't because it is still accepting new connections on its default port (5678). Do you know how can I stop the server in a reliable way? Thanks in advance.
How can I stop the debugger server from pydev?
0
0
1
0
0
525
8,000,280
2011-11-03T18:47:00.000
2
1
0
0
0
python,mercurial,hook,mercurial-hook
0
8,000,347
0
2
0
false
0
0
changegroup hook is called once per push. If you want to analyse each changeset, then you want incoming hook (there's no input hook AFAIK) — it'll be called for each changeset, with ID in HG_NODE environment variable. You can get the commit message with e.g. hg log -r $HG_NODE --template '{desc}' or via the API.
1
2
0
0
I would like to write a hook for Mercurial to do the following, and am struggling to get going: Run on central repo, and execute when changeset(s) are pushed (I think I should use the "input" or "changegroup" hook) Search each commit message for a string with the format "issue:[0-9]*" IF string found, call a webservice, and provide the issue number, commit message, and a list of files that were changed So, just for starters, how can I get the commit message for each commit from the "input" or "changegroup" hook? Any advice beyond this on how to achieve the other points would also be appreciated. Thanks for any help.
How to access commit message from Mercurial Input or Changeset hook
0
0.197375
1
0
0
817
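A rough sketch of an in-process incoming hook along those lines; the hgrc wiring is shown in the comments, and the webservice call is left as a stub (the issue pattern follows the question):

# In the central repo's .hg/hgrc (assumed wiring):
#   [hooks]
#   incoming.jira = python:/path/to/hooks.py:incoming_hook

import re

def incoming_hook(ui, repo, node, **kwargs):
    ctx = repo[node]                 # the changeset just received
    message = ctx.description()
    match = re.search(r'issue:(\d+)', message)
    if match:
        issue = match.group(1)
        files = ctx.files()          # paths touched by this changeset
        notify_webservice(issue, message, files)

def notify_webservice(issue, message, files):
    pass  # placeholder: POST these to your service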
8,017,514
2011-11-05T01:28:00.000
2
1
0
0
1
python,tdd
0
8,017,802
0
4
0
false
0
0
I use a piece of paper to create a test list (scratchpad to keep track of tests so that I don't miss out on them). I hope you're not writing all the failing tests at one go (because that can cause some amount of thrashing as new knowledge comes in with each Red-Green-Refactor cycle). To mark a test as TO-DO or Not implemented, you could also mark the test with the equivalent of a [Ignore("PENDING")] or [Ignore("TODO")]. NUnit for example would show such tests as yellow instead of failed. So Red implies test failure, Yellow implies TODO.
3
2
0
0
If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset. My general practice for writing tests is as follows: First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time. Second, I pick one test and actually write the test logic. Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError. Fourth, I write the code that causes my test to pass. Fifth, I run the test runner and see 1 pass and 10 failures. Sixth, I go to step 2. Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented. What's the best practice here?
TDD practice: Distinguishing between genuine failures and unimplemented features
1
0.099668
1
0
0
1,102
8,017,514
2011-11-05T01:28:00.000
1
1
0
0
1
python,tdd
0
8,017,731
0
4
0
false
0
0
Most projects would have a hierarchy (e.g. project->package->module->class) and if you can selectively run tests for any item on any of the levels or if your report covers these parts in detail you can see the statuses quite clearly. Most of the time, when an entire package or class fails, it's because it hasn't been implemented. Also, In many test frameworks you can disable individual test cases by removing annotation/decoration from or renaming a method/function that performs a test. This has the disadvantage of not showing you the implementation progress, though if you decide on a fixed and specific prefix you can probably grep that info out of your test source tree quite easily. Having said that, I would welcome a test framework that does make this distinction and has NOT_IMPLEMENTED in addition to the more standard test case status codes like PASS, WARNING and FAILED. I guess some might have it.
3
2
0
0
If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset. My general practice for writing tests is as follows: First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time. Second, I pick one test and actually write the test logic. Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError. Fourth, I write the code that causes my test to pass. Fifth, I run the test runner and see 1 pass and 10 failures. Sixth, I go to step 2. Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented. What's the best practice here?
TDD practice: Distinguishing between genuine failures and unimplemented features
1
0.049958
1
0
0
1,102
8,017,514
2011-11-05T01:28:00.000
0
1
0
0
1
python,tdd
0
8,021,286
0
4
0
false
0
0
I also now realize that the unittest.expectedFailure decorator accomplishes functionality congruent with my needs. I had always thought that this decorator was more for tests that require certain environmental conditions that might not exist in the production environment where the test is being run, but it actually makes sense in this scenario too.
3
2
0
0
If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset. My general practice for writing tests is as follows: First, I architect the general structure of the test suite, in whole or in part. That is - I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time. Second, I pick one test and actually write the test logic. Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError. Fourth, I write the code that causes my test to pass. Fifth, I run the test runner and see 1 pass and 10 failures. Sixth, I go to step 2. Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented. What's the best practice here?
TDD practice: Distinguishing between genuine failures and unimplemented features
1
0
1
0
0
1,102
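For reference, a minimal unittest sketch of that decorator: the not-yet-implemented test is reported as an expected failure rather than a plain failure, giving the fourth status the question asks for.

import unittest

class TodoTests(unittest.TestCase):
    def test_implemented_feature(self):
        self.assertEqual(1 + 1, 2)        # a genuine, passing test

    @unittest.expectedFailure
    def test_unimplemented_feature(self):
        self.fail('TODO: not implemented yet')

    @unittest.skip('TODO')
    def test_not_even_started(self):
        pass                              # reported as skipped, not failed

if __name__ == '__main__':
    unittest.main()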
8,034,767
2011-11-07T09:37:00.000
0
1
0
0
0
python,mechanize
1
8,035,293
0
2
0
false
1
0
The server blocks your activity with such a response. Is it your site? If not, follow the rules: obey the robots.txt file, put a delay between requests (even if robots.txt doesn't require it), and provide some contact information (an e-mail or page URL) in the User-Agent header. Otherwise be ready for the site owner to block you based on User-Agent, IP, or other information he thinks distinguishes you from legitimate users.
1
3
0
0
I am trying out Mechanize to make some routines simpler. I have managed to bypass that error by using br.set_handle_robots(False). There is some debate about how ethical it is to use this. What I wonder is where this error is generated: on my side, or on the server side? I mean, does Mechanize throw the exception when it sees some robots.txt rule, or does the server decline the request when it detects that I use an automation tool?
On what side is 'HTTP Error 403: request disallowed by robots.txt' generated?
0
0
1
0
0
997
8,039,566
2011-11-07T16:37:00.000
2
0
0
0
0
python,redis
0
8,039,797
0
1
0
true
0
0
My initial thought would be to store the data elsewhere, like relational database, or possibly using a zset. If you had continuous data (meaning it was consistently set at N interval time periods), then you could store the hash key as the member and the date (as a int timestamp) as the value. Then you could do a zrank for a particular date, and use zrevrange to query from the first rank to the value you get from zrank.
1
1
0
0
I store several properties of objects in hashsets. Among other things, something like "creation date". There are several hashsets in the db. So, my question is, how can I find all objects older than a week, for example? Can you suggest an algorithm faster than O(n) (the naive implementation)? Thanks, Oles
Redis: find all objects older than
0
1.2
1
1
0
462
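A minimal redis-py sketch of the zset idea: member = object key, score = creation timestamp, so "older than a week" becomes a score-range query. Note zadd's calling convention differs between redis-py 2.x and 3.x; the mapping style below is 3.x.

import time
import redis

r = redis.StrictRedis()
now = time.time()
# Index objects by creation time (score = unix timestamp)
r.zadd('objects:by_ctime', {'obj:1': now - 8 * 86400, 'obj:2': now})

week_ago = now - 7 * 86400
old = r.zrangebyscore('objects:by_ctime', '-inf', week_ago)
print(old)  # [b'obj:1'] -- keys of objects older than a week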
8,041,501
2011-11-07T19:30:00.000
0
0
0
0
0
python,scripting
0
8,077,102
0
5
0
false
1
0
Plain CGI is a good starting point to learn about server side scripting, but it is an outdated technology and gets difficult to maintain after a certain level of complexity. I would think it is no longer used in industrial-grade web servers. Plus you have to set up a web server and then install some module to interpret Python scripts (like Apache with mod_python) just to get started. I had some experience with Django (https://www.djangoproject.com/) and found it fairly easy to get started with, since it comes with a development test server. All you need is a Python interpreter + Django, and you can get up and running quickly and worry about the deployment setup later. They have pretty good documentation for beginners as well.
1
0
0
0
I am wondering how to go about implementing a web application with Python. For example, the html pages would link to python code that would give it increased functionality and allow it to write to a database. Kind of like how Reddit does it.
How to develop a simple web application with server-side Python
0
0
1
0
0
7,276
8,041,852
2011-11-07T19:59:00.000
0
1
0
0
1
python,parsing,email
0
8,041,910
0
2
0
true
0
0
The email module doesn't give me a way to extract the content. if I make a message object, the object doesn't have a field for the content of the body. Of course it does. Have a look at the Python documentation and examples. In particular, look at the walk and payload methods.
1
2
0
0
I have some mails in txt format that have been forwarded multiple times. I want to extract the content/the main body of the mail. This should be at the last position in the hierarchy... right? (Someone point this out if I'm wrong.) The email module doesn't give me a way to extract the content. If I make a message object, the object doesn't have a field for the content of the body. Any idea on how to do it? Any module that exists for this, or any particular way you can think of, except the most naive one of course of starting from the back of the text file and looking until you find the header. If there is an easy or straightforward way/module in any other language (I doubt it), please let me know that as well! Any help is much appreciated!
Forwarded Email parsing in Python/Any other language?
1
1.2
1
0
0
1,419
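A minimal sketch of walk() and get_payload() on a mail loaded from a text file; for a forwarded plain-text mail, the body you want is usually in one of the text/plain parts.

import email

with open('forwarded.txt') as f:   # placeholder file name
    msg = email.message_from_file(f)

for part in msg.walk():                        # iterate over all MIME parts
    if part.get_content_type() == 'text/plain':
        body = part.get_payload(decode=True)   # bytes, decoded from base64/qp
        print(body)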
8,047,092
2011-11-08T07:17:00.000
8
0
0
0
0
python,html,django
0
8,047,122
0
4
0
false
1
0
Django templates have {# ... #} as comments. NOTE: These comments are not multi-line; for multi-line comments, wrap the block in {% comment %} ... {% endcomment %} tags.
1
4
0
0
I'm curious: I have the following code, {% code_goes_here %}; how do I comment it out in the HTML file?
Is there a way to comment out python code in a django html file?
0
1
1
0
0
3,772
8,054,953
2011-11-08T17:57:00.000
6
0
0
0
0
emacs,ipython
0
64,160,062
0
3
0
false
0
0
C-c M-o runs the command comint-clear-buffer (found in inferior-python-mode-map), which is an interactive compiled Lisp function. It is bound to C-c M-o. (comint-clear-buffer) Clear the comint buffer.
1
7
0
0
I am running an interactive ipython shell in an emacs buffer using ipython.el. I wonder if there is a way to clear the screen? Since it is not running on a terminal, the import os; os.system('CLS') trick will not work. Thanks.
how to clear ipython buffer in emacs?
1
1
1
0
0
2,167
8,059,845
2011-11-09T03:00:00.000
-2
0
0
0
0
python,user-interface
0
8,059,885
0
2
0
false
0
1
Hmm, that looks very much like they are using Adobe AIR or maybe Flash.
1
0
0
0
So I've been tinkering with a few different GUIs but I haven't been able to even find a point to begin researching this question: How do I make a GUI like Steam (digital distribution app) has? More specifically, I'm interested in how they manage to make their SHIFT+TAB menu pop up in-game, without disrupting/pausing/affecting the game. I've been somewhat successful in making a GUI window "stay on top" when a game is in window mode, but Steam pops this little menu up over the top of a running, fullscreen game. That's what I'm interested in learning about. Any info would be much appreciated. :) Sorry if this isn't the correct place to post this. I wasn't sure exactly where to ask. PS> Preferably something I could implement in Python!!!
Programming a GUI Like Steam?
0
-0.197375
1
0
0
1,878
8,072,461
2011-11-09T22:10:00.000
0
0
0
1
0
python,compilation,debian
0
8,072,665
0
3
0
false
0
0
Why do you think you need to compile it? Debian, like most other Linux distributions, comes with Python installed as standard, as many of the system tools depend on it. A Python script will just run.
1
1
0
0
I'm having a hard time finding a conclusive answer for how to compile a Python 2.7 .py file into an executable program for Debian. Most tutorials say "we assume you've already written the manifest, etc.", but I cannot find how to do those first steps. So, I need a step-by-step. How do I take JUST a .py file (using Python 2.7 and PyGTK) and a few .pdf and .png files that go with it, and compile all of that into a working Debian binary? (Note: If the tutorial starts out with a tar.gz, or requires a setup.py or similar already written, I need instructions on how to get those files too.)
How to compile .py into an executable for Debian
0
0
1
0
0
2,610
8,075,297
2011-11-10T05:17:00.000
12
0
0
0
0
python,webdriver,selenium-webdriver
0
18,481,265
0
3
0
false
0
0
You can use browser.maximize_window() for that
1
2
0
0
I wanted to know how to maximize a browser window using the Python bindings for Selenium 2-WebDriver.
How to maximize a browser window using the Python bindings for Selenium 2-WebDriver?
0
1
1
0
1
9,312
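In context, a minimal sketch (assuming the selenium package and a local Firefox):

from selenium import webdriver

driver = webdriver.Firefox()
driver.maximize_window()       # maximize before interacting with the page
driver.get('http://example.com')
driver.quit()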
8,082,243
2011-11-10T15:48:00.000
0
0
1
0
0
python
0
8,082,297
0
3
0
false
0
0
Python documentation extensively refers to dictionaries and lists as containing 'items'
2
1
0
0
Let's take a random container in Python (list, dict, ...): do you say a container has items, or do you refer to its contents as members? The documentation seems to suggest that only sets have members.
item or member, is there a rule how to refer to a container's contents?
0
0
1
0
0
76
8,082,243
2011-11-10T15:48:00.000
2
0
1
0
0
python
0
8,082,280
0
3
0
false
0
0
"Items" in Python usually are the things that can be retrieved or modified using the __getitem__() and __setitem__() functions, so lists and dictionaries have items. Sets in contrast don't implement __getitem__() and __setitem__(), so the documentation uses one of the usual terms to refer to the members of a set ("element" being the other one).
2
1
0
0
Let's take a random container in Python (list, dict, ...): do you say a container has items, or do you refer to its contents as members? The documentation seems to suggest that only sets have members.
item or member, is there a rule how to refer to a container's contents?
0
0.132549
1
0
0
76
8,092,254
2011-11-11T09:52:00.000
0
0
0
0
0
python,django,django-models,django-admin
0
8,092,344
0
2
0
true
1
0
Turns out, one can put the methods in the UserAdmin itself rather than in the User model. This way I can access all the information I need about the user.
1
0
0
0
I have a Django application where users have additional data. That data is collected in a Profile model with a OneToOneField pointing to User. This is fine and works perfectly for most purposes, but I have trouble customizing the admin for User. In particular: I would like to be able to show a Profile field inside list_display. I don't know how to do this without writing an additional method on User itself. I would like to be able to show some information about related models (e.g. some resources owned by the user) inside the User detail page. Again, I do not know how to do this without writing a custom User method. Do you know any solution to the above?
Customizing the Django User admin
0
1.2
1
0
0
521
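A minimal sketch of that approach; it assumes the profile is reachable from User through the one-to-one's default accessor, here called profile, with a hypothetical phone field:

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User

class MyUserAdmin(UserAdmin):
    list_display = UserAdmin.list_display + ('profile_phone',)

    def profile_phone(self, obj):
        return obj.profile.phone  # 'profile' and 'phone' are assumptions
    profile_phone.short_description = 'Phone'

admin.site.unregister(User)
admin.site.register(User, MyUserAdmin)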
8,095,062
2011-11-11T14:04:00.000
1
0
0
0
0
python,qt,pyqt
0
8,095,328
0
2
0
false
0
1
Add some widgets to the central widget first. Then select the central widget and use the "Layout in a Grid", "Layout Vertically", etc buttons on the toolbar to add a main layout. The layouts in the "Widget Box" side-bar are used for adding child layouts to a main layout.
2
1
0
0
I started a new plain Main Window project. There are 4 objects: MainWindow, centralWidget, menubar, and statusbar. I need to set a default layout inside the window, so probably for centralWidget, but I didn't find a way to do it. I can put a LayoutWidget with some particular size into centralWidget, but I want to set a layout for the whole centralWidget.
pyqt designer and layout for central widget
0
0.099668
1
0
0
2,793
8,095,062
2011-11-11T14:04:00.000
3
0
0
0
0
python,qt,pyqt
0
8,095,337
0
2
0
true
0
1
Right click anywhere within your centralWidget go to the Lay Out sub menu and select the Layout you want. This will be applied automatically to all contents of your centralWidget. In order to see how it works place inside it 2 or 3 push buttons and try changing the layouts.
2
1
0
0
I started a new plain Main Window project. There are 4 objects: MainWindow, centralWidget, menubar, and statusbar. I need to set a default layout inside the window, so probably for centralWidget, but I didn't find a way to do it. I can put a LayoutWidget with some particular size into centralWidget, but I want to set a layout for the whole centralWidget.
pyqt designer and layout for central widget
0
1.2
1
0
0
2,793
8,097,644
2011-11-11T17:23:00.000
0
0
0
0
0
python,django
0
8,097,716
0
3
0
false
1
0
Honestly, this isn't a Django-specific issue. The problem is whether you are doing a normal form submission or using AJAX. The basic idea is to POST to your form submission endpoint using AJAX and the form data, and in the Django view, merely update your models and return either an empty 200 response or some data (in XML, JSON, small HTML, whatever you need). Then the AJAX call can populate a success message div on success, or display a failure message if it gets back a non-200 response.
1
1
0
0
I'm submitting a form and instead of redirecting to a success url I would like to just show "Form has been submitted" in text on the page when the form has been submitted. Does anyone know how I can do so?
How do I confirm a form has been submitted with django?
0
0
1
0
0
3,238
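A rough sketch of the server half of that Ajax flow: validate, save, and hand back a small JSON payload the page's JavaScript can render as "Form has been submitted" (MyForm is a placeholder form class).

import json
from django.http import HttpResponse

def submit(request):
    form = MyForm(request.POST)  # hypothetical Django form
    if form.is_valid():
        form.save()
        payload, status = {'message': 'Form has been submitted'}, 200
    else:
        payload, status = {'message': 'Invalid input'}, 400
    return HttpResponse(json.dumps(payload),
                        content_type='application/json', status=status)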
8,098,068
2011-11-11T18:02:00.000
2
0
0
0
1
python,networking
0
8,098,102
0
4
0
false
1
0
I would suggest taking a look at setting up a simple site in Google App Engine. It's free and you can use Python to do the site. Then it would just be a matter of creating a simple RESTful service that you could send a POST to with your pickled data and store it in a database. Then just create a simple web front end onto the database.
4
1
0
0
I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as that I was unable to completely delete old e-mails. Now I want to try a different idea, but I do not know very much about network programming with python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple strings, so there is no chance of the master updating while the remote is reading. I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears - well, eyes in this case.
Sending data through the web to a remote program using python
1
0.099668
1
0
1
403
8,098,068
2011-11-11T18:02:00.000
0
0
0
0
1
python,networking
0
8,098,342
0
4
0
false
1
0
Adding this as an answer so that OP will be more likely to see it... Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution.
4
1
0
0
I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as that I was unable to completely delete old e-mails. Now I want to try a different idea, but I do not know very much about network programming with python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple strings, so there is no chance of the master updating while the remote is reading. I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears - well, eyes in this case.
Sending data through the web to a remote program using python
1
0
1
0
1
403
8,098,068
2011-11-11T18:02:00.000
0
0
0
0
1
python,networking
0
8,099,975
0
4
0
false
1
0
I suggest you use good middleware like Zero-C ICE, Pyro4, or Twisted. Pyro4 uses pickle to serialize data.
4
1
0
0
I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as that I was unable to completely delete old e-mails. Now I want to try a different idea, but I do not know very much about network programming with python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple strings, so there is no chance of the master updating while the remote is reading. I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears - well, eyes in this case.
Sending data through the web to a remote program using python
1
0
1
0
1
403
8,098,068
2011-11-11T18:02:00.000
1
0
0
0
1
python,networking
0
8,098,220
0
4
0
false
1
0
Another option in addition to what Casey already provided: Set up a remote MySQL database somewhere that has user access levels allowing remote connections. Your Python program could then simply access the database and INSERT the data you're trying to store centrally (e.g. through MySQLDb package or pyodbc package). Your users could then either read the data through a client that supports MySQL or you could write a simple front-end in Python or PHP that displays the data from the database.
4
1
0
0
I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as that I was unable to completely delete old e-mails. Now I want to try a different idea, but I do not know very much about network programming with python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple strings, so there is no chance of the master updating while the remote is reading. I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears - well, eyes in this case.
Sending data through the web to a remote program using python
1
0.049958
1
0
1
403
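Pulling these suggestions together (and heeding the warning about pickle), a minimal standard-library sketch of the relay: the master POSTs JSON, remote programs GET the latest value. The port and payload shape are arbitrary.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LATEST = {}  # most recent reading, kept in memory

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):                    # master posts new data here
        global LATEST
        length = int(self.headers['Content-Length'])
        LATEST = json.loads(self.rfile.read(length))  # JSON, not pickle: safe to accept
        self.send_response(200)
        self.end_headers()

    def do_GET(self):                     # remote programs read it back
        body = json.dumps(LATEST).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8000), Handler).serve_forever()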
8,108,477
2011-11-12T23:54:00.000
0
0
0
0
1
python,session,cookies,scrapy
0
8,113,814
0
1
0
false
1
0
I think the site tracks your session. If it's a PHP site, pass the PHPSESSID cookie to the request that downloads the PDF file.
1
0
0
0
I'm scraping pdf files from a site using Scrapy, a Python web-scraping framework. The site requires you to keep the same session in order to be allowed to download the pdf. It works great with Scrapy because it's all automated, but when I run the script, after a couple of seconds it starts to give me fake pdf files, like when I try to access the pdf directly without my session. Why is that, and any idea how to overcome this problem?
Downloading PDF files with Scrapy
0
0
1
0
1
1,740
8,113,895
2011-11-13T19:04:00.000
1
0
0
1
0
python,termination,linefeed
0
8,115,701
0
1
0
true
0
0
You mostly don't need to worry about it. If you come to a point when something doesn't work, come back and ask about that. Note however that what determines the line-ending convention is not which programming language you use, but the platform it runs on (*nix/Windows/Mac, all are different).
1
0
0
0
I'm not sure about conventions for different types of line termination in different programming languages. I know that there are 2 types, 1: line feed, 2: carriage-return, line feed. My question is: how does readline in different programming languages, like python: a = fd.readline();, c/c++: file.getline (buffer,100);, java: line = buf.readLine(); deal with line termination? If they are sensitive to these 2 different types of terminations, how do I treat them separately?
different types of line terminations(unix, windows, etc)
0
1.2
1
0
0
136
8,114,839
2011-11-13T21:23:00.000
1
0
1
0
0
python,exe,scrapy
0
8,114,876
0
1
0
false
0
0
Yes, use py2exe. Ask questions about any specific problems you have.
1
0
0
0
I have a Scrapy script, and it's working fine. To distribute it to my friends, I need it to be executable because they don't know much about Scrapy. Would someone tell me how to turn a Scrapy script into an exe file? Is py2exe applicable in this regard?
How to make an exe file out of Python Scrapy script?
0
0.197375
1
0
0
1,257
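For reference, the classic py2exe recipe is a tiny setup.py next to the script (the script name is a placeholder); whether Scrapy's dynamic imports survive bundling is something you would need to test.

# setup.py -- build with: python setup.py py2exe  (Python 2 / py2exe era)
from distutils.core import setup
import py2exe

setup(console=['my_scrapy_script.py'])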
8,136,921
2011-11-15T13:17:00.000
0
0
1
0
1
python
0
52,946,193
0
3
0
false
0
0
Try radon. It calculates cyclomatic complexity, maintainability index, and raw metrics like LOC, SLOC, and more.
1
12
0
0
I want a tool which can compute source code metrics such as lines of code, number of packages, classes, functions, cyclomatic complexity number, depth of inheritance tree, etc. for my Python code. I have tried pylint, but it didn't offer many metrics. pynocle seemed interesting but I don't know how to use it. Can anyone give me some suggestions? Thanks in advance
Software metric tool for Python
0
0
1
0
0
1,752
8,139,822
2011-11-15T16:39:00.000
21
0
1
0
0
python,neural-network,pybrain
0
8,143,012
0
2
0
true
0
0
Here is how I did it:

from pybrain.datasets import SupervisedDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer

ds = SupervisedDataSet(6, 3)
tf = open('mycsvfile.csv', 'r')
for line in tf.readlines():
    # each row: 6 input values followed by 3 target values
    data = [float(x) for x in line.strip().split(',') if x != '']
    indata = tuple(data[:6])
    outdata = tuple(data[6:])
    ds.addSample(indata, outdata)

n = buildNetwork(ds.indim, 8, 8, ds.outdim, recurrent=True)
t = BackpropTrainer(n, learningrate=0.01, momentum=0.5, verbose=True)
t.trainOnDataset(ds, 1000)
t.testOnData(verbose=True)

In this case the neural network has 6 inputs and 3 outputs. The csv file has 9 values on each line separated by a comma. The first 6 values are input values and the last three are outputs.
1
8
0
0
I am trying to use PyBrain for some simple NN training. What I don't know how to do is load the training data from a file. It is not explained anywhere on their website. I don't care about the format, because I can build it now, but I need to do it from a file instead of adding row by row manually, because I will have several hundred rows.
How to load training data in PyBrain?
0
1.2
1
0
0
9,903
8,143,228
2011-11-15T21:14:00.000
0
0
0
0
0
python,django
0
8,143,409
0
2
0
false
1
0
You could always keep the files and have a cron job that deletes the files whose session has expired.
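A minimal sketch of the cleanup script such a cron job could run; the report directory and the expiry window are assumptions you would match to your session lifetime:
import os
import time

REPORT_DIR = '/var/myapp/reports'  # hypothetical directory where reports are kept
MAX_AGE = 2 * 60 * 60              # seconds; match your session expiry (2 hours here)

now = time.time()
for name in os.listdir(REPORT_DIR):
    path = os.path.join(REPORT_DIR, name)
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)            # the session is long gone; drop the report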
1
0
0
0
I have a form where I upload a file, and I generate a report out of it. The thing is, I would also like to make the report available for download as an archive. I would like to somehow include the CSS and the JS (that I inherit from my layout) inside the report, but I don't really know how to go about this. So far, I am not storing the file (the one the report is generated from) on the server side; I delete it after I'm done with it. The only solution I could think of so far was: from my archive-generating view, use urllib to post to the form generating the report, save the response, and just rewrite the links to the stylesheet/JS files. Is there a simpler way to go about this? Is there a way to keep some files on the server side for as long as the client's session lives?
How to use a report from a view inside another view in Django?
0
0
1
0
0
54
8,143,439
2011-11-15T21:34:00.000
0
0
0
0
1
python,matplotlib
0
8,147,354
0
3
0
false
0
0
For resolution, you can use the dpi (dots per inch) argument when creating a figure, or pass it to the savefig() function. For high-quality prints of graphics, dpi=600 or more is recommended.
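A minimal sketch that pins down both the wide aspect ratio and the pixel size of the output; the figure size and dpi values are illustrative:
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(12, 3))  # wide, short canvas, in inches
ax = fig.add_subplot(111)
ax.plot([0, 1, 2], [0, 1, 0])
fig.savefig('chart.png', dpi=150, bbox_inches='tight')  # 1800x450 px, margins trimmed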
1
4
1
0
I'm fed up with manually creating graphs in Excel, so I'm trying to automate the process using Python to massage the .csv data into a workable form and matplotlib to plot the result. Using matplotlib to generate the plots is no problem, but what I can't work out is how to set the aspect ratio / resolution of the output. Specifically, I'm trying to generate scatter plots and stacked area graphs. Everything I've tried seems to result in one or more of the following: cramped graph areas (a small plot area covered by the legend, axes, etc.), the wrong aspect ratio, or large spaces on the sides of the chart area (I want a very wide, not very tall image). If anyone has some working examples showing how to achieve this, I'd be very grateful!
How do I get matplotlib pyplot to generate a chart for viewing / output to .png at a specific resolution?
0
0
1
0
0
1,578
8,153,525
2011-11-16T14:46:00.000
0
0
0
1
0
python,graph-algorithm
0
8,154,303
0
2
0
false
0
0
If you have created a topology of the screens, the A* algorithm should work fine.
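Since every screen-to-screen hop costs the same, an A* search with unit edge costs (and no useful heuristic) behaves like a plain breadth-first search, which already returns the route with the fewest screens. A minimal sketch (the screen names and adjacency map are placeholders for your own):
from collections import deque

# hypothetical adjacency map: screen name -> screens reachable from it
SCREENS = {
    'login': ['menu'],
    'menu': ['memo', 'deactivate'],
    'memo': ['add_memo', 'menu'],
    'add_memo': ['memo'],
    'deactivate': ['menu'],
}

def shortest_route(start, goal):
    # breadth-first search over the screen graph: fewest nodes, no weights needed
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in SCREENS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

print(shortest_route('memo', 'deactivate'))  # ['memo', 'menu', 'deactivate']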
1
0
0
0
I am using a terminal client to interact with a mainframe computer. The entire interface is based on the concept of screens. An example workflow might look like:
Login Screen: enter login credentials, press Enter
Menu Screen: enter the number of the menu item you want (let's say "6" for memos), press Enter
Memo Screen: enter account number, press Enter
Add Memo Screen: enter memo details, etc.; press Enter to save, F3 to go back
I have written a Python application to automate the processing of records through this terminal interface. One of the difficulties I am having is that there are a lot of different screens, and my application right now is pretty dumb about how to get from one screen to another. It can go from the login screen to adding memos. But if it finds itself on the memo screen and needs to deactivate an account, it has to log out and log in again, because it only knows how to get to the deactivation screen from the login screen, not from the add-memos screen. So, I would like to create a "map" in my application that links each screen to the screens that are "next" to it. Then, I need an algorithm that can tell how to get from any screen A to any screen B in the shortest manner possible. I already have some screen objects set up and have "related" them to the screens next to them. So, I am looking for some kind of algorithm that I can implement, or a Python library I can use, that will do the work of calculating the route from one screen to another. Edit: I realize that I am looking for some kind of shortest-path graph algorithm. What is currently throwing me off is that I don't really have a "distance"; I just have nodes. So, I don't really want shortest-distance, I want fewest-nodes.
Looking for algorithm to help determine shortest path from one terminal screen to another
0
0
1
0
0
137
8,189,641
2011-11-18T21:54:00.000
0
1
0
0
1
python,filesystems,sshfs,mount-point
0
8,217,136
0
1
1
true
0
0
I found what may be a bug in sshfs: if a user on one Linux system has the same numeric user ID (e.g., 1002) as a user on another system but a different username, it causes problems. The way I worked around this was to avoid sshfs for this case altogether and mount the drives directly on a local system. I wanted to avoid that because I couldn't do it from a remote location, but it gets the job done.
1
0
0
0
I have a share (on Machine-A) mounted via sshfs on Machine-B. From Machine-C, I have this share mounted, also via sshfs (double sshfs), like so:
On Machine-C: /mnt/Machine-B/target_share
On Machine-B: /mnt/Machine-A/target_share
On Machine-A: /media/target_share
Now I have a Python program that runs fine in all places tested (including Machine-C on its local file system) except from Machine-C on the drive that lives on Machine-A but is mounted on Machine-B. The reason I am running the Python program from Machine-C is that it has the resources necessary to run it. I have run it on Machine-A and Machine-B, and it maxed out the memory on each, thereby failing each time. I have also tried to mount the target_share on Machine-B with this type of command: sudo mount -t cifs -o username=<username>,password=<password> //Machine-A/target_share /mnt/target_share But this doesn't seem to work any way I have tried it, i.e., with different credentials, with and without credentials, etc. To make matters worse, one caveat is that I can only SSH into Machine-B from Machine-C. I cannot directly access Machine-A from Machine-C, which, if I could, would probably make all this work just fine. The Python program runs on Machine-C, but the logic in the middle that I need doesn't run and gives no errors; it basically starts and then ends a few seconds later. I am relatively new to Python. Also, I'm not sure if this post would be better on another board; if so, let me know or move it as necessary. I can post the Python code as well if needed. My apologies for the complicated post; I didn't know how else to explain it. Thanks in advance.
Running Python Program via sshfs-mounted Share
1
1.2
1
0
0
509