Column                             dtype          min        max
Q_Id                               int64          337        49.3M
CreationDate                       stringlengths  23         23
Users Score                        int64          -42        1.15k
Other                              int64          0          1
Python Basics and Environment      int64          0          1
System Administration and DevOps   int64          0          1
Tags                               stringlengths  6          105
A_Id                               int64          518        72.5M
AnswerCount                        int64          1          64
is_accepted                        bool           2 classes
Web Development                    int64          0          1
GUI and Desktop Applications       int64          0          1
Answer                             stringlengths  6          11.6k
Available Count                    int64          1          31
Q_Score                            int64          0          6.79k
Data Science and Machine Learning  int64          0          1
Question                           stringlengths  15         29k
Title                              stringlengths  11         150
Score                              float64        -1         1.2
Database and SQL                   int64          0          1
Networking and APIs                int64          0          1
ViewCount                          int64          8          6.81M
12,871,388
2012-10-13T08:30:00.000
0
1
1
0
java,python,testing,frameworks,automation
12,871,446
2
false
0
0
Test cases are normally written as classes rather than functions because each test case has its own setup data and initialization mechanism. Implementing test cases as single functions makes it difficult to set up test data before each run; that said, you can have several test methods in one test case class if they exercise the same scenario.
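The class-per-test-case pattern described above can be sketched with the standard unittest module; the class name, method names and fixture value below are illustrative:

```python
import unittest

class PaymentFlowTests(unittest.TestCase):
    """One scenario per class: setUp runs before *each* test method."""

    def setUp(self):
        # Per-test initialization: every test method gets a fresh fixture.
        self.balance = 100

    def test_debit(self):
        self.balance -= 30
        self.assertEqual(self.balance, 70)

    def test_credit(self):
        self.balance += 30
        self.assertEqual(self.balance, 130)

# Run the suite programmatically (unittest.main() would do this from the CLI).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PaymentFlowTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Because setUp runs before every test method, both tests see balance == 100 regardless of execution order, which is exactly the isolation the class-based style buys you.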
2
0
0
I am having a design problem in test automation. Requirements: I need to test different servers (using a unix console, not a GUI) through an automation framework. The tests I'm going to run are unit, system and integration. Question: while designing a test case, I am thinking that a test case should be part of a test suite (the test suite being a class), just as in Python's pyunit framework. But should we keep test cases as functions for a scalable automation framework, or as separate classes (each with its own setup, run and teardown methods)? From an automation perspective, is a test case as a class more scalable and maintainable than as a function?
How to write automated tests - Test case as a function or test case as a class
0
0
0
1,670
12,871,388
2012-10-13T08:30:00.000
0
1
1
0
java,python,testing,frameworks,automation
12,871,619
2
false
0
0
The following is my opinion.

Pros of writing tests as functions:
- If you need any prerequisites for a test case, just call another function that provides them; do the same for teardown steps.
- Looks simple for a new person on the team; it is easy to understand what is happening by reading tests written as functions.

Cons of writing tests as functions:
- Not maintainable: if a huge number of tests share the same prerequisites, the test author has to maintain a call to each prerequisite function in every test case, and likewise for each teardown.
- If many test cases call such a prerequisite function and anything changes in the product functionality, you have to make manual edits in many places again.

Pros of writing test cases as classes:
- Setup, run and teardown are clearly defined, and the test prerequisites are easily understood.
- If Test 1 does something and its result is a setup prerequisite of Tests 2 and 3, you can simply inherit from Test 1, call its setup, run and teardown methods first, and then continue your tests. This keeps tests independent of each other, and you don't need to maintain the calling of the prerequisite code yourself: it is done implicitly through inheritance.
- Sometimes the setup method of Test 1 and the run method of Test 2 together become the prerequisites of a Test 3. In that case, inherit from both Test 1 and Test 2, and in Test 3's setup method call the setup of Test 1 and the run of Test 2. Again, you don't need to maintain the calling of the actual code, because you are reusing setup and run methods that are already tried and tested from the framework's perspective.
Cons of writing test cases as classes:
- As the number of tests grows, you can't look at a particular test and say what it does, because it may inherit through so many levels that you can't backtrack. There is a way around this, though: write docstrings in each setup, run and teardown method of each test case, and write a custom wrapper to generate the docstrings for each test case. While inheriting, provide an option to add or remove the docstring of a particular method (setup, run, teardown) on the inherited function. This way you can just run that wrapper and get information about a test case from its docstrings.
2
0
0
I am having a design problem in test automation. Requirements: I need to test different servers (using a unix console, not a GUI) through an automation framework. The tests I'm going to run are unit, system and integration. Question: while designing a test case, I am thinking that a test case should be part of a test suite (the test suite being a class), just as in Python's pyunit framework. But should we keep test cases as functions for a scalable automation framework, or as separate classes (each with its own setup, run and teardown methods)? From an automation perspective, is a test case as a class more scalable and maintainable than as a function?
How to write automated tests - Test case as a function or test case as a class
0
0
0
1,670
12,873,542
2012-10-13T13:26:00.000
3
0
1
1
python,python-idle
12,873,584
3
false
0
0
Indeed, the command to run a Python file should be typed at the command prompt. Python should be on your PATH variable for this to work everywhere: when the Python folder has been added to PATH you can call python from any directory in the command prompt, otherwise only from your Python install folder. The following is from the Python website: Windows has a built-in dialog for changing environment variables (the following guide applies to the XP classic view). Right-click the icon for your machine (usually located on your Desktop and called "My Computer") and choose Properties. Then open the Advanced tab and click the Environment Variables button. In short, the path is: My Computer ‣ Properties ‣ Advanced ‣ Environment Variables. In this dialog you can add or modify User and System variables. To change System variables, you need unrestricted access to your machine (i.e. Administrator rights). Another way of adding variables to your environment is using the set command in a command prompt: set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib If you do it via My Computer, look for the variable named Path under Environment Variables and add your Python installation folder to its value.
2
4
0
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
How to run a Python project?
0.197375
0
0
41,336
12,873,542
2012-10-13T13:26:00.000
3
0
1
1
python,python-idle
12,873,556
3
true
0
0
Open a command prompt: Press ⊞ Win and R at the same time, then type in cmd and press ↵ Enter Navigate to the folder where you have the ".py" file (use cd .. to go one folder back or cd folderName to enter folderName) Then type in python filename.py
2
4
0
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
How to run a Python project?
1.2
0
0
41,336
12,874,266
2012-10-13T14:55:00.000
0
0
0
0
python,kml,google-earth,collada
12,878,019
1
false
0
0
You can do this. You would probably do it by creating a KML NetworkLink file that loads a KMZ, which contains a KML file that points to a .dae (COLLADA) file. All of that can be autogenerated from Python: there are libraries to generate it, or you can roll your own. Just search for "generate collada python" and you'll find resources that will work for you.
1
0
0
From what I understand, using Python I can dynamically send KML data to a Google Earth client. I was wondering: is this possible for COLLADA files too? I need it in combination with KML to load models in Google Earth; so far I can only do it manually.
Dynamically generate collada files
0
0
1
356
12,874,406
2012-10-13T15:13:00.000
4
0
1
0
python,eof
12,874,442
4
false
0
0
Do you need to process the first number before the second is entered? If not, then int(s) for s in sys.stdin.read().split() would probably do, either as a list comprehension (in []) or generator expression (in (), for example as a function argument).
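Wrapped as a small helper (the function name is mine), the suggestion above looks like this; in a real script you would pass sys.stdin as the stream:

```python
import io
import sys

def read_ints(stream):
    """Read whitespace-separated integers from *stream* until EOF."""
    return [int(token) for token in stream.read().split()]

# Simulate a user typing numbers and ending input with EOF (Ctrl-D / Ctrl-Z).
fake_stdin = io.StringIO("12 7\n-3\n40")
print(read_ints(fake_stdin))          # [12, 7, -3, 40]; in production: read_ints(sys.stdin)
```

Swapping the brackets for parentheses turns the comprehension into a lazy generator expression, which is handy when the sequence is only consumed once (e.g. sum(int(s) for s in sys.stdin.read().split())).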
1
0
0
Is there a way to scan an unknown number of elements in Python (I mean scan numbers until the user writes at the standard input eof(end of file))?
How to scan an unknown number of numbers in Python?
0.197375
0
0
1,134
12,875,660
2012-10-13T18:18:00.000
0
0
1
1
python,c,cython,fuse
12,875,850
1
false
0
1
You should use it as a Python FUSE system; then you can write your Cython code in its own module and just import that module from your Python code. File-system operations are usually I/O-bound rather than CPU-bound, so I'm not sure how much of a speedup you would get from Cython, but try it out and compare the results.
1
0
0
I know FUSE has bindings for C, C++, Python etc. Which effectively means that I can develop FUSE filesystems using those languages. I wish to use Cython, as it offers much faster speeds as compared to pure Python. That is stressed in a filesystem. Is it possible to produce a FUSE filesystem by coding in Cython? As far as I understand, Python documentation is all that I require to write Cython code for FUSE. But (if it is indeed possible) should I be using Cython as a Python FUSE system or C system??
can I use FUSE with Cython bindings
0
0
0
173
12,876,159
2012-10-13T19:19:00.000
3
1
0
0
python,mod-python
12,876,548
1
true
1
0
I am not familiar with mod_python (the project was abandoned long ago), but nowadays Python web applications use WSGI (mod_wsgi or uWSGI). If you are using Apache, mod_wsgi is easy to configure; for nginx, use uWSGI.
1
0
0
I've been testing out Mod_python and it seems that there are two ways of producing python code using:- Publisher Handler PSP Handler I've gotten both to work at the same time however, should I use one over the other? PSP resembles PHP a lot but Publisher seems to resemble python more. Is there an advantage over using one (speed, ease of use, etc.)?
Mod_Python: Publisher Handler vs PSP Handler
1.2
0
0
274
12,877,643
2012-10-13T22:36:00.000
1
0
0
0
python,sockets,client-server
12,877,653
2
false
0
0
There is, generally speaking, no problem with having even tens of thousands of sockets open at once on even a very modest home server. Just make sure you do not create a new thread or process for each connection.
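A single-threaded way to serve many sockets at once is the standard selectors module; the sketch below fakes one client with socket.socketpair() and runs one pass of an event loop (the ack protocol is illustrative):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# A connected pair stands in for a real server-side accept()ed connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"status update")

# One pass of the event loop: handle every socket that became readable,
# without spawning a thread or process per connection.
for key, _ in sel.select(timeout=1):
    data = key.fileobj.recv(1024)
    key.fileobj.sendall(b"ack:" + data)

reply = client_side.recv(1024)
print(reply)                      # b'ack:status update'

sel.close()
server_side.close()
client_side.close()
```

In a real server the loop would run forever and the selector would also hold the listening socket, registering each new connection as it is accepted.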
1
5
0
I'm writing a client-server app with Python. The idea is to have a main server and thousands of clients that will connect with it. The server will send randomly small files to the clients to be processed and clients must do the work and update its status to the server every minute. My problem with this is that for the moment I only have an small and old home server so I think that it can't handle so many connections. Maybe you could help me with this: How can increase the number of connections in my server? How can I balance the load from client-side? How could I improve the communication? I mean, I need to have a list of clients on the server with its status (maybe in a DB?) and this updates will be received time to time, so I don't need a permanent connection. Is a good idea to use UDP to send the updates? If not, do I have to create a new thread every time that I receive an update? EDIT: I updated the question to explain a little better the problem but mainly to be clear enough for people with the same problems. There is actually a good solution in @TimMcNamara answer.
Handle multiple socket connections
0.099668
0
1
3,588
12,877,870
2012-10-13T23:18:00.000
1
0
0
1
python,linux,web
12,877,893
3
false
1
0
Note that this is not standard behaviour: imagine if websites could open Notepad or Minesweeper at will when you visit a page or click something. The way it is done is to have a service running on the client machine that exposes certain APIs and trusts requests coming from your web app. This service needs to be running on the client machines all the time, and your web app sends it a request to launch the desired application.
1
1
0
I'm going to be using python to build a web-based asset management system to manage the production of short cg film. The app will be intranet-based running on a centos machine on the local network. I'm hoping you'll be able to browse through all the assets and shots and then open any of them in the appropriate program on the client machine (also running centos). I'm guessing that there will have to be some sort of set up on the client-side to allow the app to run commands, which is fine because I have access to all of the clients that will be using it (although I don't have root access). Is this sort of thing possible?
How can a python web app open a program on the client machine?
0.066568
0
0
351
12,878,175
2012-10-14T00:09:00.000
0
1
0
0
python,ruby,unix,scripting,python-3.x
12,878,180
4
false
0
0
I don't think you need it in Ruby: an if doesn't require an else.
2
4
0
Python3 has a pass command that does nothing. This command is used in if-constructs because python requires the programmer to have at least one command for else. Does Ruby have an equivalent to python3's pass command?
Python3 Pass Command Equivalent in Ruby
0
0
0
1,225
12,878,175
2012-10-14T00:09:00.000
6
1
0
0
python,ruby,unix,scripting,python-3.x
12,878,394
4
false
0
0
Your premise is essentially wrong, since the else clause is not obligatory in Python. One of the frequent uses of the pass statement is in a try/except construct, when an exception may be ignored. pass is also useful when you define an API and wish to postpone the actual implementation of classes or functions. EDIT: one more frequent usage I haven't mentioned is defining user exceptions; usually you just override the name to distinguish them from standard exceptions.
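The three uses of pass mentioned above can be sketched like this (names are illustrative):

```python
# 1. Deliberately ignoring an exception.
try:
    int("not a number")
except ValueError:
    pass  # malformed input is acceptable here

# 2. Stubbing out an API whose implementation is postponed.
def export_report(path):
    pass  # TODO: implement once the report format is decided

# 3. A user exception that only overrides the name.
class ConfigError(Exception):
    pass

try:
    raise ConfigError("missing key")
except ConfigError as exc:
    message = str(exc)

print(message)    # missing key
```

Catching ConfigError separately from Exception is the whole point of case 3: the subclass carries no new behaviour, only a distinguishable name.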
2
4
0
Python3 has a pass command that does nothing. This command is used in if-constructs because python requires the programmer to have at least one command for else. Does Ruby have an equivalent to python3's pass command?
Python3 Pass Command Equivalent in Ruby
1
0
0
1,225
12,878,631
2012-10-14T01:48:00.000
1
0
1
0
python
12,878,642
1
true
0
0
Sure: values that are supposed to be treated as constants. There's no need to initialize each instance with the values, since they'll never change.
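A sketch of that constants use case (class and attribute names are mine): the value is stored once on the class, so every instance, and every thread, reads the same object, which is safe precisely because it is never rebound.

```python
class ApiClient:
    # Class variable acting as a constant: stored once on the class,
    # shared by every instance, never rebound -- safe for concurrent readers.
    MAX_RETRIES = 3

    def attempts(self):
        # Instances read the class attribute through normal lookup.
        return self.MAX_RETRIES + 1

a, b = ApiClient(), ApiClient()
print(a.attempts())                    # 4
print(a.MAX_RETRIES is b.MAX_RETRIES)  # True: one shared object, not copies
```

The hazards described in the question only appear when such attributes are *written* at runtime; read-only constants avoid that entirely.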
1
0
0
I read that it isn't advisable to create static methods/variables in web applications because all the threads will have to use the same method/variable. Is there any use case for them at all in this domain?
In a web application under potentially high load, what are the use cases for class variables and methods in python?
1.2
0
0
71
12,878,695
2012-10-14T02:02:00.000
2
0
0
0
python,wxpython
12,884,118
2
false
0
1
MenuBar does provide methods to place a menu at the position you want:

Append(menu, title) - add menu to the end of the menu bar (i.e. the rightmost position), with title as the title of the new menu. Returns True on success, otherwise False.

Insert(pos, menu, title) - insert menu at position pos (afterwards GetMenu(pos) == menu is True); every menu at a later position is shifted right. pos=0 is the first (leftmost) position, and pos=GetMenuCount() behaves like Append(). Returns True on success, otherwise False.

Remove(pos) - remove the menu at position pos; every menu after it is shifted left. Returns the removed menu.

Replace(pos, menu, title) - replace the menu at position pos without affecting the other menus in the menu bar. Returns the menu that was previously at that position.
1
0
0
Is it possible to control the position of elements in a wx.MenuBar()? I can't find it in the docs, but it seems like it should be an option. By default, menu elements, like file, view, edit, etc.. are at the left most position, and then each additional element extends to the right. I have a very simple gui, and thus only have one menu element called help. I'd like to position it RIGHT, instead of the default LEFT.
Is there a position the menu elements of a wx MenuBar?
0.197375
0
0
231
12,878,819
2012-10-14T02:30:00.000
2
1
0
0
python,pyramid
12,879,591
1
true
1
0
You should be able to check request.content_length. WSGI does not support streaming the request body, so the content length must be specified. If you ever access request.body, request.params or request.POST, the content will be read and saved to disk. The best way to handle this, however, is as close to the client as possible: if you are running behind a proxy of any sort, have the proxy reject requests that are too large. By the time the request reaches Python, something else may already have stored it to disk.
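Pyramid's request.content_length is derived from the WSGI CONTENT_LENGTH variable, so the check can be sketched at that level without Pyramid installed; the limit and function name below are illustrative:

```python
MAX_BYTES = 1024 * 1024  # 1 MiB -- illustrative limit

def too_large(environ, max_bytes=MAX_BYTES):
    """Reject a request by its declared Content-Length, before reading the body."""
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        return True  # malformed header: refuse rather than read the body
    return length > max_bytes

print(too_large({"CONTENT_LENGTH": "2097152"}))  # True: 2 MiB declared
print(too_large({"CONTENT_LENGTH": "512"}))      # False
```

Note this trusts the client's declared length; a proxy-level limit (as the answer recommends) is what actually stops an oversized body from being received.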
1
1
0
Is there a way to check the size of the incoming POST in Pyramid, without saving the file to disk and using the os module?
Check size of HTTP POST without saving to disk
1.2
0
0
134
12,879,754
2012-10-14T06:11:00.000
1
0
0
0
python,youtube,gdata
12,892,498
1
false
1
0
YouTube will not provide this to you. They intentionally rate limit their feeds to prevent abuse.
1
0
0
I want to catch the 1 thousand most viewed youtube videos through gdata youtube api. However, only the first 84 are being returned. If I use the following query, only 34 records are returned (plus the first 50). Anyone knows what is wrong? "http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=1" returns 25 records "http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=26" returns 25 records "http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=51" returns 25 records "http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=76" returns 9 records
Youtube GData 2 thousand most viewed
0.197375
0
1
164
12,880,380
2012-10-14T08:05:00.000
0
0
1
0
python,opengl,multiprocessing,pygame,pyinstaller
12,885,990
1
false
0
1
The [x] button fires an event with event.type == pygame.QUIT for me on Windows.
1
2
0
I wrote my program to spawn a process of the main application. When I run the pyinstaller packaged exe it says no Module name pygame.base. But if I keep the app in the main thread it doesn't do that. I need the process to detect closure, because GLUT doesn't have an event for the upper right hand exit button and I can't remove it unless I use full-screen. The program's threads hang and the app never closes. So the main thread checks to see if the process is running. If not, it closes the entire application. But all closure will hang if I only use the main thread. I can soft exit from in-game events just fine. But I need a way to catch the X button. OR fix pyinstaller. I've looked up a bunch of things getting pyinstaller to include everything. I used the OneDir option so I can see the files. I copied all dependencies from one that worked to the one that didn't without replacing the exe. It still had the import error. Any insight would be nice. For now the X button is a hazard. I'm guessing multiprocessing doesn't work with pyinstaller all too well.
No Module name pygame.base
0
0
0
751
12,881,834
2012-10-14T11:35:00.000
2
0
0
1
python,google-app-engine,google-apps-script
12,881,937
2
true
1
0
No, you can't. Apps Script and GAE are totally different things and won't work together. What you can do instead is (in the abstract): write an Apps Script that does what you need; redirect from GAE to the Apps Script URL to make sure the user logs in and grants permissions; perform what you need to do with Apps Script; then send the user back to GAE with a bunch of parameters if you need to.
1
1
0
I'm trying to work out if it is possible to use Google Apps Scripts inside Google App Engine? And if there is a tutorial in doing this out there ? Through reading Google's App Script site I get the feeling that you can only use app scripts inside Google Apps like Drive, Docs etc? What I would like to do is to be able to use the Groups Service that is in GAS inside GAE to Create, delete and only show the groups that a person is in, all inside my GAE App. Thanks
Accessing Google Groups Services on Google Apps Script in Google App Engine?
1.2
0
0
221
12,882,679
2012-10-14T13:28:00.000
0
0
1
0
python
12,882,688
3
false
0
0
Without seeing your code, the error message seems to imply that at some point your function some_func(x) returns None. As the message states, a subtraction between None and a float is not possible in Python. Trace through your function and make sure that it always returns a numeric value, and the problem should not occur. Alternatively, change your code to check the return value for None (as shown by @daknok) and avoid the problem that way; however, it's better to prevent the problem at its source, in my opinion. Note the excellent comment by @burhan Kahlid below too.
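A minimal reproduction of the error and one way to guard against it (function names are illustrative, not the asker's code):

```python
def bad_func(x):
    if x > 0:
        return x / 2
    # Falls off the end for x <= 0: Python implicitly returns None here,
    # which is the usual source of "unsupported operand ... NoneType".

def safe_call(x):
    result = bad_func(x)
    if result is None:                 # check the return *before* arithmetic
        raise ValueError("bad_func returned None for x=%r" % x)
    return result - 0.2

print(safe_call(4))        # 1.8
try:
    safe_call(-1)
except ValueError as exc:
    caught = str(exc)
print(caught)
```

The cleaner fix, as the answer says, is to make every branch of the function return a number so the guard is never needed.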
1
0
0
So I'm getting this error on a line where I have a recursive call. The line where the error occurred looks something like: return some_func(x) - .2 TypeError: unsupported operand type(s) for -: 'NoneType' and 'float' I tried return some_func(x) - .2 and x not None and float(x) is True, and other tricks but unsuccessfully so far. Thanks!
Return type error in python
0
0
0
272
12,889,477
2012-10-15T04:53:00.000
0
0
0
0
python,google-app-engine,google-cloud-datastore,security,voting
12,889,623
3
false
1
0
Have you tried the set() type, which avoids duplicates?
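A plain-Python sketch of the set-based idea (class and field names are mine, not App Engine datastore API; there the voter set would be a repeated property on the post entity):

```python
class Post:
    def __init__(self):
        self.score = 0
        self.voters = set()   # user ids that already voted: a set cannot hold duplicates

    def vote(self, user_id, delta):
        """Apply the vote only if this user has not voted on this post before."""
        if user_id in self.voters:
            return False      # duplicate vote: ignored
        self.voters.add(user_id)
        self.score += delta
        return True

post = Post()
print(post.vote("alice", +1))   # True: first vote counts
print(post.vote("alice", +1))   # False: duplicate ignored
print(post.score)               # 1
```

Membership tests on a set are O(1) on average, so the check stays cheap even with many voters per post.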
2
0
0
There are a few questions similar to this already but I am hoping for a somewhat different answer. I have a website on Google App Engine in Python and I let registered users vote on posts, either up or down, and I only want each user to be able to vote once. The primary method that I have thought of and read about is to simply keep a list in each post of the users who have voted on it and only show the voting button to users who have not voted yet. To me this seems like an inelegant solution. To keep track of this when hundreds of people are voting on hundreds of posts is a ton of information to keep track of for such a little bit of functionality. Though I could be wrong about this. Is this as much of a data/datastore write hog as I am imagining it to be? If I have a few hundred votes going on each day is that not such a big deal?
More Efficient Way to Prevent Duplicate Voting
0
0
0
193
12,889,477
2012-10-15T04:53:00.000
0
0
0
0
python,google-app-engine,google-cloud-datastore,security,voting
12,895,187
3
false
1
0
If you only have hundreds of votes across hundreds of posts, this is not a big data or read/write hog. I am assuming you are storing the list of users as a list property on the post entity in the datastore. Depending on whether you store a long that points to the user or a string for their email, you are probably using at most 10 bytes per user per post. If you had a thousand votes per post and 1,000 posts, that would be about 10 MB, and it wouldn't add much to the cost of reads/writes. You could reduce that cost further by not indexing the value and just searching through it after fetching the entity from the datastore.
2
0
0
There are a few questions similar to this already but I am hoping for a somewhat different answer. I have a website on Google App Engine in Python and I let registered users vote on posts, either up or down, and I only want each user to be able to vote once. The primary method that I have thought of and read about is to simply keep a list in each post of the users who have voted on it and only show the voting button to users who have not voted yet. To me this seems like an inelegant solution. To keep track of this when hundreds of people are voting on hundreds of posts is a ton of information to keep track of for such a little bit of functionality. Though I could be wrong about this. Is this as much of a data/datastore write hog as I am imagining it to be? If I have a few hundred votes going on each day is that not such a big deal?
More Efficient Way to Prevent Duplicate Voting
0
0
0
193
12,890,137
2012-10-15T06:06:00.000
1
1
0
0
java,python,rmi,rpc,web2py
12,890,526
4
false
1
0
I'd be astonished if you could do it at all. Java RMI requires Java peers.
1
0
0
I have a remote method created via Python web2py. How do I test and invoke the method from Java? I was able to test if the method implements @service.xmlrpc but how do i test if the method implements @service.run?
Using Java RMI to invoke Python method
0.049958
0
1
2,053
12,890,897
2012-10-15T07:18:00.000
3
0
0
0
python,ruby,web-crawler
12,891,191
4
false
1
0
Between lxml and Beautiful Soup, lxml is the closer equivalent to Nokogiri, because it is based on libxml2 and has XPath/CSS support. The equivalent of Net::HTTP is urllib2.
1
2
0
I've started to learn python the past couple of days. I want to know the equivalent way of writing crawlers in python. so In ruby I use: nokogiri for crawling html and getting content through css tags Net::HTTP and Net::HTTP::Get.new(uri.request_uri).body for getting JSON data from a url what are equivalents of these in python?
Going from Ruby to Python : Crawlers
0.148885
0
1
5,706
12,892,263
2012-10-15T08:59:00.000
1
0
0
0
python,flash,wxpython
12,896,608
1
true
0
1
You can try the WebView widget that was mentioned or download the separate webkit port and try that (search the wxPython mailing list for a link to that). Otherwise, the only truly supported wxPython widget that does this is "ActiveX_FlashWindow" and I'm pretty sure that's Windows-only.
1
0
0
I'm considering creating a wxPython application that I'll deploy for Windows and OSX. I want to include Flash objects like YouTube embeds and Twitch.tv embeds in the application. Is that possible? Are there any resources out there that addresses the issue?
Can I create a cross-platform wxPython application that uses Flash embeds?
1.2
0
0
146
12,893,264
2012-10-15T09:57:00.000
1
0
0
0
python,firefox,authentication,selenium,form-submit
12,893,410
1
false
0
0
Using driver.get("https://username:[email protected]/") should log you in directly, without the popup being displayed. What about this did not work for you? EDIT: I am not sure this will work, but after driver.get("https://username:[email protected]/") try accepting the alert: @driver.switch_to.alert.accept in Ruby, or driver.switchTo().alert().accept(); in Java.
1
2
0
I want to access with Selenium (through) Python, a URL that demands authentication. When visit the URL, manually a new authentication window pops up, on which I need to fill in a username and password. Only after clicking on “OK” this window disappears and I return to the original site. As I want to visit this URL on an interval base to download information and want to automatize this process in python. In my current effort I use Selenium, but none of the examples that I found seem to do what I need. Thinks I tried but do not work are: driver.get("https://username:[email protected]/") selenium.FireEvent("OK", "click") driver.find_element_by_id("UserName") I do not know the actual element id’s What I did manage is to load my Firefox profile that stores the authentication information, but I still need to confirm the authentication by clicking “ok”. Is there any way to prevent this screen to pop up? If not how to access this button on the authentication form, from which I cannot obtain id-information?
How to Submit Https authentication with Selenium in python
0.197375
0
1
3,449
12,895,653
2012-10-15T12:25:00.000
1
0
0
1
python,celery
12,898,127
1
true
0
0
A worker can't trigger signals after it has been killed. The best way would be to write a plugin for Nagios, Munin or similar to notify when the number of workers decrease. See the Monitoring and Management guide in the Celery documentation.
1
1
0
Where's the best place to get notified when celery workers die? I'm aware of the worker_shutdown signal but does it get called also in sudden death of workers? I have these pdf rendering workers and they suddenly died or at least became unresponsive to remote control commands so I'm looking for ways so I can get notified when things like this happen.
Get notified when celery workers die
1.2
0
0
699
12,897,908
2012-10-15T14:35:00.000
2
1
0
0
eclipse,pydev,python-unittest
12,945,643
1
true
1
0
Output can be easily redirected to a file in Run Configurations > Common tab > Standard Input and Output section. Hiding just in plain sight...
1
2
0
Is there a built-in way in Eclipse to redirect PyUnit's output to a file (~ save the report)?
Redirecting PyUnit output to file in Eclipse
1.2
0
0
823
12,899,513
2012-10-15T16:02:00.000
5
0
0
0
python,opencv,windows-xp,py2exe
12,930,212
1
true
0
0
The problem comes from 4 DLLs that are copied by py2exe: msvfw32.dll, msacm32.dll, avicap32.dll and avifil32.dll. As I am building on Vista, I think this forces the use of the Vista DLLs on Windows XP, causing a mismatch when trying to load them. I removed these 4 DLLs and everything seems to work fine (in this case the regular system DLLs are used).
1
1
1
I have a Python app built with Python, OpenCv and py2exe. When I distribute this app and try to run it on a windows XP machine, I have an error on startup due to error loading cv2.pyd (opencv python wrapper) I looked at cv2.pyd with dependency walker and noticed that some dlls are missing : ieshims.dll and wer.dll. Unfortunately copying these libs doesn't solve the issues some other dlls are missing or not up-to-date. Any idea?
Python: OpenCV can not be loaded on windows xp
1.2
0
0
941
12,900,023
2012-10-15T16:34:00.000
8
1
0
1
python,django,celery,django-celery,celery-task
12,900,160
3
false
1
0
I think you'll need to open two shells: one for executing tasks from the Python/Django shell, and one for running celery worker (python manage.py celery worker). As the previous answer said, you can then run tasks using apply() or apply_async(). I've edited the answer so you're not using a deprecated command.
1
89
0
I'm using celery and django-celery. I have defined a periodic task that I'd like to test. Is it possible to run the periodic task from the shell manually so that I view the console output?
How can I run a celery periodic task from the shell manually?
1
0
0
60,194
12,901,776
2012-10-15T18:29:00.000
2
0
1
0
python,setuptools,distutils
12,902,659
3
false
0
0
Try the --install-dir option. You may also want to use --build-dir to change the build directory.
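Those options aside, another approach worth verifying against your setuptools version (it is an assumption here, not something the answer states) is to relocate the metadata via the egg_info command's --egg-base option, chained in front of develop:

```shell
# Assumption: --egg-base is an option of the egg_info command and moves the
# .egg-info directory out of the source tree. The target directory must exist.
mkdir -p /tmp/egg-meta
# setup.py commands chain left to right, so egg_info runs with this option
# before develop uses its output.
python setup.py egg_info --egg-base /tmp/egg-meta develop
```

This keeps the source directory clean while develop still installs the egg-link as usual.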
2
24
0
Python has ability to "pseudoinstall" a package by running it's setup.py script with develop instead of install. This modifies python environment so package can be imported from it's current location (it's not copied into site-package directory). This allows to develop packages that are used by other packages: source code is modified in place and changes are available to rest of python code via simple import. All works fine except that setup.py develop command creates an .egg-info folder with metadata at same level as setup.py. Mixing source code and temporary files is not very good idea - this folder need to be added into "ignore" lists of multiple tools starting from vcs and ending backup systems. Is it possible to use setup.py develop but create .egg-info directory in some other place, so original source code is not polluted by temporary directory and files?
Python "setup.py develop": is it possible to create ".egg-info" folder not in source code folder?
0.132549
0
0
13,420
12,901,776
2012-10-15T18:29:00.000
4
0
1
0
python,setuptools,distutils
13,554,020
3
false
0
0
Good practice is to keep all source files inside a directory named after your project (programmers using other languages keep their code inside a src directory). So if your setup.py file is in the myproject directory, keep the sources at myproject/myproject. This method keeps your sources separated from other files regardless of what happens in the main directory. My suggestion would be to use a whitelist instead of a blacklist: tell the tools to ignore all files except those inside the myproject directory. I think this is the simplest way to avoid touching your ignore lists too often.
2
24
0
Python has the ability to "pseudoinstall" a package by running its setup.py script with develop instead of install. This modifies the python environment so the package can be imported from its current location (it is not copied into the site-packages directory). This allows developing packages that are used by other packages: source code is modified in place and changes are available to the rest of the python code via a simple import. All works fine except that the setup.py develop command creates an .egg-info folder with metadata at the same level as setup.py. Mixing source code and temporary files is not a very good idea - this folder needs to be added to the "ignore" lists of multiple tools, starting with the vcs and ending with backup systems. Is it possible to use setup.py develop but create the .egg-info directory in some other place, so the original source code is not polluted by a temporary directory and files?
Python "setup.py develop": is it possible to create ".egg-info" folder not in source code folder?
0.26052
0
0
13,420
12,903,081
2012-10-15T19:59:00.000
1
0
1
0
python,python-3.x,python-2.7
12,903,111
3
false
0
0
Most people are still using 2.x, even Django (one of the most popular web frameworks for Python) doesn't fully support Python 3 yet. I would start with version 2.x. You'll have a lot less headaches when working with other libraries.
1
1
0
I'm not quite sure what the real differences are, so I'm not able to decide which version is better for a beginner. I have a background in C, C++ and Java, basically. I would like to start Python because I need it. This will look like a noob question, but I really don't know which version I'm supposed to pick; from what I see, the differences between 2.x and 3.x are mostly related to syntax, but since I haven't even started anything yet I know nothing, so I ask: which one am I supposed to pick?
Starting with Python 2.x or 3.x?
0.066568
0
0
871
12,904,326
2012-10-15T21:29:00.000
3
0
0
0
python,amazon-s3,boto
12,907,767
2
false
1
0
There are two ways to implement the search... Case 1. As suggested by John - you can specify the prefix of the S3 key in your list method. That will return only the S3 keys which start with the given prefix. Case 2. If you want to search for S3 keys which end with a specific suffix (an extension, say), then you can specify the suffix as the delimiter. Remember, this gives a correct result only when the search term is a string the key ends with; otherwise the delimiter is used as a path separator. I suggest Case 1, but if you want a faster search for a specific suffix then you can try Case 2.
1
2
0
I have 10000 files in an s3 bucket. When I list all the files it takes 10 minutes. I want to implement a search module using BOTO (the Python interface to AWS) which searches files based on user input. Is there a way I can search for specific files in less time?
Search files(key) in s3 bucket takes longer time
0.291313
1
1
5,534
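The prefix/suffix logic in the answer above can be sketched in pure Python. The key names below are made up, and the real server-side call in boto would be something like bucket.list(prefix='logs/') - an assumption about the boto API, not shown in the answer itself:

```python
def filter_keys(keys, prefix=None, suffix=None):
    """Filter S3 key names the way the answer describes:
    prefix matching (Case 1) and suffix/extension matching (Case 2)."""
    result = keys
    if prefix is not None:
        result = [k for k in result if k.startswith(prefix)]
    if suffix is not None:
        result = [k for k in result if k.endswith(suffix)]
    return result

# Hypothetical key names for illustration
keys = ["logs/2012/a.log", "logs/2012/b.txt", "data/c.log"]
print(filter_keys(keys, prefix="logs/"))  # Case 1: prefix match
print(filter_keys(keys, suffix=".log"))   # Case 2: suffix match
```

Case 1 is preferable in practice because the prefix filter is applied server-side by S3, so far fewer keys come over the wire; the suffix filter here has to run client-side over the full listing.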
12,904,413
2012-10-15T21:36:00.000
1
0
0
0
python,google-app-engine,geolocation,geocoding,gae-search
13,315,721
1
false
1
0
I don't know of any examples that show geo specifically, but the process is very much the same. You can index documents containing GeoFields, each of which has a latitude and a longitude. Then, when you construct your query, you can: limit the results by distance from a fixed point by using a query like distance(my_geo_field, geopoint(41, 65)) < 100 sort by distance from a point with a sort expression like distance(my_geo_field, geopoint(55, -20)) calculate expressions based on the distance between points by using a FieldExpression like distance(my_geo_field, geopoint(10, -30)) They work pretty much like any other field, except you can use the distance function in the query and expression languages. If you have any specific questions, feel free to ask here.
1
2
0
I have been going through trying to find the best option on Google App Engine to search a Model by GPS coordinates. There seem to be a few decent options out there such as GeoModel (which is out of date) but the new Search API seems to be the most current and gives a good bit of functionality. This of course comes at the possibility of it getting expensive after 1000 searches once this leaves the experimental zone. I am having trouble going through the docs and creating a coherent full example to be able to use the Search API to search by location and I want to know if anyone has examples they are willing to share or create to help make this process a little more straightforward. I understand how to create the actual geosearch query but I am unclear as to how to glue that together with the construction and indexing of the document.
Google App Engine Search API Performing Location Based Searches
0.197375
0
0
498
12,904,560
2012-10-15T21:49:00.000
-2
0
0
0
python,django,security,encryption,credentials
12,904,783
3
false
1
0
Maybe you can rely on a multi-user scheme, by creating: A user running Django (e.g. django) who does not have permission to access the credentials. A user having those permissions (e.g. sync). Both of them can be in the django group, to allow them to access the app. After that, make a script (a Django command, such as manage.py sync-external, for instance) that syncs what you want. That way, the django user will have access to the app and the sync script, but not the credentials, because only the sync user does. If anyone tries to run that script without the credentials, it will of course result in an error. Relying on the Linux permission model is in my opinion a "Good Idea", but I'm not a security expert, so bear that in mind. If anyone has anything to say about what's above, don't hesitate!
1
12
0
I'm working on a python/django app which, among other things, syncs data to a variety of other services, including samba shares, ssh(scp) servers, Google apps, and others. As such, it needs to store the credentials to access these services. Storing them as unencrypted fields would be, I presume, a Bad Idea, as an SQL injection attack could retrieve the credentials. So I would need to encrypt the creds before storage - are there any reliable libraries to achieve this? Once the creds are encrypted, they would need to be decrypted before being usable. There are two use cases for my app: One is interactive - in this case the user would provide the password to unlock the credentials. The other is an automated sync - this is started by a cron job or similar. Where would I keep the password in order to minimise risk of exploits here? Or is there a different approach to this problem I should be taking?
Safely storing encrypted credentials in django
-0.132549
0
0
6,457
12,905,707
2012-10-16T00:02:00.000
-3
0
1
0
python,types,numbers,compare
12,905,719
6
false
0
0
47.36/1.6**2 is actually stored as 18.499999999999996 because of floating-point rounding, so it is not exactly equal to 18.5. You should do this: import numpy as np print np.around((47.36/1.6**2), decimals=1) == 18.5 This would return True.
2
1
0
I found this interesting question when I was doing homework. We know 47.36/1.6**2 == 18.5, but when I try to run the following code, it gives me False (should be True): print 47.36/1.6**2 == 18.5 Does anyone know what's going on?
Simple Basic Python compare
-0.099668
0
0
120
12,905,707
2012-10-16T00:02:00.000
1
0
1
0
python,types,numbers,compare
12,905,783
6
false
0
0
I would avoid checking for exact equality when comparing two floats. Instead take the absolute difference and see if it is smaller than a value you consider close enough to zero: abs(47.36/1.6**2 - 18.5) < 0.00000000001 will be True
2
1
0
I found this interesting question when I was doing homework. We know 47.36/1.6**2 == 18.5, but when I try to run the following code, it gives me False (should be True): print 47.36/1.6**2 == 18.5 Does anyone know what's going on?
Simple Basic Python compare
0.033321
0
0
120
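Both answers above come down to the same point: 47.36/1.6**2 is stored as a binary float just below 18.5, so exact equality fails. A small sketch of tolerance-based comparison (math.isclose is available from Python 3.5):

```python
import math

x = 47.36 / 1.6 ** 2
print(x)                      # a value just below 18.5, not exactly 18.5
print(x == 18.5)              # False - exact float equality fails here
print(abs(x - 18.5) < 1e-9)   # True - absolute-tolerance comparison
print(math.isclose(x, 18.5))  # True - stdlib relative-tolerance helper
```

The absolute-tolerance check is fine for numbers of this magnitude; for values spanning many orders of magnitude, a relative tolerance (as math.isclose uses by default) is safer.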
12,906,249
2012-10-16T01:15:00.000
6
0
1
0
python,clr,visual-studio-2012,ironpython
13,183,325
3
false
0
1
I got the same error and resolved it with these steps: I changed the project properties General>Interpreter to IronPython 2.7 Debug>Launch mode to IronPython(.NET) launcher At first I didn't see IronPython as an interpreter option to select from. I added the path to IronPython installation to my Path System variable, restarted Visual Studio and it worked.
1
5
0
I'm using IronPython and I want to create some Windows Forms - a window with some buttons - and I want to do this in Visual Studio with IronPython. I'm using Visual Studio 2012 integrated edition. Each time I create an "IronPython Windows Forms" project and try to run it, it says: The project is currently set to use the .NET debugger for IronPython debugging but the project is configured to start with a CPython interpreter. To fix this change the debugger type in project properties->Debug->Launch mode. When I change the debugger to Standard Python Launcher, it says: ImportError: No module named clr. What should I do?
Cpython interpretter / IronPython interpretter No module named clr
1
0
0
4,656
12,908,271
2012-10-16T05:44:00.000
0
0
0
0
python,skype4py
15,720,075
1
false
0
0
Looking at the docs, each chat message has a Body (the text of the message) and an IsEditable property. So long as IsEditable=True, you should be able to do m.Body = "some new text"
1
0
0
As the title says, I'm curious whether the functionality of editing a sent chat message can be replicated programmatically using the Skype4Py API. I'm not sure when Skype added this feature, and the API, as far as I know, hasn't been maintained in some years, so I was just curious. I don't see anything that looks like it would do it in the docs, but I figured I'd check here before I give up.
Is it possible to edit a previously sent chat message using the Skype4Py api?
0
0
1
667
12,909,334
2012-10-16T07:14:00.000
-1
1
0
1
python
66,757,126
2
false
0
0
I used the same script, but my host failed to respond. My host is in a different network. [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
1
2
0
I am writing a python script to copy python files (say ABC.py) from one directory to another, into a folder named after the script without the .py extension (say ABC). On the local system it works fine, copying the files and creating the folder with the same name. But what I actually want is to copy these files from my local system (Windows XP) to a remote system (Linux) located in another country, on which I execute my script. I am getting the error "Destination Path not found", which means I am not able to connect to the remote machine. I normally use SSH Secure Client: I connect with an IP address and port number, and then it asks for a user id and password. But I am not able to connect to the remote server from my python script. Can anyone help me out with how to do this?
How to Transfer Files from Client to Server Computer by using python script?
-0.099668
0
1
9,452
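The question doesn't name a transfer library; a common choice for scripted SSH/SFTP from Python is paramiko, which is assumed here and only sketched in comments. The folder-naming rule from the question (ABC.py goes into a folder named ABC) can be shown with the stdlib alone:

```python
import os

def destination_folder(script_path):
    """Folder name for a script: its base name without the .py extension,
    as described in the question (ABC.py -> ABC)."""
    base = os.path.basename(script_path)
    name, _ext = os.path.splitext(base)
    return name

print(destination_folder("C:/work/ABC.py"))  # ABC

# The transfer itself would then use something like paramiko's SFTP client;
# host, port and credentials below are hypothetical:
#   import paramiko
#   client = paramiko.SSHClient()
#   client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
#   client.connect("remote-host", port=22, username="user", password="...")
#   sftp = client.open_sftp()
#   sftp.put("ABC.py", "/home/user/ABC/ABC.py")
```

Mapping a drive or using an SSH GUI client won't make the remote path visible to shutil; the script has to speak SSH/SFTP itself, which is why a library like paramiko is needed.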
12,911,327
2012-10-16T09:16:00.000
1
0
0
1
python,python-3.x
12,911,826
3
false
0
0
The docs mention a default method, which you can override to handle any unknown command. Code it to prefix-match against a list of command names and invoke the matching do_* method, the way you suggest doing for do_alias.
1
10
0
How can I create an alias for a command in a line-oriented command interpreter implemented using the cmd module? To create a command, I must implement the do_cmd method. But I have commands with long names (like constraint) and I want to provide aliases (in fact, shortcuts) for these commands (like co). How can I do that? One possibility that came to my mind is to implement a do_alias (like do_co) method that just calls do_cmd (do_constraint). But this adds new commands to the help output of the CLI. Is there any other way to achieve this? Or is there a way to hide commands from the help output?
Aliases for commands with Python cmd module
0.066568
0
0
3,225
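A minimal sketch of the default-method approach described in the answer above; the command and alias names (constraint, co) are taken from the question:

```python
import cmd

class CLI(cmd.Cmd):
    # Alias table: short form -> full command name
    aliases = {"co": "constraint"}

    def do_constraint(self, arg):
        """constraint: the long-named command."""
        self.last = arg  # recorded here just for demonstration
        print("constraint called with %r" % arg)

    def default(self, line):
        # Unknown commands land here; dispatch aliases to the real do_* method
        parts = line.split(None, 1)
        target = self.aliases.get(parts[0])
        if target is not None:
            arg = parts[1] if len(parts) > 1 else ""
            return getattr(self, "do_" + target)(arg)
        return cmd.Cmd.default(self, line)

CLI().onecmd("co foo")  # dispatches to do_constraint, like typing "constraint foo"
```

Because the alias is handled in default() rather than a do_co method, it never shows up in the help output, which also addresses the question's concern about hiding commands.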
12,913,250
2012-10-16T11:01:00.000
3
1
0
0
c++,python,parsing
12,913,328
1
false
0
0
Using regular parser-generators to generate parsers from the grammar is usually complicated with Python (for example due to its significant whitespace and difficult line-continuation rules). I am not sure about your experience with Python, but my recommendation would be to parse the Python file from Python, and do as much of the processing as possible in Python, then return the result to the C++ code using well-defined data types (such as the stdc++ ones) and using Boost.python for the bindings.
1
0
0
I can find a load of information on the reverse, not so much this way around :) So, the summary is that I want to write some Python code-completion stuff in C++, but I can't figure out the best way of tokenizing the Python code. Are there any libraries out there that will do this? I'm leaning towards calling Python's tokenize.tokenize directly from C++... but whenever I look at calling Python code from C++ I go cross-eyed.
Parsing Python code from C++
0.53705
0
0
210
12,919,644
2012-10-16T16:50:00.000
1
1
0
1
c++,python,embedded
12,919,968
2
false
0
0
I doubt you can do this - you are using a UART interface. Just get an Arduino or something similar and send serial commands to it (over the serial pins), and have it put the correct PWM signal out on its pins... probably 5 lines of Arduino code and another 5 of Python code. All that said, you may be able to find some very difficult and hacky way to output a PWM-like signal over serial, but you need to think about whether that's really appropriate.
1
4
0
How do I send out a PWM signal from the serial port with Linux (with Python or C++)? I want to connect a motor directly to change the rotation speed.
PWM signal out of serial port with linux
0.099668
0
0
1,468
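A sketch of the relay idea from the answer above. The 'PWM <duty>' line protocol is invented for illustration - the Arduino sketch on the other end would parse it and call analogWrite - and the serial write shown in comments assumes the pyserial package:

```python
def pwm_command(duty_cycle):
    """Build a one-line command for a hypothetical Arduino sketch that
    reads 'PWM <0-255>' from serial and calls analogWrite accordingly."""
    if not 0 <= duty_cycle <= 255:
        raise ValueError("duty cycle must be 0-255")
    return ("PWM %d\n" % duty_cycle).encode("ascii")

print(pwm_command(128))  # b'PWM 128\n'

# Sending it would look roughly like this (pyserial; the port name and
# baud rate are assumptions for the example):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600) as port:
#       port.write(pwm_command(128))
```

The Python side only ships bytes; the actual PWM waveform is generated entirely by the microcontroller, which is the point of the answer.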
12,921,120
2012-10-16T18:28:00.000
1
0
0
1
python,google-app-engine
12,921,291
2
true
1
0
You can disable an entire application (from the Application Settings page) for some time and then reenable it (or you can delete it from that point onwards). There is no way you can "shutdown" a particular instance. You can have different versions of your application, but at any moment in time only one version can be the active version of your application. You can however split traffic between different versions, but that does not change the active version. In terms of performance, you can change the Max Idle Instances value to one so that only one instance is preloaded or active.
2
0
0
In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button. Can I do the shutdown programmatically, from source code?
GAE: Instance shutdown from source code
1.2
0
0
144
12,921,120
2012-10-16T18:28:00.000
0
0
0
1
python,google-app-engine
12,944,920
2
false
1
0
Actually you can force an instance to shut down from within your code, but it's not pretty. Just allocate more memory than your instance has; it will then be shut down for you. I have used this technique in some python2.5 M/S apps where a DeadlineExceeded during startup could cause problems with incomplete imports. If the next handled request gave me an ImportError somewhere, I knew the instance was toast, so I would redirect the user to the site and then create a really big string, exhausting memory, and that instance would be shut down. You could in theory do something similar.
2
0
0
In the Google App Engine user interface, under "Instances", I can shut down selected instances by pressing the "Shutdown" button. Can I do the shutdown programmatically, from source code?
GAE: Instance shutdown from source code
0
0
0
144
12,923,402
2012-10-16T21:04:00.000
1
0
0
0
python,django,timezone
12,923,424
2
true
1
0
Generally, the server is set up to run in UTC, which has no daylight saving time, and each user's settings contain their preferred timezone. Then you do the time difference calculation. Your DB might have that feature.
1
0
0
I am building an application where the clients are supposed to pass a datetime field to query the database. But the clients will be in different timezones, so how do I solve that problem? Should I make it a point that the client's timezone is used as the default for each user, or the server's for every user? And if so, then how can I do it?
Timezone for server
1.2
0
0
115
12,925,147
2012-10-16T23:50:00.000
3
0
0
0
python,tar,compression
12,925,291
1
true
0
0
No, the argument to getmember is the key inside the tar file, not a local file system path. Use the slashes.
1
3
0
I have a tar.gz file, and I want to extract a certain directory but not the whole thing, so I use TarFile.getmember('foo/bar'). what I want to know is if I really should be using TarFile.getmember(os.path.join('foo','bar')). edit: I'm also wondering if I would use os.path.join for any other function within the tarfile module or zipfile moduple.
(python) do I use os.path.join on tar.gz files
1.2
0
0
355
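A self-contained check of the point in the accepted answer: the member key inside the archive always uses forward slashes, so 'foo/bar' is correct even on Windows. The archive here is built in memory purely for illustration:

```python
import io
import tarfile

# Build a tiny tar.gz in memory containing one member under foo/
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    info = tarfile.TarInfo(name="foo/bar")
    data = b"hello"
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

# Read it back: getmember takes the in-archive key with forward slashes
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tf:
    member = tf.getmember("foo/bar")   # not os.path.join("foo", "bar")
    content = tf.extractfile(member).read()
    print(member.name)                 # foo/bar
```

The same holds for the zipfile module: ZipInfo names are archive keys with forward slashes, so os.path.join should only be used for local filesystem paths (e.g. extraction targets), never for the keys themselves.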
12,926,898
2012-10-17T03:58:00.000
62
0
0
0
python,numpy
12,926,989
2
true
0
0
You can do this with the return_index parameter: >>> import numpy as np >>> a = [4,2,1,3,1,2,3,4] >>> np.unique(a) array([1, 2, 3, 4]) >>> indexes = np.unique(a, return_index=True)[1] >>> [a[index] for index in sorted(indexes)] [4, 2, 1, 3]
1
36
1
How can I use numpy unique without sorting the result, keeping the values in the order they first appear in the sequence? Something like this: a = [4,2,1,3,1,2,3,4] np.unique(a) = [4,2,1,3] rather than np.unique(a) = [1,2,3,4] A naive solution written as a simple function would be fine. But as I need to do this multiple times, is there any fast and neat way to do this?
numpy unique without sort
1.2
0
0
32,836
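The list comprehension in the accepted answer can also be written as a single numpy indexing step (numpy assumed available):

```python
import numpy as np

a = np.array([4, 2, 1, 3, 1, 2, 3, 4])
_, first_idx = np.unique(a, return_index=True)
# Sorting the first-occurrence indices restores appearance order
result = a[np.sort(first_idx)]
print(result)  # [4 2 1 3]
```

This stays entirely in numpy, which matters when the answer's Python-level list comprehension becomes the bottleneck on large arrays.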
12,929,592
2012-10-17T07:51:00.000
3
0
1
0
python
12,929,674
3
false
0
0
You can modify your PATH environment variable to get python3 picked up first, or create an alias that'll give you a command for running python3. To modify your PATH edit your shell configuration file (e.g., ~/.bashrc if you are using BASH) PATH=/usr/local/python3.2.3/bin:$PATH To create an alias to python3 do (in the same file), alias python3=/usr/local/python3.2.3/bin/python3
3
1
0
I just started learning Python at my company. I usually type python ext.py to run a script. The default Python version is 2.4, but now I need to run a new script with Python 3. The company's IT support installed Python 3 at /usr/local/python3.2.3/bin/python3, but when I type python -V to check the version, it still shows Python 2.4. The IT support said they can't replace the old version. How can I run the script using Python 3?
How to run a script with a different version of Python
0.197375
0
0
164
12,929,592
2012-10-17T07:51:00.000
1
0
1
0
python
12,929,608
3
true
0
0
/usr/local/python3.2.3/bin/python3 -V /usr/local/python3.2.3/bin/python3 ext.py
3
1
0
I just started learning Python at my company. I usually type python ext.py to run a script. The default Python version is 2.4, but now I need to run a new script with Python 3. The company's IT support installed Python 3 at /usr/local/python3.2.3/bin/python3, but when I type python -V to check the version, it still shows Python 2.4. The IT support said they can't replace the old version. How can I run the script using Python 3?
How to run a script with a different version of Python
1.2
0
0
164
12,929,592
2012-10-17T07:51:00.000
0
0
1
0
python
12,930,020
3
false
0
0
As py2 and 3 have so many differences, usually Python 3 will be started with python3 instead of a mere python.
3
1
0
I just started learning Python at my company. I usually type python ext.py to run a script. The default Python version is 2.4, but now I need to run a new script with Python 3. The company's IT support installed Python 3 at /usr/local/python3.2.3/bin/python3, but when I type python -V to check the version, it still shows Python 2.4. The IT support said they can't replace the old version. How can I run the script using Python 3?
How to run a script with a different version of Python
0
0
0
164
12,931,338
2012-10-17T09:35:00.000
1
0
0
0
django,testing,python-sphinx
14,034,222
1
true
1
0
Turned out that I had some python paths wrong. Everything works as expected - as noted by bmu in his comment. (I'm writing this answer so I can close the question in a normal way)
1
1
0
I have a Django project with a couple of apps - all of them with 100% coverage unit tests. And now I started documenting the whole thing in a new directory using ReST and Sphinx. I create the html files using the normal approach: make html. Since there are a couple of code snippets in these ReST files, I want to make sure that these snippets stay valid over time. So what is a good way to test these ReST files, so I get an error if some API changes made such a snippet invalid? I guess there have to be some changes in conf.py?
Sphinx code validation for Django projects
1.2
0
0
148
12,932,610
2012-10-17T10:46:00.000
5
0
1
1
python
12,932,872
1
true
0
0
Python is implemented in C, and records what version of the C compiler was used to compile it (to aid tracking down compiler-specific bugs). The implementation itself does not vary based on the compiler. It can vary based on the platform it is compiled for, and the available external libraries, but there is nothing altering Python behaviour based on the compiler used.
1
1
0
If I start python under MacOS X 10.8 in the console it starts with "Python 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin". In what way does the implementation of python depend on GCC?
CPython and GCC
1.2
0
0
295
12,932,663
2012-10-17T10:49:00.000
0
1
0
0
javascript,python,c,serialization,natural-sort
12,933,456
3
false
0
0
The obvious answer is to remove the restriction that you can't use the native database type. I can't see any point in it. It's still 8 bytes and it does the ordering for you without any further investigation, work, experiment, testing etc being necessary.
1
3
0
The project I'm working on requires me to store javascript numbers(which are doubles) as BLOB primary keys in a database table(can't use the database native float data type). So basically I need to serialize numbers to a byte array in such a way that: 1 - The length of the byte array is 8(which is what is normally needed to serialize doubles) 2 - the byte arrays must preserve natural order so the database will transparently sort rows in the index b-tree. A simple function that takes a number and returns an array of numbers representing the bytes is what I'm seeking. I prefer the function to be written in javascript but answers in java, C, C#, C++ or python will also be accepted.
How to efficiently serialize 64-bit floats so that the byte arrays preserve natural numeric order?
0
0
0
797
12,934,495
2012-10-17T12:38:00.000
1
0
0
0
python,wireshark,libpcap
12,941,372
1
false
0
0
What needs to be updated is libpcap, not the wrapper. Libpcap 1.1.0 and later can transparently read pcap-ng files, as long as the pcap-ng files don't have more than one link-layer header type (for example, a pcap-ng file with two LINKTYPE_ETHERNET interfaces would be readable, but a pcap-ng file with a LINKTYPE_ETHERNET interface and a LINKTYPE_PPP interface wouldn't be). Reading arbitrary pcap-ng files would require API changes to libpcap; the wrappers would then have to pick up those changes as well.
1
2
0
Have any of the existing libpcap wrappers for python been updated to read the pcap-ng format generated by Wireshark 1.8.x? I tried several of them with no success. Maybe I'm missing something, like config options or new APIs. If this hasn't been done yet or is incomplete, I'm willing to help with the work. Thanks, all.
Python libpcap wrapper that understands pcap-ng format?
0.197375
0
0
1,267
12,936,533
2012-10-17T14:23:00.000
2
0
0
0
python,selenium,webdriver,urllib2,tcp-ip
12,940,958
3
false
0
0
In as much as a small amount of plastic explosive solves the problem of forgetting your house keys, I implemented a solution. I created a class that tracks a list of resources and the time they were added to it, blocks when a limit is reached, and removes entries when their timestamp passes beyond a timeout value. I then created an instance of this class setup with a limit of 32768 resources and a timeout of 240 seconds, and had my test framework add an entry to the list every time webdriver.execute() is called or a few other actions (db queries, REST calls) are performed. It's not particularly elegant and it's quite arbitrary, but at least so far it seems to be keeping my tests from triggering port exhaustion while not noticeably slowing tests that weren't causing port exhaustion before.
1
2
0
I'm working on a suite of tests using selenium webdriver (written in Python). The page being tested contains a form that changes its displayed fields based upon what value is selected in one of its select boxes. This select box has about 250 options. I have a test (running via nose, though that's probably irrelevant) that iterates through all the options in the select box, verifying that the form has the correct fields displayed for each selected option. The problem is that for each option, it calls through selenium: click for the option to choose find_element and is_displayed for 7 fields find_elements for the items in the select box get_attribute and text for each option in the select box So that comes out to (roughly) 250 * (7 * 2 + 1 + 2 * 250), or 128,750 distinct requests to the webdriver server the test is running on, all within about 10 or 15 minutes. This is leading to client port exhaustion on the machine running the test in some instances. This is all running through a test framework that abstracts away things like how the select box is parsed, when new page objects are created, and a few other things, so optimizations in the test code either mean hacking that all to hell or throwing away the framework for this test and doing everything manually (which, for maintainability of our test code, is a bad idea). Some ideas I've had for solutions are: Trying to somehow pool or reuse the connection to the webdriver server Somehow tweaking the configuration of urllib2 or httplib at runtime so that the connections opened by selenium timeout or are killed more quickly System independent (or at least implementable for all systems with an OS switch or some such) mechanism for actively tracking and closing the ports being opened by selenium As I mentioned above, I can't do much to tweak the way the page is parsed or handled, but I do have control over subclassing/tweaking WebDriver or RemoteConnection any way I please. Does anyone have any suggestions on how to approach any of the above ideas, or any ideas that I haven't come up with?
tcp/ip port exhaustion with selenium webdriver
0.132549
0
1
2,907
12,937,916
2012-10-17T15:33:00.000
0
0
1
1
python,windows,py2app
72,331,122
2
false
0
0
No, there's no way on earth you can run py2app on Windows; py2app cannot create a Mac app bundle on Windows because it's impossible to do so... Either hackintosh your PC or emulate a Mac using QEMU, VMware or VirtualBox!
1
10
0
I recently discovered that an outdated version of Python was causing my Wx app to run into errors. I can't install Python 2.7.3 on my Mac, and when I tried it in a virtual machine, py2app was still "compiling" the app after running overnight (my Windows/Linux box has an ≈1GHz processor). Is there a version of py2app that runs on Windows?
Can I run py2app on Windows?
0
0
0
5,722
12,938,568
2012-10-17T16:07:00.000
1
0
0
0
python,pdf,pagination,matplotlib
12,938,704
3
false
0
0
I suspect that there is a more elegant way to do this, but one option is to use tempfiles or StringIO to avoid making traditional files on the system and then you can piece those together.
1
12
1
I have a lengthy plot, composed of several horizontal subplots organized into a column. When I call fig.savefig('what.pdf'), the resulting output file shows all the plots crammed onto a single page. Question: is there a way to tell savefig to save on any number (possibly automatically determined) of pdf pages? I'd rather avoid multiple files and then os.system('merge ...'), if possible.
Matplotlib savefig into different pages of a PDF
0.066568
0
0
16,939
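One concrete way to get true multi-page output - not mentioned in the answer above, so this is a swapped-in technique - is matplotlib's own PdfPages backend, with one savefig call per page. A sketch with placeholder figure contents:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

with PdfPages("what.pdf") as pdf:
    for page in range(3):                  # three pages, one subplot each
        fig, ax = plt.subplots()
        ax.plot([0, 1], [page, page + 1])  # placeholder content
        pdf.savefig(fig)                   # appends fig as a new PDF page
        plt.close(fig)
```

This avoids both multiple output files and any external merge step: each savefig on the PdfPages object appends a page to the same PDF.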
12,939,379
2012-10-17T16:53:00.000
1
0
0
0
python,mouse,pygame
12,940,103
2
false
0
1
The MOUSEBUTTONDOWN event has two attributes: pos button If you use event.pos instead of mouse.get_pos() you should be able to pick up the coordinates where the click occurred even if you have more than one mouse.
1
0
0
I'm trying to make a simple piano keyboard for my programming class. I've made it in pygame because it's nice and simple to use. I've managed to get it working by making a mouse rect and individual rects for each note on the piano and having the note play by linking it to a mousebuttondown event. So it's working pretty good but I'll be demonstrating it on a smartboard and I was wondering if I could make it multiplayer. The problem is I can't work out a way that two people can play at the same time because obviously there's only one mouse (which is detected by the player tapping on screen) and therefore one mouse.get_pos() for the rect collision detection. If anyone can suggest a work around that'd be cool! Thanks for the help -Laura
Multiple mouse position
0.099668
0
0
880
12,940,223
2012-10-17T17:45:00.000
2
1
0
0
python,bash,shell
12,940,327
1
false
0
0
If the server supports if-modified-since, you could send the request with If-Modified-Since: (T-30 minutes) and ignore the 304 responses.
1
0
0
I am trying to download only files modified in the last 30 minutes from a URL. Can you please guide me how to proceed with this. Please let me know if I should use shell scripting or python scripting for this.
Download files modified in last 30 minutes from a URL
0.379949
0
1
170
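The conditional-request idea from the answer above, sketched with the stdlib; the URL in the comments is a placeholder:

```python
import time
from email.utils import formatdate

def if_modified_since(minutes):
    """HTTP-date string for 'minutes' ago, suitable for an
    If-Modified-Since request header, e.g. 'Wed, 17 Oct 2012 17:15:00 GMT'."""
    return formatdate(time.time() - minutes * 60, usegmt=True)

header = if_modified_since(30)
print(header)

# Usage sketch: a 304 response means "not modified in the window, skip it".
#   import urllib.request, urllib.error
#   req = urllib.request.Request("http://example.com/file",
#                                headers={"If-Modified-Since": header})
#   try:
#       data = urllib.request.urlopen(req).read()   # only modified files
#   except urllib.error.HTTPError as e:
#       if e.code != 304:
#           raise
```

Note this only works if the server honors If-Modified-Since; static file servers usually do, but dynamically generated responses often ignore it.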
12,942,806
2012-10-17T20:26:00.000
8
0
1
0
python,linux,file,unix,words
12,942,907
3
false
0
0
You'll have to do 2 passes through the file: In pass 1: Build a dictionary using the words as keys and their occurrences as values (i.e. each time you read a word, add 1 to its value in the dictionary) Then pre-process the list to remove all keys with a value greater than 1. This is now your "blacklist" In pass 2: Read through the file again and remove any word that has a match in the blacklist. Run-time: Linear time to read the file in both passes. It takes O(1) to add each word to the dictionary / increment its value in pass 1. It takes O(n) to pre-process the dictionary into the blacklist. It takes O(1) for the blacklist look-up in pass 2. O(n) complexity
1
1
0
Given a very large text file, I want to remove all the words which occur only once in the file. Is there any easy and efficient way of doing it? Best regards,
Remove rare words from a very large file
1
0
0
1,939
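The two-pass approach from the answer above, sketched with collections.Counter. For a genuinely huge file you would iterate the file object twice instead of holding a list of lines in memory; only the word-count dictionary needs to fit in RAM:

```python
from collections import Counter

def remove_rare_words(lines):
    """Two passes: count every word, then drop words that occur exactly once."""
    counts = Counter(word for line in lines for word in line.split())
    blacklist = {word for word, n in counts.items() if n == 1}
    return [" ".join(w for w in line.split() if w not in blacklist)
            for line in lines]

print(remove_rare_words(["the cat sat", "the dog sat"]))
# ['the sat', 'the sat'] - 'cat' and 'dog' each occurred only once
```

With a real file, pass 1 would be `counts = Counter(w for line in open(path) for w in line.split())` and pass 2 a second read writing the filtered lines out, keeping the whole thing O(n) as the answer describes.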
12,943,846
2012-10-17T21:40:00.000
4
0
0
0
python,django,git
12,944,156
1
true
1
0
The way I work: each website has its own git repo and each app has its own repo. Each website also has its own virtualenv and requirements.txt. Even though 2 websites may share the most recent version of MyApp right now, they may not in the future (maybe you haven't gotten one of the websites up to date on some API changes). If you really must have just one version of MyApp, you could install it at the system level and then symlink it into the virtualenv for each project. For development on a local machine (not production) I do it a little differently. I symlink the app's project folder into a "src" folder in the virtualenv of the website and then do a python setup.py develop into the virtualenv so that the newest changes are always used on the website "in real time".
1
0
0
I'm currently developing several websites on Django, which require several Django apps. Let's say I have two Django projects: web1 and web2 (the websites, each with a git repo). Both web1 and web2 have a different list of installed apps, but happen to both use one (or more) application(s) developed by me, say "MyApp" (which also has a git repo). My questions are: What is the best way to decouple MyApp from any particular website? What I want is to develop MyApp independently, but have each website use the latest version of the app (if it has it installed, of course). I have two "proposed" solutions: use symlinks on each website to a "master" MyApp folder, or use git to push from MyApp to each website's repo. How to deploy with this setup? Right now, I can push the git repo of web1 and web2 to a remote repo in my shared hosting account, and it works like a charm. Will this scale adequately? I think I have the general idea working in my head, but I'm not sure about the specifics. Won't this create a nested git repo issue? How does git deal with symlinks, specifically if the symlink destination has a .git folder in it?
Git, Django and pluggable applications
1.2
0
0
154
12,945,278
2012-10-18T00:04:00.000
2
1
0
0
java,python,optimization,webserver,multiplayer
12,946,896
3
false
0
0
Anything else? Maybe a cup of coffee to go with your question :-) Answering your question from the ground up would require several books worth of text with topics ranging from basic TCP/IP networking to scalable architectures, but I'll try to give you some direction nevertheless. Questions: Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these). I would venture that if you're not clear on the definition of each of these, maybe designing and implementing a service that will "be playing millions of these hands as fast as I can" is a bit, hmm, over-reaching? But don't let that stop you; as they say, "ignorance is bliss." Any good frameworks to work off of? I think your project is a good candidate for Node.js. The main reason being that Node.js is relatively scalable and it is good at hiding the complexity required for that scalability. There are downsides to Node.js; just Google search for 'Node.js scalability criticism'. The main point against Node.js, as opposed to using a more general purpose framework, is that scalability is difficult, there is no way around it, and Node.js being so high level and specific provides fewer options for solving tough problems. The other drawback is that Node.js is JavaScript, not Java or Python as you prefer. Message types? I'm thinking JSON or Protocol Buffers. I don't think there's going to be a lot of traffic between client and server, so it doesn't really matter; I'd go with JSON just because it is more prevalent. How to make it FAST? The real question is how to make it scalable. Running human vs human card games is not computationally intensive, so you're probably going to run out of I/O capacity before you reach any computational limit. Overcoming these limitations is done by spreading the load across machines.
The common way to do this in multi-player games is to have a list server that provides links to identical game servers, with each server having a predefined number of slots available for players. This is a variation of a broker-workers architecture where the broker machine assigns a worker machine to clients based on how busy they are. In gaming, users want to be able to select their server so they can play with their friends. Related: Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand. Since this is in human time scales (seconds as opposed to milliseconds), the client should send keepalives, say every 10 seconds, with say a 30 second session timeout. The keepalives would be JSON messages in your application protocol, not HTTP, which is lower level and handled by the framework. The framework itself should provide you with HTTP 1.1 connection management/pooling, which allows several http sessions (request/response) to go through the same connection but does not require the client to be always connected. This is a good compromise between reliability and speed and should be good enough for turn based card games.
2
2
0
Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected. Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network. Security of messaging is not an issue, this is for learning/research/fun. My priorities: Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand. Speed. I'm going to be playing millions of these hands as fast as I can. Run on a shared server instance (I may have limited access to ports or things that need root) My questions: Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these). Any good frameworks to work off of? Message types? I'm thinking JSON or Protocol Buffers. How to make it FAST? Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it.
Multiplayer card game on server using RPC
0.132549
0
1
2,321
12,945,278
2012-10-18T00:04:00.000
1
1
0
0
java,python,optimization,webserver,multiplayer
12,963,229
3
false
0
0
Honestly, I'd start with classic LAMP. Take a stock Apache server, and a mysql database, and put your Python scripts in the cgi-bin directory. The fact that they're sending and receiving JSON instead of HTTP doesn't make much difference. This is obviously not going to be the most flexible or scalable solution, of course, but it forces you to confront the actual problems as early as possible. The first problem you're going to run into is game state. You claim there is no shared state, but that's not right—the cards in the deck, the bets on the table, whose turn it is—that's all state, shared between multiple players, managed on the server. How else could any of those commands work? So, you need some way to share state between separate instances of the CGI script. The classic solution is to store the state in the database. Of course you also need to deal with user sessions in the first place. The details depend on which session-management scheme you pick, but the big problem is how to propagate a disconnect/timeout from the lower level up to the application level. What happens if someone puts $20 on the table and then disconnects? You have to think through all of the possible use cases. Next, you need to think about scalability. You want millions of games? Well, if there's a single database with all the game state, you can have as many web servers in front of it as you want—John Doe may be on server1 while Joe Schmoe is on server2, but they can be in the same game. On the other hand, you can have a separate database for each server, as long as you have some way to force people in the same game to meet on the same server. Which one makes more sense? Either way, how do you load-balance between the servers? (You not only want to keep them all busy, you want to avoid the situation where 4 players are all ready to go, but they're on 3 different servers, so they can't play each other…). 
The end result of this process is going to be a huge mess of a server that runs at 1% of the capacity you hoped for, that you have no idea how to maintain. But you'll have thought through your problem space in more detail, and you'll also have learned the basics of server development, both of which are probably more important in the long run. If you've got the time, I'd next throw the whole thing out and rewrite everything from scratch by designing a custom TCP protocol, implementing a server for it in something like Twisted, keeping game state in memory, and writing a simple custom broker instead of a standard load balancer.
2
2
0
Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected. Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network. Security of messaging is not an issue, this is for learning/research/fun. My priorities: Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand. Speed. I'm going to be playing millions of these hands as fast as I can. Run on a shared server instance (I may have limited access to ports or things that need root) My questions: Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these). Any good frameworks to work off of? Message types? I'm thinking JSON or Protocol Buffers. How to make it FAST? Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it.
Multiplayer card game on server using RPC
0.066568
0
1
2,321
12,946,708
2012-10-18T03:38:00.000
2
0
0
1
python,django,linux,apache2
12,947,246
1
true
1
0
If you use the subprocess module to execute the script, the close_fds argument to the Popen constructor will probably do what you want: If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. Assuming they weren't simply closed, the first three file descriptors are traditionally stdin, stdout and stderr, so the listening sockets in the Django application will be among those closed in the child process.
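A minimal sketch of the close_fds behaviour described above (the child command is a placeholder; in the real app it would be the daemonized script):

```python
import subprocess
import sys

# With close_fds=True, every descriptor above 2 is closed in the child,
# so listening sockets inherited from the Django/Apache parent (e.g. the
# ones bound to ports 80 and 443) do not stay open in the new process.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('child started clean')"],
    stdout=subprocess.PIPE,
    close_fds=True,
)
out, _ = proc.communicate()
print(out.decode().strip())  # child started clean
```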
1
5
0
I am trying to fork() and exec() a new python script process from within a Django app that is running in apache2/WSGI Python. The new python process is daemonized so that it doesn't hold any association to apache2, but I know the HTTP ports are still open. The new process kills apache2, but as a result the new python process now holds port 80 and 443 open, and I don't want this. How do I close port 80 and 443 from within the new python process? Is there a way to gain access to the socket handle descriptors so they can be closed?
Close TCP port 80 and 443 after forking in Django
1.2
0
0
256
12,949,634
2012-10-18T07:48:00.000
0
0
1
0
python,regex,routing,tornado
12,950,281
2
true
1
0
What about this: r'/admin/?' or r'/admin/{0,1}'? Pay attention that I'm only talking about the regex; I don't know if this works in Tornado's routing.
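A quick check of both patterns with Python's re module (Tornado compiles route patterns as regular expressions, so the same matching applies; the explicit ^/$ anchors stand in for the anchoring the router does):

```python
import re

# Both spellings accept '/admin' with or without the trailing slash.
for pat in (r'^/admin/?$', r'^/admin/{0,1}$'):
    rx = re.compile(pat)
    print(bool(rx.match('/admin')), bool(rx.match('/admin/')))  # True True
```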
1
1
0
i currently have this in my routing: (r"/admin", AdminController.Index ), (r"/admin/", AdminController.Index ), how do i merge them with just one line and have admin and admin/ go to AdminController.Index? i know this could be achieved via regex, but it doesnt seem to work
Regex routing in Python using Tornado framework
1.2
0
0
1,761
12,953,040
2012-10-18T10:57:00.000
2
0
1
1
python,ipython
15,759,634
1
true
0
0
Just now I tried ipython on my mac, and auto-complete works well with directories containing spaces. The ipython version I used is 0.13.1. Perhaps simply upgrading your ipython will solve it.
1
4
0
I am running iPython v0.13 on windows 7 64-bit. (qtconsole --pylab=inline) When I type %cd 'C:/Users/My Name/Downloads' it takes me to the desired location. When I tab auto-complete between directories the auto-complete fails if directories have a space in their names (as in the example). Is there a reason for this and a solution to overcome it (besides migrating to Linux or using underscores as filename/directory name separators. Thanks.
iPython filepath autocompletion
1.2
0
0
910
12,953,542
2012-10-18T11:26:00.000
5
0
0
0
python,simplehttpserver,webdev.webserver
25,786,569
3
false
1
0
The right way to do this, to ensure that the query parameters remain as they should, is to make sure you do a request to the filename directly instead of letting SimpleHTTPServer redirect to your index.html. For example http://localhost:8000/?param1=1 does a redirect (301) and changes the url to http://localhost:8000/?param1=1/ which messes with the query parameter. However http://localhost:8000/index.html?param1=1 (making the index file explicit) loads correctly. So just not letting SimpleHTTPServer do a url redirection solves the problem.
1
13
0
I like to use Python's SimpleHTTPServer for local development of all kinds of web applications which require loading resources via Ajax calls etc. When I use query strings in my URLs, the server always redirects to the same URL with a slash appended. For example /folder/?id=1 redirects to /folder/?id=1/ using a HTTP 301 response. I simply start the server using python -m SimpleHTTPServer. Any idea how I could get rid of the redirecting behaviour? This is Python 2.7.2.
Why does SimpleHTTPServer redirect to ?querystring/ when I request ?querystring?
0.321513
0
1
6,427
12,956,582
2012-10-18T14:06:00.000
0
0
0
1
python,google-app-engine,google-cloud-datastore
12,958,199
2
false
1
0
You can use UrlFetch in App2 to request all the data you need from App1 and process it to create your merged result. It is quite easy to serve and serialize the entities (with a cursor) using JSON for the data exchange in App1.
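A hedged sketch of what App1's JSON response body could look like (all field names are hypothetical): a page of serialized entities plus a datastore cursor, so App2 can fetch the next page with another UrlFetch call.

```python
import json

# App1 side: serialize a page of entities together with the query cursor.
entities = [{"kind": "SomeModel", "name": "a"},
            {"kind": "SomeModel", "name": "b"}]
payload = json.dumps({"entities": entities, "cursor": "opaque-cursor-token"})

# App2 side: decode the page and keep the cursor for the next request.
page = json.loads(payload)
print(len(page["entities"]), page["cursor"])  # 2 opaque-cursor-token
```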
1
1
0
I have 2 different apps that are basically identical but I had to setup a 2nd app because of GAE billing issues. I want to merge the datastore data from the 1st app with the 2nd apps data store. By merge, I simply want to append the 2 data stores. To help w/visualise App1: SomeModel AnotherModel App2: SomeModel Another Model I want app 2's datastore to be the sum of app2 and app1. The only way I see to transfer data from one app to another on the app engine administration page will overwrite the target destination data... I don't want to overwrite. thx for any help
Merge (append) GAE datastores from different apps
0
0
0
52
12,960,212
2012-10-18T17:19:00.000
2
0
0
0
python,django,apache,wsgi,django-wsgi
12,960,567
2
false
1
0
I'd go for the Apache configuration + run a proxy in front + restrict in WSGI: I dislike Apache for communicating with web clients when dynamic content generation is involved. Because of its execution model, a slow or disconnected client can tie up the Apache process. If you have a proxy in front (I prefer nginx, but even a vanilla Apache will do), the proxy will worry about the clients and Apache can focus on a new dynamic content request. Depending on your Apache configuration, a process can also slurp a lot of memory and hold onto it until it hits MaxRequests. If you have memory intensive code in /admin (many people do), you can end up with Apache processes that grab a lot more memory than they need. If you split your Apache config into /admin and /!admin, you can tweak your apache settings to have a larger number of /!admin servers which require a smaller potential footprint. I'm paranoid about server setups. I want to ensure the proxy only sends /admin to a certain Apache port. I want to ensure that Apache only receives /admin on a certain apache port, and that it came from the proxy (with a secret header) or from localhost. I want to ensure that the WSGI is only running the /admin stuff based on certain server/client conditions.
1
3
0
What is the simplest way to make Django /admin/ urls accessible to localhost only? Options I have thought of: Seperate the admin site out of the project (somehow) and run as a different virtual host (in Apache2) Use a proxy in front of the hosting (Apache2) web server Restrict the URL in Apache within WSGI somehow. Is there a standard approach? Thanks!
How do I make Django admin URLs accessible to localhost only?
0.197375
0
0
2,696
12,960,276
2012-10-18T17:23:00.000
0
1
0
1
python,redirect,subprocess
12,960,474
3
false
0
0
You can use existing file descriptors as the stdout/stderr arguments to subprocess.Popen. This should be equivalent to running with redirection from bash. That redirection is implemented with dup2(2) after fork, and the output should never touch your program. You can probably also pass open('/dev/null') as a file object. Alternatively you can redirect the stdout/stderr of your controller program and pass None as stdout/stderr. Children should print to your controller's stdout/stderr without passing through Python itself. This works because the children will inherit the stdin/stdout descriptors of the controller, which were redirected by bash at launch time.
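A hedged sketch of the first approach (Python 3 syntax for brevity; the OP is limited to 2.4, where the same Popen arguments exist). Passing open file objects as stdout/stderr reproduces the bash redirection; the child writes straight to the files and nothing flows through the controller:

```python
import subprocess
import sys

# Reproduces: python myscript arg1 >output.log 2>err.log
# The child inherits the file descriptors and writes to the files
# directly; no output passes through the controlling Python process.
child_code = "import sys; print('to stdout'); sys.stderr.write('to stderr\\n')"
with open("output.log", "w") as out, open("err.log", "w") as err:
    proc = subprocess.Popen([sys.executable, "-c", child_code],
                            stdout=out, stderr=err)
    proc.wait()  # proc.pid is the child's pid; proc.poll() tells if it runs

print(open("output.log").read().strip())  # to stdout
print(open("err.log").read().strip())     # to stderr
```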
2
0
0
To begin with, I am only allowed to use python 2.4.4 I need to write a process controller in python which launches and various subprocesses monitors how they affect the environment. Each of these subprocesses are themselves python scripts. When executed from the unix shell, the command lines look something like this: python myscript arg1 arg2 arg3 >output.log 2>err.log & I am not interested in the input or the output, python does not need to process. The python program only needs to know 1) The pid of each process 2) Whether each process is running. And the processes run continuously. I have tried reading in the output and just sending it out a file again but then I run into issues with readline not being asynchronous, for which there are several answers many of them very complex. How can I a formulate a python subprocess call that preserves the bash redirection operations? Thanks
Preserving bash redirection in a python subprocess
0
0
0
1,449
12,960,276
2012-10-18T17:23:00.000
0
1
0
1
python,redirect,subprocess
12,961,812
3
false
0
0
The subprocess module is good. You can also do this on *ix with os.fork() and a periodic os.waitpid() with os.WNOHANG.
2
0
0
To begin with, I am only allowed to use python 2.4.4 I need to write a process controller in python which launches and various subprocesses monitors how they affect the environment. Each of these subprocesses are themselves python scripts. When executed from the unix shell, the command lines look something like this: python myscript arg1 arg2 arg3 >output.log 2>err.log & I am not interested in the input or the output, python does not need to process. The python program only needs to know 1) The pid of each process 2) Whether each process is running. And the processes run continuously. I have tried reading in the output and just sending it out a file again but then I run into issues with readline not being asynchronous, for which there are several answers many of them very complex. How can I a formulate a python subprocess call that preserves the bash redirection operations? Thanks
Preserving bash redirection in a python subprocess
0
0
0
1,449
12,960,535
2012-10-18T17:42:00.000
1
0
1
0
python,audio,join,split
17,218,449
2
false
0
0
If you prefer to make a solution yourself, you can also glue MP3 files together. They're composed of just a sequence of frames, so if you copy each file byte-by-byte into a single output file, it should work well. For the audio extraction, if you can rely on an external programme, you can use FFmpeg with the -ss and -t options, plus the -acodec copy switch, to preserve the original data.
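A minimal sketch of the byte-appending idea. The placeholder byte strings stand in for real MP3 frame data; strictly, ID3 tags should be stripped before joining, which this naive version ignores:

```python
# Naive MP3 "join": an MP3 stream is a sequence of frames, so appending
# the raw bytes of each chunk into one output file usually plays fine.
def concat_files(paths, out_path):
    with open(out_path, "wb") as out:
        for p in paths:
            with open(p, "rb") as f:
                out.write(f.read())

# demo with two placeholder "chunks" standing in for real mp3 data
open("chunk1.mp3", "wb").write(b"frame1")
open("chunk2.mp3", "wb").write(b"frame2")
concat_files(["chunk1.mp3", "chunk2.mp3"], "joined.mp3")
print(open("joined.mp3", "rb").read())  # b'frame1frame2'
```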
2
0
0
If I have several audio files in a directory, how would I go about writing a script that would take the audio from the time 10 seconds to 20 seconds from each file and concatenate these 10 second chunks together into a single audio file? Any specific packages, extensions, etc. which would be helpful? (I would prefer an answer for generic file types, but if a specific format is needed let's go with mp3) I would prefer python, but if you have an easy solution in some other language, please tell me. Thank you much!
Python - How to split and join audio?
0.099668
0
0
731
12,960,535
2012-10-18T17:42:00.000
1
0
1
0
python,audio,join,split
12,960,848
2
true
0
0
I've used mp3wrap from a bash script to catenate mp3's. The mp3wrap package includes an mp3splt executable. It's not Python, but you could subprocess them from Python. They appear to be written in C, based on a quick ldd.
2
0
0
If I have several audio files in a directory, how would I go about writing a script that would take the audio from the time 10 seconds to 20 seconds from each file and concatenate these 10 second chunks together into a single audio file? Any specific packages, extensions, etc. which would be helpful? (I would prefer an answer for generic file types, but if a specific format is needed let's go with mp3) I would prefer python, but if you have an easy solution in some other language, please tell me. Thank you much!
Python - How to split and join audio?
1.2
0
0
731
12,960,907
2012-10-18T18:04:00.000
-1
0
1
0
python,dictionary
12,960,956
4
false
0
0
Yes, it's possible. Iterate through every key in the dictionary and set the related value to 1.
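A short sketch of both ways to do it: mutate the dictionary in place with a loop, or rebuild it with dict.fromkeys:

```python
d = {"a": 3, "b": [1, 2], "c": None}

# In place: iterate over the keys and overwrite each value.
for key in d:
    d[key] = 1
print(d)  # {'a': 1, 'b': 1, 'c': 1}

# Rebuilding: fromkeys keeps the keys and assigns 1 everywhere.
d2 = dict.fromkeys(d, 1)
print(d2)  # {'a': 1, 'b': 1, 'c': 1}
```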
1
12
0
Is it possible to replace all values in a dictionary, regardless of value, with the integer 1? Thank you!
Python: Replace All Values in a Dictionary
-0.049958
0
0
34,084
12,964,666
2012-10-18T22:21:00.000
0
0
0
1
python,google-nativeclient
13,452,816
2
true
0
0
I got a solution, though not a direct one: I managed to redirect the HTTPS traffic through the HTTP proxy using a program called "proxifier". Works great.
1
1
0
For the last few days I have been trying ti install the Native Client SDK for chrome in Windows and/or Ubuntu. I'm behind a corporate network, and the only internet access is through an HTTP proxy with authentication involved. When I run "naclsdk update" in Ubuntu, it shows "urlopen error Tunnel connection failed: 407 Proxy Authentication Required" Can anyone please help ?
Installing Chrome Native Client SDK
1.2
0
1
1,326
12,964,736
2012-10-18T22:28:00.000
1
0
1
0
python
12,988,142
2
true
0
0
I'm posting this answer on behalf of an "Anonymous user" (presumably the OP who had not logged in properly?!) who had written this as an edit to the question (edit got rejected because it should have been posted as an answer -- which is what I'm doing now). EDIT 3 (and answer): Done. It turned out that win32com.client couldn't be imported more than once within the same process. And even for the first import it was necessary to regenerate the COM cache and make sure it's not read-only. We have worked around it by not using win32com at all, as it appeared it was only needed for MSXML. The XML processing was rewritten by using python's standard library xml.sax.
2
0
0
I understand the question might well be too vague, but just in case anyone seen this or can have any clues. I have an application which works just fine under python 2.5.1. But when running it on Python 2.5.4, it starts giving "NoneType is not callable". I have checked changelogs between the versions, but there are quite a few changes, and it's really hard to say what might be related. Can you give any hint please where to start digging further? EDIT: Clearly something is None, but the question is why it is not None under 2.5.1. I was hoping that there is a good known issue for this, so that this question might hit it. EDIT 2: It has just become clear that the error occurs when trying to import win32com.client. Any clues how to work around it?
Python 2.5.4 throws 'NoneType is not callable' whereas 2.5.1 works fine
1.2
0
0
162
12,964,736
2012-10-18T22:28:00.000
1
0
1
0
python
12,964,845
2
false
0
0
The place to start digging is your app. The error message is telling you some variable is None when you think it should point to an object. So, find that variable (the error message should tell you exactly where it is), then figure out why that variable contains None. There's a good chance your code is either silently catching an error, or it's making some assumption that is false. Find that error or assumption, and work backwards from there.
2
0
0
I understand the question might well be too vague, but just in case anyone seen this or can have any clues. I have an application which works just fine under python 2.5.1. But when running it on Python 2.5.4, it starts giving "NoneType is not callable". I have checked changelogs between the versions, but there are quite a few changes, and it's really hard to say what might be related. Can you give any hint please where to start digging further? EDIT: Clearly something is None, but the question is why it is not None under 2.5.1. I was hoping that there is a good known issue for this, so that this question might hit it. EDIT 2: It has just become clear that the error occurs when trying to import win32com.client. Any clues how to work around it?
Python 2.5.4 throws 'NoneType is not callable' whereas 2.5.1 works fine
0.099668
0
0
162
12,966,573
2012-10-19T02:34:00.000
2
0
0
0
python,ruby,session-state,web.py
12,966,697
1
false
1
0
Usually a session cookie is used to keep track of the session between client (in your case your Python web.py part) and server (in this case the Ruby server). Make sure you save the session cookie in your Python part when making the first request to the Ruby service. When making a second request, simply send the cookie information as well. I think Ruby will then treat it as the same session.
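A hedged sketch of the client side (the service URL is a placeholder; Python 3 stdlib used for illustration). A CookieJar captures the Ruby side's Set-Cookie header on the first request and sends it back on every later request made through the same opener:

```python
import http.cookiejar
import urllib.request

# The jar stores whatever session cookie the Ruby service sets.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# First call establishes the Ruby session and fills the jar:
# opener.open("http://ruby-service.example/login")
# Later calls reuse the opener, so the same session cookie is sent:
# opener.open("http://ruby-service.example/data")
print(len(jar))  # 0 until a real request stores a cookie
```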
1
1
0
I have an app running on python web.py. For a couple of data items, I contact a ruby service. I was wondering if I could, in some way, keep track of the session in ruby as well. Every time the python calls the ruby service, it treats it as a different session. Any help? Thanks! PS - This might not be trivial and I might have to pass the session variables as parameters to the ruby but I am curious. Having this will save me some time when I start refactoring.
Session variables on Ruby when called through a Python web service
0.379949
0
0
95
12,968,446
2012-10-19T06:20:00.000
1
0
0
0
python,image,filter,numpy,scipy
13,009,064
2
false
0
0
Even if Python did provide functionality to apply an operation to an NxM array without looping over it, the operation would still not be executed simultaneously in the background, since the number of instructions a CPU can handle per cycle is limited and thus no time could be saved. For your use case this might even be counterproductive, since the fields in your arrays probably have dependencies, and if you don't know in what order they are accessed this will most likely end up in a mess. Hugues provided some useful links about parallel processing in Python, but be careful when accessing the same data structure, such as an array, with multiple threads at the same time. If you don't synchronize the threads they might access the same part of the array at the same time and mess things up. And be aware, the number of threads that can effectively be run in parallel is limited by the number of processor cores.
1
0
1
I want to apply a 3x3 or larger image filter (gaussian or median) on a 2-d array. Though there are several ways for doing that such as scipy.ndimage.gaussian_filter or applying a loop, I want to know if there is a way to apply a 3x3 or larger filter on each pixel of a mxn array simultaneously, because it would save a lot of time bypassing loops. Can functional programming be used for the purpose?? There is a module called scipy.ndimage.filters.convolve, please tell whether it is able to perform simultaneous operations.
Python: Perform an operation on each pixel of a 2-d array simultaneously
0.099668
0
0
854
12,969,188
2012-10-19T07:15:00.000
3
0
1
0
python,hex
12,969,264
4
false
0
0
Why do you think that hex(0) should return 0x00? 0x0 is semantically correct and is the shortest representation of the hexadecimal 0. Consider this: when you write decimal zero, it is 0 not 00. Or e.g. 0x9 == 00009 == 9. And the latter is the natural non-redundant decimal representation of the number 9.
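If fixed-width output is what's wanted, format the number explicitly instead of relying on hex(), which always produces the shortest form:

```python
n = 9
print(hex(n))             # 0x9  (shortest form, no padding)
print("0x%02x" % n)       # 0x09 (pad the digits to two places)
print(format(n, "#04x"))  # 0x09 (width 4 counts the '0x' prefix)
```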
1
1
0
I'm using hex(int('00000000', 2)) to convert a binary string into hex. It works fine for all (output) values from 10 to FF, but its not padding 00 to 09 properly, and I'm seeing 0x0 to 0x9 instead of the 0x00 to 0x09 that I am expecting. What am I doing wrong?
Invalid hex values
0.148885
0
0
2,059
12,970,909
2012-10-19T09:08:00.000
1
0
1
0
python,python-2.7
12,972,735
4
false
0
0
All these libraries are automatically installed with Python, so there may have been a problem at installation time. You can try sudo apt-get update, though I am not sure about this. If this does not solve your issue, the better way is to completely uninstall Python and reinstall it.
3
0
0
I have a system that has installed Python 2.7, but not the standard libraries like minidom, etc. I guess it is a broken install. How can I fix that and install separately the standard libraries for that Python?
How can I install the standard libraries for a Python that doesn't have them?
0.049958
0
0
1,632
12,970,909
2012-10-19T09:08:00.000
1
0
1
0
python,python-2.7
12,971,177
4
false
0
0
(As it is part of the standard library, it is shipped with Python, so there is no need to download it explicitly.) Try importing like this: from xml.dom import minidom or import xml.dom.minidom
3
0
0
I have a system that has installed Python 2.7, but not the standard libraries like minidom, etc. I guess it is a broken install. How can I fix that and install separately the standard libraries for that Python?
How can I install the standard libraries for a Python that doesn't have them?
0.049958
0
0
1,632
12,970,909
2012-10-19T09:08:00.000
2
0
1
0
python,python-2.7
12,971,103
4
true
0
0
I think you should totally uninstall Python and reinstall a full-package version. The standard library comes with the install package; it's hard to separate it out.
3
0
0
I have a system that has installed Python 2.7, but not the standard libraries like minidom, etc. I guess it is a broken install. How can I fix that and install separately the standard libraries for that Python?
How can I install the standard libraries for a Python that doesn't have them?
1.2
0
0
1,632
12,972,530
2012-10-19T10:38:00.000
1
0
0
1
python,windows,shell,unix
12,973,389
2
false
0
0
Have you considered using Cygwin together with rsync? You could write a small bash script that uses rsync to fetch the files you need, and run this as a daily cron job.
2
2
0
I would like to write a script to automate a task which I do manually everyday. This task requires me to download some files from a UNIX server (Solaris) to my desktop (Windows XP) using WinSCP. Is there any way to copy/move the files from a path in the UNIX server to a path in my Windows XP PC using Python or shell script?
Copy file from UNIX to Windows using script
0.099668
0
0
1,745
12,972,530
2012-10-19T10:38:00.000
2
0
0
1
python,windows,shell,unix
12,972,627
2
false
0
0
If you are planning to use Python, then you can use the paramiko library. It has SFTP support. Once you have the file on Windows, use the shutil library to move it to your path on Windows.
2
2
0
I would like to write a script to automate a task which I do manually everyday. This task requires me to download some files from a UNIX server (Solaris) to my desktop (Windows XP) using WinSCP. Is there any way to copy/move the files from a path in the UNIX server to a path in my Windows XP PC using Python or shell script?
Copy file from UNIX to Windows using script
0.197375
0
0
1,745
12,976,652
2012-10-19T14:43:00.000
0
0
0
1
python,google-app-engine
12,980,347
3
false
1
0
I misunderstood part of your problem, I thought you were issuing a query that was giving you 250 entities. I see what the problem is now, you're issuing an IN query with a list of 250 phone numbers, behind the scenes, the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops. I can't think of a way to avoid this. I'd recommend avoiding searching on long lists of phone numbers. This seems like something you'd need to do only once, the first time the user logs in using that phone. Try to find some way to store the results and avoid the query again.
1
3
0
A user accesses his contacts on his mobile device. I want to send back to the server all the phone numbers (say 250), and then query for any User entities that have matching phone numbers. A user has a phone field which is indexed. So I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats, and is this damn expensive. It cost me 250 reads for this one operation, and this is something I expect a user to do often. What are some alternatives? I suppose I can set the User entity's id value to be his phone number (i.e when creating a user I'd do user = User(id = phone_number)), and then get directly by keys via ndb.get_multi(phones), but I also want to perform this same query with emails too. Any ideas?
Efficient way to do large IN query in Google App Engine?
0
1
0
176
12,977,110
2012-10-19T15:05:00.000
3
0
0
1
python,google-app-engine
13,069,255
1
true
1
0
The entity ID forms part of the primary key for the entity, so there's no way to change it. Changing it is identical to creating a new entity with the new key and deleting the old one - which is one thing you can do, if you want. A better solution would be to create a PhoneNumber kind that provides a reference to the associated User, allowing you to do lookups with get operations, but not requiring every user to have exactly one phone number.
1
2
0
I'm using Google App Engine NDB. Sometimes I will want to get all users with a phone number in a specified list. Using queries is extremely expensive for this, so I thought I'll just make the id value of the User entity the phone number of the user so I can fetch directly by ids. The problem is that the phone number field is optional, so initially a User entity is created without a phone number, and thus no value for id. So it would be created user = User() as opposed to user = User(id = phone_number). So when a user at a later point decides to add a phone number to his account, is there anyway to modify that User entity's id value to the new phone number?
Modify a Google App Engine entity id?
1.2
0
0
1,404
12,977,441
2012-10-19T15:22:00.000
5
0
1
0
python,python-3.x
12,977,721
3
false
0
0
You can use the -i interpreter option: python -i -c "import os" will import the os module and then drop into the interactive read/eval loop (note that -i must come before -c, otherwise it is treated as an argument to the command rather than an interpreter option). You can also put some statements (imports, definitions, etc.) in a file and load it with python -i <file.py>
1
1
0
I want to bring up a python window (could be idle or cmd based) with some packages already imported by double clicking a python script. Is this possible? If so, how do I do it?
How can I bring up a python shell with packages already imported with a python script?
0.321513
0
0
115
12,981,607
2012-10-19T19:58:00.000
2
0
0
0
python,3d,rendering,panda3d
15,449,900
2
false
0
0
You can use a buffer with setOneShot enabled to make it render only a single frame. You can start Panda3D without a window by setting the "window-type" PRC variable to "none", and then opening an offscreen buffer yourself. (Note: offscreen buffers without a host window may not be supported universally.) If you set "window-type" to "offscreen", base.win will actually be a buffer (which may be a bit easier than having to set up your own), after which you can call base.graphicsEngine.render_frame() to render a single frame while avoiding the overhead of the task manager. You have to call it twice because of double-buffering. Yes. Panda3D is used by Disney for some of their MMORPGs. I do have to add that Panda's high-level networking interfaces are poorly documented. You can calculate the transformation from an object to the camera using nodepath.get_transform(base.cam), which is a TransformState object that you may optionally convert into a matrix using ts.get_mat(). This is surprisingly fast, since Panda maintains a composition cache for transformations so that this doesn't have to happen multiple times. You can get the projection matrix (from view space to clip space) using lens.get_projection_mat() or the inverse using lens.get_projection_mat_inv(). You may also access the individual vertex data using the Geom interfaces; this is described in detail in the Panda3D manual. You can use set_color to change the base colour of the object (replacing any vertex colours), or you can use set_color_scale to tint the objects, i.e. applying a colour that is multiplied with the existing colour. You may also apply a Material object if you use lights and want to use different colours for the diffuse, specular and ambient components.
1
1
1
I would like to use Panda3D for my personal project, but after reading the documentation and some example source code, I still have a few questions: How can I render just one frame and save it to a file? In fact I would need to render 2 different images: a single object, and a scene of multiple objects including the previous single object, but just one frame for each, and both need to be saved as image files. The application will be coded in Python, and needs to be very scalable (be used by thousands of users). Would Panda3D fit the bill here? (As for my Python program, it's almost constant complexity so no problem there, and 3D models will be low-poly, about 5 to 20 per scene.) I need to calculate the perspective projection of every object to the camera. Is it possible to directly access the vertices and faces (position, parameters, etc.)? Can I recolor my 3D objects? I need to set a simple color for the whole object, but a different color per object. Is it possible? Please also note that I'm quite a newbie in the field of graphical and game development, but I know some bits of 3D modelling and 3D theory, as well as computer imaging theory. Thank you for reading me. PS: My main alternative currently is to use Soya3D or PySoy, but they don't seem to be very actively developed nor optimized, so although they would both have a smaller memory footprint, I don't know if they would really perform faster than Panda3D since they're not very optimized...
Panda3D and Python, render only one frame and other questions
0.197375
0
0
2,225
12,990,315
2012-10-20T16:17:00.000
0
0
1
0
python,graph,plot,interpolation
12,990,987
2
false
0
0
If you use matplotlib, you can just call plot(X1, Y1, 'bo', X2, Y2, 'r+'). Change the formatting as you'd like, but it can cope with different lengths just fine. You can provide more than two without any issue.
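Since the underlying need is to resample one series onto the other's time base, here is a minimal pure-Python linear interpolation sketch (assuming the time values are sorted in ascending order); numpy.interp does the same thing in a single call if numpy is available:

```python
def interp(x, xp, yp):
    """Linearly interpolate y at position x given sample points (xp, yp).

    xp must be sorted ascending. Values outside the range are clamped
    to the endpoint values, matching numpy.interp's default behaviour.
    """
    if x <= xp[0]:
        return yp[0]
    if x >= xp[-1]:
        return yp[-1]
    # Find the interval [xp[i], xp[i+1]] containing x and blend linearly.
    for i in range(len(xp) - 1):
        if xp[i] <= x <= xp[i + 1]:
            t = (x - xp[i]) / (xp[i + 1] - xp[i])
            return yp[i] + t * (yp[i + 1] - yp[i])
```

To plot Y1 against Y2, resample Y1 at the X2 time points — `y1_at_x2 = [interp(x, X1, Y1) for x in X2]` — and then plot y1_at_x2 versus Y2.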
1
3
1
I have four one dimensional lists: X1, Y1, X2, Y2. X1 and Y1 each have 203 data points. X2 and Y2 each have 1532 data points. X1 and X2 are at different intervals, but both measure time. I want to graph Y1 vs Y2. I can plot just fine once I get the interpolated data, but can't think of how to interpolate data. I've thought and researched this a couple hours, and just can't figure it out. I don't mind a linear interpolation, but just can't figure out a way.
Data interpolation in python
0
0
0
2,501
12,991,111
2012-10-20T17:52:00.000
0
1
0
0
java,python,visual-studio-2010,arcgis,arcobjects
32,124,775
3
false
1
0
Creating a Python AddIn is probably the quickest and easiest approach if you just want to do some geoprocessing and deploy the tool to lots of users. But as soon as you need a user interface (that does more than simply select GIS data sources) you should create a .NET AddIn (using either C# or VB.NET). I've created many AddIns over the years and they are a dramatic improvement over the old ArcGIS "plugins" that involved lots of complicated COM registration. AddIns are easy to build and deploy, and easy for users to install and uninstall. .NET has excellent, powerful features for creating rich user interfaces with the kind of drag and drop that you require, and there are great books, forums, and samples to leverage.
1
0
0
I've been tasked with a thesis project where I have to extend the features of ArcGIS. I've been asked to create a model written in Python that can run out of ArcGIS 10. This model will have a simple user interface where the user can drag/drop a variety of shapefiles and enter the values for particular variables in order for the model to run effectively. Once the model has finished running, a new shapefile is created that lays out the most cost-effective collector cable route for a wind turbine from point A to point B. I'd like to know if such functionality/an extension already exists in ArcGIS so I don't have to re-invent the wheel. If not, then what is the best programming language to learn to extend ArcGIS for this (Python vs. Visual Basic vs. Java)? My background is Java, PHP, jQuery and JavaScript. Also, any pointers in the right direction, i.e. documentation, resources, etc., would be hugely appreciated.
Extending ArcGIS
0
0
0
727
12,991,858
2012-10-20T19:24:00.000
0
0
1
0
python,oop
12,991,951
2
false
0
0
In short, you can't. There is no such thing as a reference to an object attribute, only references to objects. You'd indeed have to store a reference to an object and an identifier that denotes a storage location in that object (it doesn't have to be a string naming an attribute; it could be a key for a dictionary that is a member, or an index into a list). However, you don't necessarily need that information. It may make more sense to model these relations differently, and/or to not give the objects themselves that knowledge, but treat them as a graph and perform traversals over that graph. This gives you knowledge of the endpoints of each reference, without explicitly recording it.
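One way to model it is a small Terminal object that records both the component and which side it is, so a Node can follow the reference back to a specific terminal (the class and attribute names below are illustrative, not taken from the question's code):

```python
class Terminal:
    """A (component, polarity) pair that can be handed around as one object."""
    def __init__(self, component, polarity):
        self.component = component
        self.polarity = polarity  # "positive" or "negative"
        self.node = None

    def connect(self, node):
        # Wire up both directions of the relationship in one place.
        self.node = node
        node.terminals.append(self)

class Component:
    def __init__(self, name):
        self.name = name
        self.positive = Terminal(self, "positive")
        self.negative = Terminal(self, "negative")

class Node:
    def __init__(self):
        self.terminals = []
```

With this, `component1.positive.connect(node1)` gives the node a list of Terminal objects, each of which knows both its component and whether it is the positive or negative end.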
1
0
0
I am working on a project that attempts to represent an electronic circuit. This problem doesn't deal with circuit theory, just with connections between objects. The problem is I need to make a connection between two objects in two different ways. I have a Component and a Node. A component has two terminals (positive and negative), each of which connects to a node. A node can have many different terminals connected to it. So, I can have component1.positive = node1 But if I wanted to also do node1.add_terminal( component1.positive ) That would just give node1 a reference to itself. I would like to be able to have Nodes contain a collection of which terminals of which components connect to it or reference it, without having to write node1.add_terminal( component1, "positive") or something similar. So, is there a way to store "component1.positive", so that it can be followed back to a Component and the specific terminal of that component? Or is there another way to represent this many-to-one and one-to-many relationship? EDIT: It's important that the Node object can tell which of the two terminals of the component it was connected to.
Python - Making a reference to an objects attribute
0
0
0
143
12,995,006
2012-10-21T04:34:00.000
1
1
1
0
python,simics
13,001,532
1
false
0
0
In Simics 4.x, eval_cli_line has been replaced with run_command(). Read the migration guide.
1
0
0
eval_cli_line("cache_%s" % cpu.name + ".ptime") in my python script is constantly giving the following error NameError: global name 'eval_cli_line' is not defined Any suggestions ?
Python eval_cli_line()
0.197375
0
0
241
12,995,925
2012-10-21T07:40:00.000
1
0
1
0
python,ctypes
12,996,825
2
true
0
1
The easiest way is to attach the CFUNCTYPE objects to a module or class. I've found it useful to have a Python wrapper module - in essence a separate file that keeps everything at the ctypes level in one namespace. You could add and remove your callbacks from there as necessary.
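A minimal sketch of that keep-alive pattern, attaching the CFUNCTYPE object to a class so it lives for the program's lifetime (pure ctypes; here the callback is invoked directly from Python rather than from C, which exercises the same generated thunk):

```python
import ctypes

# Callback signature: int callback(int)
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

class Callbacks:
    """Namespace that owns the CFUNCTYPE objects.

    As long as the attribute exists, the underlying trampoline is not
    garbage collected, so C code holding the function pointer stays safe.
    """
    on_event = CALLBACK(lambda value: value * 2)

# A real program would now pass Callbacks.on_event to the C library;
# removing the attribute (and any other references) releases it again.
```

Compared with maintaining ad-hoc global lists, this keeps the lifetime management next to the callback definitions in one namespace.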
1
1
0
I am creating CFUNCTYPE objects in my script and passing them as callbacks to C. If I make them local they are destroyed immediately and I get access violation errors. I tried gc.disable(); it doesn't help. Currently I'm keeping global lists and adding/removing CFUNCTYPE objects as needed. Is there a better solution?
How to prevent FUNCTYPE from being collected
1.2
0
0
265
12,997,225
2012-10-21T11:03:00.000
0
0
1
0
python,macos
12,997,556
1
false
0
0
I'm guessing the performance drop is because it is swapping stuff in and out of memory. I don't think the issue is how long the program is running - Python uses a garbage collector, so it doesn't have memory leaks. Well, that's not entirely true. The garbage collector will make sure that it deletes anything that isn't reachable. (In other words, it will not delete something that you could conceivably access.) However, it's not smart enough to detect when a data structure won't be used for the rest of the program; you may need to help it by setting all references to the structure to None so the memory can be reclaimed. Could you post the code involved? Is this a program where you need a given record more than once? Is the order in which you load records important for your program? If each Python process has only a few gigabytes of memory allocated, why do you have 300 GB of used memory?
1
0
0
I wrote a program to crunch some data, about 2.2 million records. Each record passes through a series of 20 computations that take about 0.01 second in total. To make it run faster, I use Python multiprocessing: I break my data into 6 blocks and run them in parallel, with a master process distributing payloads to the worker processes and coordinating execution. BTW, as a result of the computations, the program will write out about 22 million records to the database. I'm running this on a MacBook Pro i7 2.2GHz with 8GB RAM running Python 3.2.2. The data is on a local MySQL server. The program starts well - running in a predictable manner, CPU on average 60-70% utilized, and my MacBook Pro heats up like an oven. Then it slows down after running for about 5 hours, with CPU utilization dropping to 20% on average per core. Some observations I made then are: - each Python process is consuming about 480 MB real RAM and about 850 MB virtual RAM. There are 6 of these heavy processes in total - The total virtual memory consumed by OSX (as shown by Activity Monitor) is about 300GB I suspect the performance degradation is due to huge memory consumption and potentially heavy page swapping. How can I better diagnose these symptoms? Is there any problem with Python running with large in-memory objects for a long period of time? Really, I don't think running 6 hours is heavy for today's technology, but heck, I have only about half a year of Python experience, so... what do I know?! Thanks!
Strange Python performance degradation after running a long time and consuming large memory footprint
0
0
0
763
12,999,262
2012-10-21T15:33:00.000
5
0
0
0
python,amazon-web-services,boto,amazon-dynamodb
13,003,481
1
true
1
0
The item_count value is only updated every six hours or so. So, I think boto is returning you the value as it is returned by the service but that value is probably not up to date.
1
3
0
I'm using the Python framework Boto to interact with AWS DynamoDB. But when I use "item_count" it returns the wrong value. What's the best way to retrieve the number of items in a table? I know it would be possible using the Scan operation, but this is very expensive on resources and can take a long time if the table is quite large.
Boto's DynamoDB API returns wrong value when using item_count
1.2
0
1
592
12,999,970
2012-10-21T16:51:00.000
0
1
0
0
python,keyboard,signals
36,716,865
1
false
0
0
Have a parent process that monitors keyboard input and forwards a signal to the child when input occurs.
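The child's side of that arrangement looks just like the existing Ctrl-C handler: register a handler for SIGUSR1, and the parent sends the signal with os.kill. A minimal POSIX-only sketch (here the signal is sent to the process itself to keep the example self-contained; a real parent would call os.kill with the child's pid):

```python
import os
import signal

received = []

def on_usr1(signum, frame):
    # In the real program this would trigger the motor-control action,
    # mirroring what the existing SIGINT handler does for cleanup.
    received.append(signum)

# The child installs the handler once at startup.
signal.signal(signal.SIGUSR1, on_usr1)

# Simulate the parent forwarding the signal (normally: os.kill(child_pid, ...)).
os.kill(os.getpid(), signal.SIGUSR1)
```

SIGUSR2 works the same way if a second asynchronous command channel is needed.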
1
4
0
My Python 2.7 script (on Raspberry Pi Debian) runs a couple of stepper motors synchronously via the GPIO port. I currently have a signal handler in place for Ctrl-C to clean up tidily before exit. I'd now like to extend that method so that keyboard inputs could also generate SIGUSR1 or similar as an asynchronous control mechanism. I know this could be achieved through threading, but I'm after a KISS approach. Ta
Getting keyboard inputs to cause SIGUSR1/2 akin to ctrl-C / SIGINT to trigger signal_handler
0
0
0
415
13,000,007
2012-10-21T16:56:00.000
1
0
0
0
postgresql,content-management-system,python-2.7
13,003,890
2
false
1
0
Have you tried Drupal? Drupal supports PostgreSQL, is written in PHP, and is open source.
1
5
0
Requirements: PHP or Python; must use and connect to our existing Postgres databases; open source or very low license fees; common CMS features, with admin tools to help manage/moderate the community. We have a large member base on a very basic site where members provide us contact info and info about their professional characteristics. We are about to expand and build a new community site (to migrate our member base to) where users will be able to message each other, post to forums, blog, and share private group discussions, and members will be sent invitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be a plus. We already have a team of admins savvy with web apps to help manage it, but our developer resources are limited (3-4 programmers), so we're looking to save development time as opposed to building the new site from scratch.
What is a good cms that is postgres compatible, open source and either php or python based?
0.099668
1
0
5,703
13,001,913
2012-10-21T20:32:00.000
1
0
1
0
python,hash,dictionary,equality
13,001,958
3
false
0
0
__hash__ defines the bucket the object is put into, __eq__ gets called only when objects are in the same bucket.
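A small illustration of that division of labour: instrumenting both methods with counters shows that __hash__ runs on every insert and lookup, while __eq__ only runs to confirm a candidate found in the same bucket:

```python
class Key:
    """Hashes on `value`; counts how often __hash__ and __eq__ run."""
    hash_calls = 0
    eq_calls = 0

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        Key.hash_calls += 1
        return hash(self.value)

    def __eq__(self, other):
        Key.eq_calls += 1
        return isinstance(other, Key) and self.value == other.value

d = {Key(1): "first"}           # __hash__ runs on insert
present = Key(2 - 1) in d       # __hash__ finds the bucket, __eq__ confirms
absent = Key(2) in d            # __hash__ only: different bucket, no __eq__
```

This is why equal objects must have equal hashes: if __hash__ disagreed with __eq__, the lookup would land in the wrong bucket and __eq__ would never get the chance to report a match.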
2
6
0
I have a class (let's call it myClass) that implements both __hash__ and __eq__. I also have a dict that maps myClass objects to some value, computing which takes some time. Over the course of my program, many (on the order of millions) myClass objects are instantiated. This is why I use the dict to keep track of those values. However, sometimes a new myClass object might be equivalent to an older one (as defined by the __eq__ method). So rather than compute the value for that object again, I'd rather just look up the value of the older myClass object in the dict. To accomplish this, I do if myNewMyClassObj in dict. Here's my question: When I use that in clause, what gets called, __hash__ or __eq__? The point of using a dict is that it's O(1) lookup time. So then __hash__ must be called. But what if __hash__ and __eq__ aren't equivalent methods? In that case, will I get a false positive for if myNewMyClassObj in dict? Follow-up question: I want to minimize the number of entries in my dict, so I would ideally like to keep only one of a set of equivalent myClass objects in the dict. So again, it seems that __eq__ needs to be called when computing if myNewClassObj in dict, which would degrade a dict's O(1) lookup time to O(n)
What happens when you call `if key in dict`
0.066568
0
0
607