Dataset columns (name: dtype, observed range):
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
12,167,396
2012-08-28T20:55:00.000
0
0
0
0
python,user-interface,wxpython
12,167,427
2
false
0
1
Hmm... my_text_control.SetValue("some value to set")? If you clarify your question, I'll clarify my answer.
1
0
0
How can I add extra data to wx.TextCtrl widgets? At the moment I'm using the GetToolTipString() method to add the extra data but this is obviously wrong.
How to add extra data to a wx.TextCtrl?
0
0
0
112
12,167,452
2012-08-28T21:00:00.000
5
0
1
0
python,string,function
12,167,479
3
true
0
0
getattr(moduleA, 'func1')() == moduleA.func1()
2
1
0
I got this: tu = ("func1", "func2", "func3") And with the operation I am looking for I would get this for the first string: moduleA.func1() I know how to concatenate strings, but is there a way to join into a callable string?
Join two strings into a callable string 'moduleA' + 'func1' into moduleA.func1()
1.2
0
0
186
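A minimal sketch of the accepted getattr approach above, using the question's hypothetical moduleA and function names:

```python
import moduleA  # hypothetical module from the question

tu = ("func1", "func2", "func3")
for name in tu:
    func = getattr(moduleA, name)  # look the attribute up by its string name
    func()                         # ...then call it like any other function
```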
12,167,452
2012-08-28T21:00:00.000
0
0
1
0
python,string,function
12,167,492
3
false
0
0
If you mean getting a function or method from a class or module: all entities (including classes, modules, functions, and methods) are objects, so you can do func = getattr(thing, 'func1') to get the function, then func() to call it.
2
1
0
I got this: tu = ("func1", "func2", "func3") And with the operation I am looking for I would get this for the first string: moduleA.func1() I know how to concatenate strings, but is there a way to join into a callable string?
Join two strings into a callable string 'moduleA' + 'func1' into moduleA.func1()
0
0
0
186
12,168,110
2012-08-28T21:58:00.000
-1
0
1
0
python,permissions,active-directory,share
12,184,034
8
false
0
0
For starters, the user's profile directory is created automatically if it does not exist, and the permissions are set to reasonable defaults. Unless you have a specific need to use Python, you could just let Windows create the folder and sort permissions out for you. If you wish to use Python anyway, you could consider just using os.system() to call cacls or icacls with the correct arguments. And instead of permissions, you might simply need to change the folder's owner to the user who will own the folder. Good luck with your endeavours.
1
29
0
I'm using Python to create a new personal folder when a user's AD account is created. The folder is being created, but the permissions are not correct. Can Python add the user to the newly created folder and change their permissions? I'm not sure where to begin coding this.
How to set folder permissions in Windows?
-0.024995
0
0
48,831
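A sketch of the answer's icacls suggestion, using subprocess instead of os.system for safer argument handling; the folder path and account name are placeholders:

```python
import subprocess

folder = r"C:\Users\jdoe"   # placeholder profile path
account = r"DOMAIN\jdoe"    # placeholder AD account

# Grant full control, inherited by subfolders and files, then hand over
# ownership, as the answer suggests. icacls ships with modern Windows.
subprocess.run(["icacls", folder, "/grant", account + ":(OI)(CI)F"], check=True)
subprocess.run(["icacls", folder, "/setowner", account], check=True)
```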
12,173,541
2012-08-29T08:16:00.000
5
0
0
0
python,matplotlib,sublimetext2,sublimetext
12,610,165
2
true
0
0
I had the same problem and the following fix worked for me: 1. Open Sublime Text 2 -> Preferences -> Browse Packages. 2. Go to the Python folder and select the file Python.sublime-build. 3. Replace the existing cmd line with this one: "cmd": ["/Library/Frameworks/Python.framework/Versions/Current/bin/python", "$file"]. Then press Cmd+B and your script with matplotlib stuff will work.
1
3
1
I want to use matplotlib from my Sublime Text 2 directly via the build command. Does anybody know how I accomplish that? I'm really confused about the whole multiple-Python-installations/environments thing. Google didn't help. My Python is installed via homebrew, and in my terminal (which uses brew Python) I have no problem importing matplotlib. But Sublime Text shows me an ImportError (No module named matplotlib.pyplot). I installed matplotlib via EPD Free. The main matplotlib .dmg installer refused to install it on my disk, because no system version 2.7 was found. I have given up on understanding the whole thing; I just want it to work. And, I have to say, for every bit of joy Python brings with it, the whole business of installations, versions, paths, and environments is a real hassle. Besides help with this specific problem, I would appreciate any helpful link toward understanding this environment mess.
MatPlotLib with Sublime Text 2 on OSX
1.2
0
0
3,245
12,175,404
2012-08-29T10:01:00.000
13
0
0
0
python,machine-learning,scikit-learn,mixture-model
12,199,026
1
true
0
0
Positive log probabilities are okay. Remember that the GMM computed probability is a probability density function (PDF), so can be greater than one at any individual point. The restriction is that the PDF must integrate to one over the data domain. If the log probability grows very large, then the inference algorithm may have reached a degenerate solution (common with maximum likelihood estimation if you have a small dataset). To check that the GMM algorithm has not reached a degenerate solution, you should look at the variances for each component. If any of the variances is close to zero, then this is bad. As an alternative, you should use a Bayesian model rather than maximum likelihood estimation (if you aren't doing so already).
1
7
1
I am using the Gaussian Mixture Model from the Python scikit-learn package to train my dataset. However, I found that when I code G=mixture.GMM(...), G.fit(...), G.score(some_feature), the resulting log probability is a positive real number. Why is that? Isn't log probability guaranteed to be negative? I get it: what the Gaussian Mixture Model returns is the log probability "density" instead of probability "mass", so a positive value is totally reasonable. If the covariance matrix is near singular, then the GMM will not perform well, and generally it means the data is not good for such a generative task.
scikit-learn GMM produce positive log probability
1.2
0
0
5,301
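A small sketch illustrating the answer's two points; it uses the modern GaussianMixture class rather than the deprecated mixture.GMM from the question:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Tightly clustered data: the fitted density exceeds 1 near the mean,
# so the log-density returned by score_samples is legitimately positive.
X = np.random.randn(1000, 1) * 0.01
gmm = GaussianMixture(n_components=1).fit(X)
print(gmm.score_samples(X[:5]))  # positive values are fine for a PDF
print(gmm.covariances_)          # near-zero variances would signal degeneracy
```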
12,178,659
2012-08-29T13:05:00.000
1
0
0
1
python,pcap,scapy
13,545,284
2
false
0
0
Open the pcap file with pcap_open_offline(), compile the filter "arp" with pcap_compile(), set the filter on the pcap_t * to the resulting filter with pcap_setfilter(), and then read the packets from that pcap_t *.
1
1
0
I have written code which sniffs packets on the network, filters them according to MAC address, and stores them in a .pcap file. Now I want to add a function to the code which can read the .pcap file, or the object that holds the sniffed packets, and filter it again to get ARP packets. I tried using the PCAP library's bpf function but it doesn't help. Any other way this might work?
Using BPF on a PCAP file
0.099668
0
0
993
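The answer above describes the libpcap C API; since the question uses Scapy, here is a hedged Python equivalent. Scapy's sniff accepts an offline capture plus a BPF filter (filtering offline captures relies on tcpdump being installed), and the filename is a placeholder:

```python
from scapy.all import sniff

arp_packets = sniff(offline="capture.pcap", filter="arp")  # placeholder file
arp_packets.summary()
```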
12,179,271
2012-08-29T13:37:00.000
0
0
1
0
python,oop,static-methods,class-method
41,840,490
12
false
0
0
A slightly different way to think about it that might be useful for someone... A class method is used in a superclass to define how that method should behave when it's called by different child classes. A static method is used when we want to return the same thing regardless of the child class that we are calling.
3
1,809
0
Could someone explain to me the meaning of @classmethod and @staticmethod in python? I need to know the difference and the meaning. As far as I understand, @classmethod tells a class that it's a method which should be inherited into subclasses, or... something. However, what's the point of that? Why not just define the class method without adding @classmethod or @staticmethod or any @ definitions? tl;dr: when should I use them, why should I use them, and how should I use them?
Meaning of @classmethod and @staticmethod for beginner?
0
0
0
721,840
12,179,271
2012-08-29T13:37:00.000
279
0
1
0
python,oop,static-methods,class-method
12,179,325
12
false
0
0
@classmethod means: when this method is called, we pass the class as the first argument instead of the instance of that class (as we normally do with methods). This means you can use the class and its properties inside that method rather than a particular instance. @staticmethod means: when this method is called, we don't pass an instance of the class to it (as we normally do with methods). This means you can put a function inside a class but you can't access the instance of that class (this is useful when your method does not use the instance).
3
1,809
0
Could someone explain to me the meaning of @classmethod and @staticmethod in python? I need to know the difference and the meaning. As far as I understand, @classmethod tells a class that it's a method which should be inherited into subclasses, or... something. However, what's the point of that? Why not just define the class method without adding @classmethod or @staticmethod or any @ definitions? tl;dr: when should I use them, why should I use them, and how should I use them?
Meaning of @classmethod and @staticmethod for beginner?
1
0
0
721,840
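A minimal sketch of the distinction drawn in the answer above:

```python
class Greeter:
    greeting = "hello"

    @classmethod
    def from_greeting(cls, greeting):
        # cls is the class itself, so subclasses get subclass instances
        obj = cls()
        obj.greeting = greeting
        return obj

    @staticmethod
    def shout(text):
        # neither the instance nor the class is passed in
        return text.upper() + "!"

print(Greeter.shout("hi"))                    # HI!
print(Greeter.from_greeting("hey").greeting)  # hey
```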
12,179,271
2012-08-29T13:37:00.000
1
0
1
0
python,oop,static-methods,class-method
45,264,634
12
false
0
0
I'm a beginner on this site. I have read all the answers above and got the information I wanted, but I don't yet have the right to upvote, so I want to make my start on Stack Overflow with this answer as I understand it. @staticmethod doesn't need self or cls as the first parameter of the method. Functions wrapped by @staticmethod or @classmethod can be called through an instance or through the class. A @staticmethod-decorated function has a kind of 'immutable property' in that its behavior does not depend on which class it is called through, since neither the instance nor the class is passed in. @classmethod needs cls (the class; you can rename the parameter, but it's not advised) as the first parameter of the function. @classmethod is typically used in a subclass-aware manner: because cls refers to whichever subclass makes the call, the effect of a @classmethod-wrapped base-class function can vary across subclasses.
3
1,809
0
Could someone explain to me the meaning of @classmethod and @staticmethod in python? I need to know the difference and the meaning. As far as I understand, @classmethod tells a class that it's a method which should be inherited into subclasses, or... something. However, what's the point of that? Why not just define the class method without adding @classmethod or @staticmethod or any @ definitions? tl;dr: when should I use them, why should I use them, and how should I use them?
Meaning of @classmethod and @staticmethod for beginner?
0.016665
0
0
721,840
12,182,882
2012-08-29T16:53:00.000
2
0
0
1
continuous-integration,hudson,shutdown,python-idle
12,382,944
2
true
1
0
Solution: disable the thinBackup plugin. I figured this out by taking a look at the Hudson logs at http://localhost:8080/log/all: thinBackup was running every time the Hudson instance went into shutdown mode. The fact that shutdown mode was occurring at periods of inactivity is also consistent with the behavior of thinBackup. I then disabled the plug-in and Hudson no longer enters shutdown mode. What's odd is that thinBackup had been installed for some time before this problem started occurring. I am seeking a solution from thinBackup to re-enable the plugin without the negative effects and will update here if I get an answer.
1
1
0
After several months of successful and unadulterated continuous integration, my Hudson instance, running on Mac OSX 10.7.4 Lion, decides it wants to enter shutdown mode after every 20-30 minutes of inactivity. For those of you familiar with shutdown mode, the instance of course doesn't shutdown, but has the undesirable effect (in this case) of stopping new jobs from starting. I know I haven't changed any settings, so it makes me think the problem was slowly growing and keeps triggering shutdown mode. I know there is plenty of storage space on the machine with 400+ GB to go so I'm wondering what else would trigger shutdown mode without actually using the Hudson web portal to manually do it. As mentioned before, the problem also seems to be tied to inactivity. I tried creating a quick fix, which is a build job that does nothing every 5 minutes. It appeared to work at first, but after long periods of inactivity I will find it back in shutdown mode. Any ideas what might be going on?
How to prevent Hudson from entering shutdown mode automatically or when idle?
1.2
0
0
864
12,183,730
2012-08-29T17:53:00.000
0
0
0
0
python,django
12,183,778
2
false
1
0
For this you would have some kind of editor that creates an HTML string. This string would be stored in your database and then, upon request, displayed on the user's site. The editor should be very strict about what it can add and what the user has control over; there are some JavaScript editors available that provide this functionality. The only issue I can think of is that you may run into Django escaping the form when it is displayed on the page.
1
0
0
For example: I have a user that wants to create a contact form for their personal website. They want three input type=text and one textarea and they specify a label and an name/id for them on my site. Then they can use this form on their site, but I will handle it on mine? Is it possible for django to spit out custom forms specified by the user? Edit: If django is too "locked down" what would you recommend I do? I would like to stay with python.
Is it possible to have a form built by a user?
0
0
0
71
12,183,759
2012-08-29T17:55:00.000
0
0
0
1
python,numpy,scipy,apache-pig
12,618,627
1
false
0
0
You can stream through a (C)Python script that imports scipy. I am for instance using this to cluster data inside bags, using import scipy.cluster.hierarchy
1
1
1
I want to write UDFs in Apache Pig. I'll be using Python UDFs. My issue is I have tons of data to analyse and need packages like NumPy and SciPy. But since they don't have Jython support, I can't use them along with Pig. Do we have a substitute?
Using Numpy and SciPy on Apache Pig
0
0
0
524
12,184,372
2012-08-29T18:35:00.000
5
0
0
0
python,boto,amazon-dynamodb
12,184,769
1
true
0
0
It is only a matter of abstraction levels. In most cases, you will want to use the highest-level API. The Layer1 API is a direct mapping of Amazon's API; the Layer2 API adds some nice abstractions, like a generator for scan and query results, as well as answer cleaning. When you call Layer2, it calls Layer1, which ends up generating the HTTP calls.
1
3
0
Which should be used and for what? Is there any advantage to one over the other?
In Python Boto's API for DynamoDB, what are the differences between Layer1 and Layer2?
1.2
0
1
271
12,184,739
2012-08-29T18:56:00.000
2
0
0
0
python,tkinter
12,184,877
1
true
0
1
Solution: When creating the entry widgets in the popup, set their exportselection property to 0. Then selecting them won't affect any other selections.
1
1
0
I am noticing an issue in my Tkinter application when I create a new Toplevel popup (actually a subclass of tkSimpleDialog.Dialog) and try to navigate through its widgets with the Tab key. It works as expected, except whatever I had selected in a Listbox in my application's main window becomes unselected, as if the widget in the popup took the focus from it. Does anyone know why this is happening and how to prevent it? My Tkinter knowledge doesn't cover how interactions between windows affect the focus...
Widgets in a Tkinter popup taking Tab focus from widgets in another window
1.2
0
0
236
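A minimal sketch of the accepted fix above; the widget layout stands in for the asker's dialog:

```python
import tkinter as tk  # "Tkinter" on Python 2, as in the question

root = tk.Tk()
listbox = tk.Listbox(root)
listbox.insert(tk.END, "stays selected")
listbox.pack()
listbox.selection_set(0)

popup = tk.Toplevel(root)
# exportselection=0 stops the entry from claiming the X selection,
# so the listbox selection in the main window survives tabbing here.
entry = tk.Entry(popup, exportselection=0)
entry.pack()
root.mainloop()
```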
12,185,006
2012-08-29T19:16:00.000
3
0
1
0
python,concurrency,python-3.x
13,961,911
1
false
0
0
map is used to call a single function on one or more iterables, while submit generates a Future object for a single function call with its associated arguments. Think of concurrent.futures' map as just a parallel version of the built-in map function; use submit when you want a Future per call.
1
4
0
I am going through the Python concurrent.futures module and using it to become more familiar with parallel/concurrent programming models. Unfortunately, since it is a relatively new module, I cannot find a significant amount of beginner-oriented literature. I understand that map() returns the direct return value of a function called on the iterable through the processes or threads. And submit() returns a futures object. I would like more explanation of why one might choose to use one or the other. It seems like map() is more for clearly parallel tasks that do not need to be coordinated. And submit() might be more useful for complex concurrent use cases. However, I am pretty new to this, and was hoping that someone more knowledgeable could expand. Thanks.
Concurrent.futures: what are the use cases for map() vs. submit()?
0.53705
0
0
757
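A short sketch of the contrast described in the answer above:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # map: one function over an iterable, results come back in input order
    print(list(pool.map(square, range(5))))

    # submit: one Future per call, each handled individually as it finishes
    futures = [pool.submit(square, n) for n in range(5)]
    for fut in as_completed(futures):
        print(fut.result())
```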
12,185,117
2012-08-29T19:26:00.000
3
0
0
0
python,matrix,numpy,python-2.7,matrix-multiplication
12,185,246
1
true
0
0
In numpy convention, the transpose of X is represented by X.T, and you're in luck: X.T is just a view of the original array X, meaning that no copy is made.
1
1
1
I am trying to optimize (memory-wise) the multiplication of X and its transpose X'. Does anyone know if numpy's matrix multiplication takes into consideration that X' is just the transpose of X? What I mean is: does it detect this and therefore not create the object X', but just work on the cols/rows of X to produce the product? Thank you for any help on this! J.
Python - Numpy matrix multiplication
1.2
0
0
505
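A quick way to confirm the view behavior mentioned in the answer:

```python
import numpy as np

X = np.arange(6.0).reshape(2, 3)
Xt = X.T                 # a view onto X's memory, not a copy
print(Xt.base is X)      # True: no data was duplicated
print(np.dot(X.T, X))    # X'X computed without materializing a transposed copy
```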
12,185,218
2012-08-29T19:32:00.000
5
0
0
0
python,django,unicode,internationalization,django-registration
12,185,565
2
true
1
0
It is really not a problem, because this character restriction is only in UserCreationForm (or RegistrationForm in django-registration), as I remember, and you can easily make your own, since the field in the database is just a normal text field. But the restriction is there for a reason. One of the possible problems I can think of now is creating links: usernames are often used for that, and it may cause a problem. There is also a bigger possibility of fake accounts with usernames that look the same but are in fact different characters, etc.
1
6
0
I am developing a website using Django 1.4 and I use django-registration for the signup process. It turns out that Unicode characters are not allowed as usernames, whenever a user enters e.g. a Chinese character as part of username the registration fails with: This value may contain only letters, numbers and @/./+/-/_ characters. Is it possible to change it so Unicode characters are allowed in usernames? If yes, how can I do it? Also, can it cause any problem?
Unicode characters in Django usernames
1.2
0
0
2,967
12,186,132
2012-08-29T16:25:00.000
0
0
1
0
python,wing-ide
12,186,133
1
true
0
0
Check that the radio button selection for Initial Directory is set to Use Default or enter a valid initial directory there if leaving it set to Custom. Similarly, under the Environment tab you need to correct the radio button setting for Python Executable or enter a valid path to a python.exe.
1
0
0
In WingIDE 101, when I go to Source -> Current File Properties -> Debug -> "Show this dialog before each run" -> Apply, I get the error: Some values are invalid: Python executable 'C:\Program Files\Common Files\Microsoft Shared\Windows Live\' is not a file; Initial directory '' does not exist. Please correct the values and try again. What is the problem?
Wing IDE 101 says some values are invalid
1.2
0
0
240
12,186,994
2012-08-29T21:47:00.000
9
0
0
0
python,statistics,pandas,statsmodels
12,187,770
4
true
0
0
statsmodels doesn't have a Johansen cointegration test, and I have never seen it in any other Python package either. statsmodels has VAR and structural VAR, but no VECM (vector error correction models) yet. Update: as Wes mentioned, there is now a pull request for Johansen's cointegration test for statsmodels. I have translated the matlab version in LeSage's spatial econometrics toolbox and wrote a set of tests to verify that we get the same results. It should be available in the next release of statsmodels. Update 2: the test for cointegration, coint_johansen, was included in statsmodels 0.9.0 together with the vector error correction models, VECM. (See also the 3rd answer.)
1
13
1
I can't find any reference on functionality to perform a Johansen cointegration test in any Python module dealing with statistics and time series analysis (pandas and statsmodels). Does anybody know if there's some code around that can perform such a test for cointegration among time series?
Johansen cointegration test in python
1.2
0
0
18,067
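A minimal sketch of the coint_johansen call mentioned in the answer's second update (statsmodels 0.9.0 or later), on toy data sharing one random-walk trend:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=500))  # common stochastic trend
data = np.column_stack([trend + rng.normal(size=500),
                        trend + rng.normal(size=500)])

result = coint_johansen(data, det_order=0, k_ar_diff=1)
print(result.lr1)  # trace test statistics
print(result.cvt)  # corresponding 90/95/99% critical values
```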
12,187,115
2012-08-29T21:59:00.000
1
0
1
0
python,numpy,version,scipy,optparse
12,205,078
5
false
0
0
I personally use Debian stable for my own projects, so naturally I gravitate toward what the distribution uses as the default Python installation. For Squeeze (the current stable), it's 2.6.6, but Wheezy will use 2.7. Why is this relevant? Well, as a programmer there are a number of times I wish I had access to new features from more recent versions of Python, but Debian in general is so conservative that I find it's a good metric for covering a wider audience who may be running an older OS. Since Wheezy will probably become stable by the end of the year (or early next year), I'll be moving to 2.7 as well.
3
2
1
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this? This would also clear up whether or not to use optparse when making my scripts. Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!
Open Source Scientific Project - Use Python 2.6 or 2.7?
0.039979
0
0
169
12,187,115
2012-08-29T21:59:00.000
2
0
1
0
python,numpy,version,scipy,optparse
12,187,327
5
false
0
0
If you intend to distribute this code, your answer actually depends on your target audience. A recent stint in a private-sector research lab showed me that Python 2.5 is still often used. Another example: EnSight, a commercial package for 3D visualization/manipulation, ships with Python 2.5 (and NumPy 1.3 or 1.4, if I'm not mistaken). For a personal project, I'd shoot for 2.7. For a larger audience, I'd err towards 2.6.
3
2
1
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this? This would also clear up whether or not to use optparse when making my scripts. Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!
Open Source Scientific Project - Use Python 2.6 or 2.7?
0.07983
0
0
169
12,187,115
2012-08-29T21:59:00.000
9
0
1
0
python,numpy,version,scipy,optparse
12,187,140
5
true
0
0
If everything you need works with 2.7, I would use it; there's no point staying with 2.6. Also, .format() works a bit nicer (no need to specify positions in the {} for the arguments to the formatting directives). FWIW, I usually use 2.7 or 3.2, and every once in a while I end up porting some code to my Linux box, which still runs 2.6.5, and the format() thing is annoying enough :) 2.7 has been around long enough to be supported well, and 3.x is hopefully getting there too.
3
2
1
I've seen several other topics on whether to use 2.x or 3.x. However, most of these are at least two years old and do not distinguish between 2.6 and 2.7. I am rebooting a scientific project that I ultimately may want to release by 2013. I make use of numpy, scipy, and pylab, among standard 2.6+ modules like itertools. Which version, 2.6 or 2.7, would be better for this? This would also clear up whether or not to use optparse when making my scripts. Edit: I am working at a university and the workstation I picked up had Python 2.4. Picking between 2.6 and 2.7 determines which distro to upgrade to. Thanks for the advice!
Open Source Scientific Project - Use Python 2.6 or 2.7?
1.2
0
0
169
12,187,795
2012-08-29T23:16:00.000
0
0
0
1
python,macos,r,bluetooth
12,187,989
1
true
0
0
There is a strong probability that you can enumerate the Bluetooth device as a serial port and use the pyserial module to communicate pretty easily. But if this device does not enumerate serially, you will have a very large headache trying to do this. See if there are any COM ports available; if there are, it is almost definitely enumerating as a serial connection.
1
0
1
I have a device that is connected to my Mac via bluetooth. I would like to use R (or maybe Python, but R is preferred) to read the data real-time and process it. Does anyone know how I can do the data streaming using R on a Mac? Cheers
How can I stream data, on my Mac, from a bluetooth source using R?
1.2
0
0
377
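If the device does expose a serial port as the answer hopes, a pyserial sketch might look like this (the device path is a placeholder; paired SPP devices on a Mac usually appear under /dev/tty.*):

```python
import serial  # the pyserial package

port = serial.Serial("/dev/tty.MyDevice-SPP", baudrate=9600, timeout=1)
while True:
    line = port.readline()  # returns b"" on timeout
    if line:
        print(line.decode(errors="replace"))
```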
12,188,758
2012-08-30T01:34:00.000
1
0
0
0
c++,python,c,web-applications,wsgi
12,188,910
2
true
0
1
ISAPI on Windows/IIS. NSAPI is supported by a number of vendors.
1
0
0
Or is there one? I would be happy to know if there is. Thank you.
Why is there no standard Gateway Interface for C and/or C++, like WSGI for Python?
1.2
0
0
233
12,190,125
2012-08-30T04:55:00.000
4
1
0
0
python,apache,mod-wsgi,pyramid
12,203,642
2
true
1
0
It's usually a lot easier to use something other than mod_wsgi to develop your Python WSGI application (mod_wsgi captures stdout and stderr, which makes it tricky to use things like pdb). The Pyramid scaffolding generates code that allows you to do something like "pserve development.ini" to start a server. If you use this instead of mod_wsgi to do your development, you can do "pserve development.ini --reload" and your changes to Python source will be reflected immediately. This doesn't mean you can't use mod_wsgi to serve your application in production. After you get done developing, you can then put your application into mod_wsgi for its productiony goodness.
1
1
0
The problem that I am facing is whenever I make changes to my Python code, like in __init__.py or views.py file, they are not reflected on the server immediately. I am running the server using Apache+mod_wsgi, so all the Daemon process and virtual host are configured properly. I find that I have to run setup.py each time for new changes to take place. Is this how Pyramid works or I am missing something. Shouldn't the updated files be served instead of the old ones.
File changes not reflecting immediately
1.2
0
0
1,229
12,190,128
2012-08-30T04:55:00.000
0
0
0
0
python,file,binaryfiles,binary-data
12,190,201
3
false
0
0
PDF documents start with %PDF-<version number>, but some of them could be entirely compressed.
2
3
0
I have some files without extensions. I would like to associate extensions with them, so I have written a Python program to read the data in each file. My question is how I can identify a file's type without the extension and without using third-party tools. I have to identify pdf, doc and text files only; other types of files are not possible. My server is CentOS.
Identifying the type of a file without extension from binary data
0
0
0
5,921
12,190,128
2012-08-30T04:55:00.000
5
0
0
0
python,file,binaryfiles,binary-data
12,190,162
3
false
0
0
You haven't said what OS you're on. If it's a *nix-based one, then there is a Python wrapper (using ctypes) around libmagic, which uses the same underlying mechanism as the file command and can identify files without extensions by examining their contents. Alternatively, examine how libmagic's file definitions identify the two primary file types (doc, pdf), work out how it identifies them, treat everything left as text ;-) and extend your existing code.
2
3
0
I have some files without extensions. I would like to associate extensions with them, so I have written a Python program to read the data in each file. My question is how I can identify a file's type without the extension and without using third-party tools. I have to identify pdf, doc and text files only; other types of files are not possible. My server is CentOS.
Identifying the type of a file without extension from binary data
0.321513
0
0
5,921
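A sketch of the libmagic route from the answer above, using the python-magic wrapper (one of the ctypes wrappers it alludes to); the filename is a placeholder:

```python
import magic  # the python-magic package

mime = magic.from_file("somefile", mime=True)
if mime == "application/pdf":
    ext = ".pdf"
elif mime == "application/msword":
    ext = ".doc"
else:
    ext = ".txt"  # the question allows treating everything else as text
print(ext)
```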
12,195,063
2012-08-30T10:42:00.000
0
0
0
0
python,encryption,twisted,ssl
12,195,181
2
false
0
0
In TLS, the server (the side which listens for connections) always needs a certificate. Client-side certificates may be used only for peer authentication, not for the channel encryption. Keep in mind also that you can't simply "encrypt" a connection without some infrastructure to verify the certificates in some way (using certification authorities, or trust databases, for example). Encryption without certificate validity verification does not hold up against an active adversary (google 'man in the middle attack' for more details on this).
1
3
0
Is it possible for a client to establish a SSL connection to a server using the server's certificate already exchanged through other means? The point would be to encrypt the connection using the certificate already with the client and not have to rely on the server to provide it. The server would still have the private key for the certificate the client uses. This question isn't language specific, but answers specific to python and twisted are appreciated.
SSL connection with client-side certificate
0
0
1
2,594
12,195,459
2012-08-30T11:05:00.000
0
0
0
0
python,selenium
12,197,060
1
true
1
0
I got the solution. The problem was actually transaction handling: throughout the program Django uses autocommit transactions, so the database change only became visible after the program executed completely. So instead of autocommit I am handling transactions manually, using transaction.commit_manually with transaction.commit() and transaction.rollback() to properly commit and roll back transactions at the points where I want them saved.
1
0
0
While writing a Selenium test case, I found a weird situation. I was saving a form, and at the time of saving the form, I created a user in the database. The user was created successfully in the database, but when getting it in the same Selenium test case I get a DoesNotExist exception, even though when I check manually in the database, the newly created user is there. Can anybody explain how I can create a user and test that it has been created in the DB in the same program? And if it is not possible, why?
Creating and testing that a database field has been created or not in same program
1.2
0
0
43
12,196,442
2012-08-30T12:04:00.000
3
1
0
0
python,login,pyramid
12,196,884
1
true
1
0
There are three parts you need. 1) The page that handles the authenticated form submission should check whether the request is properly authenticated and perform the action; if it isn't, it should store all of the data in a server-side session and redirect the user to a login page. 2) The login page should look for a "was trying to do X" sort of query param (e.g. ...?fromurl=/post/a/comment). After the user successfully logs in, the login page should redirect the user to that page instead of the site's front page. 3) The URL the user is redirected to should show the same form they used for the original unauthenticated request; in this case, though, the server should recognize that there are field values stored in the server-side session for this user, and populate the form fields with those values. The user can then hit submit immediately and complete the post. This works in a similar way to how fields are repopulated when a request contains some invalid form values. It's important that step 3 does not perform the post directly; the original data and request came from a user who was not authenticated.
1
0
0
Let us have some simple page that allows logged-in users to edit articles. Imagine the following situation: user Bob is logged into the system and is editing a long article. As it takes really long to edit such an article, his authentication expires. After that, he clicks the submit button and, because of the expired authentication, is redirected to the login page. It is really desirable to finish the action (saving the article) after his successful login. So we should restore the request that was made while Bob was unauthenticated and repeat it after successful login. How could this be done with Pyramid?
Request restoration after login in pyramid
1.2
0
0
227
12,199,998
2012-08-30T15:07:00.000
0
0
0
0
python,windows,terminal,wxwidgets,gtk3
46,878,690
3
false
0
1
I think the best way is to install Python and GTK3 with MSYS2: first install MSYS2, then install Python and GTK3 through it. Search on Google for "how to install python gobject on msys2".
1
4
0
I'm trying to port a small app to Windows (I made it for Ubuntu initially); it's written in Python + GTK3. I know that GTK3 is hard to get working on Windows (even in C++), but is it possible to make it work on Windows with Python? I do not want to rewrite it in another toolkit, and if I do, it will probably be wxWidgets, because I'm using an embedded terminal in it (Vte.Terminal()), which IIRC is part of GTK3 too. If it's not possible, is there a way to make a terminal widget in wxPython on Windows?
python + gtk3 on windows?
0
0
0
3,832
12,200,972
2012-08-30T16:00:00.000
0
0
0
1
python,celery
18,464,160
1
false
1
0
Use AbortableTask as a template and create a RevokableTask class to your specification.
1
0
0
I have a task that retries often, and I would like a way for it to cleanup if it is revoked while it is in the retry state. It seems like there are a few options for doing this, and I'm wondering what the most acceptable/cleanest would be. Here's what I've thought of so far: Custom Camera that picks up revoked tasks and calls on_revoked Custom Event Consumer that knows to process on_revoked on tasks that get revoked Using AbortableTasks and using abort instead of revoke (I'd really like to avoid this) Are there any other options that I am missing?
how to implement an on_revoked event in celery
0
0
0
355
12,201,074
2012-08-30T16:05:00.000
3
0
0
0
python,selenium
12,201,383
1
false
0
0
No this is not possible with any Selenium library. Use the normal Python method of renaming a file.
1
0
0
Is it possible to change the file name of a file with Selenium in Python before/after it downloads? I do not want to use the os module. Thanks! For example, what if my file was being downloaded to C:\foo\bar and its name is foo.csv; could I change it to bar.csv?
changing the name of a downloaded file selenium
0.53705
0
1
1,061
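A minimal sketch of the rename the answer recommends, using pathlib so the os module the asker wanted to avoid never appears:

```python
from pathlib import Path

# Run this only after Selenium has finished the download.
downloaded = Path(r"C:\foo\bar\foo.csv")
downloaded.rename(downloaded.with_name("bar.csv"))
```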
12,201,811
2012-08-30T16:53:00.000
1
0
1
0
python,list,subclass
12,202,135
6
false
0
0
Actually the best answer may be: don't. Checking all objects as they get added to the list will be computationally expensive. What do you gain by doing those checks? It seems to me that you gain very little, and I'd recommend against implementing it. Python doesn't check types, and so trying to have a little bit of type checking for one object really doesn't make a lot of sense.
1
4
0
I want a Python list which represents itself externally as the average of its internal list items, but otherwise behaves as a list. It should raise a TypeError if an item is added that can't be cast to a float. The part I'm stuck on is raising TypeError. It should be raised for invalid items added via any list method, like .append, .extend, +=, setting by slice, etc. Is there a way to intercept new items added to the list and validate them? I tried re-validating the whole list in __getattribute__, but when it's called I only have access to the old version of the list, plus it doesn't even get called on initialization, for operators like +=, or for slices like mylist[0] = 5. Any ideas?
Subclass Python list to Validate New Items
0.033321
0
0
1,462
12,202,303
2012-08-30T17:27:00.000
0
0
0
0
python,percona
12,202,936
1
true
0
0
mysql_config is a part of mysql-devel package.
1
1
0
I'm trying to install the mysql-python package on a machine with CentOS 6.2 and Percona Server. However, I'm running into an "EnvironmentError: mysql_config not found" error. I've searched carefully for information on this error, but all I found is that one needs to add the path to the mysql_config binary to the PATH system variable. It looks like, with my Percona installation, I don't have a mysql_config file at all: find / -type f -name mysql_config returns nothing.
mysql-python with Percona Server installation
1.2
1
0
1,025
12,202,815
2012-08-30T18:00:00.000
1
0
0
0
python,api,google-app-engine,jinja2
12,203,940
1
true
1
0
If you don't expect them to change constantly, you can cache the results in memcache and only hit the real API when necessary. On top of that, if you think that the API calls are predictable, you can do this using a backend, and memcache the results (basically scraping), so that users can get at the cached results rather than having to hit the real API.
1
1
0
I just have a simple question here. I'm making a total of 10 calls to the Twitch TV API and indexing them, which is rather slow (15 seconds - 25 seconds slow). Whenever I make these calls browser side (i.e. throw them into my url), they load rather quickly. Since I am coding in python, is there any way I could fetch/index multiple URL's using say, jinja2? If not, is there anything else I could do? Thank you!
API Call Is Extremely Slow Server Side on GAE but Fast Browser Side
1.2
0
1
159
12,203,149
2012-08-30T18:23:00.000
2
0
0
0
python,django,flash,api,graphics
12,203,505
1
false
1
0
Assuming you mean drawing plans interactively in the browser, rather than maps in the sense of Google Maps, you need something like HTML5 canvas or SVG, and a library like fabric.js (for canvas) or Raphael (for SVG). Your JS code will then handle the mechanics of drawing lines from mouse input, producing a picture in the browser. You can then extract that picture using JS and pass it back to the server for saving as a PNG or whatever. If you're targeting modern browsers, canvas is definitely the way to go - it's a much nicer API, has better libraries (IMO) and is easier to extract PNGs from. SVG isn't too bad, but getting PNGs out is tricky - it relies either on hacks (converting the SVG to canvas in JS, rendering it in an invisible element, then converting that to PNG!) or sending the whole SVG to the server to be rendered there. I've recently implemented something requiring very similar mechanics, although for a very different purpose, so if you have any more detailed questions feel free to ask.
1
0
0
I'll try to be as specific as possible. I have a Django project and I want to be able to draw an inner map of a certain place. By that I mean a graphical representation of important objects, like table positions, bathrooms, etc. I'm trying to avoid Flash as an option. Is there an existing API that I can use? Or how can I get this working? I don't mean drawing something in 3D, just a simple view from above, like a blueprint. Thanks in advance.
Draw an inner map of a certain place (like a house blueprint) in Django
0.379949
0
0
211
12,203,295
2012-08-30T18:34:00.000
0
0
1
0
python,twisted,fix-protocol
12,395,806
2
false
0
0
Printing out repr, as suggested by Ignacio Vazquez-Abrams, solves part of the problem.
2
2
0
I am sending a FIX message over a socket connection and receiving it in a Python client. I know there is an SOH separating each name=value pair in the data, but the data, when printed (as a string), does not show the SOH. The problem arises because I want to be able to show the '|', and otherwise I cannot tell within a regular expression where the boundaries of the individual fields are. I have looked at decode('hex') and decode('uu') on the received string, without much success. Also, pack/unpack require that you supply a format string (which I would have to do for every type of FIX message). I am using the Twisted ClientFactory for the client. Any suggestions? Follow-up question: I use repr and pass it to a function to replace the '\x01' with '|'. When I pass in the data received from the network directly, replace seems to have no effect; however, when I copy the output and pass it as a string literal into the same function, it behaves as expected (replaces '\x01' with '|'). I also tried using re.sub, with exactly the same results (works when passed in as a string literal, but not when passed in directly from the network). I also printed the value from the network into a file and compared it, using vi's hex editor, to the string literal; it does not reveal any differences. Some additional information: when I print the value to a file and read it back, I am not able to use find on '\x01', implying that replace would not work either (it does not). When I try to convert this into a byte array, it would appear that each of the '\', 'x', '0', '1' characters is interpreted as a different byte when I iterate over the byte array, which is strange: either '\x01' is a string or it's not and is hex. Any suggestions? Thanks.
Interpreting python string(FIX) recieved over the network as binary
0
0
0
190
12,203,295
2012-08-30T18:34:00.000
0
0
1
0
python,twisted,fix-protocol
12,414,985
2
true
0
0
It would appear that replace using '\x01' works on the data coming in over the network (and not on the output of repr). I am not sure what the reason is, but this meets my requirement.
2
2
0
I am sending a FIX message over a socket connection and receiving it in a Python client. I know there is an SOH separating each name=value pair in the data, but the data, when printed (as a string), does not show the SOH. The problem arises because I want to be able to show the '|', and otherwise I cannot tell within a regular expression where the boundaries of the individual fields are. I have looked at decode('hex') and decode('uu') on the received string, without much success. Also, pack/unpack require that you supply a format string (which I would have to do for every type of FIX message). I am using the Twisted ClientFactory for the client. Any suggestions? Follow-up question: I use repr and pass it to a function to replace the '\x01' with '|'. When I pass in the data received from the network directly, replace seems to have no effect; however, when I copy the output and pass it as a string literal into the same function, it behaves as expected (replaces '\x01' with '|'). I also tried using re.sub, with exactly the same results (works when passed in as a string literal, but not when passed in directly from the network). I also printed the value from the network into a file and compared it, using vi's hex editor, to the string literal; it does not reveal any differences. Some additional information: when I print the value to a file and read it back, I am not able to use find on '\x01', implying that replace would not work either (it does not). When I try to convert this into a byte array, it would appear that each of the '\', 'x', '0', '1' characters is interpreted as a different byte when I iterate over the byte array, which is strange: either '\x01' is a string or it's not and is hex. Any suggestions? Thanks.
Interpreting python string(FIX) recieved over the network as binary
1.2
0
0
190
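A small sketch of what the accepted answer is getting at: the network hands you real SOH bytes, which replace handles directly (the FIX fragment below is a toy example):

```python
raw = b"8=FIX.4.2\x0135=A\x0149=SENDER\x01"  # toy wire data

print(raw.replace(b"\x01", b"|").decode("ascii"))  # 8=FIX.4.2|35=A|49=SENDER|
pairs = [f.split("=", 1) for f in raw.decode("ascii").split("\x01") if f]
print(pairs)  # [['8', 'FIX.4.2'], ['35', 'A'], ['49', 'SENDER']]
```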
12,204,330
2012-08-30T19:46:00.000
1
1
0
0
python,django,mod-wsgi,pyc
12,204,524
2
true
1
0
By default Apache probably doesn't have any write access to your Django app directory, which is a good thing security-wise. Python will byte-compile your code once per Apache restart and then cache it in memory; as it is a long-lived process, that is OK. Note: if you really, really want those .pyc files, give your Apache user write access to the source directory. Note 2: this can create a lot of confusion if you start a test instance with manage.py as root that shares sources with Apache, as this will create those .pyc files as root and keep them when you then run Apache, despite a source code change.
1
1
0
I am running a web app on python2.7 with mod_wsgi/apache. Everything is fine but I can't find any .pyc files. Do they not get generated with mod_wsgi?
Cannot find .pyc files for django/apache/mod_wsgi
1.2
0
0
1,616
12,206,384
2012-08-30T22:19:00.000
2
0
1
0
python,dependencies,development-environment,virtualenv,setuptools
12,639,836
2
true
0
0
I do believe and fear the answer to your question is simply "no": setup.py just does not fit your needs, and finding tricks to do it nonetheless will probably cause more problems for new developers. Sadly, I cannot provide perfect alternatives: look at zc.buildout, as Lukas Graf described in a comment; distribute a zipped quick-start working directory with everything configured, if you know your devs' platform; provide a shell script which does the whole setup; or teach your devs the proper use and philosophy of setuptools (setup.py) and virtualenv, and have them call "setup.py develop" explicitly for every package they need it for. Remember, "explicit is better than implicit" is part of the Zen of Python. I would go for the last one, but YMMV.
1
2
0
If you run python setup.py develop on several packages in the same virtualenv, you can develop both of them without having to re-install after making changes. I recently extracted functionality from my project into a separate package, which I am now developing in this manner. Is there a way to express this dependency in my setup.py file, so new developers can simply run python setup.py develop once in the primary project's directory?
How can I automate installation of local "development mode" packages?
1.2
0
0
348
12,206,583
2012-08-30T22:37:00.000
0
0
1
0
python,decorator
12,206,644
3
false
0
0
You could use dir(cls) to get everything on the class and apply a try/except decorator to each item that is callable. When you catch an exception, set a flag on the class (valid=False) and re-raise the error; then have memoize check that flag before returning a cached instance, treating an unset flag as a plain cache hit. At least, that's the only way I can think of doing it.
2
0
0
I have several different classes that are relatively expensive to instantiate and their methods work fine until one of them throws and at that point there's no point in keeping the object around since the state is messed up in some way. I would usually just memoize a function that instantiates the object, but that won't work in this case since the memoize decorator only knows about the object at instantiation time, it doesn't know about each of the method calls. How might I solve this problem? Is it possible to create a decorator that wraps all of the methods individually and instantiates a new object when one of them throws?
Memoize class instance until one of its methods throws
0
0
0
95
12,206,583
2012-08-30T22:37:00.000
0
0
1
0
python,decorator
12,213,259
3
false
0
0
You can add a "._valid" attribute to the instances, and before returning it from the cache you can check if it's true. And if it isn't treat it as a cache miss.
2
0
0
I have several different classes that are relatively expensive to instantiate and their methods work fine until one of them throws and at that point there's no point in keeping the object around since the state is messed up in some way. I would usually just memoize a function that instantiates the object, but that won't work in this case since the memoize decorator only knows about the object at instantiation time, it doesn't know about each of the method calls. How might I solve this problem? Is it possible to create a decorator that wraps all of the methods individually and instantiates a new object when one of them throws?
Memoize class instance until one of its methods throws
0
0
0
95
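A sketch combining the two answers above: wrap each public callable found via dir(), and flip a _valid flag on any exception so a memoizing factory can treat the instance as a cache miss. All names here are illustrative:

```python
import functools

def invalidate_on_error(cls):
    # Wrap every public method so an exception marks the instance invalid.
    for name in dir(cls):
        if name.startswith("_"):
            continue
        attr = getattr(cls, name)
        if not callable(attr):
            continue

        def make_wrapper(fn):
            @functools.wraps(fn)
            def wrapper(self, *args, **kwargs):
                try:
                    return fn(self, *args, **kwargs)
                except Exception:
                    self._valid = False  # the memoize cache checks this flag
                    raise
            return wrapper

        setattr(cls, name, make_wrapper(attr))
    return cls
```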
12,206,879
2012-08-30T23:10:00.000
1
1
0
1
java,python,ruby,remote-debugging
12,206,913
2
false
1
0
If you're only looking to determine when it has completed (and not looking to really capture all the output, as in your other question) you can simply check for the existence of the process id and, when you fail to find the process id, phone home. You really don't need the logs for that.
1
0
0
I want to run a java program on a remote machine, and intercept its logs-- also I want to be able to know if the program has completed execution, and also whether it was successful execution or if execution was halted due to an error. Is there any ready-made java library available for this purpose? Also, I would like to be able to use this program for obtaining logs/execution completion for remote programs in different languages-- like Java/Ruby/Python etc--
java- how to code a process to intercept the output streams of program running on remote machine/know when remote program has halted/completed
0.099668
0
0
142
12,208,141
2012-08-31T02:27:00.000
0
0
1
0
python,pyephem
12,209,397
1
false
0
0
Unfortunately, no; PyEphem includes neither a pertubation module, nor gravitational simulation more generally, nor indeed any sort of dynamical model — all of the math currently built in to the package is analytic and predicts positions using orbital elements or polynomials that have had any pertubation built into them by somebody else.
1
2
0
Does anyone know if pyephem provides a function to compute the "perturbed" orbital element values for a given date/time, given a set of input orbital element values and an associated epoch? I am currently using the sla_pertel function from the pyslalib package to perturb the orbital elements and that works fine, but I would prefer not to have to use that package if there is already something in pyephem that I could use to do the same thing. I looked through the pyephem documentation, but didn't see anything obvious to do that. Thanks.
How to perturb orbital elements in pyephem?
0
0
0
188
12,208,680
2012-08-31T03:54:00.000
3
0
1
1
python,django,debian
12,208,688
2
false
1
0
Django does not support Python 3. You will need to install a version of Python 2.x.
1
2
0
I installed Python 3.2.3 in Debian /usr/local/bin/python3 and I installed Django 1.4 in the same directory. But when I try to import django from python 3 shell interpreter I get syntax error! What am I doing wrong?
How to install Django 1.4 with Python 3.2.3 in Debian?
0.291313
0
0
728
12,208,839
2012-08-31T04:18:00.000
0
0
1
1
python,python-2.7
37,494,427
2
false
0
0
If your target function is simple enough, you may want to try an anonymous function (a "lambda function"). Since preexec_fn must be callable with no arguments, have the lambda close over the arguments; you can place it inline or bind it to a name first (e.g. f = lambda x, y: x + y), with no need for partial or importing the functools package. (BTW, if you do want to use partial, you can write from functools import partial and use partial directly as your local function.) Example with an anonymous function: import subprocess; subprocess.Popen(<cmd>, preexec_fn=lambda: f(x, y)).
1
0
0
I have read the python document about subprocesses, but the argument preexec_fn for subprocess.Popen can only point to a function with no argument. Now I want to call a function with two arguments just like what preexec_fn does, I've tried to use global variables, but it doesn't work. How can I do that?
python subprocess.popen use preexec_fn with arguments
0
0
0
2,368
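A sketch of the zero-argument requirement discussed above; the uid/gid values and the choice between partial and a lambda are both illustrative:

```python
import os
import subprocess
from functools import partial

def drop_privileges(uid, gid):
    # Runs in the child between fork and exec (POSIX only, needs privileges).
    os.setgid(gid)
    os.setuid(uid)

# preexec_fn must be callable with no arguments, so bind them first:
subprocess.Popen(["sleep", "1"], preexec_fn=partial(drop_privileges, 1000, 1000))
# ...or close over them with a lambda:
subprocess.Popen(["sleep", "1"], preexec_fn=lambda: drop_privileges(1000, 1000))
```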
12,210,307
2012-08-31T06:56:00.000
1
0
0
0
python,mongodb
12,216,914
1
true
0
0
It sounds like the larger set (A if I followed along correctly), could reasonably be put into its own database. I say database rather than collection, because now that 2.2 is released you would want to minimize lock contention between the busier database and the others, and to do that a separate database would be best (2.2 introduced database level locking). That is looking at this from a single replica set model, of course. Also the index sizes sound a bit out of proportion to your data size - are you sure they are all necessary? Pruning unneeded indexes, combining and using compound indexes may well significantly reduce the pain you are hitting in terms of index growth (it would potentially make updates and inserts more efficient too). This really does need specifics and probably belongs in another question, or possibly a thread in the mongodb-user group so multiple eyes can take a look and make suggestions. If we look at it with the possibility of sharding thrown in, then the truly important piece is to pick a shard key that allows you to make sure locality is preserved on the shards for the pieces you will frequently need to access together. That would lend itself more toward a single sharded collection (preserving locality across multiple related sharded collections is going to be very tricky unless you manually split and balance the chunks in some way). Sharding gives you the ability to scale out horizontally as your indexes hit the single instance limit etc. but it is going to make the shard key decision very important. Again, specifics for picking that shard key are beyond the scope of this more general discussion, similar to the potential index review I mentioned above.
1
2
0
I have a collection that is potentially going to be very large. Now I know MongoDB doesn't really have a problem with this, but I don't really know how to go about designing a schema that can handle a very large dataset comfortably. So I'm going to give an outline of the problem. We are collecting large amounts of data for our customers. Basically, when we gather this data it is represented as a 3-tuple, let's say (a, b, c), where b and c are members of sets B and C respectively. In this particular case we know that the B and C sets will not grow very much over time. For our current customers we are talking about ~200,000 members. However, the A set is the one that keeps growing over time. Currently we are at about ~2,000,000 members per customer, but this is going to grow (possibly rapidly). Also, there are 1->n relations between b->a and c->a. The workload on this data set is basically split up into 3 use cases. The collections will be periodically updated, where A will get the most writes, and B and C will get some, but not many. The second use case is random access into B, then aggregating over some number of documents in C that pertain to b \in B. And the last use case is basically streaming a large subset from A and B to generate some new data. The problem that we are facing is that the indexes are getting quite big. Currently we have a test setup with about 8 small customers; the total dataset is about 15GB in size at the moment, and indexes are running at about 3GB to 4GB. The problem here is that we don't really have any hot zones in our dataset; it's basically going to get an evenly distributed load amongst all documents. Basically we've come up with 2 options to do this. The one that I described above, where all data for all customers is piled into one collection. This means we'd have to create an index on some field that links the documents in that collection to a particular customer. The other option is to throw all b's and c's together (these sets are relatively small) but divide up the C collection, one per customer. I can imagine this last solution being a bit harder to manage, but since we rarely access data for multiple customers at the same time, it would prevent memory problems. MongoDB would be able to load the customer's index into memory and just run from there. What are your thoughts on this? P.S.: I hope this wasn't too vague; if anything is unclear I'll go into some more details.
Split large collection into smaller ones?
1.2
1
0
239
12,210,976
2012-08-31T07:46:00.000
0
0
0
0
python,windows,wxpython
12,216,343
2
false
0
1
The simple answer is No, unless this is a touchscreen application with no access to the computer hardware. That would probably work. Otherwise you'll have to look into how to lockdown your PC with Microsoft Policies etc. Or you might be able to do it with a locked down Linux install. Regardless, it's not really something you can manage with wxPython. It's something you have to manage at the OS level.
1
1
0
I am creating an application with wxpython for writing tests in schools, and it needs to be able to block the windows key, alt-tab and so on to prevent cheating. Is this possible and if it is, how do you do it? I know that you can't block ctrl + alt + del, but is it possible to detect when it is pressed?
Block windows key wxPython
0
0
0
876
12,212,321
2012-08-31T09:14:00.000
4
0
0
0
c#,java,c++,python,cassandra
12,212,637
1
true
0
0
The columns for each row will be returned in sorted order, sorted by the column key, depending on your comparator_type. The row ordering will depend on your partitioner; if you use the random partitioner, the rows will come back in a 'random' order. In Cassandra, it is possible for each row to have a different set of columns, so you should really read the column key before using the value. This will depend on the data you have inserted into your cluster.
1
2
1
When get_range_slice returns, in what order are the columns returned? Is it random or the order in which the columns were created? Is it best practice to iterate through all resulting columns for each row and compare the column name prior to using the value or can one just index into the returning array?
Cassandra get_range_slice
1.2
0
0
161
12,212,473
2012-08-31T09:24:00.000
0
0
0
0
python,emacs,rope
12,436,344
1
true
0
0
Ok, found the solution. Edit .ropeproject/config.py and add this line to the set_prefs function: prefs.add('python_path', '<path to your external library>') Example: prefs.add('python_path', '/usr/local/google_appengine')
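Laid out as a proper file, the edit described above looks roughly like this (the path value is just an example):

# .ropeproject/config.py
def set_prefs(prefs):
    # ... existing preferences generated by rope ...
    prefs.add('python_path', '/usr/local/google_appengine')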
1
0
0
I am using epy/ropemacs for my python project. "C-c g" (rope-goto-definition) works fine if the target is my source file. But it doesnt jump to third party source files. What I want to be able to do is jump to relevant third party source files. This might be just a matter of letting rope know what the path the libraries are. I dont know how to do it though. Any pointers will be helpful
Jumping to Python library sources with epy/emacs
1.2
0
0
218
12,213,780
2012-08-31T10:46:00.000
0
0
0
0
python,pygame
12,273,411
3
false
0
1
There is probably an easier way to do this, but you could get the state of all the keys on the keyboard using pygame.key.get_pressed() and then check to see if the keys pressed are the Ctrl, Shift, and 9 keys. Then you could manually return the string "ctrl+(" instead of getting that from a function. Not sure if there is a really elegant way to do this though.
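For illustration, a minimal sketch of that idea using pygame.key.get_mods() for the modifier state (the mapping to "ctrl+(" is an assumption that only holds on a US layout, where shift+9 produces "("):

import pygame

def key_string(event):
    # Sketch: detect ctrl+shift+9 on a KEYDOWN event and map it to "ctrl+("
    mods = pygame.key.get_mods()
    if event.type == pygame.KEYDOWN and event.key == pygame.K_9:
        if mods & pygame.KMOD_CTRL and mods & pygame.KMOD_SHIFT:
            return "ctrl+("
    return None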
1
0
0
I was using event.unicode for this task previously. If I hit shift+9, then the event for the key down has a unicode attribute of '('. And if I hit shift+alt+9, then the event for the key down has a unicode attribute of '(', and the keyboard also sends a keydown for an alt key, so that I know one was pressed. If, however, I press ctrl+shift+9, then event.unicode == u''. How do I get u'(' back from that? (similarly, for ctrl+shift+a, I get '\x01')
How do I turn the key combination, ctrl+shift+9, into the string "ctrl+(" in PyGame?
0
0
0
1,924
12,218,088
2012-08-31T15:08:00.000
0
1
0
0
c++,python,linux,perforce,vms
13,205,270
2
false
0
0
Indeed, it's not clear from your question what sort of programming you want to do on VMS: C++ or Python? Assuming your first goal is to get familiar with the code-base, i.e. you want the ease of cross-ref'ing the sources: If you have a Perforce server running on VMS, then you may try to connect to it directly with a Linux Perforce client and do the "review" locally on Linux. If you've no Linux client, I'd try fetching the latest revisions and importing the raw files into an external repository (svn, git, fossil etc.). Then again use the Linux client and tools. If your ultimate goal is to do all development off VMS, then it may not really be viable -- the code may use VMS-specific includes, system/RMS calls, and data structs. And syncing the changes back and forth to VMS will get messy. From my experience, once you're familiar with the code-base, it's a lot more effective to make the code changes directly on VMS using whatever editor is available (EDIT/TPU, EDT, LSE, emacs or vim ports etc.). As for debugging - the VMS native debugger supports X-GUI as well as command-line. Check your build system for a debug build, or use /NOOPT /DEBUG compile and /DEBUG link qualifiers. BTW, have a look into DECset, if installed on your VMS system.
2
1
0
I am working on C++ programming with perforce (a version control tool) on VMS. I need to handle tens or even hundreds of C++ files (managed by perforce) on VMS. I am familiar with Linux and python, but not DCL (a script language) on VMS. I need to find a way to make programming/debugging/code review as easy as possible. I prefer using python and kscope (a KDE-based file search/code review GUI tool that can generate call graphs) or similar tools on VMS. I do not have sys-admin authorization, so I prefer code review GUI tools that can be installed without that authorization. Would you please give me some suggestions about how to do code review/debugging/programming/compiling/testing with python on VMS, while using kscope or similar large-scale file management tools for code review? Any help will really be appreciated. Thanks
How to do code-review/debug/coding/test/version-control for C++ on perforce and VMS
0
0
0
281
12,218,088
2012-08-31T15:08:00.000
1
1
0
0
c++,python,linux,perforce,vms
12,220,702
2
false
0
0
Your question is pretty broad so it's tough to give a specific answer. It sounds like you have big goals in mind, which is good, but since you are on VMS, you won't have a whole lot of tools at your disposal. It's unlikely that kscope works on VMS. Correct me if I'm wrong. I believe a semi-recent version of python is functional there. I would recommend starting off with the basics. Get a basic build system working that lets you build in release and debug. Consider starting with either MMS (an HP-provided make-like tool) or GNU make. You should also spend some time making sure that your VMS-based Perforce client is working too. There are some quirks that may or may not have been fixed by the nice folks at Perforce. If you have more specific issues in setting up GNU make (on VMS) or dealing with the Perforce client on VMS, do ask, but I'd recommend creating separate questions for those.
2
1
0
I am working on C++ programming with perforce (a version control tool) on VMS. I need to handle tens or even hundreds of C++ files (managed by perforce) on VMS. I am familiar with Linux and python, but not DCL (a script language) on VMS. I need to find a way to make programming/debugging/code review as easy as possible. I prefer using python and kscope (a KDE-based file search/code review GUI tool that can generate call graphs) or similar tools on VMS. I do not have sys-admin authorization, so I prefer code review GUI tools that can be installed without that authorization. Would you please give me some suggestions about how to do code review/debugging/programming/compiling/testing with python on VMS, while using kscope or similar large-scale file management tools for code review? Any help will really be appreciated. Thanks
How to do code-review/debug/coding/test/version-control for C++ on perforce and VMS
0.099668
0
0
281
12,219,231
2012-08-31T16:26:00.000
0
0
1
1
python,pdb
12,219,648
2
false
0
0
It seems like it automatically gets switched at some point (probably I/O). If you want to force it though, you should call time.sleep().
2
2
0
I am debugging a Python application, that makes use of os.fork() at some point. After evaluating the function PDB remains in the parent process (as I can see from the value returned from the function). How do I switch between child and parent process in PDB?
how to switch between processes in pdb
0
0
0
869
12,219,231
2012-08-31T16:26:00.000
0
0
1
1
python,pdb
12,220,323
2
false
0
0
There is no way to do that with pdb. Your best bet will be to start your pdb session (using pdb.set_trace()) inside the child process after the fork.
2
2
0
I am debugging a Python application, that makes use of os.fork() at some point. After evaluating the function PDB remains in the parent process (as I can see from the value returned from the function). How do I switch between child and parent process in PDB?
how to switch between processes in pdb
0
0
0
869
12,222,349
2012-08-31T20:54:00.000
0
0
1
0
python,multithreading
12,223,654
3
false
0
0
Threads end when they do. You can signal a thread that you want it to terminate ASAP, but that assumes collaboration of the code running in a thread, and it offers no upper bound guarantee for when that happens. A classic way is to use a variable like exit_immediately = False and have threads' main routines periodically check it and terminate if the value is True. To have the threads exit, you set exit_immediately = True and call .join() on all threads. Obviously, this works only when threads are able to check in periodically.
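For illustration, a minimal sketch of the cooperative-shutdown pattern described above, using a threading.Event instead of a bare boolean (names and the work loop are placeholders):

import threading
import time

stop_requested = threading.Event()

def worker():
    # periodically check the flag and exit when asked to
    while not stop_requested.is_set():
        # ... do one small unit of work ...
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.start()
stop_requested.set()  # signal the thread to terminate ASAP
t.join()              # wait until it actually exits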
1
3
0
I want to force threads termination in python: I don't want to set an event and wait until the thread checks it and exits. I'm looking for a simple solution like kill -9. Is this possible to do that without dirty hacks like operating with private methods etc.?
Is there a canonical way of terminating a thread in python?
0
0
0
301
12,224,671
2012-09-01T03:29:00.000
6
0
0
0
python,amazon-web-services,amazon-emr,amazon-vpc,mrjob
12,321,461
1
false
1
0
Right now (v 0.3.5) it is not possible. I made a pull request on the github project to add support for the 'api_params' parameter of boto, so you can pass parameters directly to the AWS API, and use the 'Instances.Ec2SubnetId' parameter to run a job flow in a VPC subnet.
1
3
0
I'm using mrjob to run some MapReduce tasks on EMR, and I want to run a job flow in a VPC. I looked at the documentation of mrjob and boto, and none of them seems to support this. Does anyone know if this is possible to do?
mrjob: Is it possible to run a job flow in a VPC?
1
0
0
587
12,225,244
2012-09-01T05:42:00.000
0
0
0
1
python,unit-testing,nose
12,226,647
1
false
0
0
Solved* it with some outside help. I wouldn't consider this the proper solution, but by searching through sys.modules for all of my test modules (which point to *.pyc files) and del'ing them, nose finally recognizes changes again. I'll have to delete them before each nose.run() call. These must be in-memory versions of the pyc files, as simply deleting them in the shell wasn't doing it. Good enough for now. Edit: *Apparently I didn't entirely solve it. It does seem to work for a bit, and then all of a sudden it won't anymore, and I have to restart my shell. Now I'm even more confused.
1
0
0
I'm having the same problem on Windows and Linux. I launch any of various python 2.6 shells and call nose.run() to run my test suite. It works fine. However, the second time I run it, and every time thereafter, I get exactly the same output, no matter how I change code or test files. My guess is that it's holding onto file references somehow, but even after deleting the *.pyc files, I can never get the output of nose.run() to change until I restart the shell, or open another one, whereupon the problem starts again on the second run. I've tried both del nose and reload(nose) to no avail.
nose.run() seems to hold test files open after the first run
0
0
0
193
12,226,108
2012-09-01T08:13:00.000
0
0
1
0
python,debugging,pycharm
12,226,424
1
false
0
0
This is a bit unusual; are you using any extensions (written in C) that might cause the crash? If you can get a stack trace from the crashed process you should be able to see whether it crashes in external modules or in something that comes with python. You could also try upgrading to the latest 2.7 version of python. If it is really a bug within python, there's a chance it has been fixed in the latest release. Another thing to try is running it with PyPy, Jython, or IronPython (provided you don't need extensions that are unavailable on those platforms) and on other operating systems, which might give you some help in finding out where things go wrong. Also, like Blender already said, if you could post some code that causes the problem, people might be able to help find it. Without more information it is hard to give better advice. Update: Just looked at the pythonocc you mentioned. It looks like it uses a lot of C/C++ code, so testing it on other python implementations or versions might not work all that well. Since it seems like a bug somewhere, I'd suggest contacting the pythonocc guys, describing your problem, and hopefully they'll fix it.
1
1
0
I am dealing with a strange problem. I have some substantial code written in Python. When I run it in debug mode with PyCharm, it works fine. But it crashes in run mode in both Python IDLE and PyCharm IDE and a Windows dialog appears that says 'Pythonw.exe stopped working'. I am using Python 2.6.6 in Windows 7 32-bit with PyCharm 2.5 as my IDE. Thanks in advance.
Python Code Crashes with 'Pythonw.exe stopped working' but Works Fine in Debug Mode
0
0
0
2,203
12,226,224
2012-09-01T08:33:00.000
0
0
0
0
python,storage,tree-traversal
12,226,537
3
false
0
0
Maybe this is too obvious, but you could store your results in a similar tree. Since your computation is slow, the results tree should not grow out of hand too quickly. Then just look up if you have results for a given node.
2
4
0
I have a tree. It has a flat bottom. We're only interested in the bottom-most leaves, but this is roughly how many leaves there are at the bottom... 2 x 1600 x 1600 x 10 x 4 x 1600 x 10 x 4 That's ~13,107,200,000,000 leaves? Because of the size (the calculation performed on each leaf seems unlikely to be optimised to ever take less than one second) I've given up thinking it will be possible to visit every leaf. So I'm thinking I'll build a 'smart' leaf crawler which inspects the most "likely" nodes first (based on results from the ones around it). So it's reasonable to expect the leaves to be evaluated in branches/groups of neighbours, but the groups will vary in size and distribution. What's the smartest way to record which leaves have been visited and which have not?
How to track the progress of a tree traversal?
0
0
0
235
12,226,224
2012-09-01T08:33:00.000
1
0
0
0
python,storage,tree-traversal
12,231,278
3
false
0
0
You don't give a lot of information, but I would suggest tuning your search algorithm to help you keep track of what it's seen. If you had a global way of ranking leaves by "likelihood", you wouldn't have a problem since you could just visit leaves in descending order of likelihood. But if I understand you correctly, you're just doing a sort of hill climbing, right? You can reduce storage requirements by searching complete subtrees (e.g., all 1600 x 10 x 4 leaves in a cluster that was chosen as "likely"), and keeping track of clusters rather than individual leaves. It sounds like your tree geometry is consistent, so depending on how your search works, it should be easy to merge your nodes upwards... e.g., keep track of level 1 nodes whose leaves have all been examined, and when all children of a level 2 node are in your list, drop the children and keep their parent. This might also be a good way to choose what to examine: If three children of a level 3 node have been examined, the fourth and last one is probably worth examining too. Finally, a thought: Are you really, really sure that there's no way to exclude some solutions in groups (without examining every individual one)? Problems like sudoku have an astronomically large search space, but a good brute-force solver eliminates large blocks of possibilities without examining every possible 9 x 9 board. Given the scale of your problem, this would be the most practical way to attack it.
2
4
0
I have a tree. It has a flat bottom. We're only interested in the bottom-most leaves, but this is roughly how many leaves there are at the bottom... 2 x 1600 x 1600 x 10 x 4 x 1600 x 10 x 4 That's ~13,107,200,000,000 leaves? Because of the size (the calculation performed on each leaf seems unlikely to be optimised to ever take less than one second) I've given up thinking it will be possible to visit every leaf. So I'm thinking I'll build a 'smart' leaf crawler which inspects the most "likely" nodes first (based on results from the ones around it). So it's reasonable to expect the leaves to be evaluated in branches/groups of neighbours, but the groups will vary in size and distribution. What's the smartest way to record which leaves have been visited and which have not?
How to track the progress of a tree traversal?
0.066568
0
0
235
12,229,101
2012-09-01T15:36:00.000
0
1
1
0
python,debugging,reverse-engineering,ida
12,314,987
2
false
0
0
We've just got a notice from one of our users that the latest version of WingIDE supports debugging of IDAPython scripts. I think there are a couple of other programs using the same approach (import a module to do RPC debugging) that might work.
1
0
0
I'm kinda new to scripting for IDA - nevertheless, I've written a complex script I need to debug, as it is not working properly. It is composed of a few different files containing a few different classes. Writing line-by-line in the commandline is not effective for obvious reasons. Running a whole script from the File doesn't allow debugging. Is there a way of using the idc, idautils, idaapi not from within IDA? I've written the script on PyDev for Eclipse, I'm hoping for a way to run the scripts from within it. A similar question is, can the api classes I have mentioned work on idb files without IDA having them loaded? Thanks.
Debugging IDAPython Scripts outside of IDAPro
0
0
0
2,173
12,229,752
2012-09-01T17:11:00.000
3
0
0
1
python,linux,share,root,drive
12,229,835
2
false
0
0
You can't mount without root privileges (except in some circumstances, see below.) If you have no privileges on that machine, you have to ask the administrator. What an administrator can do is insert certain mount points into /etc/fstab and mark them user. An administrator could also install sudo for you and allow you to execute sudo mount. Python has no way (and shouldn't have a way) to circumvent these basic security features.
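For illustration, a hypothetical /etc/fstab line an administrator could add so that a regular user may mount the device (the device, mount point and filesystem type are placeholders): /dev/sdb1 /mnt/usb vfat user,noauto 0 0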
1
0
0
I am trying to mount a shared drive by using os.system() in python. The problem is that the installed linux version has no sudo command. Installing a sudo package has failed. When using the command su, I am getting an error that it must be used with suid. I can't chmod +s because I have no root. Any ideas? Mods? Or is a buffer overflow the only solution here? =) Thank you in advance.
How Can I mount shared drive in Python with no root?
0.291313
0
0
813
12,229,842
2012-09-01T17:25:00.000
0
0
1
0
python,inline
12,229,926
5
false
0
0
Python will optimize the code for you, but you don't have any influence on how it does so. To speed up the load times of just about any Python code, you can compile it to bytecode before execution.
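A minimal sketch of pre-compiling to bytecode (file and directory names are placeholders); note this mainly helps load/startup time, not the speed of the function itself:

import py_compile
import compileall

py_compile.compile('myscript.py')    # write the .pyc for a single source file
compileall.compile_dir('myproject')  # or compile a whole directory tree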
2
0
0
Does python have something similar to inline in C? If not, how can I speed up the execution of a function?
Does Python have an inline statement?
0
0
0
333
12,229,842
2012-09-01T17:25:00.000
0
0
1
0
python,inline
12,229,867
5
false
0
0
I am not aware of an equivalent inline feature in Python. The best thing you can do to speed up a function is to examine the algorithm, assuming you have exhausted idiomatic language features (such as list comprehensions etc).
2
0
0
Does python have something similar to inline in C? If not, how can I speed up the execution of a function?
Does Python have an inline statement?
0
0
0
333
12,229,881
2012-09-01T17:30:00.000
0
0
0
0
google-app-engine,version-control,python-2.7
12,230,416
2
false
1
0
If you go to the App Engine Admin page, you should be able to see all the instances you have running. Kill all the instances for the old versions and it should stop serving.
2
1
0
I'm at a loss as to why I have deleted some versions of my apps in appspot.com, but even after clearing out the cache on both local browsers and appspot.com under the PageSpeed service, old versions are still accessible. How long before deleted versions are gone? Also, I have uploaded changes, but they do not show up at all. So how long before changes show up? If there is a way to force these to happen I would greatly appreciate it. Thank you in advance for your assistance in this matter.
deleted version are still being serverd in appspot.com
0
0
0
155
12,229,881
2012-09-01T17:30:00.000
1
0
0
0
google-app-engine,version-control,python-2.7
18,547,441
2
false
1
0
Google's GSLB proxy will cache your static files for hours, even if you have disabled your appspot application and then re-enabled it. My solution is to append a version number to every css, js, and jpg URL.
2
1
0
I'm at a loss as to why I have deleted some versions of my apps in appspot.com, but even after clearing out the cache on both local browsers and appspot.com under the PageSpeed service, old versions are still accessible. How long before deleted versions are gone? Also, I have uploaded changes, but they do not show up at all. So how long before changes show up? If there is a way to force these to happen I would greatly appreciate it. Thank you in advance for your assistance in this matter.
deleted version are still being serverd in appspot.com
0.099668
0
0
155
12,230,701
2012-09-01T19:25:00.000
3
0
0
1
python,django,tornado
12,256,534
2
true
1
0
I haven't seen big projects that use Tornado in front of Django. But technically, you can do monkey.patch_all() with gevent, and then Tornado will make sense. It's a really bad solution, but if all you need is an async, unstable Django waiting for you with a chainsaw at the corner to cut your legs off instead of shooting them - then that is yours.
2
1
0
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
Django with Tornado
1.2
0
0
523
12,230,701
2012-09-01T19:25:00.000
0
0
0
1
python,django,tornado
12,254,758
2
false
1
0
Django comes with a debug server, so I guess that when using Tornado with Django, Tornado takes the place of Apache + mod_wsgi.
2
1
0
I see a lot of people using django with tornado using WSGIContainer. Does this make any sense? Tornado is supposed to be asynchronous, but Django is synchronous. Aren't you just shooting yourself in the foot by doing that?
Django with Tornado
0
0
0
523
12,231,053
2012-09-01T20:16:00.000
0
0
0
0
python,django,django-admin
12,232,286
1
true
1
0
Well, another one! There was a misspelling in the settings.py file in the TEMPLATE_DIRS entry. This appeared to be the source of the problem.
1
0
0
I am having a weird problem. I have a standard django 1.4 website with an admin section. Locally everything is working fine, but when I deploy online, after logging into the admin section the first time, it works. I log out, then log in again, and then I get redirected to the same login page! If I log in with incorrect credentials then sure enough the correct errors are shown. If I restart the apache production server then login/logout to the admin section works only one time, and then every login from then on produces the same problem. Has anyone had this before? Is it something to do with cookies or maybe caching problems? Note: The app has only one url redirecting to the admin, there are no other views. Also, production is using http and not https.
Django: admin login page redirecting to itself
1.2
0
0
628
12,231,412
2012-09-01T21:06:00.000
0
1
0
0
php,python
12,231,539
3
false
0
0
If you put an & on the end of any shell command it will run in the background and return immediately; that's all you really need.
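For instance (the script name is a placeholder), sending something like nohup python script.py > /dev/null 2>&1 & over the SSH channel detaches the script from the session; nohup plus the output redirects help ensure the remote shell returns immediately rather than waiting on the script's output.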
1
4
0
I am using phpseclib to ssh to my server and run a python script. The python script is an infinite loop, so it runs until you stop it. When I execute python script.py via ssh with phpseclib, it works, but the page just loads forever. It does this because phpseclib does not think it is "done" running the line of code that runs the infinite-loop script, so it hangs on that line. I have tried using exit and die after that line, but of course, it didn't work because it hangs on the line before, the one that executes the command. Does anyone have any ideas on how I can fix this without modifying the python file? Thanks.
PHP "Cancel" line of code
0
0
0
486
12,232,901
2012-09-02T02:01:00.000
0
0
0
0
python,pyglet,terrain,perlin-noise
14,346,374
2
false
0
0
You could also use 1D Perlin noise to calculate the radius from each point to the "center" of the island. It should be really easy to implement, but it will make more circular islands, and won't give each point a different height.
1
1
1
I have been experimenting with making a random map for a top-down RPG I am making in Python (and Pyglet). So far I have been making islands by starting at 0,0 and going in a random direction 500 times (x += 32 or y -= 32 sort of thing). However, this doesn't look very realistic, so I had a look at the Perlin noise approach. How would I get a randomly generated map out of this :/ (preferably an island), and is it better than the random direction method?
How to make a 2d map with perlin noise python
0
0
0
5,966
12,233,057
2012-09-02T02:48:00.000
-1
0
1
0
python,multithreading
12,233,083
2
false
0
0
Threads are designed to take advantage of multiple cores when they are available. If you only have one core, they'll run on one core too. :-) There's nothing to be concerned about, what you observe is "working as intended".
1
1
0
I am running a multithreaded application (Python 2.7.3) on an Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz. I thought it would be using only one core, but using the "top" command I see that the python processes are constantly changing core number. Enabling "SHOW THREADS" in the top command shows different threads working on different cores. Can anyone please explain this? It is bothering me, as I know from theory that multithreading is executed on a single core.
Python multithreading, How is it using multiple Cores?
-0.099668
0
0
1,246
12,233,151
2012-09-02T03:15:00.000
2
0
0
1
python,google-app-engine,inheritance,data-modeling
12,240,201
1
true
1
0
What you're talking about is inheritance hierarchies, but App Engine keys provide for object hierarchies. An example of the former is "a banana is a fruit", while an example of the latter is "a car has a steering wheel". Parent properties are the wrong thing to use here; you want to use PolyModel.
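A minimal sketch of what that hierarchy could look like with PolyModel on the old db API (the class and property names here are placeholders mirroring the question):

from google.appengine.ext import db
from google.appengine.ext.db import polymodel

class Fruit(polymodel.PolyModel):
    name = db.StringProperty()

class TreeBased(Fruit):
    pass

class Apple(TreeBased):
    pass

# Fruit.all() matches instances of every subclass, so a "basket"
# entity can hold keys to Fruit entities of any concrete kind.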
1
0
0
I am trying to model a parent hierarchy relationship in Google App Engine using Python. For example, I would like to model fruit. So the root would be fruit, and children of fruit would be vine-based and tree-based. Then, for example, children of tree-based would be apple, pear, banana, etc. Then as children of apple, I would like to add macintosh, golden delicious, granny smith, etc. I am trying to figure out the easiest way to model this such that I can put into another entity of type basket an entity of type fruit, or of type granny smith. Any help would be greatly appreciated! Thanks Jon
Python Parent Child Relationships in Google App Engine Datastore
1.2
0
0
299
12,234,623
2012-09-02T08:57:00.000
0
0
0
0
python,user-interface,wxpython,cross-platform
12,235,213
1
true
0
1
The great benefit of wxPython is its use of native widgets where possible, so on a GTK platform it'll use GTK, on Windows it will use win32, and on Mac it will use Cocoa/Carbon. If you aren't writing a cross-platform app, then you would be better off using the API specific to that platform, e.g. pyGTK (Gnome etc.), PyWin32 (Windows), or whatever the native toolkit is on your platform, as it is normally more up to date with the latest API than wxPython.
1
0
0
I'm a Python newbie but I'm interested in going into the depths of the language. I learned recently how to make simple GUI apps with wxPython and I loved it. I've read around that it is the best cross-platform GUI kit around - yet, there are better "native" GUI kits (pyGTK, IronPython (if I'm not mistaken), pyObjc, etc.), but they are individual. Is there a way I can "mix" those GUI libraries into a single app? How can I provide the best GUI experience in a cross-platform app? Any help or suggestions are greatly appreciated.
Multiple GUI Kits in a single Python app
1.2
0
0
294
12,237,658
2012-09-02T16:23:00.000
5
0
1
1
python,google-app-engine,app-engine-ndb
12,237,692
1
true
0
0
ndb is simply a wrapper API. The core datastore is based on protocol buffers, and doesn't care what you use to access it. In other words, yes AFAIK it should work just fine.
1
1
0
Switching to the ndb library on python GAE. Can I use ndb with entities that were created previously using the low-level api? Or do I have to copy all the old entities and re-save them in order to use ndb? Thanks!
Will ndb work with entities that were created without using ndb on GAE?
1.2
0
0
140
12,238,541
2012-09-02T18:19:00.000
1
0
0
0
python,pywinauto
12,282,267
2
false
0
1
This is very difficult. PywinAuto is one of the best ways to handle this kind of problem, but you have to be very careful about which Windows application you are working with. This is because not every Windows application will "publish" its controls in a reliable way for you to automate. This is particularly true of Mozilla Firefox. However, the Microsoft Office suite does consistently publish just about every control and button on each of its interfaces that I have ever seen. Thus, the real problem is not with PywinAuto, or even with Windows; it is with whoever wrote the application you are trying to automate and whether or not they reliably publish the interfaces you are trying to control. The other question you have to ask yourself is how you are populating the text fields and what is actually taking the time. Filling in fields and buttons should take a fraction of a second if they are independently workable. Otherwise, there is probably something else going on that you should investigate. Good luck. This is a really tough problem.
2
1
0
I'm trying to communicate with a Windows application from Python. I need to fill in text fields and retrieve results (which are also displayed in text fields). Currently I'm using PywinAuto; it works perfectly but it's too slow for my purpose. Filling in 6 text fields and pressing two buttons takes 2 to 3 seconds... I'm looking for a way to speed this up. What is the fastest way to control and retrieve data from a Windows application that is feasible for a beginner in Python? Thanks in advance.
Python: communicating with window application
0.099668
0
0
974
12,238,541
2012-09-02T18:19:00.000
1
0
0
0
python,pywinauto
12,903,265
2
false
0
1
I have been using pywinauto for 1.5 years, and I have tried lots of different tools for UI automation. You know what, pywinauto is not the slowest among them. Of course some actions can take a long time (seconds), but as a rule those are heavyweight actions, such as counting children, etc. Please be sure you do not call the findwindows method when it is not really needed.
2
1
0
I'm trying to communicate with a Windows application from Python. I need to fill in text fields and retrieve results (which are also displayed in text fields). Currently I'm using PywinAuto; it works perfectly but it's too slow for my purpose. Filling in 6 text fields and pressing two buttons takes 2 to 3 seconds... I'm looking for a way to speed this up. What is the fastest way to control and retrieve data from a Windows application that is feasible for a beginner in Python? Thanks in advance.
Python: communicating with window application
0.099668
0
0
974
12,239,044
2012-09-02T19:26:00.000
5
0
1
0
python,autocomplete,ide,autosuggest,pycharm
12,239,453
1
true
0
0
In the Project sidebar, right-click on the desired folder (src in this case) and from the context menu choose Mark directory as -> Source root. It will turn blue.
1
0
0
Is there a way to have PyCharm do auto-completion when specifying imports of packages defined in the project? PyCharm seems to do auto-complete just fine for library packages, but not for the ones that have been defined in the project. Example project structure: ProjectName > src > package_1 __init__.py package_file.py __init__.py source_1.py If within the file 'source_1.py' I type the following, I get no auto-suggest for the rest of the word 'package_1': from packa
Get PyCharm to do auto-suggest when importing packages defined in the project
1.2
0
0
1,533
12,239,080
2012-09-02T19:31:00.000
7
0
1
0
python,speech-to-text
12,239,939
8
false
0
0
If you really want to understand speech recognition from the ground up, look for a good signal processing package for python and then read up on speech recognition independently of the software. But speech recognition is an extremely complex problem (basically because sounds interact in all sorts of ways when we talk). Even if you start with the best speech recognition library you can get your hands on, you'll by no means find yourself with nothing more to do.
2
26
0
I would like to know where one could get started with speech recognition. Not with a library or anything that is fairly "black boxed"; instead, I want to know where I can actually make a simple speech recognition script. I have done some searching and found, not much, but what I have seen is that there are dictionaries of 'sounds' or syllables that can be pieced together to form text. So basically my question is where can I get started with this? Also, since this is a little optimistic, I would also be fine with a library (for now) to use in my program. I saw that some speech-to-text libraries and APIs spit out only one result. This is ok, but it would be unreliable. My current program already checks the grammar and everything of any text entered, so that way if I were to have, say, the top ten results from the speech-to-text software, then it could check each and rule out any that don't make sense.
Getting started with speech recognition and python
1
0
0
81,327
12,239,080
2012-09-02T19:31:00.000
0
0
1
0
python,speech-to-text
59,616,553
8
false
0
0
This may be the most important thing to learn: the elementary concepts of Signal Processing, in particular, Digital Signal Processing (DSP). A little understanding of the abstract concepts will prepare you for the bewildering cornucopia of tools in, say, scipy.signal. First is Analog to Digital conversion (ADC). This is really in the domain of Audio Engineering, and is, nowadays, part of the recording process, even if all you are doing is hooking a microphone to your computer. If you are starting with analog recordings, this may be a question of converting old tapes or vinyl long-playing records to digital form, or extracting the audio from old video tapes. It is easiest to just play the source into the audio input jack of your computer and use the built-in hardware and software to capture a raw Linear Pulse Code Modulation (LPCM) digital signal to a file. Audacity, which you mentioned, is a great tool for this, and more. The Fourier Transform is your friend. In Data Science terms it is great for feature extraction and feature-space dimension reduction, particularly if you're looking for features that span changes in sound over the course of the entire sample. No space to explain here, but raw data in the time domain is much harder for machine learning algorithms to deal with than raw data in the frequency domain. In particular you will be using the Fast Fourier Transform (FFT), a very efficient form of the Discrete Fourier Transform (DFT). Nowadays the FFT is usually done in DSP hardware.
2
26
0
I would like to know where one could get started with speech recognition. Not with a library or anything that is fairly "black boxed"; instead, I want to know where I can actually make a simple speech recognition script. I have done some searching and found, not much, but what I have seen is that there are dictionaries of 'sounds' or syllables that can be pieced together to form text. So basically my question is where can I get started with this? Also, since this is a little optimistic, I would also be fine with a library (for now) to use in my program. I saw that some speech-to-text libraries and APIs spit out only one result. This is ok, but it would be unreliable. My current program already checks the grammar and everything of any text entered, so that way if I were to have, say, the top ten results from the speech-to-text software, then it could check each and rule out any that don't make sense.
Getting started with speech recognition and python
0
0
0
81,327
12,241,945
2012-09-03T04:22:00.000
1
0
0
0
python,amazon-s3,amazon-web-services
12,242,133
1
false
1
0
I'd save time and not do anything. The wait times are pretty fast. If you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload. If you really felt like you had to... I'd probably go with a javascript solution like this: have a 'timestamp uploaded' column in your data store; if the upload time is under 1 minute, instead of rendering an img src tag... render some javascript that polls the s3 bucket at 15s intervals. Again, chances are most users will never experience this - and if they do, they won't really care. The UX expectations of user-generated content are pretty low (just look at Facebook); if this is an admin backend for an 'enterprise' service where it would make the workflow better, you may want to invest time in the 'optimal' solution. For a public-facing website though, I'd just forget about it.
1
0
0
Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python. Some hacked solutions that came to mind: Display the uploaded image from a local directory until the S3 image is ready, then "hand it off" and update the database to reflect the change. Display a "waiting" progress indicator as a background gif and the image will just appear when it's ready (w/ JavaScript)
What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard?
0.197375
1
1
323
12,241,960
2012-09-03T04:24:00.000
0
0
1
1
python,c,debugging,gdb,interactive
12,267,951
2
false
0
0
If you're running the python process interactively from a terminal, just run "gdb" normally (e.g. using os.system()) as a child of the python process. It will inherit the terminal and take over stdio just as it would if it was run from the shell. (And if you aren't running it interactively, you'll need to explain what exactly you mean by "give me a gdb prompt".) Also, if you know you're going to be debugging the process in the future, it's probably best to spawn the process under gdb in the first place rather than go through the work of getting it to stop at the right spot and attaching to it.
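A minimal sketch of that (the pid is a placeholder; gdb must be installed and you need permission to attach to the process):

import os

child_pid = 12345                    # placeholder: pid of the process to debug
os.system('gdb -p %d' % child_pid)   # gdb inherits the terminal and takes over stdio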
1
3
0
I have a script in python which spawns a new process that I want to debug in gdb. Generally I follow the usual process to debug this child process: put a sleep in the process until some condition is true, then attach gdb through the pid of the process in a different session, set some breakpoints, and make the condition true so that it continues after the sleep. I want to do this in an automated way; say, the python script itself spawns a new gdb process and gives me a gdb prompt? I know a little about curses so maybe I can do something with that. But the main problem is how to spawn an interactive process (gdb here) in python and how to give the gdb prompt to the user; I don't have much idea. Any help is appreciated.
How to write a automated tool for debugging a child process through gdb
0
0
0
395
12,242,054
2012-09-03T04:37:00.000
1
1
0
0
python,web,machine-learning,artificial-intelligence
12,243,670
1
true
1
0
I assume you are mostly concerned with a general approach to implementing AI in a web context, and not with the details of the AI algorithms themselves. Any computable algorithm can be implemented in any Turing-complete language (i.e. all modern programming languages). There are no special limitations on what you can do on the web; it's just a matter of representation, and keeping track of session-specific data and shared data. Also, there is no need to shy away from "calculation" and "graph based" algorithms; most AI algorithms will be either one or the other (or indeed both) - and that's part of the fun. For example, as an overall approach for a neural net, you could: Implement a standard neural network using python classes. Possibly train the net with historical data. Load the state of the net on each request (i.e. from a pickle). Feed a part of the request string (i.e. a product ID) to the net, and output the result (i.e. a weighted set of other products, like "users who clicked this, also clicked this"). Also, store the relevant part of the request (i.e. the product ID) in a session variable (i.e. "previousProduct"). When a new request (i.e. for another product) comes in from the same user, strengthen/create the connection between the first product and the next. Save the state of the net between each request (i.e. back to the pickle). That's just one, very general example. But keep in mind - there is nothing special about web programming in this context, except keeping track of session-specific data and shared data.
1
0
1
I am a die-hard fan of artificial intelligence and machine learning. I don't know much about them, but I am ready to learn. I am currently a web programmer in PHP, and I am learning python/django for a website. Now, as this AI field is very wide and there are countless algorithms, I don't know where to start. But eventually my main target is to use whichever algorithms - like genetic algorithms, neural networks, optimization - can be programmed in a web application to show some stuff. For example: recommendation of items on amazon.com. Now what I want is that on my personal site I have a demo of each algorithm, where if I click run I can show someone what this algorithm can do. So can anyone please guide me on which algorithms I should study for web-based applications? I see lots of examples in the scikit python library, but they are very calculation- and graph-based. I don't think I can use them from a web point of view. Any ideas on how I should proceed?
What algorithms i can use from machine learning or Artificial intelligence which i can show via web site
1.2
0
0
1,093
12,243,454
2012-09-03T07:13:00.000
0
0
0
0
python,wxpython
12,266,400
1
true
0
1
I would recommend always using "|". The only time I've seen other bitwise operators used is in the AGW library's demos in the wxPython demo. I doubt there's a difference in speed though. The "|" is the only one I see used regularly and since Robin Dunn (creator of wxPython) is always using it, I think we should too.
1
0
0
I was playing with wx.Frame.SetWindowStyleFlag() and noticed that to add a new flag I can use either '+' or '|', both producing the same result. My question is, is there a situation where these yield different results? And is there any performance difference between the two? I noticed that books like the wxPython Application Development Cookbook use '|' instead of '+'.
wxPython adding Flag
1.2
0
0
111
12,245,859
2012-09-03T10:14:00.000
0
0
0
0
python,numpy,scipy,probability,numerical-methods
12,283,724
3
false
0
0
Another possibility would be to integrate x -> f(H(x)), where H is the inverse of the cumulative distribution of your probability distribution. [This is because of the change of variable: substituting y = CDF(x) and noting that p(x) = CDF'(x) yields dy = p(x)dx, and thus int{f(x)p(x)dx} == int{f(H(y))dy}, with H the inverse of the CDF.]
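For illustration, a sketch of this change of variables with scipy, assuming an Expon(1) weight and a toy integrand (for Expon(1), E[x^2] should come out close to 2.0; quad may need extra care near y = 1 for heavier-tailed distributions, since H(y) diverges there):

from scipy import integrate, stats

f = lambda x: x ** 2          # toy integrand
H = stats.expon.ppf           # inverse CDF of Expon(1)
value, abserr = integrate.quad(lambda y: f(H(y)), 0, 1)
# value approximates int f(x) p(x) dx, i.e. E[f(X)] for X ~ Expon(1)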
2
1
1
I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx on [a,b] implicitly uses the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential). I can do it myself, using np.random.*, but then I miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density? I need to do the error estimation myself, which is not trivial. Or maybe it is? Maybe the error is just the variance of sum(f(x))/n? Any ideas?
Integrating a function using non-uniform measure (python/scipy)
0
0
0
545
12,245,859
2012-09-03T10:14:00.000
0
0
0
0
python,numpy,scipy,probability,numerical-methods
12,268,227
3
false
0
0
Just for the sake of brevity, four ways were suggested for calculating the expected value of f(x) under the probability p(x): Assuming p is given in closed form, use scipy.integrate.quad to evaluate f(x)p(x). Assuming p can be sampled from, sample N values X = P(N), then evaluate the expected value by np.mean(f(X)) and the error by np.std(f(X))/np.sqrt(N). Assuming p is available as stats.norm, use stats.norm.expect(f). Assuming we have the CDF(x) of the distribution rather than p(x), calculate H = Inverse[CDF] and then integrate f(H(x)) using scipy.integrate.quad.
2
1
1
I would like to integrate a function in python and provide the probability density (measure) used to sample values. If it's not obvious, integrating f(x)dx on [a,b] implicitly uses the uniform probability density over [a,b], and I would like to use my own probability density (e.g. exponential). I can do it myself, using np.random.*, but then I miss the optimizations available in scipy.integrate.quad. Or maybe all those optimizations assume the uniform density? I need to do the error estimation myself, which is not trivial. Or maybe it is? Maybe the error is just the variance of sum(f(x))/n? Any ideas?
Integrating a function using non-uniform measure (python/scipy)
0
0
0
545
12,245,999
2012-09-03T10:23:00.000
6
0
0
1
python,django,rabbitmq,celery,django-celery
12,246,221
1
false
1
0
It really depends on the size of the project; ideally you have RabbitMQ, celery workers and web workers running on different machines. You need only one RabbitMQ and eventually multiple queue workers (bigger queues need more workers, of course). You don't need one celery worker per web worker; the web workers are going to publish tasks to the broker and then the workers will get them from there. In fact, the web worker does not care about the number of workers connected to the broker, as it only communicates with the broker. Of course, if you are starting a project it makes sense to keep everything on the same hardware, keep the budget low, and wait for the traffic and the money to flow :) You want to have the same code on every running instance of your app, no matter if they are celery workers, web servers or whatever.
1
5
0
I am trying to deploy a django project, but I have these questions unsolved: Should I run one celeryd for each web server? Should I run just one RabbitMQ server on another machine (not running celeryd there), accessible to all my web servers? Or must RabbitMQ also be run on each of the web servers? How can I use periodic tasks if the code is the same on all web servers? Thanks for your answers.
django-celery in multiple server production environment
1
0
0
1,600
12,251,490
2012-09-03T16:37:00.000
1
0
0
0
python,http,authentication,cherrypy
12,255,276
3
false
0
0
I found the username encoded in the HTTP request header Authorization, and I am able to parse it from there. If there's a "better" place to obtain the username, I'm open to improvements!
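For illustration, a rough sketch of pulling the username out of the digest Authorization header inside a CherryPy handler (just one way to parse it, and the function name is a placeholder):

import cherrypy

def current_username():
    # digest headers look like: Digest username="alice", realm="...", ...
    auth = cherrypy.request.headers.get('Authorization', '')
    if auth.lower().startswith('digest '):
        auth = auth[len('digest '):]
    for part in auth.split(','):
        key, _, value = part.strip().partition('=')
        if key == 'username':
            return value.strip('"')
    return None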
1
2
0
I have a CherryPy application running successfully using the built-in digest authentication tool and no session support. Now, I would like to expose additional features to certain users. Is it possible to obtain the currently-authenticated user from the authorization system?
How to get username with CherryPy digest authentication
0.066568
0
1
2,411
12,252,492
2012-09-03T18:18:00.000
0
0
0
0
python,user-interface,wxpython,wxglade
12,266,335
2
false
0
1
The ListCtrl doesn't support that in report mode. I suppose you might be able to do it with one of the other style flags though. However Joran has the right idea. However, I would create a series of wx.Image or wx.StaticBitmap widgets and add them to a horizontal BoxSizer instead of what he did.
1
0
0
I want to display a list of icons in a horizontal format in wxpython. I'm using wxglade and I can't find how to set the list's orientation. Each item has an icon, and below that it has a caption. Is this kind of design possible?
wxpython horizontal listctrl
0
0
0
180
12,253,063
2012-09-03T19:23:00.000
1
1
0
0
python,rabbitmq,message-queue,django-celery
12,258,879
1
true
1
0
Try the Pika client or the Kombu client. Celery is a whole framework for job queues, which you may not need - but it's worth taking a look if you want to understand a queue use case.
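To make the suggestion concrete, a minimal sketch of queueing a message with Kombu's simple-queue API (the broker URL, queue name and payload are placeholders; Pika would be the lower-level AMQP alternative):

from kombu import Connection

with Connection('amqp://guest:guest@localhost//') as conn:
    queue = conn.SimpleQueue('outgoing_messages')
    queue.put({'type': 'smtp', 'payload': '...'})  # queued until a consumer drains it
    queue.close()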
1
0
0
I have a system that sends different types of messages (HTTP, SMTP, POP, IMAP, and regular TCP) to different systems, and I need to queue all of those messages in my system, in case the other systems are unavailable. I'm a bit new to the message queueing concept, so I don't know the best python library to go for. Is django-celery (and the underlying components - RabbitMQ, MySQL, django, apache) the best choice for me? Will this library cover all my needs?
Queueing HTTP, emails, and TCP messages in Python
1.2
0
0
169
12,254,516
2012-09-03T22:05:00.000
3
0
0
0
python,pygame
12,255,531
3
false
0
1
gfxdraw allows anti-aliasing for all shapes, whereas draw only adds antialiasing to lines. I use gfxdraw by default. It has been marked 'experimental' for a long time now, but I've not had any issues.
2
5
0
My question is simple: What is the difference between using pygame.draw and pygame.gfxdraw? I've looked on the pygame documentation, but no one has told what the difference between them is.
Difference between pygame.draw and pygame.gfxdraw
0.197375
0
0
2,283
12,254,516
2012-09-03T22:05:00.000
7
0
0
0
python,pygame
12,273,374
3
true
0
1
The draw module is a bit more stable, and also a bit faster than gfxdraw, although gfxdraw has more options, not only along the lines of antialiasing, but also for drawing shapes. pygame.draw has only nine functions, whereas pygame.gfxdraw has 22, which include all 9 of the options supplied by draw. I recommend using gfxdraw even though it is "experimental", because it has more capabilities.
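A minimal sketch of the two side by side (window size, colors and positions are arbitrary):

import pygame
from pygame import gfxdraw

pygame.init()
screen = pygame.display.set_mode((200, 200))

# plain, aliased circle via pygame.draw
pygame.draw.circle(screen, (255, 0, 0), (50, 100), 30)

# anti-aliased outline plus filled interior via pygame.gfxdraw
gfxdraw.aacircle(screen, 150, 100, 30, (255, 0, 0))
gfxdraw.filled_circle(screen, 150, 100, 30, (255, 0, 0))

pygame.display.flip()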
2
5
0
My question is simple: What is the difference between using pygame.draw and pygame.gfxdraw? I've looked on the pygame documentation, but no one has told what the difference between them is.
Difference between pygame.draw and pygame.gfxdraw
1.2
0
0
2,283
12,260,983
2012-09-04T09:56:00.000
1
1
1
0
python,pycharm,nose
12,261,280
1
true
0
0
I think it happens because PyCharm has its own "copy" of the interpreter, which has its own version of the sys path, where your project's root is set one level below the PythonPlayground dir. You can find the interpreter preferences in PyCharm for your project and set the proper top level. P.S. I have the same problem, but in Eclipse + pydev.
1
0
0
I am pretty new to Python. Currently I am trying out PyCharm and I am encountering some weird behavior that I can't explain when I run tests. The project I am currently working on is located in a folder called PythonPlayground. This folder contains some subdirectories. Every folder contains an __init__.py file. Some of the folders contain nosetest tests. When I run the tests with the nosetest runner from the command line inside the project directory, I have to put "PythonPlayground" in front of all my local imports. E.g. when importing the module called "model" in the folder "ui" I have to import it like this: from PythonPlayground.ui.model import * But when I run the tests from inside PyCharm, I have to remove the leading "PythonPlayground" again, otherwise the tests don't work. Like this: from ui.model import * I am also trying out the mock framework, and for some reason this framework always needs the complete name of the module (including "PythonPlayground"). It doesn't matter whether I run the tests from the command line or from inside PyCharm: with patch('PythonPlayground.ui.models.User') as mock: Could somebody explain the difference in behavior to me? And what is the correct behavior?
Nosetest & import
1.2
0
0
514
12,265,561
2012-09-04T14:24:00.000
0
0
0
0
python,websocket,eventlet
12,266,453
1
false
1
0
I think the most efficient way to do this is to have client apps tell the server what they are displaying. The server keeps track of this and sends changes only for the objects currently viewed, and only to the concerned client. A way to do this is by using a "Who Watch What" list of items. Items are indexed in two ways: from the client ID, and with an isViewedBy chain list inside each data object (I know it doesn't look clean to mix it with data, but it is very efficient). You'll also need a lastUpdate timestamp for each data object. When a client changes view, it sends an "I'm viewing this, of which I have the version <timestamp>" message to the server. The server checks the timestamp and sends back the object if required. It also removes obsolete "Who Watch What" items (accessing them by client ID) and creates the new ones. When a data object is updated, loop through the isViewedBy chain list of this object to know which clients should be updated. Put this in message buffers for each client and flush those buffers manually (in case you update several items at the same time, it will send one big message). This is a lot of work, but your app will be efficient and scale gracefully, even with lots of objects and lots of clients. It sends only useful messages, and it is very unlikely that there will be too many of them. For your onMessage problem, I would store the data in a queue and process it asynchronously.
1
1
0
I have a web app where I am streaming model changes to a backbone collection in a chrome client. There are a few backbone views that may or may not render parts of the page depending on the type of update and what is being looked at. For example, some changes to a model result in the view for the collection being re-rendered, and there may or may not be a detail panel view open for the model that's being updated. These model changes can happen very fast, as the server-side workflow involves quite verbose and rapid changes to the model. Here's the problem: I'm getting a large number of errno 32 (broken pipe) messages in the webserver's process when sending messages to the client, although the websocket connection is still up and its readyState is still 1 (OPEN). What I suspect is happening is that the various views haven't finished rendering in the onmessage callback by the time the next message comes in. After I get these tracebacks in stdout, the websocket connection can still work and the UI will still update. If I put eventlet.sleep(0.02) in the loop that reads model changes off the message queue and sends them on the websocket, the broken pipe messages go away; however, this isn't a real solution and feels like a nasty hack. Has anyone had similar problems with websocket's onmessage function trying to do too much work and still being busy when the next message comes in? Does anyone have a solution?
What's the preferred method for throttling websocket connections?
0
0
0
1,564
12,266,016
2012-09-04T14:47:00.000
0
0
0
0
python,database,sqlite,instance
12,268,131
2
true
0
0
You need an ORM. Either you roll your own (which I never suggest), or you use one that exists already. Probably the two most popular in Python are sqlalchemy, and the ORM bundled with Django.
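To make the suggestion concrete, here is a minimal sketch with SQLAlchemy's declarative ORM (the class and column names mirror the question and are otherwise placeholders, not a full mapping of the game's classes):

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Entity(Base):
    __tablename__ = 'entities'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    level = Column(Integer)

class Player(Base):
    __tablename__ = 'players'
    steamid = Column(String, primary_key=True)   # unique per CS player
    entity_id = Column(Integer, ForeignKey('entities.id'))
    entity = relationship(Entity)                # player.entity loads the row back

engine = create_engine('sqlite:///players.db')   # local sqlite file
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)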
1
1
0
I'm creating a game mod for Counter-Strike in python, and it's basically all done. The only thing left is to code a REAL database, and I don't have any experience with sqlite, so I need quite a lot of help. I have a Player class with the attribute self.steamid, which is unique for every Counter-Strike player (received from the game engine), and self.entity, which holds an "Entity" for the player; the Entity class has lots and lots more attributes, such as level, name and loads of methods (Entity is a self-made Python class). What would be the best way to implement a database? First of all, how can I save instances of Player, with another instance of Entity as their attribute, into a database, powerfully? Also, I will need to get that user's data every time he connects to the game server (I have a player_connect event), so how would I retrieve the data back? All the tutorials I found only taught about saving strings or integers, but nothing about whole instances. Will I have to save every attribute of all instances (the Entity instance has a few more instances as its attributes, and all of them have huge numbers of attributes...), or is there a faster, easier way? Also, it's going to be a locally saved database, so I can't really use any other languages than SQL.
Python sqlite3, saving an instance of a class with another instance as its attribute?
1.2
1
0
356
12,272,856
2012-09-04T23:59:00.000
2
0
1
0
python,ruby
12,272,878
7
false
0
0
Why should it work? String classes rarely have void print methods - and you would never need them, because the standard static print function can print those strings anyway. It is important to note: method(someObject) is not necessarily the same as someObject.method().
3
7
0
My understanding of the print() in both Python and Ruby (and other languages) is that it is a method on a string (or other types). Because it is so commonly used the syntax: print "hi" works. So why doesn't "hi".print() in Python or "hi".print in Ruby work?
Why can't I "string".print()?
0.057081
0
0
671
12,272,856
2012-09-04T23:59:00.000
0
0
1
0
python,ruby
12,272,883
7
false
0
0
print isn't a method on a string in Python (or in Ruby, I believe). It's a statement (in Python 3 it's a global function). Why? For one, not everything you can print is a string. How about print 2?
3
7
0
My understanding of the print() in both Python and Ruby (and other languages) is that it is a method on a string (or other types). Because it is so commonly used the syntax: print "hi" works. So why doesn't "hi".print() in Python or "hi".print in Ruby work?
Why can't I "string".print()?
0
0
0
671