Q_Id (int64: 337 to 49.3M) | CreationDate (string: length 23) | Users Score (int64: -42 to 1.15k) | Other (int64: 0 to 1) | Python Basics and Environment (int64: 0 to 1) | System Administration and DevOps (int64: 0 to 1) | Tags (string: length 6 to 105) | A_Id (int64: 518 to 72.5M) | AnswerCount (int64: 1 to 64) | is_accepted (bool: 2 classes) | Web Development (int64: 0 to 1) | GUI and Desktop Applications (int64: 0 to 1) | Answer (string: length 6 to 11.6k) | Available Count (int64: 1 to 31) | Q_Score (int64: 0 to 6.79k) | Data Science and Machine Learning (int64: 0 to 1) | Question (string: length 15 to 29k) | Title (string: length 11 to 150) | Score (float64: -1 to 1.2) | Database and SQL (int64: 0 to 1) | Networking and APIs (int64: 0 to 1) | ViewCount (int64: 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,805,789 |
2013-02-11T04:13:00.000
| 0 | 0 | 1 | 0 |
python,string
| 14,805,802 | 4 | false | 0 | 0 |
array[begin:end:step]
word[1:9:2] means you begin at index 1, and up until index 9, take every second letter.
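A minimal sketch of that slice in action (using the word from the question):
word = "helloworld"
# indices 1, 3, 5, 7 are taken: start at 1, stop before 9, step by 2
print(word[1:9:2])  # -> elwr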
| 1 | 2 | 0 |
I have a Python string:
word = "helloworld"
The answer for
word[1:9:2] is given as "elwr". How is this happening? Thank you!!
|
How Python String array work?
| 0 | 0 | 0 | 4,999 |
14,813,494 |
2013-02-11T13:57:00.000
| 0 | 0 | 0 | 1 |
python,fortran,f2py
| 14,830,945 | 1 | false | 0 | 0 |
This problem is solved in the following way:
All instances where the commercial FFT library is called are replaced by calls to a free FFT library (in this case FFTW3). Of course, ' include "fftw3.f" ' is placed at the top of the Fortran subroutines where necessary.
The extension module is created using f2py. The first line creates the signature file, and in the second line the extension module is compiled. Note that we linked the external library in the process - this was not done previously, which caused the stated problems.
f2py -m splib -h splib.pyf splib.f
f2py -c splib.pyf splib.f -lfftw3
| 1 | 0 | 0 |
I have a Fortran file with a lot of useful subroutines, and I want to make a Python interface to it using f2py.
The problem arises because some Fortran subroutines call the FFT subroutine from the NAG library (named c06ebf). When imported into Python, this produces the 'undefined symbol: c06ebf' warning.
Is there another way to perform the FFT within my Fortran subroutine while still being able to create a Python interface to it using f2py?
|
Excluding a call to a subroutine from a commercial library
| 0 | 0 | 0 | 198 |
14,815,856 |
2013-02-11T15:59:00.000
| 1 | 0 | 0 | 0 |
python,html,compare,match
| 14,815,973 | 2 | true | 1 | 0 |
Import urllib and use urllib.urlopen to get the contents of the HTML page. Import re to search for the hash code using a regex. You could also use the find method on the string instead of a regex.
If you encounter problems, you can then ask more specific questions. Your question is too general.
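A minimal sketch along those lines (Python 2; the URL and the md5 value are hypothetical placeholders):
import urllib
import re
page = urllib.urlopen("http://example.com/download.html").read()  # whole page contents
file_md5 = "d41d8cd98f00b204e9800998ecf8427e"  # md5 computed from the downloaded file
# either search with a regex...
found = re.search(re.escape(file_md5), page) is not None
# ...or simply use str.find, which returns -1 when the substring is absent
found = page.find(file_md5) != -1
print found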
| 1 | 0 | 0 |
I am making a download manager, and I want the download manager to check the md5 hash of a URL after downloading the file. The hash is found on the page. It needs to compute the md5 of the file (this is done), then search the whole contents of the HTML page for a match.
My question is: how do I make Python return the whole contents of the HTML page and find a match for my "md5 string"?
|
Matching contents of an html file with keyword python
| 1.2 | 0 | 1 | 168 |
14,816,700 |
2013-02-11T16:44:00.000
| 1 | 0 | 1 | 0 |
python,profiling,yappi
| 20,193,234 | 1 | false | 0 | 0 |
This issue is probably fixed in the latest repository head. Other than that, yappi does not accumulate the timing of time.sleep() or any other blocking call in its output when operating in CPU clock mode. See the get_clock_type() API of yappi.
| 1 | 2 | 0 |
I am running the yappi Python profiler in a multi-threaded process and I get weird results when printing with yappi.print_stats(). Some methods repeat more than once, and in each of the lines they show different ttot and ncalls. Some methods surprisingly show tsub equal to 0, where they certainly should not.
Could you explain these phenomena?
|
Yappi returns weird results
| 0.197375 | 0 | 0 | 952 |
14,817,288 |
2013-02-11T17:17:00.000
| 3 | 0 | 1 | 1 |
python,setuptools
| 14,817,342 | 1 | false | 0 | 0 |
Just package your app and put it on PyPI. Trying to automatically package the code running on the server seems over-engineered. Then you can let people use pip to install your app. In your app, provide a link to the PyPI page.
Then you can also add dependencies in the setup.py, and pip will install them for you. It seems like you are trying to build your own packaging infrastructure, but you don't have to. Use what's out there.
| 1 | 1 | 0 |
I'm writing a small web app that I'd like to include the ability to download itself. The ideal solution would be for users to be able to "pip install" the full app but that users of the app would be able to download a version of it to use themselves (perhaps with reduced functionality or without some of the less essential dependencies).
I'm currently using Bottle as I'd like to keep everything as close to the standard library as possible. Users could be on different platforms or Python versions, which are other reasons for minimising the use of extra modules. (Though I'll assume 2.7 or 3.3 will be in use regardless of platform).
My current thinking is to have the app use __file__ or similar and zip itself up. It could also use setuptools/distribute and call sdist on itself. Users could then execute the zip file, or install the app using the source distribution. (ideally I'd like to provide both of these options).
The app would include aggressive import checking to fallback to available modules, with Bottle being the only requirement (and would be included in the downloaded file).
Can anyone think of a robust approach to providing this functionality?
Update: users of the app cannot be guaranteed to have internet access at all times, hence the requirement for being able to download a version of the app from someone who has previously installed it. Python experience cannot be assumed either, hence the idea of letting users run python -m myApp.zip to run their own version.
Update II: as the level of python experience also cannot be guaranteed I'd want the simplest way for a user to get a mostly working version of the app. Experienced users would then be free to 'upgrade' the app by installing their own choice of additional modules. The vast majority of these would be different servers to host the app from (CherryPy, Twisted, etc) and so would not strictly count as a dependency but a "nice to have".
Update III: based on the answer below I will look into a PyPI/buildout based solution but would still be interested in whether there is a specific solution to the above approach.
|
Python web app that can download itself
| 0.53705 | 0 | 0 | 142 |
14,817,290 |
2013-02-11T17:17:00.000
| 0 | 1 | 1 | 0 |
python,cgi
| 14,817,753 | 3 | false | 0 | 0 |
The simple (and slow) way is to acquire a lock on the file (in C you'd use flock), write to it and close it. If you think this can become a bottleneck, then use a database or something like that.
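A minimal sketch of the Python equivalent using the standard library's fcntl module (Unix/Linux only; the filename is a hypothetical placeholder):
import fcntl

with open("/tmp/report.txt", "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)  # block until we hold an exclusive lock
    f.write("one form submission\n")
    fcntl.flock(f, fcntl.LOCK_UN)  # release; closing the file would also release it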
| 2 | 0 | 0 |
I have a simple CGI script in Python collecting values from form fields submitted through POST. After collecting them, I am dumping these values to a single text file.
Now, when multiple users submit at the same time, how do we handle it?
In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in Python? Also, opening and closing the file multiple times, for every user request, doesn't seem to be a good idea.
We have our code base for our product in C/C++. I was asked to write a simple CGI script for reporting purposes and was googling around with Python and CGI.
Please let me know.
Thanks!
Santhosh
|
multiple users doing form submission with python CGI
| 0 | 0 | 0 | 300 |
14,817,290 |
2013-02-11T17:17:00.000
| 0 | 1 | 1 | 0 |
python,cgi
| 14,817,362 | 3 | false | 0 | 0 |
If you're concerned about multiple users, and considering complex solutions like mutexes or semaphores, you should ask yourself why you're planning on using an unsuitable solution like CGI and text files in the first place. Any complexity you're saving by doing this will be more than outweighed by whatever you put in place to allow multiple users.
The right way to do this is to write a simple WSGI app - maybe using something like Flask - which writes to a database, rather than a text file.
| 2 | 0 | 0 |
I have a simple CGI script in Python collecting values from form fields submitted through POST. After collecting them, I am dumping these values to a single text file.
Now, when multiple users submit at the same time, how do we handle it?
In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in Python? Also, opening and closing the file multiple times, for every user request, doesn't seem to be a good idea.
We have our code base for our product in C/C++. I was asked to write a simple CGI script for reporting purposes and was googling around with Python and CGI.
Please let me know.
Thanks!
Santhosh
|
multiple users doing form submission with python CGI
| 0 | 0 | 0 | 300 |
14,817,747 |
2013-02-11T17:43:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 14,818,107 | 3 | false | 0 | 0 |
I would think that FooTester would be part of the foo package and BarTester part of the bar package. (Or perhaps each in its own package/module.)
Given the requirement that teams that use the foo package don't install the bar package, it seems weird to put the testers for both in a tools package that both use.
| 1 | 0 | 0 |
I can’t imagine this is an original question, but I can’t seem to find an answer. I must be using the wrong search terms.
Our team is developing a Python utility package. We’ll call it “tools”. Various classes in “tools” require other packages and modules.
“tools” has several classes, with names like:
Parser
Logger
FooTester
BarTester
There are 2 other packages. One is called “foo”, and the other is “bar”. Some teams only use the “foo” package, some teams only use “bar” package. It is a requirement that teams that only use the “foo” package don’t install the “bar” package.
The “FooTester” class requires the “foo” package, as it has a “import foo” in it.
The “BarTester” class requires the “bar” package, as it has a “import bar” in it.
Both teams want to put the following at the top of their script: “import tools”, then use their respective Tester class.
As we have it right now, you can’t do that unless you have the “bar” and the “foo” package installed.
What is the standard way of doing this? Is there one?
|
managing a python package that depends on other packages
| 0 | 0 | 0 | 215 |
14,820,071 |
2013-02-11T20:15:00.000
| 0 | 0 | 0 | 0 |
python,apache
| 14,820,176 | 1 | true | 1 | 0 |
You need to choose a web framework: CherryPy, Pylons, Django.
| 1 | 0 | 0 |
I want to create a .py file and display simple HTML code, just like I'd open any PHP file. I've put file.py inside the c:/xampp/cgi-bin directory and I've enabled the .py extension in the Apache configs, but... am I doing this the right way? What next?
How do I open this file? localhost/cgi-bin/file.py displays an internal server 500 error with the note "Apache/2.4.2 (Win32) OpenSSL/1.0.1c PHP/5.4.4" at the bottom.
|
Python with xampp
| 1.2 | 0 | 0 | 89 |
14,822,184 |
2013-02-11T22:33:00.000
| 72 | 0 | 1 | 0 |
python,python-3.x
| 14,822,215 | 8 | true | 0 | 0 |
There is no operator which divides with ceil. You need to import math and use math.ceil
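A minimal sketch (Python 3, where / returns a float):
import math
print(math.ceil(7 / 2))  # -> 4, whereas 7 // 2 gives 3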
| 3 | 183 | 0 |
I found out about the // operator in Python which in Python 3 does division with floor.
Is there an operator which divides with ceil instead? (I know about the / operator which in Python 3 does floating point division.)
|
Is there a ceiling equivalent of // operator in Python?
| 1.2 | 0 | 0 | 118,598 |
14,822,184 |
2013-02-11T22:33:00.000
| 31 | 0 | 1 | 0 |
python,python-3.x
| 14,822,457 | 8 | false | 0 | 0 |
You could do (x + (d-1)) // d when dividing x by d, e.g. (x + 4) // 5.
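A quick check of that identity against math.ceil (Python 3; the values are arbitrary):
import math
d = 5
for x in range(0, 25):
    assert (x + (d - 1)) // d == math.ceil(x / d)  # holds for non-negative x and positive d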
| 3 | 183 | 0 |
I found out about the // operator in Python which in Python 3 does division with floor.
Is there an operator which divides with ceil instead? (I know about the / operator which in Python 3 does floating point division.)
|
Is there a ceiling equivalent of // operator in Python?
| 1 | 0 | 0 | 118,598 |
14,822,184 |
2013-02-11T22:33:00.000
| -12 | 0 | 1 | 0 |
python,python-3.x
| 59,911,258 | 8 | false | 0 | 0 |
Simple solution:
a // b + 1
| 3 | 183 | 0 |
I found out about the // operator in Python which in Python 3 does division with floor.
Is there an operator which divides with ceil instead? (I know about the / operator which in Python 3 does floating point division.)
|
Is there a ceiling equivalent of // operator in Python?
| -1 | 0 | 0 | 118,598 |
14,823,474 |
2013-02-12T00:24:00.000
| 0 | 0 | 0 | 0 |
python,python-2.7,timestamp,quickfix,timestamping
| 14,843,059 | 2 | true | 0 | 0 |
I solved my problem by specifying the getField function to look at the header. So:
sendingTime= quickfix.SendingTime()
print sendingTime, "\n"
message.getHeader().getField(sendingTime)
print sendingTime, "\n"
The first printed line will be the sending time with the milliseconds rounded off (I have no idea why); it looks like 52=20130207-02:38:32
The second printed line actually looks into the header (where field 52 resides) and gets the whole field, it will look like:
52=20130207-02:38:32.212
This explains why when I tried message.getField(sendingTime) I was getting a Field not Found error.
| 2 | 2 | 0 |
I'm running QuickFix with the Python API and connecting to a TT FIX Adapter using FIX4.2
I'm successfully logging on and sending a market data request. The replies are fine. In my message logs (both screen logs and file logs) I am getting a SendingTime (field 52) that looks something like this:
52=20130207-02:38:32.212
However, when I try to get this field and print it to the terminal or to a file, everything is the same except the milliseconds are dropped. So the result is always:
52=20130207-02:38:32
Obviously this is bad. I can't think why the milliseconds would be present at first, and then get dropped when I'm accessing them.
Maybe this is an artifact of Python, which accesses attributes with the '.' character? But this seems stupid, because SendingTime is a string, and last I checked periods were allowed in strings.
Any help would be great, I'd really like to be able to print accurate timestamps to files.
Thanks,
Wapiti
|
quickfix sendingtime (field 52) dropping milliseconds
| 1.2 | 0 | 0 | 1,894 |
14,823,474 |
2013-02-12T00:24:00.000
| 1 | 0 | 0 | 0 |
python,python-2.7,timestamp,quickfix,timestamping
| 14,825,020 | 2 | false | 0 | 0 |
Try extracting the field with FieldMap's const std::string & getField (int field) function. That will get your field as a string, without trying to convert it to a date type. I bet that will preserve the milliseconds, at least textually.
Sorry I can't help with why Python is losing the ms. I just don't know enough about the Python wrapper.
EDIT: Nope, not the right answer. I didn't know you weren't extracting the field from the header. (You can still use this function on the header, of course.)
| 2 | 2 | 0 |
I'm running QuickFix with the Python API and connecting to a TT FIX Adapter using FIX4.2
I'm successfully logging on and sending a market data request. The replies are fine. In my message logs (both screen logs and file logs) I am getting a SendingTime (field 52) that looks something like this:
52=20130207-02:38:32.212
However, when I try to get this field and print it to the terminal or to a file, everything is the same except the milliseconds are dropped. So the result is always:
52=20130207-02:38:32
Obviously this is bad. I can't think why the milliseconds would be present at first, and then get dropped when I'm accessing them.
Maybe this is an artifact of Python, which accesses attributes with the '.' character? But this seems stupid, because SendingTime is a string, and last I checked periods were allowed in strings.
Any help would be great, I'd really like to be able to print accurate timestamps to files.
Thanks,
Wapiti
|
quickfix sendingtime (field 52) dropping milliseconds
| 0.099668 | 0 | 0 | 1,894 |
14,824,538 |
2013-02-12T02:33:00.000
| 0 | 0 | 1 | 1 |
python,bash,shell,replace
| 14,824,582 | 3 | false | 0 | 0 |
Yes, of course. You can simply make an executable Python script, call it /usr/bin/pysh, add this filename to /etc/shells and then set it as your user's default login shell with chsh.
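A minimal sketch of such a shell (the prompt string is an assumption; a real shell would also need job control, pipes, signal handling, etc. - here the parsing is simply delegated to /bin/sh):
#!/usr/bin/env python
import subprocess

while True:
    try:
        line = raw_input("pysh> ")  # Python 2; use input() on Python 3
    except EOFError:
        break  # exit on Ctrl-D
    if line.strip():
        subprocess.call(line, shell=True)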
| 1 | 2 | 0 |
I wonder if it is possible to create a bash replacement in Python. I have done REPLs before and know about subprocess and that kind of stuff, but I wonder how to use my python-like-bash replacement in the OSX terminal as if it were a native shell environment (with limitations).
Or simply run ipython as is...
P.S. The majority of the Google answers are related to creating shell scripts. I'm interested in creating a shell.
|
Is possible to create a shell like bash in python, ie: Bash replacement?
| 0 | 0 | 0 | 1,426 |
14,824,862 |
2013-02-12T03:22:00.000
| 2 | 1 | 0 | 1 |
python,linux,perl,scripting
| 14,824,929 | 3 | false | 0 | 0 |
Every programmer will have a biased answer to this, but one thing to keep in mind is what your goal is. For instance, if you're only looking to be a successful sysadmin, then your goals might best be served by learning languages that are more conducive to sysadmin tasks (e.g. bash). However, if you're looking to do more general programming, including data analysis, you might be better served focusing your study on more general-purpose languages like Python or Perl. For web development, Ruby might be worth studying, etc. It really depends on why you're interested in learning scripting.
If you don't really have a specific reason and are looking for general advice, it's probably wise to start with one language and get proficient at it and then expand to other languages. The canonical path would probably be bash --> Python, these days. Of course, this is just one person's opinion. :-)
| 2 | 0 | 0 |
I am very new to Linux, and I want to learn scripting. It seems like there are quite a few options: bash shell scripting, Python, Perl, Lisp, and probably more that I don't know about. I am just wondering what the advantages and disadvantages of each are, and what would be a good place to start?
|
different types of scripting in linux
| 0.132549 | 0 | 0 | 966 |
14,824,862 |
2013-02-12T03:22:00.000
| 1 | 1 | 0 | 1 |
python,linux,perl,scripting
| 14,825,010 | 3 | false | 0 | 0 |
I think a lot of times, people new to programming see all the options out there and don't know where to start. You listed a bunch of different languages in your post. My advice would be to pick one of those languages and find a book or tutorial and work through it.
I became interested in "scripting" from just trying to come up with a mIRC script that would fit my needs; however, after completing that, I changed OS from windows to Linux and mIRC scripting no longer would work for me. So I started playing with Perl and Python to see which would work best for xChat.
Eventually, what it all boils down to is that you'll need to experiment with a language and do some hands-on learning. I eventually completed a project, and used PHP for it. While completing that, I was also working through Michael Hartl's tutorial and worked with Ruby on Rails some. Now I'm in the process of rewriting it using Node.js (JavaScript).
Best bet, just pick one language and start playing with it.
| 2 | 0 | 0 |
I am very new to Linux, and I want to learn scripting. It seems like there are quite a few options: bash shell scripting, Python, Perl, Lisp, and probably more that I don't know about. I am just wondering what the advantages and disadvantages of each are, and what would be a good place to start?
|
different types of scripting in linux
| 0.066568 | 0 | 0 | 966 |
14,825,262 |
2013-02-12T04:15:00.000
| 0 | 1 | 0 | 1 |
python-2.7,telnetlib
| 15,581,083 | 1 | false | 0 | 0 |
You should try to log in to the machine using telnet; you will notice that you log into BusyBox. The string you print is not an error - it is the normal BusyBox prompt.
It might not be what you expected; I only know BusyBox from Linux boxes that were unable to boot properly.
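A hedged sketch of sending a command through telnetlib (the "# " prompt is taken from the BusyBox output quoted below; note that write() needs an explicit newline to actually submit the command, which is the usual pitfall here):
from telnetlib import Telnet

tn = Telnet("HOST IP")
tn.write("UID\n")
tn.write("PWD\n")
tn.read_until("# ")          # wait for the shell prompt
tn.write("cd /tmp/media\n")  # the trailing \n is what "presses Enter"
print tn.read_until("# ")    # output of the command, up to the next prompt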
| 1 | 0 | 0 |
I am trying to connect to a remote machine in Python. I used the telnetlib module and could connect to the machine after entering the login id and password as
tn = Telnet("HOST IP")
tn.write("UID")
tn.write("PWD")
After entering the password, the terminal connects to the remote machine, which is a Linux-based device [having its own IP address (HOST IP)].
Then, if I try to give a command, e.g. tn.write("cd //tmp/media/..), to go into its various folders, it does not work, and when I check what the screen is showing with
tn.read_very_eager()
the output comes up as:
""\r\n\r\n\r\nBusyBox v1.19.4 (2012-07-19 22:27:43 CEST) built-in shell (ash)\r\n
Enter 'help' for a list of built-in commands.\r\n\r\n~ # ""
I wanted to know if there is any method in Python like Perl's $telnet->cmd("cd //tmp/media/..)
Any suggestions are welcome if you can give an example!!!
|
How to write on terminal after login with telnet to remote machine using python
| 0 | 0 | 0 | 602 |
14,825,614 |
2013-02-12T04:53:00.000
| 1 | 0 | 0 | 0 |
python,xlrd
| 15,585,668 | 2 | true | 0 | 0 |
Well, I think the problem is with the file itself, so you should check the file.
I had the same issue: an xls file with True and False data created using OpenOffice or LibreOffice returned 0 and 1 while iterating over it with xlrd.
When I created the file with Microsoft Office, xlrd worked perfectly fine, so please check it.
| 1 | 3 | 0 |
When a cell contains True it returns 1, and if it contains False it returns 0.
Also, cells showing 80% are getting rendered as .8.
I am confused now and do not know where to look.
|
XLRD rendering the wrong data while iterating the cellls having True/False and Percentage value
| 1.2 | 0 | 0 | 641 |
14,826,245 |
2013-02-12T05:48:00.000
| 0 | 0 | 0 | 0 |
python,data-structures
| 14,827,656 | 4 | false | 0 | 0 |
I'd give you a code sample if I better understood what your current data structures look like, but this sounds like a job for a pandas dataframe groupby (in case you don't feel like actually using a database as others have suggested).
| 2 | 2 | 1 |
I have a list of user:friends (50,000) and a list of event attendees (25,000 events and a list of attendees for each event). I want to find the top k friends with whom the user goes to events. This needs to be done for each user.
I tried traversing the lists, but it is computationally very expensive. I am also trying to do it by creating a weighted graph (Python).
Let me know if there is any other approach.
|
Search in Large data set
| 0 | 0 | 1 | 177 |
14,826,245 |
2013-02-12T05:48:00.000
| 0 | 0 | 0 | 0 |
python,data-structures
| 14,826,472 | 4 | false | 0 | 0 |
Can you do something like this?
I'm assuming the number of friends per user is relatively small, and that the events attended by a particular user are also far fewer than the total number of events.
So keep a boolean vector of attended events for each friend of the user.
Take the dot product with the user's own vector; the friend with the maximum value is the one who most resembles the user.
Again, before you do this you will have to filter some events to keep the size of your vectors manageable.
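A minimal sketch of that idea with numpy (the toy attendance vectors are hypothetical; rows of the matrix are friends, columns are events):
import numpy as np

user = np.array([1, 0, 1, 1, 0])       # events the user attended
friends = np.array([[1, 0, 1, 0, 0],   # friend 0
                    [0, 1, 0, 0, 1],   # friend 1
                    [1, 0, 1, 1, 0]])  # friend 2
overlap = friends.dot(user)            # shared-event count per friend
top_k = np.argsort(overlap)[::-1][:2]  # indices of the top-2 friends
print(overlap, top_k)                  # -> [2 0 3] [2 0]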
| 2 | 2 | 1 |
I have a list of user:friends (50,000) and a list of event attendees (25,000 events and list of attendees for each event). I want to find top k friends with whom the user goes to the event. This needs to be done for each user.
I tried traversing lists but is computationally very expensive. I am also trying to do it by creating weighted graph.(Python)
Let me know if there is any other approach.
|
Search in Large data set
| 0 | 0 | 1 | 177 |
14,827,296 |
2013-02-12T07:10:00.000
| 1 | 1 | 0 | 0 |
python,urllib2,python-2.6
| 14,847,414 | 1 | false | 0 | 0 |
The environment variables used by crontab and from the command line were different.
I fixed this by adding */15 * * * * . $HOME/.profile; /path/to/command.
This made crontab pick up the environment variables that were specified for the system.
| 1 | 1 | 0 |
url = "www.someurl.com"
request = urllib2.Request(url,header={"User-agent" : "Mozilla/5.0"})
contentString = urllib2.url(request).read()
contentFile = StringIO.StringIO(contentString)
for i in range(0,2):
html = contentFile.readline()
print html
The above code runs fine from the command line, but if I add it to a cron job it throws the following error:
File "/usr/lib64/python2.6/urllib2.py", line 409, in _open
'_open', req)
File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib64/python2.6/urllib2.py", line 1186, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib64/python2.6/urllib2.py", line 1161, in do_open
raise URLError(err)
urllib2.URLError:
I did look at some tips on other forums and tried them, but it has been of no use.
Any help will be much appreciated.
|
Urllib2 runs fine if i run the program independently but throws error when i add it to a cronjob
| 0.197375 | 0 | 1 | 603 |
14,829,562 |
2013-02-12T09:44:00.000
| 1 | 0 | 0 | 0 |
python,linux,amazon-web-services,boto,amazon-swf
| 14,829,925 | 4 | false | 1 | 0 |
You can use SNS.
When script A is completed it should publish to an SNS topic, and that will deliver a notification to server B.
| 2 | 9 | 0 |
Use Amazon SWF to communicate messages between servers?
On server A I want to run a script A
When that is finished I want to send a message to server B to run a script B
If it completes successfully I want it to clear the job from the workflow queue
I’m having a really hard time working out how I can use Boto and SWF in combination to do this. I am not after some complete code but what I am after is if anyone can explain a little more about what is involved.
How do I actually tell server B to check for the completion of script A?
How do I make sure server A won't pick up the completion of script A and try to run script B (since server B should run this)?
How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what?
I’m pretty confused about all of this. What design should I use?
|
Using Amazon SWF To communicate between servers
| 0.049958 | 0 | 1 | 3,687 |
14,829,562 |
2013-02-12T09:44:00.000
| 5 | 0 | 0 | 0 |
python,linux,amazon-web-services,boto,amazon-swf
| 14,881,688 | 4 | false | 1 | 0 |
I don't have any example code to share, but you can definitely use SWF to coordinate the execution of scripts across two servers. The main idea with this is to create three pieces of code that talk to SWF:
A component that knows which script to execute first and what to do once that first script is done executing. This is called the "decider" in SWF terms.
Two components that each understand how to execute the specific script you want to run on each machine. These are called "activity workers" in SWF terms.
The first component, the decider, calls two SWF APIs: PollForDecisionTask and RespondDecisionTaskCompleted. The poll request will give the decider component the current history of an executing workflow, basically the "where am i" state information for your script runner. You write code that looks at these events and figure out which script should execute. These "commands" to execute a script would be in the form of a scheduling of an activity task, which is returned as part of the call to RespondDecisionTaskCompleted.
The second components you write, the activity workers, each call two SWF APIs: PollForActivityTask and RespondActivityTaskCompleted. The poll request will give the activity worker an indication that it should execute the script it knows about, what SWF calls an activity task. The information returned from the poll request to SWF can include single execution-specific data that was sent to SWF as part of the scheduling of the activity task. Each of your servers would be independently polling SWF for activity tasks to indicate the execution of the local script on that host. Once the worker is done executing the script, it calls back to SWF through the RespondActivityTaskCompleted API.
The callback from your activity worker to SWF results in a new history being handed out to the decider component that I already mentioned. It will look at the history, see that the first script is done, and schedule the second one to execute. Once it sees that the second one is done, it can "close" the workflow using another type of decision.
You kick off the whole process of executing the scripts on each host by calling the StartWorkflowExecution API. This creates the record of the overall process in SWF and kicks out the first history to the decider process to schedule the execution of the first script on the first host.
Hopefully this gives a bit more context on how to accomplish this type of workflow using SWF. If you haven't already, I would take a look at the dev guide on the SWF page for additional info.
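A hedged sketch of one of those activity workers using boto 2.x (the domain name, task list, and script path are assumptions for illustration; error handling is minimal):
import subprocess
import boto.swf.layer1

swf = boto.swf.layer1.Layer1()
while True:
    task = swf.poll_for_activity_task("my-domain", "server-a-tasks")  # long-poll SWF
    if "taskToken" not in task:
        continue  # the poll timed out with no work; poll again
    rc = subprocess.call(["/path/to/script_a.sh"])  # execute the local script
    if rc == 0:
        swf.respond_activity_task_completed(task["taskToken"], result="ok")
    else:
        swf.respond_activity_task_failed(task["taskToken"], reason="script failed")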
| 2 | 9 | 0 |
Use Amazon SWF to communicate messages between servers?
On server A I want to run a script A
When that is finished I want to send a message to server B to run a script B
If it completes successfully I want it to clear the job from the workflow queue
I’m having a really hard time working out how I can use Boto and SWF in combination to do this. I am not after some complete code but what I am after is if anyone can explain a little more about what is involved.
How do I actually tell server B to check for the completion of script A?
How do I make sure server A won't pick up the completion of script A and try to run script B (since server B should run this)?
How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what?
I’m pretty confused about all of this. What design should I use?
|
Using Amazon SWF To communicate between servers
| 0.244919 | 0 | 1 | 3,687 |
14,830,722 |
2013-02-12T10:47:00.000
| 0 | 1 | 0 | 0 |
python,ubuntu,import,paramiko
| 14,832,457 | 1 | false | 0 | 0 |
To build some Python libraries, you must have the development version of Python, like python2.6-dev, which can be installed using sudo apt-get install python2.6-dev.
Then you may install any additional development libraries that your code needs to run.
Whatever you install using sudo apt-get install python-paramiko or python setup.py install will then be available to you.
| 1 | 0 | 0 |
I installed paramiko on my Ubuntu box with "sudo apt-get install python-paramiko".
But when I import the paramiko module I get an error:
ImportError: No module named paramiko
When I list the Python modules using help('modules'), I can't find paramiko listed.
|
paramiko installation " Unable to import ImportError"
| 0 | 0 | 1 | 4,876 |
14,835,315 |
2013-02-12T14:54:00.000
| 1 | 0 | 0 | 0 |
python,amazon-web-services,boto
| 14,849,632 | 1 | true | 0 | 0 |
It should be a dictionary with those 4 keys (I'm going to push a change that updates the structure type to the dict type). If you don't want notifications, you just specify empty values for the keys:
{'Progressing': '', 'Completed': '', 'Warning': '', 'Error': ''}
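So the call from the question would look something like this (the bucket names and role ARN are taken from the question; the empty values simply disable notifications):
notifications = {'Progressing': '', 'Completed': '', 'Warning': '', 'Error': ''}
create_pipeline('test', 'test_start', 'test_end',
                'arn:aws:iam::789823056103:role/Elastic_Transcoder_Default_Role',
                notifications)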
| 1 | 1 | 0 |
I am trying to create my own pipeline in Elastic Transcoder. I am using the standard boto function create_pipeline(name, input_bucket, output_bucket, role, notifications).
Can you please tell me how the notifications structure should look?
So far I have something like this:
create_pipeline('test','test_start', 'test_end',
'arn:aws:iam::789823056103:role/Elastic_Transcoder_Default_Role', ... )
Thank you!
|
Understanding the boto documentation for Elastic Transcoder
| 1.2 | 0 | 0 | 268 |
14,837,251 |
2013-02-12T16:33:00.000
| 0 | 1 | 1 | 0 |
python,actionscript-3,base64,decode
| 14,837,350 | 1 | true | 0 | 0 |
It seems to me you are not feeding a valid string to the function - it tells you so. You can't expect a function to "guess" what you wanted, and base its response on that. You have to use a valid parameter, or the function doesn't work.
| 1 | 1 | 0 |
I have to decode a string. It comes from Flash AS3 and I want to decode it in Python. I don't have any problems with PHP, but I cannot decode the following string with Python 2.6's base64.b64decode.
f3hvQgQaBFp9IC4NQhYZQiAhNhxBAkwIJC0pDR8fBl12ZjkWXwMEWn57bU0dGgBfcWdsTwAbGB4xLmVLAh0FXXd5a0gGHQRWdy5iQANNVAl/KmNLAhUBXyV8PkFQHwNefntjGgpPU18nK21OURtSC35wPE4FHFUJdi4/TlMUVFwlez9JVxtVDH0TB0IGHAc%Pr
Python returns "TypeError: Incorrect Padding". It seems to have superfluous characters at the end of the string (from the '%'). But why does Python's base64 library not handle this?
Thank you for your answer.
|
Base64 decode does not work every time in Python
| 1.2 | 0 | 0 | 471 |
14,837,339 |
2013-02-12T16:38:00.000
| 1 | 0 | 1 | 0 |
python,ip-address,urllib
| 14,837,808 | 2 | false | 0 | 0 |
Do an online search for listings of "proxy services" on the internet. You can then loop through them as proxies in Python. There are companies that maintain open proxy networks across multiple continents to help people get past GEO-IP restrictions.
You can also configure different servers you control on the internet to act as remote proxies for your needs.
Don't use TOR for this project; it has legitimate uses and needs... and has bad bandwidth already. There are rarely any legitimate uses for doing stuff like this.
There are also shady publishers of "open proxy" lists - basically a daily updated list of wrongly configured Apache instances that can reroute requests. Those are often used by people trying to increase ad impressions/pagecounts or rig online voting contests (which are the two main things people want to do with open proxies and repeated 'new' visits).
| 1 | 0 | 0 |
I know, using Python, a webpage can be visited. But is it also possible to visit a webpage with a new IP address each time?
|
Can I visit a webpage many times with a new IP address each time using Python?
| 0.099668 | 0 | 1 | 9,857 |
14,838,158 |
2013-02-12T17:22:00.000
| 2 | 0 | 1 | 0 |
python,oop,nameerror
| 14,838,171 | 1 | false | 0 | 0 |
Globals are per module, and functions look up globals in the module they are defined in.
So a class Foo defined in a module named bar, that needs access to a function named spam, will look up that function in its own namespace, i.e. in module bar.
If functions were to look up globals in the module they were imported into, you'd have to repeatedly import all the dependencies of any function you ever wanted to use. That would not be practical.
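A minimal sketch of the failure and the fix (the module and function names follow the example above and are hypothetical):
# bar.py
def spam():
    return 42

class Foo(object):
    def z(self):
        return spam()  # resolved in bar's globals, so spam must be defined (or imported) in bar.py

# main.py
# from bar import Foo
# Foo().z()  # works; merely defining spam() in main.py instead would NOT help Foo.z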
| 1 | 0 | 0 |
I don't see much point in posting the whole of the actual code here, so I'll try my best to generalize my problem.
A function (let it be named x) is defined at the start of the code.
Then a class (which has a method z, and z eventually calls x) is imported from a separate .py file. An object of that class is created. After z is called, I get a "global name 'x' is not defined" error.
Then I thought I'd paste all the code from my .py files into a single file, put the def of x on top of it, and see what happens. Of course, it worked as intended. I don't get what the problem is with the previous way (which I'd prefer to stick with); isn't it virtually the same?
|
Importing classes and NameError
| 0.379949 | 0 | 0 | 121 |
14,842,592 |
2013-02-12T21:57:00.000
| 0 | 0 | 1 | 0 |
python
| 14,842,625 | 1 | false | 0 | 0 |
Floor division, i.e. round down to the nearest int.
| 1 | 0 | 0 |
What does this // mean in Python?
I know it is some sort of division in Python,
like
6.0//5 is 1.0
6.0//4 is 1.0
6.0//2 is 3.0
6.8//5.3 is 1.0
From this I think it just returns the integer solution for non-integer division,
and instead of rounding it just cuts the end off. Is that true? And if it is not,
then what does it do? Rounding?
I found the answer, sorry, I will delete this post :)
|
What Does this // mean in PYTHON
| 0 | 0 | 0 | 264 |
14,845,942 |
2013-02-13T03:30:00.000
| 0 | 0 | 0 | 0 |
python,django,api,shopify
| 56,934,353 | 3 | false | 1 | 0 |
The documentation is again not promising but one thing to bear in mind is that there should in actual fact be an existing collection already created
Find it by using this code
collection_id = shopify.CustomCollection.find(handle=<your_handle>)[0].id
then consequently add the collection_id, product_id to a Collect object and save, remember to first save your product (or have an existing one which you can find) and then only save your collection, or else the collection won't know what product its posting to (via the api), like so
new_product = shopify.Product()
new_product.save()
add_collection = shopify.Collect('product_id': new_product.id, 'collection_id': collection_id})
add_collection.save()
Also important to note that there is a 1 to 1 relationship between Product and Collect
| 1 | 1 | 0 |
I am using the Shopify Python API in a Django app to interact with my Shopify store.
I have a collection called "best sellers".
I am looking to create a batch update to this collection - that is, add/remove products to/from this collection. However, the Python API docs do not seem to say much about how to do so. How do I fetch a collection by name? How do I add a product to it?
Thank you for your help.
This is what I found
x=shopify.CustomCollection.find(handle="best-sellers")
y=shopify.Collect() #creates a new collect
p = shopify.Product.find(118751076) # gets me the product
So the question is: how do I add the product "p" above to the custom collection "x"?
|
Shopify Python API: How do I add a product to a collection?
| 0 | 0 | 0 | 4,023 |
14,846,333 |
2013-02-13T04:20:00.000
| 0 | 1 | 1 | 1 |
python,windows-7
| 14,846,374 | 2 | false | 0 | 0 |
Instead of
cd.. Python27
you need to type
cd \python27
| 2 | 0 | 0 |
When I open my Command Prompt,
the default path is C:\Users\acer>
and I want to change the path to C:\Python27.
The method is as follows:
I enter cd.. 2 times,
then I enter cd.. Python27,
as my Python27 folder is located in C:\.
However, I got the message "the system cannot find the path specified".
Can anyone help me?
|
The system cannot find the path specified in cmd
| 0 | 0 | 0 | 7,184 |
14,846,333 |
2013-02-13T04:20:00.000
| 1 | 1 | 1 | 1 |
python,windows-7
| 14,846,408 | 2 | false | 0 | 0 |
No need for cd .. mumbo jumbo, just go cd C:/Python27.
| 2 | 0 | 0 |
When I open my Command Prompt,
the default path is C:\Users\acer>
and I want to change the path to C:\Python27.
The method is as follows:
I enter cd.. 2 times,
then I enter cd.. Python27,
as my Python27 folder is located in C:\.
However, I got the message "the system cannot find the path specified".
Can anyone help me?
|
The system cannot find the path specified in cmd
| 0.099668 | 0 | 0 | 7,184 |
14,850,696 |
2013-02-13T09:53:00.000
| 1 | 0 | 1 | 0 |
python,regex
| 14,850,806 | 4 | false | 0 | 0 |
With regex try replacing [^a-zA-Z ] with an empty string.
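A minimal sketch of that replacement (the input string is taken from the question):
import re

s = "[110308] Asia and India"
print(re.sub(r"[^a-zA-Z ]", "", s).strip())  # -> Asia and India (strip() drops the leftover leading space)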
| 1 | 0 | 0 |
I have the string [110308] Asia and India and I want only Asia and India as my result, via a regular expression.
Can anyone please help me?
|
Split string by regular expression
| 0.049958 | 0 | 0 | 101 |
14,850,853 |
2013-02-13T10:01:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 14,850,874 | 6 | false | 1 | 0 |
Just put BeautifulSoup in the root of your project and upload it all.
| 2 | 33 | 0 |
How do I add third-party Python libraries in Google App Engine that are not provided by Google? I am trying to use BeautifulSoup in Google App Engine and am unable to do so. But my question applies to any library I want to use in Google App Engine.
|
How to include third party Python libraries in Google App Engine?
| 0 | 0 | 0 | 23,439 |
14,850,853 |
2013-02-13T10:01:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 35,193,844 | 6 | false | 1 | 0 |
pip install -t lib package_name
(lib is the location for third-party libraries.)
Then you are good to use this package like a normal library you would use from IPython or the terminal.
| 2 | 33 | 0 |
How do I add third-party Python libraries in Google App Engine that are not provided by Google? I am trying to use BeautifulSoup in Google App Engine and am unable to do so. But my question applies to any library I want to use in Google App Engine.
|
How to include third party Python libraries in Google App Engine?
| 0 | 0 | 0 | 23,439 |
14,852,823 |
2013-02-13T11:46:00.000
| 1 | 1 | 1 | 0 |
c#,python
| 14,852,879 | 1 | false | 0 | 0 |
No, there isn't. Roll your own.
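One way to roll your own: generate the list from Python itself and embed it in your C# code. Python's standard library exposes the keyword list directly (shown here as the Python side; a hedged approach would be to run this once, or shell out to it from C#):
import keyword
print(keyword.kwlist)  # the authoritative keyword list for the running Python version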
| 1 | 0 | 0 |
I'm generating a Python script with C#, and I have to know whether a word is a keyword. The question: is there any library for C# from which I can get the Python keywords?
|
How can I get python keywords in c#
| 0.197375 | 0 | 0 | 114 |
14,857,531 |
2013-02-13T15:55:00.000
| 0 | 0 | 1 | 0 |
vpython
| 17,823,578 | 1 | false | 0 | 0 |
I think the easiest way might be to display it on your screen, then take a screenshot. On a mac, that's command + shift + 4, not sure about Windows.
| 1 | 0 | 0 |
I'm using VPython to create some static (non-animation) 3D geometry, like an array of cylinders, or a torus. I would like to save the output in .jpeg or .png form so that I can put it into my PPT for a demonstration. Is it possible to do this? Or should I turn to other tools like Mayavi? Thanks.
|
Export VPython's image as JPEG form
| 0 | 0 | 0 | 365 |
14,859,956 |
2013-02-13T18:00:00.000
| 0 | 1 | 1 | 0 |
python,perforce,maya
| 14,867,059 | 1 | false | 0 | 0 |
To ask whether the contents of the file on your workstation match the contents of the current head revision of the file on the server, you can do something like 'p4 diff -f //depot/path/to/file#head'.
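A hedged sketch of driving that from a Python script with the standard subprocess module (the depot path is a placeholder; this assumes the p4 command-line client is on PATH and logged in):
import subprocess

# compare the workspace file against the head revision; inspect the output for differences
out = subprocess.check_output(["p4", "diff", "-f", "//depot/path/to/file#head"])
print(out)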
| 1 | 0 | 0 |
I'm trying to write a Python script which will run when Maya loads. The script should check a number stored in the file somewhere, possibly on a named object, and compare it to the latest revision of the file in Perforce.
If the number stored in Maya is not the latest revision, it should show a warning. Is this possible?
|
get the revision number of a specific file in perforce from a python script
| 0 | 0 | 0 | 813 |
14,861,194 |
2013-02-13T19:13:00.000
| 0 | 0 | 1 | 0 |
python,licensing,python-imaging-library
| 14,862,709 | 2 | false | 0 | 0 |
Adding to ssidorenko's answer: if you really want to make sure the copyright notice is contained in your packaged executable, you could add it as a docstring to the source code, and then it should be contained in the generated *.pyc as well. (I see py2exe can be configured at setup time to generate optimized *.pyo files - in that case, do not use -OO when running python setup.py py2exe, as it removes docstrings from the generated *.pyo files.) You can make clear in the docstring that this is the notice for PIL, e.g. by saying "This software contains PIL with the following copyright notice: ...".
| 2 | 2 | 0 |
Good morning. I have created a little piece of software for photo retouching using PIL in Python. With py2exe I have created an exe version from my .py file. In my dist and build folders I can find the PIL module .pyc files and every file that permits my program to work on every computer without Python. Now I would like to distribute this program as freeware (only the .exe, not the source code) via my web site. I read this in the PIL software license:
Permission to use, copy, modify, and distribute this software and its associated documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.
If I have only .pyc files in my dist and build folders created by py2exe, how can I maintain the copyright notice?
One day my web site may have many visits, and perhaps I will put Google AdSense on it for a little profit. Is there any PIL license violation in that? About licensing and distribution I'm very confused... could someone help me?
|
PIL python,License and Distribution
| 0 | 0 | 0 | 1,644 |
14,861,194 |
2013-02-13T19:13:00.000
| 1 | 0 | 1 | 0 |
python,licensing,python-imaging-library
| 14,861,642 | 2 | true | 0 | 0 |
As long as you put the notice in the documentation of your program, you're free to distribute your program.
Regarding the advertisement in the license, it concerns only the advertising of your own program. It means you're not allowed to use the name of Secret Labs AB on the download page of your program, or on an ad if you're buying ad space to distribute your program for example.
| 2 | 2 | 0 |
Good morning. I have created a little piece of software for photo retouching using PIL in Python. With py2exe I have created an exe version from my .py file. In my dist and build folders I can find the PIL module .pyc files and every file that permits my program to work on every computer without Python. Now I would like to distribute this program as freeware (only the .exe, not the source code) via my web site. I read this in the PIL software license:
Permission to use, copy, modify, and distribute this software and its associated documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.
If I have only .pyc files in my dist and build folders created by py2exe, how can I maintain the copyright notice?
One day my web site may have many visits, and perhaps I will put Google AdSense on it for a little profit. Is there any PIL license violation in that? About licensing and distribution I'm very confused... could someone help me?
|
PIL python,License and Distribution
| 1.2 | 0 | 0 | 1,644 |
14,861,765 |
2013-02-13T19:47:00.000
| 0 | 0 | 1 | 0 |
python,windows,python-2.7,windows-7-x64
| 14,862,329 | 1 | false | 0 | 0 |
You can use virtualenv to maintain environments that use distinct Python versions.
More directly to your question of keeping the same install-base, the answer is generally "no", you cannot keep the same installs. Some packages are obtained for a specific Python-version/OS/hardware combination (e.g. mypackagePy27Win64.exe) and such packages will have placed 64-bit-specific DLLs on your filesystem. Using virtualenv will let you isolate your new x32 work environment from your existing x64 work environment.
| 1 | 1 | 0 |
I have installed 64-bit Python on Windows 7 and used it for a while. Now I need to switch to 32-bit Python because some libraries require the 32-bit one. Can I do this easily, while preserving all previously installed libraries and settings?
|
How to safely switch from 64bit to 32bit Python on Windows?
| 0 | 0 | 0 | 2,207 |
14,862,706 |
2013-02-13T20:41:00.000
| 0 | 0 | 0 | 0 |
python,windows,service,proxy,urllib
| 14,879,550 | 1 | false | 0 | 0 |
So the answer is that on Windows, proxy system settings are stored in the registry under
HKEY_CURRENT_USER
As the service runs under a special user, it can't find them in its own HKEY_CURRENT_USER.
The solutions:
1. Run the service under another user.
2. Read the proper user's registry hive.
| 1 | 1 | 0 |
So for my application, I'm using
urllib.getproxies() to detect proxy settings.
The function runs well when I call it from a Python shell.
But when my application runs as a service (and only when it runs as a service),
urllib.getproxies() returns an empty dictionary.
I'm using Windows 2008 R2 and Python 2.7.
Do you guys have any idea where it could come from?
Thanks
|
Python urllib.getproxies() on windows doesn't work when run as a service
| 0 | 0 | 1 | 813 |
14,863,125 |
2013-02-13T21:06:00.000
| 22 | 0 | 0 | 0 |
python,scikit-learn,classification
| 14,864,547 | 2 | false | 0 | 0 |
Have you tried passing class_weight="auto" to your classifier? Not all classifiers in sklearn support this, but some do. Check the docstrings.
Also you can rebalance your dataset by randomly dropping negative examples and / or over-sampling positive examples (+ potentially adding some slight gaussian feature noise).
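A minimal sketch of the class_weight suggestion (the toy data is hypothetical; class_weight="auto" was the spelling in sklearn at the time - newer versions spell it "balanced", which is what is used here):
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.randn(1000, 5)
y = (np.random.rand(1000) < 0.05).astype(int)  # ~5% positives, as in the question

clf = LogisticRegression(class_weight="balanced")  # reweights classes inversely to their frequency
clf.fit(X, y)
print(clf.predict_proba(X[:3]))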
| 1 | 22 | 1 |
I'm solving a classification problem with sklearn's logistic regression in python.
My problem is a general/generic one. I have a dataset with two classes/result (positive/negative or 1/0), but the set is highly unbalanced. There are ~5% positives and ~95% negatives.
I know there are a number of ways to deal with an unbalanced problem like this, but have not found a good explanation of how to implement properly using the sklearn package.
What I've done thus far is to build a balanced training set by selecting entries with a positive outcome and an equal number of randomly selected negative entries. I can then train the model to this set, but I'm stuck with how to modify the model to then work on the original unbalanced population/set.
What are the specific steps to do this? I've poured over the sklearn documentation and examples and haven't found a good explanation.
|
sklearn logistic regression with unbalanced classes
| 1 | 0 | 0 | 18,285 |
14,864,378 |
2013-02-13T22:23:00.000
| 0 | 1 | 0 | 1 |
python,linux,process
| 14,864,397 | 1 | false | 0 | 0 |
ps aux | grep json ought to do it, or just pgrep -lf json.
| 1 | 0 | 0 |
I have a cron job that executes 2 Python scripts. How can I see with the "ps" command if the processes are running?
my scripts names are:
json1.py
json2.py
|
Unix process running python
| 0 | 0 | 0 | 127 |
14,867,945 |
2013-02-14T04:44:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,urlfetch,app.yaml
| 14,868,496 | 1 | false | 1 | 0 |
The myScript.py form was for the 2.5 runtime; the model for invoking apps on the 2.7 runtime normally uses the myScript.app method. Have a look at the age of the tutorials and also at what Python runtime they have configured in their app.yaml.
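A hedged illustration of the two handler styles in app.yaml (myScript is a placeholder; on the 2.7 runtime, app is the WSGI application object defined inside myScript.py):
# Python 2.5 runtime - CGI style, points at the file:
handlers:
- url: /.*
  script: myScript.py

# Python 2.7 runtime - WSGI style, points at the app object:
handlers:
- url: /.*
  script: myScript.app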
| 1 | 0 | 0 |
I am writing an App Engine application to fetch URL content using the urlfetch service available in Google App Engine.
However, in the app.yaml file I have a doubt about the script handler.
I have found that some people use the script name myScript.py while some tutorials use myScript.app.
What's the difference between the two uses?
|
Python script handler for Google AppEngine
| 0.379949 | 0 | 0 | 100 |
14,868,003 |
2013-02-14T04:49:00.000
| 1 | 0 | 0 | 0 |
python,python-2.7,lazy-loading,beautifulsoup
| 16,251,642 | 1 | true | 1 | 0 |
It turns out that the problem wasn't BeautifulSoup itself, but the dynamics of the page. For this specific scenario, that is.
The server returns only part of the page, so the headers need to be analysed and sent to the server accordingly. This isn't a BeautifulSoup problem as such.
Therefore, it is important to look at how the data is loaded on a specific site. It's not always a "load the whole page, process the whole page" paradigm. In some cases, you need to load part of the page and send a specific parameter to the server in order to keep loading the rest of it.
| 1 | 3 | 0 |
I am toying around with BeautifulSoup and I like it so far.
The problem is that the site I am trying to scrape has a lazy loader... and so it only scrapes one part of the site.
Can I have a hint as to how to proceed? Must I look at how the lazy loader is implemented and parametrize anything else?
|
Crawling a page using LazyLoader with Python BeautifulSoup
| 1.2 | 0 | 1 | 1,587 |
14,869,145 |
2013-02-14T06:40:00.000
| 1 | 0 | 0 | 0 |
python,matplotlib
| 14,877,368 | 1 | true | 0 | 0 |
I suspect that exactly what you will have to do will depend on your GUI toolkit. The code you want to look at is in matplotlib/lib/matplotlib/backends, and you want to find the class that subclasses NavigationToolbar2 in whichever backend you are using.
| 1 | 1 | 1 |
I want to change the default icon images of a matplotlib toolbar.
Even when I replaced the image, with the same name and size, at the image location,
i.e. C:\Python27\Lib\site-packages\matplotlib\mpl-data\images\home.png,
it's still plotting the graphs with the same default images.
If I need to change the image location in the code of any file, kindly direct me to the code segment.
|
customize the default toolbar icon images of a matplotlib graph
| 1.2 | 0 | 0 | 888 |
14,869,718 |
2013-02-14T07:23:00.000
| 1 | 0 | 0 | 0 |
python,django,postgresql
| 14,880,796 | 2 | false | 1 | 0 |
Most likely, somewhere along the line you created your objects in the template1 database (or, in older versions, the postgres database), and every time you create a new db it has all those objects in it. You can either drop the template1/postgres database and recreate it, or connect to it and drop all those objects by hand.
| 2 | 0 | 0 |
I dropped the database that I had previously created for Django using:
dropdb <database>
but when I go to the psql prompt and say \d, I still see the relations there.
How do I remove everything from Postgres so that I can do everything from scratch?
|
postgres : relation there even after dropping the database
| 0.099668 | 1 | 0 | 88 |
14,869,718 |
2013-02-14T07:23:00.000
| 0 | 0 | 0 | 0 |
python,django,postgresql
| 14,870,374 | 2 | false | 1 | 0 |
Chances are that you never created the tables in the correct schema in the first place. Either that, or your dropdb failed to complete.
Try to drop the database again and see what it says. If that appears to work, then go into postgres and type \l, putting the output here.
| 2 | 0 | 0 |
I dropped the database that I had previously created for Django using:
dropdb <database>
but when I go to the psql prompt and say \d, I still see the relations there.
How do I remove everything from Postgres so that I can do everything from scratch?
|
postgres : relation there even after dropping the database
| 0 | 1 | 0 | 88 |
14,869,861 |
2013-02-14T07:33:00.000
| 3 | 0 | 1 | 1 |
python,linux
| 14,869,972 | 1 | true | 0 | 0 |
Do not try to uninstall the pre-installed Python.
Install other Python interpreters side by side (in different directories).
You may come across an option to choose the default Python interpreter for your system. Don't change that from the pre-installed one, as that may break some important scripts used by the system. Customize the default Python interpreter for your user only, not for the entire system. (I don't have a Fedora at hand so don't know how that works exactly.)
Also have a look at virtualenv for having multiple isolated Python environments with their independent collection of Python modules, and pythonbrew for installing multiple Python interpreters.
| 1 | 0 | 0 |
I have a Fedora virtual machine. It comes with Python pre-installed. I've read that it's not a good idea to uninstall it. I want to install a different version of Python, Enthought Python. Should I try to uninstall the existing Python installation and how would I do that? Should I instead install Enthought Python to a new directory? Will that be a problem with the existing Python installation?
|
Installing a new distribution of Python on Fedora
| 1.2 | 0 | 0 | 91 |
14,871,454 |
2013-02-14T09:22:00.000
| 2 | 0 | 1 | 0 |
django,python-imaging-library
| 24,042,390 | 3 | false | 1 | 0 |
I was able to solve this cleanly on Windows with pip install --use-wheel Pillow. I'm not sure what changed because the PILLOW installs used to work on my windows setup. I must have some mixed versions or the default behaviors have changed.
| 1 | 4 | 0 |
I installed django-photologue. But then when I try to save a photo in django admin it throws this error:
'decoder zip not available'
I have already un-installed and re-installed PIL. I hope someone can help me with the complete steps on how to overcome this error.
|
decoder zip not available (Windows 7)
| 0.132549 | 0 | 0 | 1,765 |
14,875,450 |
2013-02-14T13:01:00.000
| 4 | 0 | 0 | 0 |
python,language-agnostic,machine-learning,object-recognition,pybrain
| 14,877,671 | 2 | false | 0 | 0 |
First, a note regarding the classification method to use.
If you intend to use the image pixels themselves as features, neural network might be a fitting classification method. In that case, I think it might be a better idea to train the same network to distinguish between the various objects, rather than using a separate network for each, because it would allow the network to focus on the most discriminative features.
However, if you intend to extract synthetic features from the image and base the classification on them, I would suggest considering other classification methods, e.g. SVM.
The reason is that neural networks generally have many parameters to set (e.g. network size and architecture), making the process of building a classifier longer and more complicated.
Specifically regarding your NN-related questions, I would suggest using a feedforward network, which is relatively easy to build and train, with a softmax output layer, which allows assigning probabilities to the various classes.
In case you're using a single network for classification, the question regarding negative examples is irrelevant; for each class, other classes would be its negative examples. If you decide to use different networks, you can use the same counter-examples (i.e. other classes), but as a rule of thumb, I'd suggest showing no more than 2-10 negative examples per positive example.
EDIT:
based on the comments below, it seems the problem is to decide how fitting a given image (drawing) is to a given concept, e.g. how similar the user-supplied tree drawing is to a tree.
In this case, I'd suggest a radically different approach: extract visual features from each drawing, and perform kNN classification based on all past user-supplied drawings and their classifications (possibly plus a predefined set generated by you). You can score the similarity either by the nominal distance to same-class examples, or by the class distribution of the closest matches.
I know that this is not necessarily what you're asking, but this seems to me an easier and more direct approach, especially given the fact that the number of examples and classes is expected to constantly grow.
| 1 | 0 | 1 |
I was thinking of doing a little project that involves recognizing simple two-dimensional objects using some kind of machine learning. I think it's better that I have each network devoted to recognizing only one type of object. So here are my two questions:
What kind of network should I use? The two I can think of that could work are simple feed-forward networks and Hopfield networks. Since I also want to know how much the input looks like the target, Hopfield nets are probably not suitable.
If I use something that requires supervised learning and I only want one output unit that indicates how much the input looks like the target, what counter-examples should I show it during the training process? I'm pretty sure just giving it positive examples won't work (the network will just learn to always say 'yes').
The images are going to be low resolution and black and white.
|
How to Train Single-Object Recognition?
| 0.379949 | 0 | 0 | 2,529 |
14,878,215 |
2013-02-14T15:26:00.000
| 0 | 0 | 0 | 1 |
python,shell,command-line-interface,tab-completion
| 14,886,568 | 4 | false | 0 | 0 |
Take a look at the source of the 'cmd' module in the Python library. It supports command completion.
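For illustration, a minimal sketch of cmd-based completion (the command and patch names are hypothetical, and note this completes inside the program's own prompt rather than in bash):

    import cmd

    class MyShell(cmd.Cmd):
        prompt = '(myshell) '
        patches = ['b81', 'b82', 'fix-docs']   # hypothetical patch names

        def do_qpush(self, arg):
            print('pushing %s' % arg)

        def complete_qpush(self, text, line, begidx, endidx):
            # cmd calls complete_<command> to tab-complete that command's arguments
            return [p for p in self.patches if p.startswith(text)]

        def do_quit(self, arg):
            return True   # returning True exits cmdloop

    if __name__ == '__main__':
        MyShell().cmdloop()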
| 1 | 9 | 0 |
I've noticed that some programs (e.g. hg) allow the user to tab-complete specific parts of the command. For example, if, in an hg repository working directory, I type:
hg qpush --move b8<TAB>
It will try to complete the command with any mercurial patches in my patch queue that start with "b8".
What I'd like to do is imitate this behavior in my program. That is, I have a series of commands that depend on files within a certain directory, and I'd like to be able to provide tab completion in the shell. Is there an API for providing this on Ubuntu Linux (preferably using python, as that's what my script is written in)?
|
How can I make my program utilize tab completion?
| 0 | 0 | 0 | 3,643 |
14,881,732 |
2013-02-14T18:36:00.000
| 0 | 0 | 0 | 0 |
python,gtk,pygtk,gdk
| 14,906,082 | 1 | false | 0 | 1 |
It turned out to be an issue of mixing GTK2 and GTK3. gi.repository.Gtk has the Color constructor.
| 1 | 0 | 0 |
I'm creating a simple application through glade, and I want to be able to set the color displayed in the color selection dialog. I found the set_current_color function, however, it requires a gdk.Color object.
Trying to import gtk.gdk.Color fails (actually, just importing gtk fails). Is there another method I can use to create a color object?
|
pygtk issue - unable to create gdk.Color object
| 0 | 0 | 0 | 125 |
14,883,346 |
2013-02-14T20:20:00.000
| 4 | 0 | 0 | 0 |
python,mysql
| 14,883,719 | 2 | false | 1 | 0 |
MySQL connections are relatively fast, so this might not be a problem (i.e. you should measure). Most other databases take much more resources to create a connection.
Creating a new connection when you need one is always the safest, and is a good first choice. Some db libraries, e.g. SqlAlchemy, have connection pools built in that transparently will re-use connections for you correctly.
If you decide you want to keep a connection alive so that you can re-use it, there are a few points to be aware of:
Connections that are only used for reading are easier to re-use than connections that you've used to modify database data.
When you start a transaction on a connection, be careful that nothing else can use that connection for something else while you're using it.
Connections that sit around for a long time get stale and can be closed from underneath you, so if you're re-using a connection you'll need to check if it is still "alive", e.g. by sending "select 1" and verifying that you get a result.
I would personally recommend against implementing your own connection pooling algorithm. It's really hard to debug when things go wrong. Instead choose a db library that does it for you.
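As a rough sketch of the pooled approach (the connection URL is a placeholder), SQLAlchemy handles re-use and staleness for you:

    from sqlalchemy import create_engine, text

    # pool_recycle guards against stale connections the server has closed
    engine = create_engine('mysql://user:pw@localhost/mydb',
                           pool_size=5, pool_recycle=3600)

    with engine.connect() as conn:      # borrows a pooled connection
        print(conn.execute(text('SELECT 1')).scalar())
    # leaving the block returns the connection to the pool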
| 2 | 17 | 0 |
We have a Python application with over twenty modules, most of which are shared by several web and console applications.
I've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:
I have a module defining an object class for Users. It has many defs for creating/deleting/updating users in the database. The users.py module is imported into 1) a console-based utility, 2) a web.py-based web application and 3) a constantly running daemon process.
Each of these three applications has a different life cycle. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the HTTP requests are atomic; however, the web server is a daemon.
I am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.
I know every application architecture is different. I'm just wondering if there's a best practice, and what it would be?
|
How should I establish and manage database connections in a multi-module Python app?
| 0.379949 | 1 | 0 | 6,022 |
14,883,346 |
2013-02-14T20:20:00.000
| 16 | 0 | 0 | 0 |
python,mysql
| 14,883,590 | 2 | true | 1 | 0 |
The best method is to open a connection when you need to do some operations (like getting and/or updating data); manipulate the data; write it back to the database in one query (very important for performance), and then close the connection. Opening a connection is a fairly light process.
Some pitfalls for performance include
opening the database when you won't definitely interact with it
using selectors that take more data than you need (e.g., getting data about all users and filtering it in Python, instead of asking MySQL to filter out the useless data)
writing values that haven't changed (e.g. updating all values of a user profile, when just their email has changed)
having each field update the server individually (e.g., open the db, update the user email, close the db, open the db, update the user password, close the db, open th... you get the idea)
The bottom line is that it doesn't matter how many times you open the database, it's how many queries you run. If you can get your code to join related queries, you've won the battle.
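A small illustration of the joining idea (table and column names are made up) - one round trip updating only the changed fields, instead of one query per field:

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
    cur = conn.cursor()
    cur.execute("UPDATE users SET email = %s, name = %s WHERE id = %s",
                ('new@example.com', 'New Name', 42))
    conn.commit()
    conn.close()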
| 2 | 17 | 0 |
We have a Python application with over twenty modules, most of which are shared by several web and console applications.
I've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:
I have a module defining an object class for Users. It has many defs for creating/deleting/updating users in the database. The users.py module is imported into 1) a console-based utility, 2) a web.py-based web application and 3) a constantly running daemon process.
Each of these three applications has a different life cycle. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the HTTP requests are atomic; however, the web server is a daemon.
I am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.
I know every application architecture is different. I'm just wondering if there's a best practice, and what it would be?
|
How should I establish and manage database connections in a multi-module Python app?
| 1.2 | 1 | 0 | 6,022 |
14,883,568 |
2013-02-14T20:37:00.000
| 0 | 0 | 0 | 1 |
python,ios,objective-c
| 14,883,949 | 2 | true | 0 | 0 |
One solution I am currently considering:
Add NewAppDelegate.m/h file that subclasses AppDelegate.
This subclass, does what I want, and then calls the super methods.
Find/replace AppDelegate with NewAppDelegate in main.m
This seems pretty simple and robust. Thoughts on this? Will this work for all/most projects?
| 1 | 1 | 0 |
I am working on a framework installer script. The script needs to modify the user's AppDelegate file and inject a few lines of code at the beginning or end of the applicationDidFinishLaunching and applicationWillTerminate methods.
Some options I've thought about:
Parse the source code, and insert lines at correct positions. (Can be difficult to get right and work for everyone's code, just about equivalent to writing a compiler...)
Subclass the AppDelegate file (is this possible?)
Categories??
Which of these is the best option? Any other suggestions?
|
Programmatically modifying someones AppDelegate - categories, subclass?
| 1.2 | 0 | 0 | 530 |
14,884,214 |
2013-02-14T21:18:00.000
| 1 | 0 | 0 | 0 |
python,numpy
| 14,884,277 | 1 | true | 0 | 0 |
Just use loadtxt and reshape (or ravel) the resulting array.
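A minimal sketch (the filename is a placeholder):

    import numpy as np

    data = np.loadtxt('numbers.txt')   # comes back as a 32x32 array
    vec = data.reshape(-1, 1)          # column vector of shape (1024, 1)
    flat = data.ravel()                # or a flat 1-D array of length 1024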
| 1 | 0 | 1 |
I have a text file with a bunch of numbers that contains a newline every 32 entries. I want to read this file as a column vector using NumPy. How can I use numpy.loadtxt and ignore the newlines such that the generated array is of size 1024x1 and not 32x32?
|
Read Vector from Text File
| 1.2 | 0 | 0 | 174 |
14,886,053 |
2013-02-14T23:39:00.000
| 0 | 0 | 1 | 0 |
python,algorithm,filter
| 14,886,178 | 1 | false | 0 | 0 |
You could try keeping a count of hits against filters and for each file evaluated, mark it against the filter with the lowest hit count. This strategy would tend to spread the hits around the filters.
You could also do multiple passes, so that in the first pass, you figure out how many filters each file matches, then sort them based on filter hit count. You can then discard matches against more common filters and keep uncommon filters for those files with a high filter match count.
You should also research graph theory algorithms; you might be able to convert this problem to an analogous graph theory problem.
Having said this, you might want to examine why you are using this strategy for auto-categorisation in the first place, since 100 matches seems a bit arbitrary. Finally, I suspect that you will not find a deterministic algorithm for this task. I have a feeling it is NP-complete, or at least NP-hard.
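A rough sketch of the lowest-hit-count idea (the mapping is a placeholder built from the log, and the filter-order constraint is ignored for brevity):

    from collections import defaultdict

    files_to_filters = {'test.docx': ['Financial Report', 'Normal document'],
                        'pass.txt': ['Password file']}
    CAP = 100
    hits = defaultdict(int)            # filter name -> files assigned so far
    assignment = {}

    for filename, filters in files_to_filters.items():
        candidates = [f for f in filters if hits[f] < CAP]
        if candidates:
            chosen = min(candidates, key=lambda f: hits[f])   # least-loaded filter
            hits[chosen] += 1
            assignment[filename] = chosen
        # otherwise the file stays uncategorised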
| 1 | 0 | 0 |
We are working on a small automatic categorization system in our office.
We have many filters. They are written as python functions, and they either match a file - or not.
For each file, We run all the filters. It scans the file from top to bottom, and if it matches a filter - the file will be categorized and the logfile will have the path of the file and the name of the category.
Each file must fall into only one category.
For each file, we ran all the filters and generated a big Excel file which contains, for each file, all the filters that apply to it.
Name of the file | Name of filter. So, for example, the file looks like:
test.docx | Financial Report
test.docx | Normal document
pass.txt | Password file
and so on and so on.
As you can see, one file can match more than one filter.
We need to work based on this file (since we don't have access to the filters themselves), and generate a list of filters so that each filter will not match more than 100 files - even if it means some of the files won't be categorized. And of course, we prefer that each filter match only a small number of files.
The order of the lines log file is important. In the example log file, if both the "Financial Report" and the "Normal document" filter are on, it will always be categorized as the first match - "Financial Report".
Any ideas?
|
Choosing which text filters to activate
| 0 | 0 | 0 | 48 |
14,886,400 |
2013-02-15T00:13:00.000
| 2 | 0 | 0 | 0 |
python,flask
| 14,915,333 | 1 | true | 1 | 0 |
Basically you can't tell that the user has left your site on the server-side. The common way to do what you want to achieve is to use a time limit after the last known request as a cutoff between the online/offline states.
To make this more accurate you can have a script on the client-side that does regular AJAX polling, if you must consider that a user is online long after their last request while your site is still open in a tab. If you must check that the user has the tab active, make that request conditional on recent mouse or keyboard events.
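A rough sketch of the time-limit approach (names and the in-memory store are assumptions; a real app would keep last-seen timestamps in a database or cache):

    import time
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = 'dev'          # placeholder
    last_seen = {}                  # user id -> timestamp of last request
    ONLINE_WINDOW = 300             # seconds

    @app.before_request
    def mark_active():
        user = session.get('user_id')
        if user is not None:
            last_seen[user] = time.time()

    def is_online(user):
        ts = last_seen.get(user)
        return ts is not None and time.time() - ts < ONLINE_WINDOW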
| 1 | 1 | 0 |
I am trying to check if a user disconnects from my site, how would I go about doing this?
I am doing this in order to check if a user is "online" or not.
|
Checking if a user disconnects using Flask
| 1.2 | 0 | 1 | 1,304 |
14,886,640 |
2013-02-15T00:39:00.000
| 2 | 0 | 0 | 0 |
python,python-2.7,ubuntu-12.04
| 14,886,804 | 3 | false | 0 | 0 |
As you apparently have passed '.cache' to the httplib.Http constructor, you should change this to something more appropriate or disable the cache.
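For example (the alternative cache directory is just a suggestion):

    import httplib2

    h = httplib2.Http()                           # no cache at all
    # or keep the cache, but outside the encrypted home directory:
    h = httplib2.Http('/var/tmp/httplib2-cache')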
| 1 | 2 | 0 |
Working on a Python scraper/spider, I encountered a URL that exceeds the char limit with the titled IOError. Using httplib2, when I attempt to retrieve the URL I receive a "file name too long" error. I prefer to have all of my projects within the home directory since I am using Dropbox. Any way around this issue, or should I just set up my working directory outside of home?
|
Ubuntu encrypted home directory | Errno 36 File Name too long
| 0.132549 | 0 | 1 | 5,706 |
14,887,374 |
2013-02-15T02:09:00.000
| 0 | 0 | 0 | 0 |
python,c,swig,ctypes
| 14,943,730 | 1 | false | 0 | 1 |
I ended up using Swig with dynamic linking to the .so library generated by the C code. In this way, I only have to include the header files in the Swig interface file to tell Swig what functions/datatypes to expose. Another advantage of this approach is that I can write testing helper functions in C and easily expose those as well.
| 1 | 0 | 0 |
I am trying to find a way to test my C code using python scripts. So far my findings are
1) with Ctypes, I can easily load the so and call the function directly from python. Plus, everything happens at run-time, so no extra compiling/wrapping stuff.
2) However, re-writing every types in python is tedious and error prone, especially for complex data types. And whenever the definitions change, I will have to update the definition in python scripts.
I am wondering since Swig can export datatypes automatically, is it possible to mix Swig and Ctypes together? i.e. use Swig to export datatypes, which can be used to call functions through Ctypes.
P.S. I am not sure whether Cython would suit better, but we don't have Cython in the environment.
|
Python & C: Is it possible to mix Ctypes and Swig together?
| 0 | 0 | 0 | 321 |
14,888,428 |
2013-02-15T04:49:00.000
| 0 | 0 | 0 | 0 |
django,python-2.7,socket.io,gevent,gevent-socketio
| 14,888,521 | 2 | false | 1 | 0 |
I think what you want is from socketio.server import SocketIOServer
| 2 | 0 | 0 |
Kindly help me in configuring socketio in my Django module. I am using the Windows 7 OS.
File wsgi.py
Sample Code - from socketio import SocketIOServer
Error - Unresolved import:SocketIOServer
I am new to Python and the Django framework!
|
socketio in python
| 0 | 0 | 1 | 475 |
14,888,428 |
2013-02-15T04:49:00.000
| 1 | 0 | 0 | 0 |
django,python-2.7,socket.io,gevent,gevent-socketio
| 53,155,835 | 2 | false | 1 | 0 |
Try this:
pip install socketIO-server
| 2 | 0 | 0 |
Kindly help me in configuring socketio in my Django module. I am using the Windows 7 OS.
File wsgi.py
Sample Code - from socketio import SocketIOServer
Error - Unresolved import:SocketIOServer
I am new to Python and the Django framework!
|
socketio in python
| 0.099668 | 0 | 1 | 475 |
14,888,843 |
2013-02-15T05:33:00.000
| 0 | 0 | 0 | 0 |
python,linux,django,django-class-based-views
| 14,888,883 | 2 | false | 1 | 0 |
Create a new view in Django
In the view code, import os
Use lastLines = os.popen("tail -n 20 /path/to/logFile").read()
Show these lastLines in the template (see the sketch below)
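A rough sketch of such a view (paths and the template name are placeholders):

    import os
    from django.shortcuts import render

    def mylog(request):
        django_log = os.popen('tail -n 20 /path/to/django.log').read()
        apache_log = os.popen('tail -n 20 /path/to/apache.error.log').read()
        return render(request, 'mylog.html',
                      {'django_log': django_log, 'apache_log': apache_log})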
| 1 | 2 | 0 |
I have my two log files in my django root dir called apache.error.log and django.log. In my app/static folder I have the HTML file mylog.html. Now I want to view those log files inside that HTML page.
Is this possible? I want to view the last 20 lines of both files. basically something like tail -f, but inside the browser so that I can have my one tab always open for debugging.
|
How can I open two log files in one HTML page in Django?
| 0 | 0 | 0 | 282 |
14,889,206 |
2013-02-15T06:10:00.000
| 3 | 0 | 0 | 0 |
python,mysql,search,search-engine
| 14,889,522 | 2 | false | 1 | 0 |
The best bet for you to build a "search engine" for the 10,000 articles is to read "Programming Collective Intelligence" by Toby Segaran. It's a wonderful read, and to save time go straight to Chapter 4 (August 2007 edition).
| 1 | 0 | 0 |
I have a MySQL database with around 10,000 articles in it, but that number will probably go up with time. I want to be able to search through these articles and pull out the most relevant results based on some keywords. I know there are a number of projects that I can plug into that can essentially do this for me. However, the application for this is very simple, and it would be nice to have direct control and working knowledge of how the whole thing operates. Therefore, I would like to look into building a very simple search engine from scratch in Python.
I'm not even sure where to start, really. I could just dump everything from the MySQL DB into a list and try to sort that list based on relevance; however, that seems like it would be slow, and get slower as the number of database items increases. I could use some basic MySQL search to get the top 100 most relevant results from what MySQL thinks, then sort those 100. But that is a two-step process which may be less efficient, and I might risk missing an article if it is just out of range.
What are the best approaches I can take to this?
|
Search engine from scratch
| 0.291313 | 1 | 0 | 643 |
14,890,211 |
2013-02-15T07:45:00.000
| 1 | 0 | 0 | 0 |
python,file,postgresql,csv
| 14,890,240 | 1 | true | 0 | 0 |
Reduce the size of your transactions?
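If the bulk load goes through copy_from, a minimal sketch (table/column names and rows are placeholders; Python 3 style) that builds the data in memory and streams it in one call, so no file handle needs to stay open:

    import io
    import psycopg2

    entries = [(1, 'foo', 2.5), (2, 'bar', 3.5)]      # placeholder rows
    conn = psycopg2.connect('dbname=mydb user=me')

    buf = io.StringIO()
    for row in entries:
        buf.write('\t'.join(str(v) for v in row) + '\n')
    buf.seek(0)

    cur = conn.cursor()
    cur.copy_from(buf, 'mytable', columns=('id', 'name', 'score'))
    conn.commit()
    conn.close()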
| 1 | 0 | 0 |
I have a situation where my script parses approx 20,000 entries and saves them to the DB. I have used a transaction, which takes around 35 seconds to save and also consumes a lot of memory, since the queries are held in memory until committed.
I have found another way: write a CSV, then load it into Postgres using "copy_from", which is very fast. Can anyone suggest whether I should open the file once at the start and close it while loading into Postgres, or open the file each time a single entry is ready to write and then close it?
What will be the best approach to save memory utilization?
|
File writing in python
| 1.2 | 1 | 0 | 90 |
14,891,492 |
2013-02-15T09:22:00.000
| 3 | 1 | 0 | 0 |
php,python,aes,pycrypto,phpseclib
| 14,892,493 | 1 | true | 0 | 0 |
I strongly recommend you adjust your PHP code to use (at least) a sixteen byte key, otherwise your crypto system is considerably weaker than it might otherwise be.
I would also recommend you switch to CBC-mode, as ECB-mode may reveal patterns in your input data. Ensure you use a random IV each time you encrypt and store this with the ciphertext.
Finally, to address your original question:
According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working.
The space character 0x20 is not the same as the null character 0x00.
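To mimic phpseclib's null padding on the Python side, a small sketch (the key and message are placeholders):

    from Crypto.Cipher import AES

    key = b'0123456789'              # the 10-byte key used on the PHP side
    key = key.ljust(16, b'\x00')     # null-pad to 16 bytes, like phpseclib does
    cipher = AES.new(key, AES.MODE_ECB)
    ciphertext = cipher.encrypt(b'16 byte message.')   # ECB needs block-aligned input
    print(cipher.decrypt(ciphertext))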
| 1 | 1 | 0 |
I am working on a data intensive project where I have been using PHP for fetching data and encrypting it using phpseclib. A chunk of the data has been encrypted in AES with the ECB mode -- however the key length is only 10. I am able to decrypt the data successfully.
However, I need to use Python in the later stages of the project and consequently need to decrypt my data using it. I tried employing PyCrypto but it tells me the key length must be 16, 24 or 32 bytes long, which is not the case. According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working.
What should I do?
|
Key length issue: AES encryption on phpseclib and decryption on PyCrypto
| 1.2 | 0 | 0 | 975 |
14,895,234 |
2013-02-15T12:57:00.000
| 2 | 1 | 0 | 0 |
python
| 14,895,361 | 1 | true | 0 | 0 |
Several reasons:
Not all packages are pure-python packages. It's easy to include C-extensions in your package and have setup.py automate the compilation process.
Automated dependency management; dependencies are declared and installed for you by the installer tools (pip, easy_install, zc.buildout). Dependencies can be declared dynamically too (try to import json, if that fails, declare a dependency on simplejson, etc.).
Custom resource installation setups. The installation process is highly configurable and dynamic. The same goes for dependency detection; the cx_Oracle package has to jump through quite a few hoops to make installation straightforward across all the various platforms and quirks of the Oracle library distribution options it needs to support, for example.
Why would you still want to do this for CLI scripts? That depends on how crucial the CLI is to you; will you be maintaining this over the coming years? Then I'd still use a setup.py, because it documents what the dependencies are, including minimal version needs. You can add tests (python setup.py test), and deploy to new locations or upgrade dependencies with ease.
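A minimal setup.py sketch (project name, version pin and entry point are placeholders):

    from setuptools import setup, find_packages

    setup(
        name='mycli',
        version='0.1',
        packages=find_packages(),
        install_requires=['paramiko>=1.9'],   # declared dependency with a minimal version
        entry_points={'console_scripts': ['mycli = mycli.main:run']},
    )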
| 1 | 3 | 0 |
I am writing a CLI python application that has dependencies on a few libraries (Paramiko etc.).
If I download their source and just place them under my main application source, I can import them and everything works just fine.
Why would I ever need to run their setup.py installers or deal with python package managers?
I understand that when deploying server-side applications it is OK for an admin to run easy_install/pip commands etc. to install the prerequisites, but for CLI apps that have to be distributed as self-contained apps that only depend on a Python binary, what is the recommended approach?
|
why run setup.py, can I just embed the code?
| 1.2 | 0 | 0 | 251 |
14,896,418 |
2013-02-15T14:03:00.000
| 0 | 0 | 0 | 1 |
python,python-imaging-library,celery
| 14,896,601 | 1 | false | 0 | 0 |
OK, I've realised this only happens if I'm running Celery in debug mode; outside of this it works fine.
| 1 | 0 | 0 |
I'm trying to write a Celery task for processing large TIFF files. From past experience I've found vipscc uses less memory than PIL to process/resize TIFFs, so I'd like to use that module. The problem is that when I try to import vipscc inside a Celery task executed by a worker, I get this message:
fatal Python error: can't initialise module vips
vipsmodule: Missing argument for -c
I've tried executing the script from the shell outside of a Celery worker and it works fine.
I'm totally stumped; I can't even find out what -c is for. Has anyone got any ideas?
Thanks
|
python vipscc in celery Missing argument
| 0 | 0 | 0 | 77 |
14,897,285 |
2013-02-15T14:53:00.000
| 2 | 0 | 1 | 0 |
python,performance,optimization,memory-management
| 14,897,321 | 3 | false | 0 | 0 |
With regards to reclaiming the memory, there will be no difference; assuming the refcount of the object in both situations drops to 0, the memory will be reclaimed in exactly the same manner.
| 2 | 0 | 0 |
I wish to delete a large item in Python inside a function. Some forums suggest using del LargeObject and others LargeObject = None. In terms of performance (speed and reclaiming memory after deleting the item), which is the best solution?
|
Python eliminate a Large item with "del LargeItem" or "LargeItem= None" inside a function
| 0.132549 | 0 | 0 | 70 |
14,897,285 |
2013-02-15T14:53:00.000
| 1 | 0 | 1 | 0 |
python,performance,optimization,memory-management
| 14,897,322 | 3 | false | 0 | 0 |
The difference between the two statements is that del will remove LargeObject from the local namespace (resulting in a NameError if you try to use it). The other will keep LargeObject in the current namespace, but its value will be None -- most likely resulting in a ValueError or TypeError if you try to use it. Otherwise, I don't really see much difference between the two approaches. Either way, if you want to reclaim your memory, you need to make sure you don't have other references to LargeObject sitting around.
| 2 | 0 | 0 |
I wish to delete a large item in Python inside a function. Some forums suggest using del LargeObject and others LargeObject = None. In terms of performance (speed and reclaiming memory after deleting the item), which is the best solution?
|
Python eliminate a Large item with "del LargeItem" or "LargeItem= None" inside a function
| 0.066568 | 0 | 0 | 70 |
14,898,843 |
2013-02-15T16:19:00.000
| 4 | 0 | 1 | 0 |
python,static-methods
| 14,898,860 | 3 | false | 0 | 0 |
If none of the methods share any state, there's not much of a reason to have a class at all. A module with functions is probably a better idea ...
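That said, if you really do want the metaclass route, a minimal sketch (Python 2 syntax; the class names are made up):

    import types

    class AllStatic(type):
        def __new__(mcls, name, bases, ns):
            for attr, value in list(ns.items()):
                if isinstance(value, types.FunctionType):
                    ns[attr] = staticmethod(value)
            return super(AllStatic, mcls).__new__(mcls, name, bases, ns)

    class Util(object):
        __metaclass__ = AllStatic     # on Python 3: class Util(metaclass=AllStatic)
        def double(x):
            return x * 2

    print(Util.double(21))            # 42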
| 1 | 1 | 0 |
Is there any way to apply staticmethod to all methods of a given class?
I was thinking we could access the methods in its metaclass (in __new__ of the metaclass) and apply staticmethod, but I am not aware of the syntax. Can anyone please shed light on this?
|
Any way to apply staticmethod to all class methods?
| 0.26052 | 0 | 0 | 150 |
14,899,139 |
2013-02-15T16:35:00.000
| 2 | 0 | 1 | 0 |
python,matrix,numpy,integration,exponential
| 14,900,251 | 1 | false | 0 | 0 |
Provided A has the right properties, you could transform it to the diagonal form A0 by calculating its eigenvectors and eigenvalues. In the diagonal form, the solution is sol = [exp(A0*b) - exp(A0*a)] * inv(A0), where A0 is the diagonal matrix of the eigenvalues and inv(A0) just contains the inverse of the eigenvalues on its diagonal. Finally, you transform the solution back by multiplying it with the transpose of the eigenvector matrix from the left and the eigenvector matrix from the right: transpose(eigvecs) * sol * eigvecs.
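If A is invertible, the integral also has a closed form that avoids diagonalization: the integral of exp(A*x) dx from a to b equals inv(A) * (exp(A*b) - exp(A*a)). A small sketch with SciPy (the example matrix is arbitrary):

    import numpy as np
    from scipy.linalg import expm, solve

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example matrix, assumed invertible
    a, b = 0.0, 1.0
    integral = solve(A, expm(A * b) - expm(A * a))   # inv(A) * (exp(A*b) - exp(A*a))
    print(integral)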
| 1 | 1 | 1 |
I have a matrix of the form, say, e^(Ax), where A is a square matrix. How can I integrate it from a given value a to another value b so that the output is a corresponding array?
|
how to find the integral of a matrix exponential in python
| 0.379949 | 0 | 0 | 1,283 |
14,900,510 |
2013-02-15T17:53:00.000
| 0 | 0 | 0 | 0 |
python,tkinter
| 42,867,640 | 7 | false | 0 | 1 |
Create an single file exe using PyInstaller and use Inno Setup to build the windows installer package. Inno Setup will do the icon stuff for you.
| 2 | 15 | 0 |
I've been working on a very simple Python/Tkinter script (a .pyw file) and I'd like to change its application icon (the 'file' icon shown in the Explorer window and the Start/All Programs window, for example - not the 'file type' icon, nor the icon of the app's main window) and the taskbar icon (the icon shown on the taskbar when the application is minimized). Is it possible to change them, or is it something only doable when you actually install an application through an .exe?
This little app is supposed to run on Windows XP / 7 machines only and it's in Python 2.7.3.
Thanks in advance!
|
Changing the application and taskbar icon - Python/Tkinter
| 0 | 0 | 0 | 35,930 |
14,900,510 |
2013-02-15T17:53:00.000
| 0 | 0 | 0 | 0 |
python,tkinter
| 72,111,647 | 7 | false | 0 | 1 |
add
--icon=iconname.ico
to the pyinstaller command at the prompt,
e.g. pyinstaller --windowed --add-data "pics/myicon.ico;pics" --add-data "pics/*.png;pics" --icon=pics/myicon.ico -d bootloader myscript.py
This will show your icon on the Windows taskbar instead of the default Python package icon.
| 2 | 15 | 0 |
I've been working on a very simple Python/Tkinter script (a .pyw file) and I'd like to change its application icon (the 'file' icon shown in the Explorer window and the Start/All Programs window, for example - not the 'file type' icon, nor the icon of the app's main window) and the taskbar icon (the icon shown on the taskbar when the application is minimized). Is it possible to change them, or is it something only doable when you actually install an application through an .exe?
This little app is supposed to run on Windows XP / 7 machines only and it's in Python 2.7.3.
Thanks in advance!
|
Changing the application and taskbar icon - Python/Tkinter
| 0 | 0 | 0 | 35,930 |
14,902,023 |
2013-02-15T19:34:00.000
| 1 | 0 | 0 | 0 |
python,django
| 14,902,498 | 2 | false | 1 | 0 |
I believe this is not supported out of the box. Off the top of my head, one way to do it would be with a special 404 handler that, having failed to match against any of the defined URLs, treats the request as a request for a static resource. This would be reasonably easy to do in the development environment but significantly more difficult when nginx, Apache, and/or gunicorn get involved.
In other words, don't do this. Nest your statics (or put them on a different subdomain), but don't mix the URL hierarchy in this way.
| 1 | 2 | 0 |
Yep, I want it to work like in Flask framework - there I could set parameters like this:
static_folder=os.getcwd()+"/static/", static_url_path=""
and all the files that lie in ./static/files/blabla.bla would be accessible at the mysite.com/files/blabla.bla address. I really don't want to add static after mysite.com.
But if I set STATIC_URL = '/' in Django, then I can get my static files at this address, but suddenly I cannot fetch my pages that are described in urls.py.
|
Django: how to make STATIC_URL empty?
| 0.099668 | 0 | 0 | 294 |
14,902,181 |
2013-02-15T19:44:00.000
| 16 | 0 | 1 | 0 |
python,python-2.7,compiler-errors,pycharm
| 18,464,696 | 3 | false | 0 | 0 |
In Pycharm 2.6.3:
Code -> Inspect Code
| 2 | 101 | 0 |
I use python 2.7 in a virtual environment and PyCharm 2.7 (new build as of feb 07 2013).
Whenever I open a python file in it that has unambiguous errors (equivalent to compile errors in other languages, e.g. using undeclared variables, calling non-existing functions), it shows red stripes in the gutter of the file.
So, I discover errors randomly as I happen to navigate to a file that contains them. What I would really like is to be able to list all of the Python errors in a separate window. The Visual Studio 2005/2008/2010/... IDE has a separate "Errors" view that lists all of them with file names and line numbers, and gives me the ability to click on any one of these errors and navigate directly to the source.
Does PyCharm have anything like this?
|
Can PyCharm list all of Python errors in a project?
| 1 | 0 | 0 | 25,204 |
14,902,181 |
2013-02-15T19:44:00.000
| 15 | 0 | 1 | 0 |
python,python-2.7,compiler-errors,pycharm
| 14,902,247 | 3 | false | 0 | 0 |
Yes, run Analyze|Inspect Code and specify Whole project as the scope of analysis.
| 2 | 101 | 0 |
I use python 2.7 in a virtual environment and PyCharm 2.7 (new build as of feb 07 2013).
Whenever I open a python file in it that has unambiguous errors (equivalent to compile errors in other languages, e.g. using undeclared variables, calling non-existing functions), it shows red stripes in the gutter of the file.
So, I discover errors randomly as I happen to navigate to a file that contains them. What I would really like is to be able to list all of the Python errors in a separate window. The Visual Studio 2005/2008/2010/... IDE has a separate "Errors" view that lists all of them with file names and line numbers, and gives me the ability to click on any one of these errors and navigate directly to the source.
Does PyCharm have anything like this?
|
Can PyCharm list all of Python errors in a project?
| 1 | 0 | 0 | 25,204 |
14,907,457 |
2013-02-16T05:56:00.000
| 2 | 0 | 0 | 0 |
python,django,caching,memcached
| 14,907,596 | 2 | false | 1 | 0 |
Memcached is more or less only limited by available (free) memory in the number of servers you run it on. The more memory, the more data fits, and since it uses fairly efficient in-memory indexes, you won't really see performance degrade in any significant way with more objects.
Remember though, it's a cache and there is no guarantee that you'll be able to retrieve what you put in. More memory will make memcached try to keep more data in memory, but there is no guarantee that it won't just throw data away even if memory is available if it somehow finds that a better idea.
| 1 | 1 | 0 |
How large values can I store and retrieve from memcached without degrading its performance?
I am using memcached with python-memcached in a django based web application.
|
How large data can memcached handle efficiently?
| 0.197375 | 0 | 0 | 2,162 |
14,910,250 |
2013-02-16T12:23:00.000
| 1 | 0 | 0 | 1 |
lxml,importerror,python-3.3
| 14,927,230 | 1 | false | 0 | 0 |
You should probably mention the specific operating system you're trying to install on, but I'll assume it's some form of Linux, perhaps Ubuntu or Debian since you mention apt-get.
The error message you mention is typical on lxml when the libxml2 and/or libxslt libraries are not installed for it to link with. For whatever reason, the install procedure does not detect when these are not present and can give the sense the install has succeeded even though those dependencies are not satisfied.
If you issue apt-get install libxml2 libxml2-dev libxslt libxslt-dev that should eliminate this error.
| 1 | 1 | 0 |
I am having a hard time installing lxml(3.1.0) on python-3.3.0. It installs without errors and I can see the lxml-3.1.0-py3.3-linux-i686.egg in the correct folder (/usr/local/lib/python3.3/site-packages/), but when I try to import etree, I get this:
from lxml import etree
Traceback (most recent call last):
File "", line 1, in
ImportError: /usr/local/lib/python3.3/site-packages/lxml-3.1.0-py3.3-linux-i686.egg/lxml/etree.cpython-33m.so: undefined symbol: xmlBufContent
I did try to install with apt-get, I tried "python3 setup.py install", and I tried via easy_install. I have to mention that I have 3 versions installed (2.7, 3.2.3 and 3.3.0), but I am too much of a beginner to tell if this has to do with it.
I did search all over, but I could not find any solution to this.
Any help is greatly appreciated!
best,
Uhru
|
lxml on python-3.3.0 ImportError: undefined symbol: xmlBufContent
| 0.197375 | 0 | 1 | 2,277 |
14,910,858 |
2013-02-16T13:30:00.000
| 2 | 0 | 0 | 0 |
python,python-2.7,tkinter
| 55,293,163 | 6 | false | 0 | 1 |
root.geometry('520x400+350+200')
Explanation: ('width x height + X coordinate + Y coordinate')
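To center the window, a small sketch using the screen dimensions (Python 2 import shown; use 'tkinter' on Python 3):

    import Tkinter as tk

    root = tk.Tk()
    w, h = 520, 400
    x = (root.winfo_screenwidth() - w) // 2
    y = (root.winfo_screenheight() - h) // 2
    root.geometry('%dx%d+%d+%d' % (w, h, x, y))   # centered on the screen
    root.mainloop()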
| 1 | 67 | 0 |
How can I tell a Tkinter window where to open, based on screen dimensions? I would like it to open in the middle.
|
How to specify where a Tkinter window opens?
| 0.066568 | 0 | 0 | 110,461 |
14,910,977 |
2013-02-16T13:46:00.000
| 3 | 0 | 1 | 0 |
python
| 45,383,110 | 2 | false | 0 | 0 |
I just ran into a similar problem. Everything was working fine for a week, but today suddenly an import failed.
I found that in the directory of the module I was trying to import, there was a corrupt .pyc file. I deleted that file, then everything worked again.
| 1 | 2 | 0 |
I had everything installed and working on my laptop. Today, when I wanted to work with my projects again, suddenly everything gives an error on imports. It's not just one package, but multiple. The only thing I can think of is Windows Update, but I actually doubt this is the cause of the problem.
I have everything installed in 32-bit, since some packages were only available as 32-bit. I already reinstalled everything and restarted several times. I'm not very experienced with Python, so that's why I'm wondering if anybody with more knowledge could help me with this.
The packages installed are: CherryPy, Cython, Oursql, PIL, pywin32, setuptools
Thanks in advance!!
|
python imports suddenly not working
| 0.291313 | 0 | 0 | 3,681 |
14,912,150 |
2013-02-16T15:54:00.000
| 0 | 0 | 0 | 1 |
python,python-2.7,windows-server-2008,ubuntu-12.04,nfs
| 15,278,451 | 2 | false | 0 | 0 |
pynfs is a test suite and not meant to run as an NFS server in production
| 2 | 0 | 0 |
I need to connect from a Windows Server 2008 machine in a secure network to an Ubuntu box, and easily write and read files from Python code. I want to avoid Samba or FTP, so I am considering NFS, and my question is whether pynfs runs stably on Windows (if at all, or does it work on Linux only?).
I found the source and some forks on GitHub.
I am also unsure about the state of the project: it is not on PyPI and it cannot be installed via pip, so I wonder if this is a maintained and updated project with a future. It would be great to hear from someone who has some production experience with it. I am using Python 2.7.
|
Is pynfs stable to run on windows server 2008?
| 0 | 0 | 0 | 173 |
14,912,150 |
2013-02-16T15:54:00.000
| 0 | 0 | 0 | 1 |
python,python-2.7,windows-server-2008,ubuntu-12.04,nfs
| 15,273,016 | 2 | true | 0 | 0 |
I would prefer pynfs had some modern infrastructure around it.
I went with samba this time.
| 2 | 0 | 0 |
I need to connect from a Windows Server 2008 machine in a secure network to an Ubuntu box, and easily write and read files from Python code. I want to avoid Samba or FTP, so I am considering NFS, and my question is whether pynfs runs stably on Windows (if at all, or does it work on Linux only?).
I found the source and some forks on GitHub.
I am also unsure about the state of the project: it is not on PyPI and it cannot be installed via pip, so I wonder if this is a maintained and updated project with a future. It would be great to hear from someone who has some production experience with it. I am using Python 2.7.
|
Is pynfs stable to run on windows server 2008?
| 1.2 | 0 | 0 | 173 |
14,915,048 |
2013-02-16T21:00:00.000
| 0 | 0 | 0 | 1 |
python,process,terminate
| 14,915,445 | 2 | false | 0 | 0 |
Create a thread when your process starts.
Make that thread sleep for the required duration.
When that sleep is over, kill the process (see the sketch below).
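A minimal sketch of those steps (the five-minute limit is from the question):

    import os
    import threading
    import time

    def self_destruct(seconds):
        time.sleep(seconds)
        os._exit(0)          # hard-exit the whole process, even if it is stuck

    watchdog = threading.Thread(target=self_destruct, args=(5 * 60,))
    watchdog.daemon = True   # don't let the watchdog itself keep the process alive
    watchdog.start()

    # ... the PAMIE automation would run here ...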
| 1 | 0 | 0 |
How is it possible to get a compiled .exe program written in Python to kill itself after a period of time after it is launched?
If I have some code and I compile it into an .exe, then launch it and it stays in a 'running' or 'waiting' state, how can I get it to terminate after a few minutes regardless of what the program is doing?
The reason why is that the launched exe invokes a URL using PAMIE and automates some clicks. What I have noticed is that if the browser is closed, the process remains in memory and does not clean itself up. I wanted to find a way to auto-clean up the process after, say, 5 minutes, which is more than enough time. I've tried using psutil to detect the process, but that does not work in my case. Any suggestions are greatly appreciated.
|
Python built exe process to kill itself after a period of time
| 0 | 0 | 0 | 1,455 |
14,915,097 |
2013-02-16T21:05:00.000
| 0 | 0 | 1 | 0 |
python,windows-7,sleep
| 50,913,327 | 3 | false | 0 | 0 |
You could try to make a Visual Basic Script (.vbs) or Batch (.bat) program and run it from python. The program should have code to run the sleep/wake up function.
Then again, since you are using windows, just use task scheduler.
| 2 | 2 | 0 |
I have a Python script doing some job which takes up to 5 minutes, then it sleeps for an hour and starts again. Now I want my laptop to sleep instead of being always on while waiting, and to wake up roughly every hour just to run the job. Is it possible to sleep and wake up with Python?
I am using Windows 7.
|
Sleep and wake with python
| 0 | 0 | 0 | 8,990 |
14,915,097 |
2013-02-16T21:05:00.000
| 1 | 0 | 1 | 0 |
python,windows-7,sleep
| 14,915,196 | 3 | false | 0 | 0 |
You should better create a Task Scheduler task in Windows instead, that runs your python script on schedule and wakes up the PC if needed (the task's setting). To put it to sleep just set up energy settings to sleep after several minutes of inactivity.
| 2 | 2 | 0 |
I have a Python script doing some job which takes up to 5 minutes, then it sleeps for an hour and starts again. Now I want my laptop to sleep instead of being always on while waiting, and to wake up roughly every hour just to run the job. Is it possible to sleep and wake up with Python?
I am using Windows 7.
|
Sleep and wake with python
| 0.066568 | 0 | 0 | 8,990 |
14,923,583 |
2013-02-17T16:59:00.000
| 0 | 0 | 0 | 1 |
python,celery
| 71,510,516 | 2 | false | 0 | 0 |
I think you can, at least in a chord. When you set bind=True on your task, you can access self.request. In self.request.chord you can find a detailed dict. In its kwargs or options['chord'] you will find what you're looking for, but it's not an elegant solution. Also, if the parent has been replaced, you will only be able to see the final state.
| 1 | 2 | 0 |
Is it possible to access the arguments with which a parent task A was called, from its child task Z? Put differently, when Task Z gets called in a chain, can it somehow access an argument V that was invoked when Task A was fired, but that was not passed through any intermediary nodes between tasks A and Z? And if so, how?
Using Celery 3.0 with RabbitMQ for results backend.
|
Access Arguments to Parent Task from Subtask in Celery
| 0 | 0 | 0 | 960 |
14,923,821 |
2013-02-17T17:23:00.000
| 1 | 1 | 0 | 0 |
debugging,python-3.x,aptana,pydev
| 19,110,870 | 1 | true | 0 | 0 |
You need to make sure that the run/debug configuration uses the correct main module otherwise it will take the current windows source file to be the main module. If there is no executable code in that file, i.e. there is nothing in global scope, the file will simply run to completion.
| 1 | 2 | 0 |
I am using Aptana Studio 3.3.1 with PyDev 2.7 and writing code in Python 3.3.
I was debugging my code by setting a breakpoint and clicking Run > Debug, but the code did not stop at the breakpoint and ran through to the end.
In the Interpreter - Python setting, I have included the following in my libraries > System PYTHONPATH:
C:\Python33\DLLs
C:\Python33\lib
C:\Python33
C:\Python33\lib\site-packages
Thanks for any help.
|
Does Aptana Studio 3.3.1 support debugging of Python 3.3?
| 1.2 | 0 | 0 | 260 |
14,924,586 |
2013-02-17T18:35:00.000
| 1 | 0 | 0 | 1 |
python,memcached,amazon-elasticache
| 14,924,764 | 3 | true | 1 | 0 |
As far as I know, an ElastiCache cluster is just a bunch of memcached servers, so you need to give your memcached client the list of all of your servers and have the client do the relevant load balancing.
For Python, you have a couple of options:
pylibmc - which is a wrapper around libmemcached - one of the best and fastest memcached clients there is
python-memcached - a native Python client - very basic, but easy to work with, install and use
They haven't provided a client yet in python to deal with the new auto-discovery feature unfortunately.
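For the plain server-list approach in the meantime, a pylibmc sketch (the node endpoints are hypothetical):

    import pylibmc

    servers = ['node1.abc123.use1.cache.amazonaws.com:11211',
               'node2.abc123.use1.cache.amazonaws.com:11211']
    mc = pylibmc.Client(servers, binary=True,
                        behaviors={'ketama': True})   # consistent hashing across nodes
    mc.set('greeting', 'hello')
    print(mc.get('greeting'))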
| 1 | 2 | 0 |
Recently, AWS announced ElastiCache's auto-discovery feature, although they only officially released a client for Java. Does anyone know of a Python Memcached library with support for this feature?
|
Is there a Python Memcached library with support for AWS ElastiCache's auto-discovery feature?
| 1.2 | 0 | 0 | 2,949 |
14,928,820 |
2013-02-18T02:56:00.000
| 15 | 0 | 1 | 0 |
python,regex
| 14,928,828 | 2 | true | 0 | 0 |
You use brace notation. For instance, the regex a{10,} would match 10 or more a characters. a{10,20} would match at least 10 and no more than 20 as.
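A quick illustration in Python (and for "more than 10 characters of any kind", .{11,} does the job):

    import re

    print(re.search(r'a{10,}', 'a' * 12))     # matches: at least 10 a's
    print(re.search(r'a{10,}', 'a' * 9))      # None: only 9 a's
    print(re.search(r'a{10,20}', 'a' * 15))   # matches: between 10 and 20 a's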
| 1 | 3 | 0 |
How do you indicate in a regex that you want to require more than 10 characters? I know that '*' means zero or more, and that '+' means one or more, but what is the syntax for requiring more than 10? Thanks all!
|
regex - more than 10 characters
| 1.2 | 0 | 0 | 7,545 |
14,931,793 |
2013-02-18T08:04:00.000
| 0 | 0 | 0 | 0 |
python,django,ubuntu,django-templates
| 14,931,827 | 6 | false | 1 | 0 |
Should be here: /usr/lib/python2.7/site-packages/django/contrib/admin/templates
| 3 | 14 | 0 |
I have trouble seeing the django/contrib/admin/templates folder. It seems to be hidden in the /usr/lib/python2.7/dist-packages/ folder; Ctrl+H won't help (apparently all Django files are hidden).
"locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10.
Thanks in advance
|
find django/contrib/admin/templates
| 0 | 0 | 0 | 15,440 |
14,931,793 |
2013-02-18T08:04:00.000
| 0 | 0 | 0 | 0 |
python,django,ubuntu,django-templates
| 14,931,847 | 6 | false | 1 | 0 |
Since, everyone is posting my comment's suggestion, might as well post it myself. Try looking at:
/usr/lib/python2.6/site-packages/django/
| 3 | 14 | 0 |
I have trouble seeing the django/contrib/admin/templates folder. It seems to be hidden in the /usr/lib/python2.7/dist-packages/ folder; Ctrl+H won't help (apparently all Django files are hidden).
"locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10.
Thanks in advance
|
find django/contrib/admin/templates
| 0 | 0 | 0 | 15,440 |
14,931,793 |
2013-02-18T08:04:00.000
| 0 | 0 | 0 | 0 |
python,django,ubuntu,django-templates
| 53,451,127 | 6 | false | 1 | 0 |
If you are using Python 3, Django is located in your venv. In my case the templates are located at <project_root>/venv/lib/python3.5/site-packages/django/contrib/admin/templates/.
| 3 | 14 | 0 |
I have trouble seeing the django/contrib/admin/templates folder. It seems to be hidden in the /usr/lib/python2.7/dist-packages/ folder; Ctrl+H won't help (apparently all Django files are hidden).
"locate django/contrib/admin/templates" in the terminal shows a bunch of files, but how can I see those files in a GUI? I use Ubuntu 12.10.
Thanks in advance
|
find django/contrib/admin/templates
| 0 | 0 | 0 | 15,440 |
14,933,700 |
2013-02-18T10:04:00.000
| 2 | 1 | 0 | 0 |
python,arrays,file,file-io,multidimensional-array
| 14,934,694 | 1 | true | 0 | 0 |
The time it takes for a seek operation would be measured in low milliseconds, probably less than 10 in most cases. So that wouldn't be a bottleneck.
However, if you have to retrieve and save all the records from the database either way, you may end up with roughly the same IO load and perhaps greater. The IO time for writing a file is certainly greater than reading into memory.
Time for a small-ish experiment :) Try it with a few arrays and time the performance, then you can do the math to see how it would scale.
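A tiny sketch of the seek idea (the pixel size, binary format and file names are all assumptions):

    import struct

    PIXEL_SIZE = 4                     # e.g. one little-endian float32 per pixel

    def read_pixel(path, pixel_index):
        with open(path, 'rb') as f:
            f.seek(pixel_index * PIXEL_SIZE)   # jump straight to the pixel
            return struct.unpack('<f', f.read(PIXEL_SIZE))[0]

    array_files = ['field0.bin', 'field1.bin']     # placeholder file list
    values = [read_pixel(p, 123456) for p in array_files]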
| 1 | 0 | 0 |
I have a few thousand very big radio-telemetry array fields of the same area in a database. The georeference of the pixels is the same for all of the array fields. An array can only be loaded into memory in an all-or-nothing fashion.
I want to extract the pixel for a specific geo-coordinate from all the array fields. Currently I query for the index of the specific pixel for a specific geocoordinate and then load all array fields from the database into memory. However that is very IO intensive and overloads our systems.
I'd imagine the following: I save the arrays to disk and then sequentially open them and seek to the byte-position corresponding to the pixel. I imagine that this is far less wasteful and much speedier than loading them all to memory.
Is seeking to a position considered a fast operation or would one not do such a thing?
|
Save byte-arrays to disk to reduce memory consumption and increase speed?
| 1.2 | 0 | 0 | 95 |
14,934,084 |
2013-02-18T10:26:00.000
| 3 | 0 | 1 | 0 |
python-3.x,wolfram-mathematica,ipython,ipython-notebook
| 14,941,337 | 2 | true | 0 | 0 |
First, when you open an IPython notebook, this does not mean the state of the kernel is lost,
unless you restarted the server or explicitly stopped the kernel.
Otherwise, there are no marked cells, but there is a "run until here" in the dev version.
Also, if you are using the dev version, using the Cell Toolbar/metadata and, I would say, ~30 lines of JavaScript, it should be doable.
I suggest you open an enhancement request on main issue tracker. This could typically be made as an extension during a sprint and/or a blog post to explain internal of notebook.
| 1 | 8 | 0 |
When I open a saved IPython Notebook, I need to evaluate all the cells with imports, function definitions etc. to continue working on the session. It is convenient to click Cell > Run All to do this. But what If I do not want to re-evaluate all calculations? Do I need to pick the cells to evaluate by hand each time?
For this problem, Mathematica has the concept of "initialization cells". You can mark some cells in the notebook as initialization cell, and then perform "evaluate initialization cells" after opening the notebook.
Does the IPython Notebook have a similar solution?
|
Does the IPython Notebook have "initialization cells"?
| 1.2 | 0 | 0 | 3,769 |
14,935,137 |
2013-02-18T11:24:00.000
| 0 | 0 | 1 | 0 |
python,egg,pypi
| 14,935,778 | 1 | true | 0 | 0 |
You'll need to manually register these.
You can, however, use the PyPI web interface to do this. The central PyPI server has a Package submission link in the left-hand menu bar, leading to http://pypi.python.org/pypi?%3Aaction=submit_form, presumably your local installation has the same.
Your .eggs are either directories, or zip files. If it is not a directory, you need to unzip the file to look inside (create a copy that you rename to have a .zip extension if that'll make it easier for your tools to recognize it as a zip file).
You'll find a EGG-INFO subdirectory inside the egg, and inside of that you'll find a PKG-INFO file. You can upload this file to the package submission form to replace the setup.py register command.
Once registered, the web UI lets you navigate to the package, from there to the files section of a specific package and upload the egg file.
| 1 | 1 | 0 |
Apologies if this has been asked before but I couldn't for the life of me find an answer to what seems (to me) like a very basic question.
I have a set of .egg packages that do not contain the source (e.g. there is no setup.py file). I need to register and upload these packages to our inhouse pypi repository. Is there any way to do this, short of manually copying the package into the pypi repository directory and manually inserting the entries into the pypi db?
|
Upload package without source to pypi repo
| 1.2 | 0 | 0 | 365 |
14,940,443 |
2013-02-18T16:06:00.000
| 0 | 1 | 0 | 0 |
python,aptana,aptana3
| 56,855,673 | 1 | false | 0 | 0 |
I accidentally somehow set the project as a PyDev project. To disable it, right-click on the project > PyDev > Remove PyDev Project Config.
| 1 | 3 | 0 |
Aptana Studio 3 keeps adding .pydevproject file, how can I disable Python or whatever it's doing this
|
Aptana Studio 3 keeps adding .pydevproject file, how can I disable Python?
| 0 | 0 | 0 | 459 |