Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,987,162 |
2012-06-11T20:35:00.000
| 0 | 0 | 0 | 0 |
php,python,mysql,orm,redbean
| 13,714,374 | 2 | true | 0 | 0 |
Short answer, there is a proof-of-concept called PyBean as answered by Gabor de Mooij, but it barely offers any features and cannot be used. There are no other Python libraries that work like PyBean.
| 1 | 1 | 0 |
When I say 'equivalent', I mean an ORM that allows for the same work style. That is:
Setting up a database
Dispensing and editing 'beans' (table rows) as if the table was already ready, while the table is being created behind the scenes
Reviewing, indexing and polishing the table structure before production
Thanks for any leads
|
Is there a RedBeanPHP equivalent for Python?
| 1.2 | 1 | 0 | 700 |
10,987,246 |
2012-06-11T20:42:00.000
| 13 | 0 | 1 | 0 |
python,parallel-processing,rendering,xvfb
| 10,987,416 | 1 | true | 0 | 0 |
One Xvfb server should be able to handle lots of connections quite well. One thing you want to make sure you do is run the server with the -noreset option. Without it, the server leaks memory every time a client disconnects.
The only time multiple Xvfb servers are helpful is if you have more than one processor available in the machine (e.g. 8 cores) and your script is graphics-heavy. To see if this is the case, connect many instances of your script and check top to see what the CPU usage of Xvfb is. If it's at 100%, you might benefit from additional Xvfb instances.
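As a sketch of the above (the display number and screen geometry are arbitrary assumptions, not values from the question), the server could be launched from Python like this:

```python
import subprocess

def xvfb_command(display=":99", geometry="1024x768x24", noreset=True):
    """Build an Xvfb command line.

    -noreset keeps the server from resetting (and leaking memory)
    every time the last client disconnects.
    """
    cmd = ["Xvfb", display, "-screen", "0", geometry]
    if noreset:
        cmd.append("-noreset")
    return cmd

# Launching requires Xvfb to be installed on the machine:
# server = subprocess.Popen(xvfb_command())
```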
| 1 | 9 | 0 |
Curious about running multiple Xvfb displays: I have between 10 and 50 instances of a script running in parallel that connect to an Xvfb display. Is it advantageous to run the same number of Xvfb displays and connect 1 to 1? Or can multiple processes share the same display? RAM is not an issue, neither is processing power.
|
Xvfb multiple displays for parallel processing?
| 1.2 | 0 | 0 | 6,765 |
10,987,834 |
2012-06-11T21:33:00.000
| 1 | 0 | 1 | 0 |
python,django,opencv,pycharm
| 19,462,559 | 2 | false | 1 | 0 |
I'm not quite sure if this works for you, but it works for me. In my case, it seems I installed OpenCV to work with the default Python shipping with OS X. I had also installed Python 2.7.5 and Python 3 on my Mac, and I see them when choosing a Python interpreter for PyCharm, but none of them let me import the cv2 module. So I changed to the default Python 2.7.2 (/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python). Then, in File/Default Settings/Project Interpreter/Python Interpreter, click the Python interpreter that has been added (Python 2.7.2), click Paths, locate "/usr/local/lib/python2.7/site-packages" and add it. Click the blue refresh button, then Apply and OK. Then it works, both with import and autocompletion.
Regards,
| 2 | 3 | 0 |
I'm running a Django project from PyCharm with the configuration set up to use the Python interpreter from a virtualenv which has a dependency on opencv. The site works fine locally when I run django-admin.py runserver, however I keep getting an "ImportError: No module named cv2" error when I try to run the project directly from the PyCharm IDE.
Has anyone else had this issue with PyCharm and opencv?
|
"ImportError: No module named cv2" when running Django project from PyCharm IDE
| 0.099668 | 0 | 0 | 20,383 |
10,987,834 |
2012-06-11T21:33:00.000
| 10 | 0 | 1 | 0 |
python,django,opencv,pycharm
| 10,992,173 | 2 | true | 1 | 0 |
In the end I had to set an environment variable directly in PyCharm's Edit Configurations -> Run/Debug Configurations -> Environment Variables panel. After hitting the edit button, I added an entry with name PYTHONPATH and value /usr/local/lib/python2.7/site-packages:$PYTHONPATH, which displays in the input box after editing as PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH. Also, I made sure to log out and log back in to OS X, which also fixed a couple of other path-related issues.
| 2 | 3 | 0 |
I'm running a Django project from PyCharm with the configuration set up to use the Python interpreter from a virtualenv which has a dependency on opencv. The site works fine locally when I run django-admin.py runserver, however I keep getting an "ImportError: No module named cv2" error when I try to run the project directly from the PyCharm IDE.
Has anyone else had this issue with PyCharm and opencv?
|
"ImportError: No module named cv2" when running Django project from PyCharm IDE
| 1.2 | 0 | 0 | 20,383 |
10,989,098 |
2012-06-12T00:18:00.000
| 6 | 0 | 1 | 0 |
python,syntax,jython
| 25,375,422 | 4 | false | 0 | 0 |
There are 3 major implementations available for the Python language. Jython is the Java implementation, CPython is the C implementation (the reference implementation), and IronPython is the C# implementation. As far as Python language syntax is concerned, it remains consistent across implementations. Regarding the last part of your question: I don't think Jython 3.x is released or in use yet; you probably meant Python 3.x, and if so, yes, it is.
| 1 | 15 | 0 |
I know Jython converts Python code into Java byte code, but are there any syntax changes between the two? And as a side question: is Jython 3.x usable yet, or is it still being ported?
|
Differences between Jython and Python
| 1 | 0 | 0 | 19,385 |
10,990,496 |
2012-06-12T04:22:00.000
| 0 | 0 | 0 | 0 |
python,mysql,unicode,encoding,utf-8
| 10,992,555 | 1 | true | 0 | 0 |
You most probably need to set your MySQL shell client to use utf8.
You can set it either directly in the mysql shell by running set character set utf8,
or by adding default-character-set=utf8 to your ~/.my.cnf.
| 1 | 0 | 0 |
Here's the scenario:
I have a url in a MySQL database that contains Unicode. The database uses the Latin-1 encoding. Now, when I read the record from MySQL using Python, it gets converted to Unicode because all strings follow the Unicode format in Python.
I want to write the URL into a text file -- to do so, it needs to be converted to bytes (UTF-8). This was done successfully.
Now, given the URLS that are in the text file, I want to query the db for these SAME urls in the database. I do so by calling the source command to execute a few select queries.
Result: I get no matches.
I suspect that the problem stems from my conversion to UTF-8, which somehow is messing up the symbols.
|
Unicode to UTF-8 encoding issue when importing SQL text file into MySQL
| 1.2 | 1 | 0 | 1,752 |
10,992,976 |
2012-06-12T08:27:00.000
| 0 | 1 | 1 | 0 |
python,module
| 10,993,067 | 2 | false | 1 | 0 |
Technically for any Python module to be "installed" you just have to add it to the sys.path variable, so that Python can find and import it.
The same goes for Django apps, which are Python modules. As long as Python can find and import the Django application, you just have to add it to INSTALLED_APPS in settings (and maybe a few more steps, usually described in the application's docs, e.g. adding urls etc.).
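As a minimal sketch of the sys.path point (the "libs" directory name is an invented example, not a Django convention):

```python
import os
import sys

# A hypothetical project-local "libs" directory; prepending it to
# sys.path makes any module inside it importable, i.e. "installed"
# for this project only, without touching the system site-packages.
project_libs = os.path.join(os.getcwd(), "libs")
if project_libs not in sys.path:
    sys.path.insert(0, project_libs)
```

Virtualenv automates essentially this idea on a per-project basis.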
| 1 | 2 | 0 |
I'm experienced in PHP and recently started studying Python, and right now I'm creating a small web project using Django. I have a conceptual question about the approach to installing modules in Python and Django.
Example: based on expected needs for my project I've googled and downloaded django_openid module. And obviously I want to install it to my project.
However, when I do it the prescribed way (python setup.py install) it installs it to python dir as a python module. Thus this module becomes not project specific, but system-wide.
So, what is the generally accepted approach to installing project-specific modules in Python?
Based on my PHP experience it looks strange to install high-level functional modules into Python itself. I'd rather expect them to be installed in the project library and included in the project at runtime.
Or am I missing something important here?
I've googled around, but since this is rather a conceptual question, keyword search doesn't work well in this case.
|
python project-specific modules installation approach
| 0 | 0 | 0 | 223 |
10,993,612 |
2012-06-12T09:12:00.000
| 27 | 0 | 1 | 0 |
python,python-2.7,unicode,beautifulsoup,utf-8
| 31,550,233 | 14 | false | 0 | 0 |
Try using .strip() at the end of your line
line.strip() worked well for me
| 3 | 323 | 0 |
I am currently using Beautiful Soup to parse an HTML file and calling get_text(), but it seems like I'm being left with a lot of \xa0 Unicode representing spaces. Is there an efficient way to remove all of them in Python 2.7, and change them into spaces? I guess the more generalized question would be, is there a way to remove Unicode formatting?
I tried using: line = line.replace(u'\xa0',' '), as suggested by another thread, but that changed the \xa0's to u's, so now I have "u"s everywhere instead. ):
EDIT: The problem seems to be resolved by str.replace(u'\xa0', ' ').encode('utf-8'), but just doing .encode('utf-8') without replace() seems to cause it to spit out even weirder characters, \xc2 for instance. Can anyone explain this?
|
How to remove \xa0 from string in Python?
| 1 | 0 | 0 | 379,578 |
10,993,612 |
2012-06-12T09:12:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7,unicode,beautifulsoup,utf-8
| 65,968,947 | 14 | false | 0 | 0 |
You can try string.strip()
It worked for me! :)
| 3 | 323 | 0 |
I am currently using Beautiful Soup to parse an HTML file and calling get_text(), but it seems like I'm being left with a lot of \xa0 Unicode representing spaces. Is there an efficient way to remove all of them in Python 2.7, and change them into spaces? I guess the more generalized question would be, is there a way to remove Unicode formatting?
I tried using: line = line.replace(u'\xa0',' '), as suggested by another thread, but that changed the \xa0's to u's, so now I have "u"s everywhere instead. ):
EDIT: The problem seems to be resolved by str.replace(u'\xa0', ' ').encode('utf-8'), but just doing .encode('utf-8') without replace() seems to cause it to spit out even weirder characters, \xc2 for instance. Can anyone explain this?
|
How to remove \xa0 from string in Python?
| 0 | 0 | 0 | 379,578 |
10,993,612 |
2012-06-12T09:12:00.000
| 4 | 0 | 1 | 0 |
python,python-2.7,unicode,beautifulsoup,utf-8
| 10,996,267 | 14 | false | 0 | 0 |
0xA0 (Unicode) is 0xC2A0 in UTF-8. .encode('utf8') will just take your Unicode 0xA0 and replace it with UTF-8's 0xC2A0. Hence the appearance of the 0xC2s... Encoding is not replacing, as you've probably realized by now.
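This is easy to verify (shown here in Python 3 syntax; in the asker's Python 2 the string literals would carry a u prefix):

```python
# U+00A0 (no-break space) encodes to the two bytes 0xC2 0xA0 in UTF-8.
nbsp = "\xa0"
assert nbsp.encode("utf-8") == b"\xc2\xa0"

# Replacing the character first, then encoding, yields a plain ASCII
# space instead, which is what the asker wanted.
cleaned = "hello\xa0world".replace("\xa0", " ")
assert cleaned.encode("utf-8") == b"hello world"
```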
| 3 | 323 | 0 |
I am currently using Beautiful Soup to parse an HTML file and calling get_text(), but it seems like I'm being left with a lot of \xa0 Unicode representing spaces. Is there an efficient way to remove all of them in Python 2.7, and change them into spaces? I guess the more generalized question would be, is there a way to remove Unicode formatting?
I tried using: line = line.replace(u'\xa0',' '), as suggested by another thread, but that changed the \xa0's to u's, so now I have "u"s everywhere instead. ):
EDIT: The problem seems to be resolved by str.replace(u'\xa0', ' ').encode('utf-8'), but just doing .encode('utf-8') without replace() seems to cause it to spit out even weirder characters, \xc2 for instance. Can anyone explain this?
|
How to remove \xa0 from string in Python?
| 0.057081 | 0 | 0 | 379,578 |
10,994,405 |
2012-06-12T10:04:00.000
| 2 | 1 | 0 | 0 |
python,caching,timeit
| 10,994,548 | 1 | false | 0 | 0 |
I think memory allocation is the problem.
The Python interpreter itself holds a memory pool, which starts with little or no memory pooled. After the first run of your program, much memory is allocated (from the system) and freed (to the pool), and the following runs get memory from the pool, which is much faster than requesting memory from the system.
But this only makes sense if your algorithm consumes a lot of memory.
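For illustration, timeit.repeat() can be driven like this; the toy workload below is an invented stand-in for a memory-hungry algorithm, not the asker's solver:

```python
import timeit

def allocate_heavily():
    # A toy stand-in: builds and discards many small lists,
    # exercising the interpreter's memory pool.
    data = [list(range(100)) for _ in range(1000)]
    return len(data)

# number=1 runs the statement once per repetition; the first of the
# repeat=5 timings often includes one-off allocation costs.
timings = timeit.repeat(allocate_heavily, repeat=5, number=1)
print(min(timings), max(timings))
```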
| 1 | 1 | 0 |
I am testing several different algorithms (for solving 16x16 sudokus)
against each other, measuring their performance using the timeit module.
However, it appears that only the first of the timeit.repeat() iterations is
actually calculated, because the other iterations complete much faster.
Testing a single algorithm with t.repeat(repeat=10, number=1), the following
results are obtained:
[+] Results for......: solve1 (function 1/1)
[+] Fastest..........: 0.00003099
[+] Slowest..........: 32.38717794
[+] Average*.........: 0.00003335 (avg. calculated w/o min/max values)
The first out of 10 results always takes a much larger time to complete,
which seems explainable only by the fact that iterations 2 to 10 of the
timeit.repeat() loop somehow use the cached results of the loop's previous
iterations. When actually using timeit.repeat() in a for loop to compare
several algorithms against each other, it again appears that the solution
to the puzzle is calculated only once:
[+] Results for......: solve1 (function 1/3)
[+] Fastest..........: 0.00003099
[+] Slowest..........: 16.33443809
[+] Average*.........: 0.00003263 (avg. calculated w/o min/max values)
[+] Results for......: solve2 (function 2/3)
[+] Fastest..........: 0.00365305
[+] Slowest..........: 0.02915907
[+] Average*.........: 0.00647599 (avg. calculated w/o min/max values)
[+] Results for......: solve3 (function 3/3)
[+] Fastest..........: 0.00659299
[+] Slowest..........: 0.02440906
[+] Average*.........: 0.00717765 (avg. calculated w/o min/max values)
The really weird thing is that the relative speed of the algorithms
(in relation to each other) is consistent throughout measurements, which would indicate
that all algorithms are calculating their own results. Is this extreme increase in performance due to the fact that a large part of the intermediate results (obtained
when computing the first solution) are still in some sort of cache, reserved by the
Python process?
Any help/insights would be greatly appreciated.
|
Python timeit: results cached instead of calculated?
| 0.379949 | 0 | 0 | 1,635 |
10,997,254 |
2012-06-12T13:02:00.000
| 0 | 0 | 0 | 0 |
python,matlab,numpy
| 69,231,502 | 9 | false | 0 | 0 |
In the latest R2021a, you can pass a Python numpy ndarray to double() and it will convert to a native MATLAB matrix; when displaying the numpy array in the MATLAB console, it even suggests at the bottom "Use double function to convert to a MATLAB array".
| 1 | 44 | 1 |
I am looking for a way to pass NumPy arrays to Matlab.
I've managed to do this by storing the array into an image using scipy.misc.imsave and then loading it using imread, but this of course causes the matrix to contain values between 0 and 256 instead of the 'real' values.
Multiplying this matrix, divided by 256, by the maximum value in the original NumPy array gives me the correct matrix, but I feel that this is a bit tedious.
Is there a simpler way?
|
"Converting" Numpy arrays to Matlab and vice versa
| 0 | 0 | 0 | 75,113 |
10,997,651 |
2012-06-12T13:24:00.000
| 3 | 0 | 0 | 0 |
python,graphics
| 10,997,736 | 1 | true | 0 | 1 |
Two that come to mind for me are Tic-Tac-Toe and Higher or Lower
| 1 | 1 | 0 |
I have only started learning python recently. I would still be considered a beginner. Does anyone know any simple games I could make using only python or python turtle. I have no experience with pygame or tkinter yet. The game does not need to use graphics. For example one of my recent games was trying to guess the letters of a random word. Simple game. Kind of like hangman. I will consider all answers. Thank you :)
|
simple game to make only using python
| 1.2 | 0 | 0 | 665 |
10,999,627 |
2012-06-12T15:14:00.000
| 0 | 0 | 0 | 0 |
python,twisted
| 11,001,717 | 1 | true | 0 | 0 |
Your factory's buildProtocol can return anything you want it to return. That's up to you.
However, you might find that things are a lot simpler if you just use two different factories. That does not preclude sharing state. Just have them share a bunch of attributes, or collect all your state together onto a single new object and have the factories share that object.
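A plain-Python sketch of that shared-state idea (class and attribute names here are invented; in Twisted these would be two Factory subclasses constructed with the same state object):

```python
class SharedState:
    """State common to the control and data connections."""
    def __init__(self):
        self.sessions = {}

class ControlFactory:
    """Would build the control-port protocol in Twisted."""
    def __init__(self, state):
        self.state = state  # shared reference, not a copy

class DataFactory:
    """Would build the data-port protocol in Twisted."""
    def __init__(self, state):
        self.state = state

state = SharedState()
control = ControlFactory(state)
data = DataFactory(state)

# Both factories see every mutation of the shared object:
control.state.sessions["abc"] = "connected"
assert data.state.sessions["abc"] == "connected"
```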
| 1 | 0 | 0 |
I am trying to implement a network protocol that listens on 2 separate TCP ports. One is for control messages and one is for data messages.
I understand that I need two separate protocol classes since there are two ports involved.
I would like to have one factory that creates both of these protocols, since there is state information and data shared between them and they essentially implement one protocol.
Is this possible? If yes, how?
If not, how can I achieve something similar?
I understand that it is unusual to divide a protocol between 2 ports, but that is the given situation.
Thanks
|
A single factory for multiple protocols?
| 1.2 | 0 | 1 | 153 |
11,006,553 |
2012-06-13T00:07:00.000
| 6 | 1 | 1 | 0 |
python,python-sphinx
| 11,006,581 | 1 | true | 0 | 0 |
Sphinx evaluates everything in the global scope because the autodoc plugin imports modules, and importing a module evaluates everything in the global scope.
To stop this, either:
Disable the autodoc plugin (search for autodoc in the sphinx config file), or
Guard the code you don't want executed with something like if __name__ == "__main__": do_stuff()
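A sketch of the second option (the module contents and function name are invented):

```python
# mymodule.py
def do_stuff():
    # Placeholder for work that should NOT run at import time.
    print("heavy work")

# This block runs when the module is executed as a script, but not
# when it is merely imported (e.g. by Sphinx's autodoc extension).
if __name__ == "__main__":
    do_stuff()
```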
| 1 | 5 | 0 |
I am trying to generate Python documentation with Sphinx. The problem is that sphinx-build ends up executing the module, evaluating everything in global scope. Is there a reason it does this? And does anyone know of a flag that can be set to disable this?
It seems like Sphinx is trying to do code coverage or something equivalent, which is definitely not what I want it to do. Normally this wouldn't be an issue, but a particular set of modules is very specific to an environment.
|
Prevent Sphinx from executing the module
| 1.2 | 0 | 0 | 1,993 |
11,006,997 |
2012-06-13T01:14:00.000
| 5 | 0 | 1 | 0 |
python,python-2.7,exe
| 11,007,013 | 2 | true | 0 | 0 |
Use py2exe (for Windows), py2app (for Mac), or cx_freeze (for Linux) to bundle the Python interpreter, your program, and the standard library into an executable you can use on a machine with no Python at all.
P.S.: If your friend's computer isn't on the Internet, then however you get him your program, you can also get him the installer kits for Python, etc.
| 1 | 1 | 0 |
I have written a Python GUI application. I want to run the code on my friend's computer, who doesn't have a Python interpreter on his computer and can't download one since he can't connect to the internet. How do I make that happen?
|
Making your python code execute on all computers
| 1.2 | 0 | 0 | 144 |
11,007,169 |
2012-06-13T01:42:00.000
| 6 | 0 | 0 | 0 |
python,numpy,floating-point,compression
| 11,027,196 | 3 | true | 0 | 0 |
It is unlikely that a simple transformation will reduce error significantly, since your distribution is centered around zero.
Scaling can have an effect in only two ways: One, it moves values away from the denormal interval of single-precision values, (-2^-126, 2^-126). (E.g., if you multiply by, say, 2^123, values that were in [2^-249, 2^-126) are mapped to [2^-126, 2^-3), which is outside the denormal interval.) Two, it changes where values lie in each "binade" (interval from one power of two to the next). E.g., your maximum value is 20, where the relative error may be 1/2 ULP / 20, where the ULP for that binade is 16*2^-23 = 2^-19, so the relative error may be 1/2 * 2^-19 / 20, about 4.77e-8. Suppose you scale by 32/20, so values just under 20 become values just under 32. Then, when you convert to float, the relative error is at most 1/2 * 2^-19 / 32 (or just under 32), about 2.98e-8. So you may reduce the error slightly.
With regard to the former, if your values are nearly normally distributed, very few are in (-2^-126, 2^-126), simply because that interval is so small. (A trillion samples of your normal distribution almost certainly have no values in that interval.) You say these are scientific measurements, so perhaps they are produced with some instrument. It may be that the machine does not measure or calculate finely enough to return values that range from 2^-126 to 20, so it would not surprise me if you have no values in the denormal interval at all. If you have no values in the single-precision denormal range, then scaling to avoid that range is of no use.
With regard to the latter, we see a small improvement is available at the end of your range. However, elsewhere in your range, some values are also moved to the high end of a binade, but some are moved across a binade boundary to the small end of a new binade, resulting in increased relative error for them. It is unlikely there is a significant net improvement.
On the other hand, we do not know what is significant for your application. How much error can your application tolerate? Will the change in the ultimate result be unnoticeable if random noise of 1% is added to each number? Or will the result be completely unacceptable if a few numbers change by as little as 2^-200?
What do you know about the machinery producing these numbers? Is it truly producing numbers more precise than single-precision floats? Perhaps, although it produces 64-bit floating-point values, the actual values are limited to a population that is representable in 32-bit floating-point. Have you performed a conversion from double to float and measured the error?
There is still insufficient information to rule out these or other possibilities, but my best guess is that there is little to gain by any transformation. Converting to float will either introduce too much error or it will not, and transforming the numbers first is unlikely to alter that.
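The size of the float64-to-float32 round-trip error is easy to measure empirically; the sketch below (sample size and seed are arbitrary choices) mirrors the Normal(mu=0, sigma=4) case from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 4.0, size=100_000)  # ~ Normal(mu=0, sigma=4)

# Round-trip through float32 and measure the relative error.
roundtrip = x.astype(np.float32).astype(np.float64)
rel_err = np.abs(roundtrip - x) / np.abs(x)

# For values in float32's normal range, round-to-nearest bounds the
# relative error by 2**-24 (about 6e-8).
print(rel_err.max())
```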
| 3 | 4 | 1 |
This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original))) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values.
|
What should I worry about if I compress float64 array to float32 in numpy?
| 1.2 | 0 | 0 | 1,086 |
11,007,169 |
2012-06-13T01:42:00.000
| 2 | 0 | 0 | 0 |
python,numpy,floating-point,compression
| 11,007,250 | 3 | false | 0 | 0 |
The exponent range for float32 is quite a lot smaller (or bigger, in the case of negative exponents), but assuming all your numbers fit within it, you only need to worry about the loss of precision. float32 is only good to about 7 or 8 significant decimal digits.
| 3 | 4 | 1 |
This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original))) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values.
|
What should I worry about if I compress float64 array to float32 in numpy?
| 0.132549 | 0 | 0 | 1,086 |
11,007,169 |
2012-06-13T01:42:00.000
| 7 | 0 | 0 | 0 |
python,numpy,floating-point,compression
| 11,019,850 | 3 | false | 0 | 0 |
The following assumes you are using standard IEEE-754 floating-point operations, which are common (with some exceptions), in the usual round-to-nearest mode.
If a double value is within the normal range of float values, then the only change that occurs when the double is rounded to a float is that the significand (fraction portion of the value) is rounded from 53 bits to 24 bits. This will cause an error of at most 1/2 ULP (unit of least precision). The ULP of a float is 2^-23 times the greatest power of two not greater than the float. E.g., if a float is 7.25, the greatest power of two not greater than it is 4, so its ULP is 4*2^-23 = 2^-21, about 4.77e-7. So the error when a double in the interval [4, 8) is converted to float is at most 2^-22, about 2.38e-7. For another example, if a float is about .03, the greatest power of two not greater than it is 2^-6, so the ULP is 2^-29, and the maximum error when converting to float is 2^-30.
Those are absolute errors. The relative error is less than 2^-24, which is 1/2 ULP divided by the smallest the value could be (the smallest value in the interval for a particular ULP, so the power of two that bounds it). E.g., for each number x in [4, 8), we know the number is at least 4 and the error is at most 2^-22, so the relative error is at most 2^-22/4 = 2^-24. (The error cannot be exactly 2^-24 because there is no error when converting an exact power of two from double to float, so there is an error only if x is greater than four, so the relative error is less than, not equal to, 2^-24.) When you know more about the value being converted, e.g., it is nearer 8 than 4, you can bound the error more tightly.
If the number is outside the normal range of a float, errors can be larger. The maximum finite floating-point value is 2^128 - 2^104, about 3.40e38. When you convert to float a double that exceeds that by 1/2 ULP (of a float; doubles have a finer ULP) or more, infinity is returned, which is, of course, an infinite absolute error and an infinite relative error. (A double that is greater than the maximum finite float but is greater by less than 1/2 ULP is converted to the maximum finite float and has the same errors discussed in the previous paragraph.)
The minimum positive normal float is 2^-126, about 1.18e-38. Numbers within 1/2 ULP of this (inclusive) are converted to it, but numbers less than that are converted to a special denormalized format, where the ULP is fixed at 2^-149. The absolute error will be at most 1/2 ULP, 2^-150. The relative error will depend significantly on the value being converted.
The above discusses positive numbers. The errors for negative numbers are symmetric.
If the value of a double can be represented exactly as a float, there is no error in conversion.
Mapping the input numbers to a new interval can reduce errors in specific situations. As a contrived example, suppose all your numbers are integers in the interval [2^48, 2^48 + 2^24). Then converting them to float would lose all information that distinguishes the values; they would all be converted to 2^48. But mapping them to [0, 2^24) would preserve all information; each different input would be converted to a different result.
Which map would best suit your purposes depends on your specific situation.
| 3 | 4 | 1 |
This is a particular kind of lossy compression that's quite easy to implement in numpy.
I could in principle directly compare original (float64) to reconstructed (float64(float32(original))) and know things like the maximum error.
Other than looking at the maximum error for my actual data, does anybody have a good idea what type of distortions this creates, e.g. as a function of the magnitude of the original value?
Would I be better off mapping all values (in 64-bits) onto say [-1,1] first (as a fraction of extreme values, which could be preserved in 64-bits) to take advantage of greater density of floats near zero?
I'm adding a specific case I have in mind. Let's say I have 500k to 1e6 values ranging from -20 to 20, that are approximately IID ~ Normal(mu=0,sigma=4) so they're already pretty concentrated near zero and the "20" is ~5-sigma rare. Let's say they are scientific measurements where the true precision is a whole lot less than the 64-bit floats, but hard to really know exactly. I have tons of separate instances (potentially TB's worth) so compressing has a lot of practical value, and float32 is a quick way to get 50% (and if anything, works better with an additional round of lossless compression like gzip). So the "-20 to 20" eliminates a lot of concerns about really large values.
|
What should I worry about if I compress float64 array to float32 in numpy?
| 1 | 0 | 0 | 1,086 |
11,007,460 |
2012-06-13T02:27:00.000
| 0 | 0 | 0 | 0 |
c++,python,excel,node.js,read-write
| 11,008,175 | 1 | true | 0 | 0 |
Basically you have 2 possibilities:
node.js does not support C++ libraries directly, but it is possible to write bindings for node.js that interact with a C/C++ library. So you would need to get your feet wet writing a C++ addon for V8 (the JavaScript engine behind node.js).
find a command-line program which does what you want to do. (It does not need to be Python.) You could call it from your JavaScript code using a child process.
The first option is more work, but would result in faster execution (when done right). The second possibility is easier to realise.
P.S.: Too many questions for one question. I've no idea about the XLSX-specific stuff, besides that it's actually only XML.
| 1 | 0 | 0 |
I'm looking for a way to edit and save a specified cell in an Excel 2010 .xlsx file from Node.JS. I realize that maybe there are no production-ready solutions for Node.JS at this time. However, Node.JS supports C++ libraries, so could you suggest any suitable lib compatible with Node?
Also, I had an idea to process this task via Python (xlrd, xlwt) and call it with NodeJS. What do you think of this? Are there any more efficient methods to edit XLSX from NodeJS? Thanks.
|
Node.JS/C++/Python - edit Excel .xlsx file
| 1.2 | 1 | 0 | 3,505 |
11,008,337 |
2012-06-13T04:47:00.000
| 0 | 0 | 0 | 1 |
python,rabbitmq,celery
| 72,374,505 | 2 | false | 0 | 0 |
To resolve a connection issue with RabbitMQ, you need to inspect the points below:
Connectivity from the client machine to the RabbitMQ server machine (in case the client and server are running on separate machines); check the port as well.
Credentials (username and password): the user you connect with must exist in RabbitMQ.
The user must be granted permissions (permissions may be attached to a vhost as well, so grant them carefully).
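As a concrete illustration of the credentials point: many AMQP clients also accept the credentials embedded in an `amqp://` URL. A small stdlib-only sketch (Python 3 spelling; the username, password, host, and vhost below are made-up placeholders) that builds such a URL with proper percent-encoding:

```python
from urllib.parse import quote

def amqp_url(user, password, host='localhost', port=5672, vhost='/'):
    # Percent-encode each part so special characters (e.g. '@' in the
    # password, or '/' in the default vhost) survive inside the URL.
    return 'amqp://%s:%s@%s:%d/%s' % (
        quote(user, safe=''), quote(password, safe=''),
        host, port, quote(vhost, safe=''))

url = amqp_url('myuser', 'p@ss/word', 'rabbit.example.com')
assert url == 'amqp://myuser:p%40ss%2Fword@rabbit.example.com:5672/%2F'
```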
| 1 | 2 | 0 |
I am trying to start some background processing through rabbitmq, but when I send the request, I get the below error in the rabbitmq log. But I think I am providing the right credentials, as my celery workers are able to connect to the rabbitmq server using the same username/password combination.
=ERROR REPORT==== 12-Jun-2012::20:50:29 ===
exception on TCP connection from 127.0.0.1:41708
{channel0_error,starting,
{amqp_error,access_refused,
"AMQPLAIN login refused: user 'guest' - invalid credentials",
'connection.start_ok'}}
|
Rabbitmq connection issue when using a username and password
| 0 | 0 | 1 | 3,691 |
11,009,083 |
2012-06-13T06:05:00.000
| 2 | 0 | 0 | 0 |
python,django,database-schema,database-migration,django-south
| 11,009,395 | 1 | true | 1 | 0 |
As long as you are not getting any errors, this is fine. There are two ways to create a table in Django/South:
Running syncdb which automatically creates the initial tables of Django.
Running an initial migration of an app which also creates the tables of that app.
These are different approaches: tables that were 'synced' are not created with a migration or vice versa. So if South has made the tables with an initial migration then it is correct that they are not 'synced'.
To check whether it has worked correctly, you need: an entry in the south_migrationhistory table (i.e., South knows that the migration has been done) and the table(s) with the proper structure in the database. If that's the case then there's nothing to worry about.
| 1 | 2 | 0 |
I ran the convert_to_south command on my app. Everything seems to have gone fine: the migration is in the south_migrationhistory table, and migrate --list shows the migration as applied. BUT when I do syncdb, the app still shows as "Not Synced". It suggests I migrate it (which does nothing, since there is nothing to migrate).
Is this behaviour expected?
|
App shows up as "not synced" after convert_to_south
| 1.2 | 0 | 0 | 236 |
11,010,953 |
2012-06-13T08:29:00.000
| 24 | 0 | 0 | 1 |
python,django,google-app-engine,python-2.7,webapp2
| 11,020,530 | 1 | true | 1 | 0 |
Choosing between Django and webapp2 really depends on what you're using it for. In your question you haven't given any of the parameters for your decision making, so it's impossible to tell which is "better". Describing them both as "web frameworks" shows you haven't done much research into what they are.
Webapp2 is essentially a request handler. It directs HTTP requests to handlers that you write. It's also very small.
Django has a request handler. It also has a template engine. It also has a forms processor. It also has an ORM, which you may choose to use, or not. Note that you can use the ORM on CloudSQL, but you'll need to use Django-nonrel if you want to use the ORM on the HRD. It also has a library of plugins that you can use, but they'll only work if you're using the Django ORM. It also has bunch of 3rd party libraries, which will also require the Django ORM.
If you have portability in mind the Django ORM would help a lot.
You'll have to make your decision comparing what you actually need.
| 1 | 11 | 0 |
I would like to know your opinion of which of these two web frameworks (Django & webapp2) is better for using on App Engine Platform, and why?
Please don't say that both are completely different, because Django is much more complete. Both are the "web frameworks" you can use in App Engine.
|
Django vs webapp2 on App Engine
| 1.2 | 0 | 0 | 9,275 |
11,013,911 |
2012-06-13T11:28:00.000
| -2 | 0 | 1 | 0 |
python,google-app-engine,python-2.7
| 11,013,947 | 3 | false | 1 | 0 |
This is impossible (without an external service). DBs are made for exactly this: storing data for longer than one request. What you could do is save the dict in the user's session, but I don't recommend that. Unless you have millions of entries, every DB is fast enough, even SQLite.
| 1 | 3 | 0 |
I am new to Python and have been studying its fundamentals for 3 months now, learning types, functions and algorithms. Now I have started practicing web app development with the GAE framework.
Goal: have a very large dictionary, which can be accessed from all .py files throughout the web app without having it stored more than once or re-created each time when someone visits a URL of the app.
I want to render a simple DB table to a dictionary, with hopes of speed gain as it will be in memory.
Also I am planning on creating an in-memory DAWG / trie.
I don't want this dictionary to be created each time a page is called, I want it to be stored in memory once, kept there and used and accessed by all sessions and if possible modified too.
How can I achieve this? Like a simple in memory DB but actually a Python dictionary?
Thank you.
|
python: how to have a dictionary which can be accessed from all the app
| -0.132549 | 0 | 0 | 576 |
11,013,976 |
2012-06-13T11:31:00.000
| 1 | 0 | 0 | 0 |
python,mysql,ruby-on-rails,database,triggers
| 11,014,025 | 1 | true | 1 | 0 |
Yes, refactor the code to put a data web service in front of the database and let the Ruby and Python apps talk to the service. Let it maintain all integrity and business rules.
"Don't Repeat Yourself" - it's a good rule.
| 1 | 2 | 0 |
Okay. We have a Rails webapp which stores data in a MySQL database. The table design was not read-efficient, so we resorted to creating a separate set of read-only tables in MySQL and made all our internal API calls use those tables for reads. We used callbacks to keep the data in sync between both sets of tables. Now we have another Python app which is going to mess with the same database - how do we proceed with maintaining the data integrity?
ActiveRecord callbacks can't be used anymore. We know we can do it with triggers. But is there any other elegant way to do this? How do people maintain the integrity of such derived data?
|
Maintaining data integrity in mysql when different applications are accessing it
| 1.2 | 1 | 0 | 296 |
11,015,320 |
2012-06-13T12:56:00.000
| 14 | 0 | 1 | 0 |
python,trie,dawg
| 11,015,381 | 15 | false | 0 | 0 |
There's no "should"; it's up to you. Various implementations will have different performance characteristics, take various amounts of time to implement, understand, and get right. This is typical for software development as a whole, in my opinion.
I would probably first try having a global list of all trie nodes so far created, and representing the child-pointers in each node as a list of indices into the global list. Having a dictionary just to represent the child linking feels too heavy-weight, to me.
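For reference, the nested-dictionary representation the question asks about takes only a few lines in Python. A minimal sketch (the `'_end_'` sentinel is an arbitrary end-of-word marker, not a standard convention):

```python
_end = '_end_'

def make_trie(*words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            # Descend, creating child dicts as needed.
            node = node.setdefault(ch, {})
        node[_end] = True  # mark that a complete word terminates here
    return root

def in_trie(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return _end in node

trie = make_trie('foo', 'foobar', 'bar')
assert in_trie(trie, 'foo') and in_trie(trie, 'bar')
assert not in_trie(trie, 'fo')   # a prefix, not a stored word
```

Lookup cost is proportional to the word's length, not the number of entries, so 100k or 500k words is not a problem for queries (building it just takes memory).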
| 1 | 148 | 0 |
I'm interested in tries and DAWGs (directed acyclic word graphs) and I've been reading a lot about them, but I don't understand what the output trie or DAWG file should look like.
Should a trie be an object of nested dictionaries? Where each letter is divided in to letters and so on?
Would a lookup performed on such a dictionary be fast if there are 100k or 500k entries?
How to implement word-blocks consisting of more than one word separated with - or space?
How to link prefix or suffix of a word to another part in the structure? (for DAWG)
I want to understand the best output structure in order to figure out how to create and use one.
I would also appreciate what should be the output of a DAWG along with trie.
I do not want to see graphical representations with bubbles linked to each other, I want to know the output object once a set of words are turned into tries or DAWGs.
|
How to create a trie in Python
| 1 | 0 | 0 | 138,151 |
11,016,661 |
2012-06-13T14:07:00.000
| 0 | 0 | 1 | 0 |
python,comments
| 11,017,248 | 2 | false | 0 | 0 |
Does the syntax have to look like that? Could you just use a character delimited file (like csv or tab-delimited) with each predefined field in a separate column? Python has well defined modules to handle csv data.
If you specifically want input files that present blocks of code, then aix's suggestion of importing would also work.
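A third stdlib option worth mentioning alongside the two above: the syntax described in the question (comments, spacing, variable = value) is exactly what ConfigParser handles. A sketch using the Python 3 spelling `configparser` (on 2.x the module is `ConfigParser`); the section and key names are invented:

```python
import configparser

text = """
# Full-line comments start with '#' or ';'
[simulation]
steps = 1000
dt    = 0.01   ; inline comments and free spacing are fine
"""

cfg = configparser.ConfigParser(inline_comment_prefixes=(';', '#'))
cfg.read_string(text)

assert cfg.getint('simulation', 'steps') == 1000
assert cfg.getfloat('simulation', 'dt') == 0.01
```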
| 1 | 1 | 0 |
I need to write some input data files for a python program, and I need the full thing:
comments, spacing, variable = value, etc.
Is there any library (like argparse for command line arguments) for Python, or should I write my own?
Thanks!
|
How to write input files with comments in Python
| 0 | 0 | 0 | 169 |
11,019,169 |
2012-06-13T16:21:00.000
| 4 | 0 | 1 | 0 |
python,postgresql,plpython
| 11,019,395 | 2 | false | 0 | 0 |
As it turns out, one must add the path where the libraries are found to the PYTHONPATH environment variable in Postgres. Don't forget to quote your value, e.g.:
PYTHONPATH='path to libraries'
| 1 | 2 | 0 |
I am writing functions in Postgres in Python using the PL/Pythonu extension. I would like Postgres to use my virtual environment (I am using virtualenv) instead of the global install. How do I go about doing this?
|
Using PL/Pythonu with virtualenv
| 0.379949 | 0 | 0 | 614 |
11,020,919 |
2012-06-13T18:16:00.000
| -1 | 0 | 1 | 0 |
python,excel,file-io,scripting
| 11,020,968 | 3 | false | 0 | 0 |
Python is beginner-friendly and is good at string manipulation, so it's a good choice. I have no idea how easy awk is to learn without programming experience, but I would consider it as it's more or less optimized for processing CSVs.
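For the matching step itself, a hedged Python sketch (the column names, the `id` join key, and the tab-separated output format are all invented for illustration; in the real script the Excel rows would come out of xlrd in the same key/value shape instead of the hard-coded list):

```python
import csv
import io

# Stand-ins for the two inputs; real code would open files instead.
excel_rows = [('A001', 'widget'), ('A002', 'gadget')]
csv_text = "id,qty\nA001,5\nA002,7\n"

# Index the CSV rows by the shared key column for O(1) lookups.
by_id = {row['id']: row for row in csv.DictReader(io.StringIO(csv_text))}

# Join: one output line per matched pair, ready to write to a text file.
merged = ['%s\t%s\t%s' % (pid, name, by_id[pid]['qty'])
          for pid, name in excel_rows if pid in by_id]
assert merged == ['A001\twidget\t5', 'A002\tgadget\t7']
```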
| 1 | 1 | 0 |
I have an excel spreadsheet (version 1997-2003) and another nonspecific database file (a .csy file; I am assuming it can be parsed as a text file, as that is what it appears to be). I need to take information from both sheets, match it up, put it on one line, and print it to a text file. I was going to use Python for this, as using the Python plugins for Visual Studio 2010 alongside the xlrd package seems to be the best way I could find for Excel files, and I'd just use default packages in Python for the other file.
Would Python be a good choice of language to both learn and program this script in? I am not familiar with scripting languages other than a little bit of VBS, so any language will be a learning experience for me.
Converting the xls to csv is not an option; there are too many Excel files, and their wonky formatting would make fishing through the csv more difficult than using xlrd.
|
First time writing a script, not sure what language to use (parsing excel and other files)
| -0.066568 | 1 | 0 | 2,962 |
11,021,405 |
2012-06-13T18:48:00.000
| 0 | 0 | 1 | 0 |
python,language-agnostic,machine-learning,nlp,oov
| 18,041,168 | 4 | false | 0 | 0 |
I do not see a reason to use Levenshtein distance to find a word similar in meaning. LD looks at form (you want to map "bus" to "truck" not to "bush").
The correct solution depends on what you want to do next.
Unless you really need the information in those unknown words, I would simply map all of them to a single generic "UNKNOWN_WORD" item.
Obviously you can cluster the unknown words by their context and other features (say, do they start with a capital letter). For context clustering: since you are interested in meaning, I would use a larger window for those words (say +/- 50 words) and probably use a simple bag-of-words model. Then you simply find a known word whose vector in this space is closest to the unknown word, using some distance metric (say, cosine). Let me know if you need more information about this.
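That last step needs nothing beyond the stdlib. A sketch with bag-of-words context vectors compared by cosine similarity (the example contexts are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    # a, b: Counter bag-of-words vectors built from each word's contexts.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

ctx_unknown = Counter('the big red engine pulled the train'.split())
ctx_known   = Counter('a big red engine pulled a wagon'.split())
ctx_other   = Counter('stock prices fell sharply on friday'.split())

# The unknown word's context is closer to the semantically similar one.
assert cosine(ctx_unknown, ctx_known) > cosine(ctx_unknown, ctx_other)
```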
| 1 | 3 | 0 |
I am designing a text processing program that will generate a list of keywords from a long itemized text document, and combine entries for words that are similar in meaning. There are metrics out there, however I have a new issue of dealing with words that are not in the dictionary that I am using.
I am currently using nltk and python, but my issues here are of a much more abstracted nature. Given a word that is not in a dictionary, what would be an efficient way of resolving it to a word that is within your dictionary? My only current solution involves running through the words in the dictionary and picking the word with the shortest Levenshtein distance (editing distance) from the inputted word.
Obviously this is a very slow and impractical method, and I don't actually need the absolute best match from within the dictionary, just so long as it is a contained word and it is pretty close. Efficiency is more important for me in the solution, but a basic level of accuracy would also be needed.
Any ideas on how to generally resolve some unknown word to a known one in a dictionary?
|
Efficient way of resolving unknown words to known words?
| 0 | 0 | 0 | 1,411 |
11,023,530 |
2012-06-13T21:25:00.000
| 4 | 0 | 0 | 0 |
python,html,directory,ip-address
| 11,023,595 | 5 | false | 0 | 0 |
HTTP does not work with "files" and "directories". Pick a different protocol.
| 2 | 17 | 0 |
How can I list files and folders if I only have an IP-address?
With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well?
I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
|
Python to list HTTP-files and directories
| 0.158649 | 0 | 1 | 57,862 |
11,023,530 |
2012-06-13T21:25:00.000
| 13 | 0 | 0 | 0 |
python,html,directory,ip-address
| 11,024,116 | 5 | false | 0 | 0 |
You cannot get the directory listing directly via HTTP, as another answer says. It's the HTTP server that "decides" what to give you. Some will give you an HTML page displaying links to all the files inside a "directory", some will give you some page (index.html), and some will not even interpret the "directory" as one.
For example, you might have a link to "http://localhost/user-login/": This does not mean that there is a directory called user-login in the document root of the server. The server interprets that as a "link" to some page.
Now, to achieve what you want, you either have to use something other than HTTP (an FTP server on the "ip address" you want to access would do the job), or set up an HTTP server on that machine that provides for each path (http://192.168.2.100/directory) a list of files in it (in whatever format) and parse that through Python.
If the server provides an "Index of /bla/bla" kind of page (as Apache servers do for directory listings), you could parse the HTML output to find out the names of files and directories. If not (e.g. a custom index.html, or whatever the server decides to give you), then you're out of luck :(, you can't do it.
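If you do get an Apache-style "Index of /…" page, extracting the entries is just a matter of pulling out the hrefs. A minimal stdlib sketch (Python 3's `html.parser`; the sample HTML is a made-up fragment of such a listing, where entries ending in `/` are subdirectories):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            # Skip the "parent directory" entry Apache listings include.
            if href and href != '../':
                self.links.append(href)

html = ('<html><h1>Index of /data</h1>'
        '<a href="../">Parent</a>'
        '<a href="report.txt">report.txt</a>'
        '<a href="logs/">logs/</a></html>')

p = LinkCollector()
p.feed(html)
assert p.links == ['report.txt', 'logs/']
```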
| 2 | 17 | 0 |
How can I list files and folders if I only have an IP-address?
With urllib and others, I am only able to display the content of the index.html file. But what if I want to see which files are in the root as well?
I am looking for an example that shows how to implement username and password if needed. (Most of the time index.html is public, but sometimes the other files are not).
|
Python to list HTTP-files and directories
| 1 | 0 | 1 | 57,862 |
11,024,844 |
2012-06-13T23:34:00.000
| 2 | 0 | 1 | 0 |
python,cmd
| 11,223,067 | 2 | true | 0 | 0 |
I solved it. I just had to change the completer delimiters with set_completer_delims.
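For anyone hitting the same thing: cmd uses readline for completion, and '/' is among readline's default word delimiters, which is why it never reaches the `text` argument. A sketch of the fix:

```python
import readline

# Remove '/' from the set of characters readline treats as word
# boundaries, so paths arrive intact in complete_put()'s `text`.
delims = readline.get_completer_delims().replace('/', '')
readline.set_completer_delims(delims)

assert '/' not in readline.get_completer_delims()
```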
| 1 | 2 | 0 |
I'm trying to implement a Python shell using the cmd module. I want to autocomplete files, so I've implemented some methods; however, I've seen that the text parameter of complete_put(self, text, line, begidx, endidx) strips all the '/' characters. Does anyone know why, and how can I avoid this behaviour? Thanks :)
|
Slash and python cmd
| 1.2 | 0 | 0 | 363 |
11,024,993 |
2012-06-13T23:49:00.000
| 1 | 0 | 1 | 1 |
python,windows-7,path,cmd,python-2.7
| 30,681,229 | 3 | false | 0 | 0 |
I hope your problem really is the one I think it is, because I (hopefully) had the same one. I'm very sure Levon's answer was right, so this is the n00b solution.
For the CMD to recognize "python", you need to add something to the environment variable "Path". When you're done with the instructions, you can type "echo %PATH%" into the cmd and it should show you the variable value you just changed.
Go to Computer > System Properties > Advanced Settings > Environment Variables
Click the variable "Path" and add ;C:\Python27 to the variable value. Don't forget the ";" to separate the values.
Confirm with OK in both windows and you're done.
| 1 | 10 | 0 |
I am running Python 2.7, and I can run a program fine when I open the *.py file.
But when I go to cmd and type "python *.py any other args", it doesn't work; it says that python is not recognised. This is hard because I am trying to do things like sys.argv[], so any help is great.
Thanks
|
Run Python in cmd
| 0.066568 | 0 | 0 | 74,076 |
11,025,222 |
2012-06-14T00:25:00.000
| 1 | 0 | 0 | 0 |
python,shopify
| 11,042,475 | 1 | false | 1 | 0 |
Your best bet is to start by pulling down all products and variants from each shop into a db on your side. After that, you can listen for products/update webhooks and orders/paid webhooks to be alerted of any changes you should make.
| 1 | 1 | 0 |
We are a small-scale fair-trade textiles importer and recently made the internal switch to OpenERP for our inventory management. We have two shops on Shopify (in two different languages).
In the longer term, I have two goals: 1) to synchronise the inventory of the two shops and 2) to build a Shopify plugin for OpenERP that imports a sale upon reception of an email from Shopify.
Since OpenERP itself is written in Python, I would like to work with the Shopify Python API.
And since we're working with textiles, which usually come in different styles and sizes, we're working with SKUs and variants in Shopify.
As a start, I would like to be able to sync the inventories between the two shops at midnight each day. If the inventory count of Shop A is lower than in Shop B, Shop B should get the count of Shop A, and the other way around.
My biggest problem in the moment seems to be to get a simple list of SKUs and inventory count with the Python API. Ideally, I would like to get two simple lists of SKUs and their inventory count, check if the variant from Shop A exists in Shop B and then check the inventory and propagate needed changes between the two.
However, I can't seem to get such a list and the documentation is extremely limited. Is the only possibility really to get all products first, then, for each product, to get the variants, and then to list these variants individually? So I would actually need to construct a whole database organisation around a task that I considered quite simple?
Does somebody have any experience with such a task? Is there any further documentation or examples that I could have a look at?
Thank you very much,
Knut-Otto
|
Shopify Python API and textiles inventory
| 0.197375 | 0 | 0 | 579 |
11,025,297 |
2012-06-14T00:37:00.000
| 0 | 0 | 0 | 0 |
python,indexing,spatial-index,spatial-query,r-tree
| 11,028,685 | 1 | false | 0 | 0 |
Actually it does not need to have a threshold to handle ties. They just happen.
Assuming you have the data points (1.,0.) and (0.,1.) and query point (0.,0.), any implementation I've seen of Euclidean distance will return the exact same distance for both, without any threshold.
| 1 | 2 | 1 |
In rtree, how can I specify the threshold for float equality testing?
When checking nearest neighbours, rtree can return more than the specified number of results: if two points are equidistant, it returns both of them. To check this equidistance, it must have some threshold, since the distances are floats. I want to be able to control this threshold.
|
In rtree, how can I specify the threshold for float equality testing?
| 0 | 0 | 0 | 236 |
11,025,748 |
2012-06-14T02:02:00.000
| 0 | 0 | 1 | 0 |
python,search,loops,filter,stop-words
| 11,025,767 | 6 | false | 0 | 0 |
Put your original list of words in a dictionary.
Iterate through the characters in the given string, using space as a delimiter for a word. Look up each word in the dictionary.
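A sketch of that lookup using a set (constant-time membership tests), which handles the exact example from the question; note you still iterate over the words once, which is unavoidable, but each check is O(1) rather than a scan of the stopword list:

```python
stop_words = {'the', 'and', 'with'}

def remove_stop_words(text):
    # One pass over the words; each membership test against the set
    # is O(1), so the whole call is linear in the text length.
    return ' '.join(w for w in text.split()
                    if w.lower() not in stop_words)

assert remove_stop_words('Kill the fox and dog') == 'Kill fox dog'
```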
| 1 | 6 | 0 |
As the title says, I have a list of words, like stopWords = ["the", "and", "with", etc...], and I'm receiving text like "Kill the fox and dog". I want the output to be "Kill fox dog", very efficiently and fast. How can I do this? (I know I can iterate using a for loop, but that's not very efficient.)
|
If I have a list of words, how can I check if string does not contain any of the words in the list, and efficiently?
| 0 | 0 | 0 | 6,859 |
11,027,009 |
2012-06-14T05:13:00.000
| 2 | 0 | 1 | 0 |
python,django,installation,package,virtualenv
| 11,027,060 | 3 | true | 1 | 0 |
After you switch to the virtual environment with the activate script. Just use pip install Django==1.4 no sudo needed.
Alternatively, you can use pip install -E=/path/to/my/virtual/env Django==1.4, in which case you don't need to switch to the virtual environment first.
| 1 | 2 | 0 |
I needed a virtual environment with all the global packages included. I created one, and the global Django version is 1.3.1. Now I need to upgrade the Django version to 1.4 only in my virtual environment. I switched to my environment by activating it, and tried
sudo pip install Django=1.4
It was installed, not in the virtual env but in the global dist-packages.
How to install a package only in the virtual environment?
|
How to upgrade a django package only in a python virtual environment?
| 1.2 | 0 | 0 | 3,260 |
11,027,366 |
2012-06-14T05:49:00.000
| 2 | 0 | 0 | 0 |
python,user-interface,wxpython,alignment,screen-resolution
| 11,036,132 | 1 | false | 0 | 1 |
Sizers automatically adjust your application widgets for screen resolution, resize, etc. If they aren't doing this automatically, then there's probably something buggy in your code. Since I don't have your code to look at, try going over the wxPython tutorials on sizers very carefully. I found the book wxPython In Action very useful on this topic.
| 1 | 0 | 0 |
So, here I was, happy that I wrote the whole code for an awesome-looking GUI using wxPython in a day, but that happiness evaporated when I found that the panels get out of the way, leaving a lot of empty space on the sides, or get congested (you know how!) on a different screen resolution.
What I want to ask is: which properties of a GUI should I adjust or care about so that the GUI's aspect ratio, frame alignment, panel alignments, sizer ratios, etc. remain intact? If there are any methods to do so, please suggest them.
Thanks in advance. :)
|
wxPython : Adjusting the panels and sizers for different screen resolutions
| 0.379949 | 0 | 0 | 284 |
11,028,318 |
2012-06-14T07:11:00.000
| 0 | 0 | 1 | 0 |
python,deployment,egg
| 16,770,213 | 2 | false | 1 | 0 |
I found that PYTHONPATH can be set to include the egg files directly. So now I can just put the egg files in a directory and add these egg files to PYTHONPATH.
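The same thing can be done from inside a bootstrap script instead of the environment, much like the Java lib/ + CLASSPATH approach in the question. A sketch (the `lib` directory and egg name here are throwaway placeholders created just for the demonstration): each .egg is a zip file that can sit on sys.path directly, and Python's zipimport machinery handles imports from it.

```python
import glob
import os
import sys
import tempfile

def add_eggs(lib_dir):
    # Put every egg found in lib_dir at the front of sys.path.
    for egg in sorted(glob.glob(os.path.join(lib_dir, '*.egg'))):
        if egg not in sys.path:
            sys.path.insert(0, egg)

# Demonstration with a throwaway directory and a placeholder egg file.
lib = tempfile.mkdtemp()
egg_path = os.path.join(lib, 'somelib-1.0.egg')
open(egg_path, 'w').close()
add_eggs(lib)
assert egg_path in sys.path
```

In a real deployment you would call `add_eggs('lib')` at the top of your entry-point script, before importing the bundled libraries.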
| 1 | 3 | 0 |
I have a project that uses some third-party libraries. My question is how to deploy my project to an environment that does not have these third-party libraries installed. In Java, I can just put all the jars in a "lib" directory and write a bootstrap shell script that sets the CLASSPATH to contain the jars. I want a clean solution like this that has little impact on the environment.
|
How to deploy a python project to an environment that has not install some third-party libraries?
| 0 | 0 | 0 | 698 |
11,029,721 |
2012-06-14T08:52:00.000
| 1 | 1 | 1 | 0 |
python,parsing,unicode,utf-8,ascii
| 11,029,801 | 2 | false | 0 | 0 |
Every ASCII file is also a valid UTF-8 file. Don't worry about treating your ASCII files as UTF-8 files: no conversion is necessary, and there is no increase in size.
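That subset relationship is easy to check: any byte string containing only ASCII decodes identically under both codecs:

```python
data = b'plain ascii text, bytes 0-127 only\n'

# ASCII is a strict subset of UTF-8: same bytes, same decoded result.
assert data.decode('ascii') == data.decode('utf-8')
```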
| 1 | 1 | 0 |
I am currently parsing large text files with Python 2.7, some of which were originally encoded in Unicode or UTF-8.
For modules containing functions which directly interact with UTF-8 strings, I included # -*- coding: utf-8 -*- at the top of the file, but for functions which work with only ASCII, I did not bother.
Eventually, these modules lead to larger modules, and all the parsed strings get mixed together. Is it good practice to include # -*- coding: utf-8 -*- at the top of every file?
Is there a benefit to this?
|
Mixed usage of UTF-8 and ASCII encodings?
| 0.099668 | 0 | 0 | 2,205 |
11,030,274 |
2012-06-14T09:29:00.000
| 1 | 0 | 1 | 0 |
python,memory,memory-management,garbage-collection,duplicate-data
| 11,030,443 | 3 | false | 0 | 0 |
Python variables are references, so pass the name of the array - although strictly you should call it a list (that's what such arrays are called in Python). However, I'm disturbed by your wording, which implies that you can have it global AND pass it as a parameter. Do one thing or the other. If it is a global then use it as a global; don't mix the two methods (sorry if I misunderstood).
For garbage collection, a good start would be to look at the standard library documentation for the gc module.
There is a bit about memory management in the Python/C API standard doc. Search for "memory management".
| 2 | 0 | 0 |
The program I'm coding now makes a pretty huge list of data items.
Now, I can make this list global (available to functions in other modules) so it can be used in all other modules. Or, I can pass it as a function argument to the functions in those modules.
Note that, this huge array I'm talking about is not going to get modified in the functions in other modules, they just read data and use it for calculations and data stats etc.
So, of the two methods which has least memory consumption?
If, by passing it into the functions, the language makes a local duplicate of the huge list even when the functions don't modify it, that would double the memory consumption, which is not a good thing. If this happens, I can make it global and use it that way. I got this doubt about Python's memory management because when I once wrote a toy language, I dealt with this particular issue: the argument data gets duplicated only if it's edited; otherwise, it always points to the original data.
|
Which technique has the lowest memory consumption: global variables or function arguments?
| 0.066568 | 0 | 0 | 1,100 |
11,030,274 |
2012-06-14T09:29:00.000
| 6 | 0 | 1 | 0 |
python,memory,memory-management,garbage-collection,duplicate-data
| 11,030,429 | 3 | true | 0 | 0 |
Firstly, there's no such thing as a 'global' variable in Python (in the sense that it's automatically available to all modules).
Secondly, Python doesn't duplicate objects when passing to a function. Python variables are really just names that point to objects - when you pass a variable to a function, all that happens is that the function creates a new name that points to the original object. You can read or modify the contents of that object without any copies being made. (Note that if you rebind the name to a different object, the original reference is not changed.)
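This is easy to verify with a tiny sketch: the object a function receives is the caller's object, not a copy:

```python
BIG_LIST = list(range(1000))

def total(items):
    # `items` is just another name bound to the caller's object.
    assert items is BIG_LIST   # identity check: no copy was made
    return sum(items)

assert total(BIG_LIST) == 499500
# Mutating `items` inside the function (e.g. items.append(0)) would be
# visible to the caller too, because both names point at one list.
```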
| 2 | 0 | 0 |
The program I'm coding now makes a pretty huge list of data items.
Now, I can make this list global (available to functions in other modules) so it can be used in all other modules. Or, I can pass it as a function argument to the functions in those modules.
Note that, this huge array I'm talking about is not going to get modified in the functions in other modules, they just read data and use it for calculations and data stats etc.
So, of the two methods which has least memory consumption?
If, by passing it into the functions, the language makes a local duplicate of the huge list even when the functions don't modify it, that would double the memory consumption, which is not a good thing. If this happens, I can make it global and use it that way. I got this doubt about Python's memory management because when I once wrote a toy language, I dealt with this particular issue: the argument data gets duplicated only if it's edited; otherwise, it always points to the original data.
|
Which technique has the lowest memory consumption: global variables or function arguments?
| 1.2 | 0 | 0 | 1,100 |
11,031,039 |
2012-06-14T10:17:00.000
| 5 | 0 | 1 | 0 |
python,coding-style
| 11,031,667 | 4 | true | 0 | 0 |
I'm going to go against the flow here:
Use the dict() method if it suits you, but keep the limitations outlined in other answers in mind. There are some advantages to this method:
It looks less cluttered in some circumstances
It's 3 characters less per key-value pair
On keyboard layouts where { and } are awkward to type (AltGr-7 and AltGr-0 here) it's faster to type out
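A quick sketch of the equivalence, plus the one concrete limitation of the keyword form (keys must be valid Python identifiers):

```python
a = dict(key1=1, key2=2)
b = {'key1': 1, 'key2': 2}
assert a == b                       # both forms build the same dict

# The keyword form cannot express non-identifier keys:
#   dict(my-key=1)  -> SyntaxError
#   dict(1=2)       -> SyntaxError
c = {'my-key': 1, 1: 'one'}         # the literal handles them fine
assert c['my-key'] == 1 and c[1] == 'one'
```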
| 2 | 6 | 0 |
This is just a trivial question of what convention you suggest. Recently, I have seen many examples of people writing dict(key1=val1, key2=val2) instead of what I think is the more idiomatic {"key1": val1, "key2": val2}. I think the reason is to avoid using "" for the keys, but I am not sure. Perhaps the dict()-syntax looks closer to other languages?
|
Should I write dict or {} in Python when constructing a dictionary with string keys?
| 1.2 | 0 | 0 | 489 |
11,031,039 |
2012-06-14T10:17:00.000
| 2 | 0 | 1 | 0 |
python,coding-style
| 11,031,261 | 4 | false | 0 | 0 |
Definitely use {}; keeping code simple and typing less is always my goal.
| 2 | 6 | 0 |
This is just a trivial question of what convention you suggest. Recently, I have seen many examples of people writing dict(key1=val1, key2=val2) instead of what I think is the more idiomatic {"key1": val1, "key2": val2}. I think the reason is to avoid using "" for the keys, but I am not sure. Perhaps the dict()-syntax looks closer to other languages?
|
Should I write dict or {} in Python when constructing a dictionary with string keys?
| 0.099668 | 0 | 0 | 489 |
11,033,753 |
2012-06-14T13:09:00.000
| 11 | 0 | 1 | 0 |
python
| 11,033,849 | 4 | false | 0 | 0 |
If you have the same Python version on both PCs, you can just copy the content of Lib\site-packages and Scripts to the new one. But note that it must be the same minor version (e.g. 2.6 does not work with 2.7).
| 2 | 24 | 0 |
My scenario is I have two laptops with fresh installation of windows. Now I use both of them for programming.
So, let's suppose I install various python modules/packages on one of the laptops. Is there a way I can clone this complete python setup onto the other laptop? The reason for this is that my internet connection is currently very slow, so I don't want to download the same modules or packages twice and then install them again.
I know I can download the modules in a zip file, transfer them to the other machine and then run python setup.py install, but I am going to use pip to install modules.
Anyways, I was wondering if cloning of python setup is possible.
|
Transfer Python setup across different PC
| 1 | 0 | 0 | 35,315 |
11,033,753 |
2012-06-14T13:09:00.000
| 1 | 0 | 1 | 0 |
python
| 27,567,347 | 4 | false | 0 | 0 |
I was updating Python 2.7.3 --> 2.7.9 on my Windows 7 PC. Normally this would be fine; however, the new install accidentally went onto the C: drive instead of D:, where my previous version of Python was located. Getting it to work was simply a matter of copying the new install straight over the top of the old one. It worked like a charm, and all my previously installed modules were present.
| 2 | 24 | 0 |
My scenario is I have two laptops with fresh installation of windows. Now I use both of them for programming.
So, let's suppose I install various python modules/packages on one of the laptops. Is there a way I can clone this complete python setup onto the other laptop? The reason for this is that my internet connection is currently very slow, so I don't want to download the same modules or packages twice and then install them again.
I know I can download the modules in a zip file, transfer them to the other machine and then run python setup.py install, but I am going to use pip to install modules.
Anyways, I was wondering if cloning of python setup is possible.
|
Transfer Python setup across different PC
| 0.049958 | 0 | 0 | 35,315 |
11,033,892 |
2012-06-14T13:15:00.000
| 3 | 0 | 0 | 0 |
python,postgresql,web-applications,sqlalchemy,turbogears2
| 11,034,199 | 2 | true | 0 | 0 |
If two transactions try to set the same value at the same time, one of them will fail, and the one that loses will need error handling. For your particular example, you will want to query for the number of parts and update it in the same transaction.
There is no race condition on sequence numbers: when you save a record that uses a sequence number, the DB will assign it automatically.
Edit:
Note as Limscoder points out you need to set the isolation level to Repeatable Read.
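To illustrate the read-and-update-in-one-transaction idea (the same problem that SELECT ... FOR UPDATE / with_lockmode addresses), here is a minimal sketch using the stdlib sqlite3 module rather than PostgreSQL/SQLAlchemy; the parts table and quantities are made up:

```python
import sqlite3

# In-memory database standing in for Postgres; table/column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO parts (id, qty) VALUES (1, 5)")
conn.commit()

def decrement_qty(conn, part_id, amount):
    """Decrement inventory atomically: the WHERE clause both locates the row
    and guards against going below zero, so two concurrent callers cannot
    both 'win' the same stock."""
    cur = conn.execute(
        "UPDATE parts SET qty = qty - ? WHERE id = ? AND qty >= ?",
        (amount, part_id, amount),
    )
    conn.commit()
    return cur.rowcount == 1  # False means not enough stock (or lost the race)

print(decrement_qty(conn, 1, 3))  # True, qty is now 2
print(decrement_qty(conn, 1, 3))  # False, only 2 left
```

The point is that the check and the write happen in one statement inside one transaction, so there is no window for another session to read a stale quantity in between.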
| 1 | 2 | 0 |
We are writing an inventory system and I have some questions about SQLAlchemy (PostgreSQL) and transactions/sessions. This is a web app using TG2; not sure this matters, but too much info is never bad.
How can I make sure that when changing inventory quantities I don't run into race conditions? If I understand it correctly: if user one is going to decrement the inventory on an item to, say, 0, and user two is also trying to decrement it to 0, then if user one's session hasn't been committed yet, user two's starting inventory number is going to be the same as user one's, resulting in a race condition when both commit: one overwrites the other instead of having a compound effect.
If I wanted to use a PostgreSQL sequence for things like order/invoice numbers, how can I get/set the next values from SQLAlchemy without running into race conditions?
EDIT: I think I found the solution: I need to use with_lockmode, with FOR UPDATE or FOR SHARE. I am going to leave this open for more answers or for others to correct me if I am mistaken.
TIA
|
SQLAlchemy(Postgresql) - Race Conditions
| 1.2 | 1 | 0 | 2,935 |
11,034,268 |
2012-06-14T13:34:00.000
| 0 | 0 | 1 | 0 |
python
| 11,034,398 | 2 | false | 0 | 0 |
I think you can use MongoDB, where you can give each author a list field with all possible spellings of the name. For example, you have the handwritten name "black" but can't recognize whether a letter is, say, a "c" or an "e"; you can set the original name as "black" and add "blaek" to the list of possible names.
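On the matching side (independent of how the names are stored), the stdlib difflib module can suggest close spellings to present to the user for selection; a sketch with invented names:

```python
import difflib

# Hypothetical list of distinct author surnames already in the database.
known_authors = ["black", "blake", "brown", "baker", "green"]

def candidate_authors(entered, cutoff=0.8):
    """Return known names close to the (possibly misread) entered name,
    best matches first, for the user to pick from."""
    return difflib.get_close_matches(entered, known_authors, n=5, cutoff=cutoff)

print(candidate_authors("blaek"))  # e.g. ['black', 'blake']
```

With only hundreds of distinct authors, running this in memory against the full list should be cheap.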
| 1 | 0 | 0 |
We have scanned thousands of old documents and entered key data into a database. One of the fields is author name.
We need to search for documents by a given author but the exact name might have been entered incorrectly as on many documents the data is handwritten.
I thought of searching for only the first few letters of the surname and then presenting a list for the user to select from. I don't know at this stage how many distinct authors there are, I suspect it will be in the hundreds rather than hundreds of thousands. There will be hundreds of thousands of documents.
Is there a better way? Would an SQL database handle it better?
The software is python, and there will be a list of documents each with an author.
|
Search unreliable author names
| 0 | 0 | 0 | 73 |
11,034,733 |
2012-06-14T13:59:00.000
| 3 | 1 | 1 | 1 |
python,eclipse
| 11,192,462 | 1 | true | 0 | 0 |
You probably still have that selected in your project or launch configuration... You can try to delete your existing launch configurations (run > run configurations) so that they get recreated on a new run and if that's not it, take a look at your project properties > pydev - interpreter/grammar and see if an old interpreter was selected.
| 1 | 2 | 0 |
I cannot run Python in PyDev in Eclipse. I get the following error:
"unable to make launch because launch configuration is not valid
Reason:
Interpreter: Python32 not found"
I am actually running Python26 and have configured Python26 as the interpreter in "Windows->Preferences"
I have deleted and replaced my copy of eclipse and this persists. Any help would be appreciated. I think that at one time I had Python32 running and then switched to Python26.
|
Eclipse and pydev: Error can't find Python32
| 1.2 | 0 | 0 | 820 |
11,038,531 |
2012-06-14T17:44:00.000
| 0 | 0 | 1 | 0 |
python,google-app-engine,python-2.7,webapp2
| 11,040,555 | 6 | false | 1 | 0 |
To be honest, I can't see why you would want to do this, so I can't come up with an idea that might help.
Can you clarify what you're trying to accomplish instead of how you want to do it?
Although, if I am understanding you correctly, what you're wanting to do is get around resource usage. There is no way to avoid using GAE resources if you're using the platform; no matter what you do, you're going to hit some type of resource usage on App Engine. You either put the dictionary in the datastore, blobstore, or memcache. You can send the data to another URL, or download and upload the data, but you are still using resources.
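For the mechanical part of the question, writing the dictionary out as an importable .py module can be done with the stdlib pprint module, which emits a dict as valid Python source; a sketch (the file and variable names are made up):

```python
import pprint

big_dict = {"alpha": 1, "beta": 2}  # stands in for the generated 10 MB dictionary

# Write the dictionary out as an importable Python module.
with open("generated_data.py", "w") as f:
    f.write("DATA = ")
    f.write(pprint.pformat(big_dict))
    f.write("\n")

# Later, or from another file: from generated_data import DATA
```

Note that importing a module containing a 10 MB literal still consumes memory at load time, so this does not really sidestep resource usage, as the answer above points out.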
| 1 | 1 | 0 |
Using a DB, I want to create a very large dictionary. If I save it to disk, when pickled, it takes about 10 MB of space.
What I want to do is:
Save this dictionary to disk as-is, so I can open that text document and copy it into another .py file; that way I won't have to regenerate it each time, and whenever the .py module is used by the web app, it is iterable.
How can I do this?
PS: My app is running on Google App Engine, and I want to solve the issue this way to avoid datastore and other resource usage.
|
Print a dictionary and save it to a file, then copy it to a py file to use?
| 0 | 0 | 0 | 441 |
11,039,287 |
2012-06-14T18:35:00.000
| 3 | 0 | 0 | 0 |
python,python-2.7,flask,webapp2
| 11,040,602 | 2 | false | 1 | 0 |
In my experience with Flask, you cannot declare route configurations in a central file; route handling is done via route decorators. In my experience with Python frameworks, route handling is done at a more granular function level rather than a file level. Even in the frameworks that do have a more central route configuration setup, the routes are defined as tied to a specific view/controller function, not simply a Python file.
As stated, each framework handles it differently. The three frameworks I have looked at in any detail, Django, Pyramid, and Flask, all handle it differently. The closest to what you are looking for is Django, which has a urls.py file that you place all of your URL configurations in, but again it points to function-level items, not higher-level .py files. Pyramid does a mix, with part of a URL declaration being put into the __init__.py file of the main module and the use of a decorator to correlate that named route to a function. And then you have Flask, which you mentioned having looked at, which appears to use just decorators for "simplicity's sake", as they are trying to reduce the number of overall files and configuration files that need to be used or edited to get an application from concept into served space.
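For what it's worth, the pattern the asker describes is at bottom just a central mapping from paths to handler callables; here is a framework-agnostic sketch (the handler names are invented). In Flask, something similar can be approximated by registering each entry on the app object (add_url_rule) instead of using decorators, with each handler living in its own module.

```python
# Handlers would normally live in their own modules, e.g. handlers/search.py
def search_handler(query):
    return "results for %s" % query

def home_handler(query):
    return "home"

# Central "routes file": one place mapping paths to handler functions.
ROUTES = {
    "/search": search_handler,
    "/": home_handler,
}

def dispatch(path, query=""):
    """Look up the handler for a path and call it; '404' if unknown."""
    handler = ROUTES.get(path)
    if handler is None:
        return "404"
    return handler(query)

print(dispatch("/search", "xyz"))  # results for xyz
```

The framework's job is just to turn an incoming HTTP request into such a lookup; where the mapping lives (one file vs. decorators scattered across modules) is a style choice.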
| 1 | 3 | 0 |
I have been studying Python for quite some time now, and very recently I decided to get into learning the web development side of things. I have experience with PHP and PHP frameworks, along with Ruby, where:
Routes are defined in a (single) file and then in that file, each route is assigned to a model (py file) which will uniquely handle incoming requests matching that route.
How can I achieve this with flask AND webapp2?
I read the documentation and tutorial in full but it got me very confused. I just want A file where all routes and how should they be handled are set, and then each route request to be handled by its own model (python file).
All the examples lead to single file apps.
Thank you VERY MUCH, really. Please teach, kindly, in a simple way.
|
How to have different py files to handle different routes?
| 0.291313 | 0 | 0 | 3,072 |
11,040,321 |
2012-06-14T19:48:00.000
| 3 | 0 | 1 | 0 |
python
| 11,040,830 | 1 | true | 0 | 0 |
I doubt you'd be able to get someone interested in the mechanics of programming. It's all horribly sadistic.
What you can do with programming, on the other hand, is awesome. I was introduced to it in the context of game programming, and I currently program little physics sims (almost always some kind of numerical integrator) to help me visualize and toy around with the concepts we learn in lecture. I find that these are great examples of what you can do long term with programming. I know that they're all simple examples (I wonder how many of you laughed at 'long term'), but they're complex enough to be interesting and "non-obvious" to someone with no knowledge of the subject.
What I would recommend more are things that the average beginner programmer will actually be capable of, such as:
basic web programming, even in python if you want. Lots of people like making their own webpage, and including some php or python functionality to give some more interesting interactivity is always nice.
I also recommend small automated scripts for certain tedious things. My favourite example is an automated login script for my university's course selection process, which has a horrible capacity. Saved myself a lot of carpal tunnel and tears.
I find that the last one really works for people. The light that clicks the first time they hit a problem and go "I can actually do something about this now", or "I don't have to waste my day on something that can be done in seconds", is the insight I think early programmers (especially reluctant ones in an intro class) really need.
As for language, I second Python. It's beautifully easy to use, and lets you focus on the actual problem at hand without getting wrapped up in syntax, which really simplifies the learning process and lets you get to the good stuff faster. People who want to dive deeper into programming can always apply what they've learned to a lower-level language later.
| 1 | 1 | 0 |
This question is not technical, but is still about programming.
What is the funniest, most interesting part of programming for someone who has never programmed?
What do you do to spark an interest in programming in an adult? I don't feel like talking about print, functions and loops is a good way to get someone interested.
Python is probably the best language, but where do you start so it wont be boring?
Teaching the other person to solve a specific problem might work, but I feel I would need more than that.
|
One chance to get others interested in Python
| 1.2 | 0 | 0 | 82 |
11,042,172 |
2012-06-14T22:22:00.000
| 0 | 0 | 0 | 0 |
python,qt,mvvm,pyside,architectural-patterns
| 18,681,903 | 3 | false | 0 | 1 |
I don't know how far do you want to take MVVM, but at a basic level it comes with Qt, and I've been using it for a long time. You have a business-specific model, say tied to a database. Then you create view-specific viewmodel as a proxy model. You can stack a few layers of those, depending on what you need. Then you show that using a view. As long as everything is set up right, it will "just work". Now if you want to use a model to configure a view, Qt doesn't provide anything directly for you. You'd need to write a factory class that can use viewmodel data to instantiate and set up the view for you. Everything depends on how far do you want to take it, and what architectural benefits does it give you.
| 2 | 16 | 0 |
I've been trying to find a way to implement MVVM with PySide but haven't been able to. I think that there should be a way to create Views from ViewModels with QItemEditorFactory, and to do data binding I think I can use QDataWidgetMapper.
Do you have any ideas on how MVVM may be implemented with Qt and PySide? Even if there are some resources in C++ I'll try to translate them to python.
Thanks.
|
MVVM pattern with PySide
| 0 | 0 | 0 | 4,531 |
11,042,172 |
2012-06-14T22:22:00.000
| -2 | 0 | 0 | 0 |
python,qt,mvvm,pyside,architectural-patterns
| 17,227,321 | 3 | false | 0 | 1 |
An obvious answer for me is that MVVM is suited to WPF and some other technologies that welcome this pattern, so you have to find out whether it's possible to apply the pattern to other technologies. Please read up on MVVM on the wiki.
| 2 | 16 | 0 |
I've been trying to find a way to implement MVVM with PySide but haven't been able to. I think that there should be a way to create Views from ViewModels with QItemEditorFactory, and to do data binding I think I can use QDataWidgetMapper.
Do you have any ideas on how MVVM may be implemented with Qt and PySide? Even if there are some resources in C++ I'll try to translate them to python.
Thanks.
|
MVVM pattern with PySide
| -0.132549 | 0 | 0 | 4,531 |
11,042,970 |
2012-06-14T23:57:00.000
| 1 | 0 | 0 | 0 |
python,django
| 11,044,764 | 1 | false | 1 | 0 |
Django is not responsible for your site being slow in IE. The following might be the reasons:
1) You might have heavy images/JavaScript in your page. Use YSLOW/PSO to debug it.
2) Try serving with a web server like Apache rather than with Django's built-in server.
| 1 | 0 | 0 |
Our Django application is working without problems in Chrome but it is tiresome when using IE.
Running the application using manage.py runserver works fine but in our production site, it is very slow. Navigating from page to page is very slow.
How can we improve the app's performance in IE? We've already tried reducing our js and css lines and optimizing our js and css code but that hasn't helped.
|
django application loading very slow in IE
| 0.197375 | 0 | 0 | 383 |
11,043,508 |
2012-06-15T01:21:00.000
| 5 | 0 | 0 | 0 |
python,django,django-admin,django-authentication
| 11,046,189 | 1 | true | 1 | 0 |
I got it: I had set SESSION_COOKIE_SECURE to True in my settings.py, but since I'm using the development server, SSL isn't enabled, so it would just redirect to the same page. Thanks for the help, guys; you got me searching around.
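For anyone hitting the same thing, the relevant fragment of settings.py might look like this; tying the flag to DEBUG is just one possible policy, not the only fix:

```python
# settings.py (fragment)
DEBUG = True  # development server, no SSL

# With SESSION_COOKIE_SECURE = True and no HTTPS, the browser never sends
# the session cookie back, so every login bounces straight back to the
# login page. Only mark the cookie secure outside development:
SESSION_COOKIE_SECURE = not DEBUG
```

In production (behind HTTPS) you would set DEBUG = False, and the cookie becomes secure again.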
| 1 | 3 | 0 |
I have a really weird issue here. I'm using my local development server right now, and I'm working on the user account aspect of my site.
I thought I had it worked out, but when I try to access @login_required views, I fill in the login information and am redirected back to the login page every time. When I try to log in to the admin site (to verify everything is good on the backend), the same thing happens: I put in a correct username and password, and am redirected back to the login page.
I verified via the shell that the username I'm using for the admin site is a super user, is staff, and is active. In my settings I have Authentication and Session middleware enabled, as well as django.contrib.auth and django.contrib.sessions in my installed apps. Any ideas? Thanks in advance!
|
Django login not working
| 1.2 | 0 | 0 | 2,498 |
11,046,283 |
2012-06-15T07:22:00.000
| 0 | 0 | 0 | 0 |
python,wxpython
| 11,046,421 | 2 | false | 0 | 1 |
I would put your feed inside a sizer with space for two elements. Have the feed expand to fill your window unless the second space in the sizer is available. Then, when you click the feed, add a panel with the detailed information to the empty part of the sizer.
You can add a close button which would simply close and remove the panel in the same way as your currently do.
| 2 | 0 | 0 |
My application consists of looking into a live feed from a spectroscope (optical instrument) and extracting frames from it. Clicking a point in the feed launches a new Frame where the image is analysed. Each frame handles a single panel.
The action of creating a new frame is very easy for a programmer to do (immediate showing and focus capture, discrete objects, no complex layout management, easy meaning for close button).
I now want a design that works in a single window. What is the easiest design pattern that replaces the practice of creating new frames? It should offer the same advantages (see previous paragraph) as far as possible. I am thinking of using tabs to manage the panels as they can capture focus, hide/show panels, destroy themselves elegantly etc.
|
Design substitute to popping up new frames on each click in WxPython
| 0 | 0 | 0 | 69 |
11,046,283 |
2012-06-15T07:22:00.000
| 0 | 0 | 0 | 0 |
python,wxpython
| 11,052,052 | 2 | false | 0 | 1 |
Create a frame with two main elements:
Your current feed
A navigation control such as a wx.ListBox, wx.ListCtrl, or wx.TreeCtrl to allow you to scroll through and click on the feed you want displayed in element (1).
| 2 | 0 | 0 |
My application consists of looking into a live feed from a spectroscope (optical instrument) and extracting frames from it. Clicking a point in the feed launches a new Frame where the image is analysed. Each frame handles a single panel.
The action of creating a new frame is very easy for a programmer to do (immediate showing and focus capture, discrete objects, no complex layout management, easy meaning for close button).
I now want a design that works in a single window. What is the easiest design pattern that replaces the practice of creating new frames? It should offer the same advantages (see previous paragraph) as far as possible. I am thinking of using tabs to manage the panels as they can capture focus, hide/show panels, destroy themselves elegantly etc.
|
Design substitute to popping up new frames on each click in WxPython
| 0 | 0 | 0 | 69 |
11,046,836 |
2012-06-15T08:03:00.000
| 5 | 0 | 1 | 0 |
python,tkinter,python-multithreading
| 11,049,545 | 2 | true | 0 | 1 |
It is not multithreaded.
Tkinter works by pulling objects off of a queue and processing them. Usually what is on this queue are events generated by the user (mouse movements, button clicks, etc).
This queue can contain other things, such as job created with after. So, to Tkinter, something submitted with after is just another event to be processed at a particular point in time.
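As a rough analogy (not Tkinter's actual implementation), the single queue of timed jobs can be sketched with the stdlib sched module: jobs are just entries on one queue, processed in time order by a single thread.

```python
import sched
import time

processed = []
queue = sched.scheduler(time.time, time.sleep)

# Two "after"-style jobs: run these callables after the given delays.
queue.enter(0.02, 1, processed.append, ("physics step",))
queue.enter(0.01, 1, processed.append, ("redraw",))

# One thread pulls jobs off the queue in time order -- no multithreading.
queue.run()
print(processed)  # ['redraw', 'physics step']
```

This is why a long-running callback scheduled with after still freezes the GUI: the same single thread that processes mouse and keyboard events is the one running your job.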
| 1 | 1 | 0 |
I'm writing a physics simulation program and found after() useful.
I originally wanted to create a thread for the physics calculation and simulation, but when I noticed that function, I used it instead.
So, I'm curious about how Tkinter implements that function. Is it multi-threading?
|
Python: Does after() in Tkinter have a multi-threading approach?
| 1.2 | 0 | 0 | 718 |
11,047,821 |
2012-06-15T09:16:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,pygtk,pyqt4
| 11,803,878 | 5 | false | 0 | 1 |
I also have no experience with GTK, but can offer some answers nevertheless:
Qt is designed from the ground up to be object-oriented; almost everything in it has excellent support for subclassing. PyQt likewise.
Qt explicitly does NOT support modification of the GUI by any threads other than the main GUI thread. You're likely to cause crashes this way. As BlaXpirit mentioned, though, there are a variety of very easy inter-thread communication mechanisms such as signal passing.
| 4 | 12 | 0 |
I have an application whose GUI is to be remade for ergonomic reasons.
It was written in PyGTK and I am wondering if I should switch to PyQt to ease future developments or not.
This application has a mostly classical UI with buttons, toolbars, dialogs etc. but also has some specific requirements : I will certainly need to create a custom widget based on treeview/tableview (to make a spreadsheet-like widget) and this application has a lot of worker threads which update the GUI.
I am seeking advice on these two points :
As regards the creation custom widgets, does PyQt provide better mechanisms than PyGTK, especially to slightly modify existing widgets.
I had problems with (even when properly using threads_init() and threads_enter()) the updating of the GUI by worker threads while using PyGTK. Is PyQt any better on that point ?
|
What are the advantages of PyQt over PyGTK and vice-versa?
| 0.039979 | 0 | 0 | 8,254 |
11,047,821 |
2012-06-15T09:16:00.000
| 3 | 0 | 0 | 0 |
python,user-interface,pygtk,pyqt4
| 15,823,008 | 5 | false | 0 | 1 |
I like GTK+ best, since (at least to me) it looks nicer. PyQt and variants (e.g. PySide), however, do have an immensely large set of extras, including a WebKit engine, an XML parser, SQL support, and more.
If you just want looks, I'd say GTK+/PyGObject. If you are planning on using anything PyQt has, use PyQt.
As a side note, if you stick with GTK+, I'd advise you to upgrade to PyGObject and GTK+ 3.0, since PyGtk+ is no longer maintained.
| 4 | 12 | 0 |
I have an application whose GUI is to be remade for ergonomic reasons.
It was written in PyGTK and I am wondering if I should switch to PyQt to ease future developments or not.
This application has a mostly classical UI with buttons, toolbars, dialogs etc. but also has some specific requirements : I will certainly need to create a custom widget based on treeview/tableview (to make a spreadsheet-like widget) and this application has a lot of worker threads which update the GUI.
I am seeking advice on these two points :
As regards the creation custom widgets, does PyQt provide better mechanisms than PyGTK, especially to slightly modify existing widgets.
I had problems with (even when properly using threads_init() and threads_enter()) the updating of the GUI by worker threads while using PyGTK. Is PyQt any better on that point ?
|
What are the advantages of PyQt over PyGTK and vice-versa?
| 0.119427 | 0 | 0 | 8,254 |
11,047,821 |
2012-06-15T09:16:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,pygtk,pyqt4
| 11,050,753 | 5 | false | 0 | 1 |
I can't compare, because I don't use GTK, but I'd suggest Qt.
Qt definitely has "treeview/tableview" you're talking about and you can make the "cells" your custom widgets (I'm just studying this topic right now). Qt was made with a lot of thought about threads, so worker threads can use the signal/slot mechanism with ease. And yes, you can modify the existing widgets by applying stylesheets or subclassing.
Now about PyQt, I wouldn't recommend it because of licensing issues. PySide seems like a better Qt→Python binding to me: it can be used in commercial applications freely and has a few tiny advantages in the API (but otherwise it's fully compatible with PyQt).
Qt is cross-platform and deployment of PySide applications is very easy with cx_Freeze; users of your application won't have to install anything at all.
| 4 | 12 | 0 |
I have an application whose GUI is to be remade for ergonomic reasons.
It was written in PyGTK and I am wondering if I should switch to PyQt to ease future developments or not.
This application has a mostly classical UI with buttons, toolbars, dialogs etc. but also has some specific requirements : I will certainly need to create a custom widget based on treeview/tableview (to make a spreadsheet-like widget) and this application has a lot of worker threads which update the GUI.
I am seeking advice on these two points :
As regards the creation custom widgets, does PyQt provide better mechanisms than PyGTK, especially to slightly modify existing widgets.
I had problems with (even when properly using threads_init() and threads_enter()) the updating of the GUI by worker threads while using PyGTK. Is PyQt any better on that point ?
|
What are the advantages of PyQt over PyGTK and vice-versa?
| 0.039979 | 0 | 0 | 8,254 |
11,047,821 |
2012-06-15T09:16:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,pygtk,pyqt4
| 11,899,802 | 5 | false | 0 | 1 |
Definitely PyQt... There are a lot of advanced applications using it... Personally, I'm using KDE, so even my system's GUI uses Qt! I'm also creating a spreadsheet application, and I find that it's far easier than what I thought at first... But BlaXpirit is also right: unless you're developing an open-source app, maybe you should use PySide or something else...
| 4 | 12 | 0 |
I have an application whose GUI is to be remade for ergonomic reasons.
It was written in PyGTK and I am wondering if I should switch to PyQt to ease future developments or not.
This application has a mostly classical UI with buttons, toolbars, dialogs etc. but also has some specific requirements : I will certainly need to create a custom widget based on treeview/tableview (to make a spreadsheet-like widget) and this application has a lot of worker threads which update the GUI.
I am seeking advice on these two points :
As regards the creation custom widgets, does PyQt provide better mechanisms than PyGTK, especially to slightly modify existing widgets.
I had problems with (even when properly using threads_init() and threads_enter()) the updating of the GUI by worker threads while using PyGTK. Is PyQt any better on that point ?
|
What are the advantages of PyQt over PyGTK and vice-versa?
| 0.039979 | 0 | 0 | 8,254 |
11,052,200 |
2012-06-15T14:05:00.000
| 1 | 0 | 0 | 0 |
php,python,redirect,flask
| 11,052,322 | 3 | false | 1 | 0 |
If using apache, putting rewrite rules into either the directory section of the httpd.conf file or into an .htaccess file would probably be the easiest way to do this.
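A hedged sketch of what such a rule might look like in an .htaccess file (adjust the paths to your actual URLs; if the parameter mapping is customized, as the update mentions, doing it in Flask may be easier):

```apache
RewriteEngine On
# Redirect example.com/index.php?q=xyz to example.com/search?q=xyz
# (the original query string is carried over to the new URL by default)
RewriteRule ^index\.php$ /search [R=301,L]
```

Using R=301 tells clients and search engines the move is permanent.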
| 1 | 2 | 0 |
I had a website which was written in php containing urls such as example.com/index.php?q=xyz .
Now I have rewritten it in flask ( python ) which has urls such as example.com/search?q=xyz .
What is the best way to do it?
One approach I can think of is to write an index.php with PHP code to redirect. But is it possible to achieve the same thing using the application (Flask) only?
Update: the parameters sent are not exactly the same; there is some customization there too.
|
redirect old php urls to new flask urls
| 0.066568 | 0 | 0 | 735 |
11,052,952 |
2012-06-15T14:45:00.000
| 6 | 0 | 1 | 0 |
python,json,malformed
| 11,053,817 | 2 | true | 0 | 0 |
Grab PyYAML. JSON is a subset of YAML, so a YAML parser should parse most JSON. YAML's grammar allows trailing commas in sequences.
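If pulling in PyYAML isn't an option, a stdlib-only alternative (not what the answer above proposes) is to strip the trailing commas with a regex before handing the text to json.loads. This naive pattern is only safe as long as no string literal in the payload contains something like ",}" or ",]"; for that you'd need a real parser.

```python
import json
import re

def loads_trailing_commas(text):
    """json.loads after removing commas that directly precede } or ].
    Naive: assumes no string value in the payload contains ',}' or ',]'."""
    cleaned = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(cleaned)

print(loads_trailing_commas('{"a": [1, 2, 3,], "b": {"c": 1,},}'))
# -> {'a': [1, 2, 3], 'b': {'c': 1}}
```

For an external source you can't fully trust to be regular, the YAML-parser route is the more robust choice.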
| 1 | 4 | 0 |
Are there any Python JSON parsers that will cope with trailing commas?
(I'm consuming the "JSON" from an external source and have no control over it.)
|
Parsing "JSON" containing trailing commas
| 1.2 | 0 | 0 | 2,981 |
11,052,999 |
2012-06-15T14:48:00.000
| 2 | 0 | 1 | 1 |
python,eclipse,pydev
| 11,062,473 | 1 | true | 0 | 0 |
Just right-click on the file, then hit "Open With" -> "Other", then choose "Python editor" and hit OK. Eclipse will remember your choice and from then on will open that particular file in the Python editor when you double-click it.
| 1 | 1 | 0 |
If I have a Python file that has no suffix, can PyDev read that file as a Python file using the first line of the file if it includes a #!/usr/bin/python? I'm not really concerned specifically about using that first line, just that the line exists and might be usable. If there is a manual way to mark a file as a Python file without mucking with its suffix, that'd be fine as well.
|
Can eclipse pydev interpret a file as a python file without a suffix
| 1.2 | 0 | 0 | 123 |
11,053,550 |
2012-06-15T15:18:00.000
| 1 | 0 | 0 | 0 |
java,php,python,setter,getter
| 11,053,683 | 1 | true | 1 | 0 |
In my opinion, there shouldn't be any undocumented attributes in a class. PHP and other languages allow you to just stick attributes on a class from anywhere, whether they've been defined in the class or not. I think that's bad practice for the reasons you describe and more:
It's hard to read.
It makes it harder for other programmers (including your future self) to understand what's going on.
It prevents auto-complete functionality in IDEs from working.
It often makes the domain layer too dependent on the persistence layer.
Whether you use getters and setters to access the defined attributes of a class is a more flexible question to me. I like things to be consistent, so if I have a class that has a getChildren() method to lazy load some array of objects, then I don't make the $children attribute public, and I tend to make other attributes private as well. I think that's a little more a matter of taste, but I find it annoying to access some attributes in a class directly ($object->name;) and others by getters/setters.
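To make this concrete for the Python part of the question, one common answer is: declare every attribute explicitly in __init__, document them in the class docstring, and use a property for anything lazily loaded, so the class definition itself documents the "keys". A sketch (the names are invented):

```python
class Node(object):
    """A tree node (names here are invented for illustration).

    Attributes:
        name:      display name of the node (str)
        children:  lazily loaded list of child Nodes (see property below)
    """

    def __init__(self, name):
        self.name = name
        self._children = None  # not loaded yet; filled on first access

    @property
    def children(self):
        """Child nodes, fetched from the database on first access."""
        if self._children is None:
            self._children = self._load_children()
        return self._children

    def _load_children(self):
        # Stand-in for the real database query.
        return []
```

A reader can see every attribute and where it comes from without consulting the database schema, which addresses the readability complaint without Java-style getters and setters.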
| 1 | 0 | 0 |
In Java I use getters/setters when I have simple models/pojos. I find that the code becomes self-documenting when you do it this way. Calling getName() will return a name, I don't need to care how it's mapped to some database and so on.
Problems arise when using languages where getters and setters start feeling clunky, like Python, and I often hear people saying that they are bad. For example, some time ago I had a PHP project in which some of the data was just queried from the database and the column values mapped onto objects/dictionaries. What I found was that code like this was annoyingly hard to read: you can't really just read it; you read the code, then notice that the values are fetched from the database, and now you have to look through the database schema all the time to understand it. It would be better if all you had to do was look at the class definition, knowing that there won't be any undocumented magic keys there.
So my question is how do you guys document code without getters and setters?
|
Documentation when not using getters and setters
| 1.2 | 0 | 0 | 162 |
11,054,131 |
2012-06-15T15:50:00.000
| 5 | 0 | 1 | 0 |
python
| 11,054,226 | 1 | true | 0 | 0 |
Usually all of the different path modules are included, os.path is just the one for your local machine. Import ntpath if you want to do Windows path manipulation, and posixpath for Unix path manipulation.
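For instance, both platform-specific modules can be used side by side on any machine (the paths below are made up):

```python
import ntpath
import posixpath

# Build a path for a remote Windows host, regardless of the local OS.
win_path = ntpath.join("C:\\uploads", "data", "report.txt")
print(win_path)   # C:\uploads\data\report.txt

# And for a remote Unix host.
unix_path = posixpath.join("/var/uploads", "data", "report.txt")
print(unix_path)  # /var/uploads/data/report.txt
```

os.path is just an alias for whichever of these matches the local platform, so importing them directly is the right move when the target machine differs from the one running the script.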
| 1 | 3 | 0 |
Background - I am using paramiko to put files on a bunch of remote servers, running several different operating systems, and with no Python installed on the remote systems. I need to specify remote directories for where the file should be put. Because different operating systems specify paths differently, I wanted to use some module.
I wanted to use os.path.join, but that gets its configuration from my local machine. Is there any way to specify the platform in one of the os module's methods, or something similar?
EDIT: Also during ssh sessions with paramiko.
|
Join paths in Python given operating system
| 1.2 | 0 | 1 | 173 |
11,055,921 |
2012-06-15T18:01:00.000
| -1 | 0 | 1 | 0 |
python,image,mongodb,csv
| 11,058,611 | 2 | false | 0 | 0 |
Depending how you stored the data, it may be prefixed with 4 bytes of size. Are the corrupt exports 4 bytes/GridFS chunk longer than you'd expect?
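A hedged guess at another usual culprit: exported binary fields are typically base64-encoded text, so the CSV cell has to be base64-decoded (and the output file opened in binary mode) before the bytes form a valid image. A stdlib sketch with made-up data:

```python
import base64

# Pretend this string came out of a mongoexport CSV cell (tiny fake "image").
csv_cell = base64.b64encode(b"\xff\xd8\xff\xe0fake-jpeg-bytes").decode("ascii")

raw = base64.b64decode(csv_cell)  # decode the text, don't write it as-is
with open("out.jpg", "wb") as f:  # "wb": binary mode matters on Windows
    f.write(raw)
```

Writing the undecoded base64 text (or opening the file in text mode) would produce exactly the "corrupt image" symptom described.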
| 2 | 1 | 0 |
I am using mongoexport to export mongodb data which also has Image data in Binary format.
Export is done in csv format.
I tried to read the image data from the CSV file into Python and store it as an image file in .jpg format on disk.
But it seems that the data is corrupt and the image is not getting stored.
Has anybody come across such a situation or resolved something similar?
Thanks,
|
Can mongoexport be used to export images stored in binary format in mongodb
| -0.099668 | 1 | 0 | 930 |
11,055,921 |
2012-06-15T18:01:00.000
| 0 | 0 | 1 | 0 |
python,image,mongodb,csv
| 11,056,533 | 2 | false | 0 | 0 |
One thing to watch out for is an arbitrary 2MB BSON Object size limit in several of 10gen's implementations. You might have to denormalize your image data and store it across multiple objects.
| 2 | 1 | 0 |
I am using mongoexport to export mongodb data which also has Image data in Binary format.
Export is done in csv format.
I tried to read the image data from the CSV file into Python and store it as an image file in .jpg format on disk.
But it seems that the data is corrupt and the image is not getting stored.
Has anybody come across such a situation or resolved something similar?
Thanks,
|
Can mongoexport be used to export images stored in binary format in mongodb
| 0 | 1 | 0 | 930 |
11,056,964 |
2012-06-15T19:25:00.000
| 1 | 0 | 1 | 0 |
python
| 11,057,030 | 7 | false | 0 | 0 |
Here's my take:
New learners are likely to prefer entering commands one by one into Python at first, because they get instant feedback and are limited to small programs. Coding into an editor or IDE is for more advanced users. "Both" is the eventual, long term answer.
Could be that LPTHW was written for an earlier version of Python (e.g. 2.6). I think we're at 3.2 now. I'd say that if you don't know Python at all, and are just learning about it for the first time, even a tutorial that's not on the bleeding edge will help you. It might be that a few details will change here and there, but the base language will still be good.
| 5 | 2 | 0 |
It will probably astound you how basic these questions are, but please bear with me! And if there is a better place to ask, I would be appreciative for a migration.
I am looking at two Python tutorials, one of which is "Learn Python the hard way". I am in no condition to evaluate the quality of the tutorials, so I have a few questions. (I have only just started LPTHW so I apologize if the answer comes 20 exercises later.)
In LPTHW, the exercises so far have been coding into Notepad++ and executing the txt document from a command line. In the other one, it was an "enter commands one by one into Python" tutorial. Question: which is more practical for a learner? "Both" is an acceptable answer.
In LPTHW, the first explanation of variables, the format character commands %s %d and %r are used. The exercise says "search the web to learn about all of them." I did a websearch and found someone saying "Don't use those, use the new ones." Question: is LPTHW out of date in this way, and should I be using "new ones"?
|
Two basic Python programming questions
| 0.028564 | 0 | 0 | 483 |
11,056,964 |
2012-06-15T19:25:00.000
| 1 | 0 | 1 | 0 |
python
| 11,057,057 | 7 | false | 0 | 0 |
1) Both. Creating scripts is what you would do with Python on a large scale. Using a Python shell is also good to show you that you can do simple scripting with tons of options via a command line and don't need to build/compile entire programs, etc like you do in other languages.
2) Formats change, but it's not a big deal. Many people still use Python 2.x because Python 3 introduced some unnecessary changes. Just look it up.
| 5 | 2 | 0 |
It will probably astound you how basic these questions are, but please bear with me! And if there is a better place to ask, I would be appreciative for a migration.
I am looking at two Python tutorials, one of which is "Learn Python the hard way". I am in no condition to evaluate the quality of the tutorials, so I have a few questions. (I have only just started LPTHW so I apologize if the answer comes 20 exercises later.)
In LPTHW, the exercises so far have been coding into Notepad++ and executing the txt document from a command line. In the other one, it was an "enter commands one by one into Python" tutorial. Question: which is more practical for a learner? "Both" is an acceptable answer.
In LPTHW, the first explanation of variables, the format character commands %s %d and %r are used. The exercise says "search the web to learn about all of them." I did a websearch and found someone saying "Don't use those, use the new ones." Question: is LPTHW out of date in this way, and should I be using "new ones"?
|
Two basic Python programming questions
| 0.028564 | 0 | 0 | 483 |
11,056,964 |
2012-06-15T19:25:00.000
| 3 | 0 | 1 | 0 |
python
| 11,057,054 | 7 | true | 0 | 0 |
I'd say "both". When you write "real programs" you're going to edit them in text files and run them from the command line, but the interactive environment is a great way to learn, explore, and test. I keep an interactive python session around as I'm coding as a place to check my assumptions.
You should absolutely learn the old formatting syntax. It's based on the C language's formatted print facilities, and many programming languages have adopted similar systems, so it's important to know. It can't hurt to learn the new stuff as well, and it's a good exercise to try writing the same formatting functionality in both the old and the new style.
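For comparison, a minimal sketch of both styles side by side (the values are arbitrary):

```python
# Old C-style formatting: %s (string), %d (integer), %r (repr of the value)
old = "name: %s, count: %d, raw: %r" % ("spam", 3, "spam")

# Newer str.format() equivalent, with positional placeholders;
# !r asks for the repr, matching %r above
new = "name: {0}, count: {1}, raw: {2!r}".format("spam", 3, "spam")

print(old)  # name: spam, count: 3, raw: 'spam'
print(new)  # name: spam, count: 3, raw: 'spam'
```

Both produce identical output here, which is why knowing the old style still pays off when reading existing code.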
| 5 | 2 | 0 |
It will probably astound you how basic these questions are, but please bear with me! And if there is a better place to ask, I would be appreciative for a migration.
I am looking at two Python tutorials, one of which is "Learn Python the hard way". I am in no condition to evaluate the quality of the tutorials, so I have a few questions. (I have only just started LPTHW so I apologize if the answer comes 20 exercises later.)
In LPTHW, the exercises so far have been coding into Notepad++ and executing the txt document from a command line. In the other one, it was an "enter commands one by one into Python" tutorial. Question: which is more practical for a learner? "Both" is an acceptable answer.
In LPTHW, the first explanation of variables, the format character commands %s %d and %r are used. The exercise says "search the web to learn about all of them." I did a websearch and found someone saying "Don't use those, use the new ones." Question: is LPTHW out of date in this way, and should I be using "new ones"?
|
Two basic Python programming questions
| 1.2 | 0 | 0 | 483 |
11,056,964 |
2012-06-15T19:25:00.000
| 0 | 0 | 1 | 0 |
python
| 11,057,032 | 7 | false | 0 | 0 |
Typing Python code into the interactive interpreter is a good way to test things out, in particular if you don't want to create a file for it. It's useful to see what results functions return and to try anything out. But any programs you write will be stored in files of course. Both is indeed the answer because they're both used during development, just for different purposes.
The new method of formatting strings is "thestring".format(...), where ... are all kinds of formatting options. This is indeed the new way of doing things and you should use that instead. The old formatting options make the code less readable (as you'd have to know the abbreviations with % in them) and it's just a lot easier to write "string with values: {0} and {1}".format(3, 4).
| 5 | 2 | 0 |
It will probably astound you how basic these questions are, but please bear with me! And if there is a better place to ask, I would be appreciative for a migration.
I am looking at two Python tutorials, one of which is "Learn Python the hard way". I am in no condition to evaluate the quality of the tutorials, so I have a few questions. (I have only just started LPTHW so I apologize if the answer comes 20 exercises later.)
In LPTHW, the exercises so far have been coding into Notepad++ and executing the txt document from a command line. In the other one, it was an "enter commands one by one into Python" tutorial. Question: which is more practical for a learner? "Both" is an acceptable answer.
In LPTHW, the first explanation of variables, the format character commands %s %d and %r are used. The exercise says "search the web to learn about all of them." I did a websearch and found someone saying "Don't use those, use the new ones." Question: is LPTHW out of date in this way, and should I be using "new ones"?
|
Two basic Python programming questions
| 0 | 0 | 0 | 483 |
11,056,964 |
2012-06-15T19:25:00.000
| 0 | 0 | 1 | 0 |
python
| 11,057,027 | 7 | false | 0 | 0 |
IDLE would be a little quicker, or the PyDev plugin for Eclipse (which would also give you code completion, etc.); either of these ways you could write and run your code from one place. As for being out of date, that really depends on your environment. Also, you can't go wrong with thenewboston tutorials on YouTube.
| 5 | 2 | 0 |
It will probably astound you how basic these questions are, but please bear with me! And if there is a better place to ask, I would be appreciative for a migration.
I am looking at two Python tutorials, one of which is "Learn Python the hard way". I am in no condition to evaluate the quality of the tutorials, so I have a few questions. (I have only just started LPTHW so I apologize if the answer comes 20 exercises later.)
In LPTHW, the exercises so far have been coding into Notepad++ and executing the txt document from a command line. In the other one, it was an "enter commands one by one into Python" tutorial. Question: which is more practical for a learner? "Both" is an acceptable answer.
In LPTHW, the first explanation of variables, the format character commands %s %d and %r are used. The exercise says "search the web to learn about all of them." I did a websearch and found someone saying "Don't use those, use the new ones." Question: is LPTHW out of date in this way, and should I be using "new ones"?
|
Two basic Python programming questions
| 0 | 0 | 0 | 483 |
11,058,409 |
2012-06-15T21:27:00.000
| 3 | 0 | 0 | 0 |
python,database,matlab
| 11,058,566 | 3 | false | 0 | 0 |
IMO simply use the file system with a file format that can you read/write in both MATLAB and Python. Databases usually imply a relational model (excluding the No-SQL ones), which would only add complexity here.
If you are more MATLAB-inclined, you can directly manipulate MAT-files from Python with the scipy.io.loadmat/scipy.io.savemat functions. This is the native MATLAB format for storing data, used by its save/load functions.
Unless of course you really need databases, then ignore my answer :)
| 1 | 4 | 0 |
I need to manipulate a large amount of numerical/textual data, say total of 10 billion entries which can be theoretically organized as 1000 of 10000*1000 tables.
Most calculations need to be performed on a small subset of data each time (specific rows or columns), such that I don't need all the data at once.
Therefore, I am interested in storing the data in some kind of database so I can easily search the database, retrieve multiple rows/columns matching defined criteria, make some calculations and update the database. The database should be accessible from both Python and MATLAB, where I use Python mainly for creating raw data and putting it into the database, and MATLAB for the data processing.
The whole project runs on Windows 7. What is the best and mainly the simplest database I can use for this purpose? I have no prior experience with databases at all.
|
What the simplest database to use with both Python and Matlab?
| 0.197375 | 1 | 0 | 3,289 |
11,059,191 |
2012-06-15T23:02:00.000
| 1 | 0 | 0 | 0 |
python,django
| 11,063,081 | 2 | false | 1 | 0 |
If it is that important that people can only vote once, consider creating a basic registration / login system anyway. A guest can always use multiple computers to skew the voting while account registration at least allows you to track which e-mail addresses are being used to vote. It also takes a bit more effort to skew the voting that way. If it's important but not of life-saving importance then I would use the cookie approach for anonymous guests.
| 1 | 5 | 0 |
I'm new to Django but am working on the tutorial on the Django website for creating a poll.
What is the best way to make it so guests (no registration / login) can only vote once on a poll?
IP (Don't want IP because people sharing a network can only vote once).
Cookie (User can delete the cookie but seems like the best approach).
Session (If the user closes the browser the session will change).
I'm guessing that Cookie would be the best approach but is there a better way for Django?
|
Django guests vote only once poll
| 0.099668 | 0 | 0 | 1,201 |
11,061,135 |
2012-06-16T05:41:00.000
| 7 | 0 | 0 | 0 |
python,html,screen-scraping,web-scraping
| 11,061,160 | 2 | true | 1 | 0 |
Using the website's public API, when it exists, is by far the best solution. That is precisely why the API exists: it is the way the website administrators say "use our content". Scraping may work one day and break the next, and it does not imply the website administrator's consent to have their content reused.
| 1 | 1 | 0 |
I am building a web application as a college project (using Python), where I need to read content from websites. It could be any website on the internet.
At first I thought of using screen-scraping libraries like BeautifulSoup or lxml to read content (data written by authors), but I am unable to extract the content with one single logic, as each website is built to different standards.
Thus I thought of using RSS/Atom (via Universal Feed Parser), but I could only get content summaries! And I want all the content, not just a summary.
So, is there a way to have one logic by which we can read a website's content using libraries like BeautifulSoup, lxml, etc.?
Or should I use the APIs provided by the websites?
My job becomes easy if it's a Blogger blog, as I can use the Google Data API, but the trouble is: do I need to write code against every different API for the same job?
What is the best solution?
|
Should I use Screen Scrapers or API to read data from websites
| 1.2 | 0 | 1 | 356 |
11,063,697 |
2012-06-16T13:09:00.000
| 0 | 0 | 0 | 0 |
python,image-comparison
| 11,065,365 | 1 | true | 0 | 0 |
Not sure I entirely understand your question, but I'll give it a shot.
Assuming:
we just want to know if there is some object in a box.
the empty box is always the same
perfect box alignment etc.
You can do this:
subtract the query image from your empty box image.
sum all pixels
if the value is zero the images are identical, therefore no change, so no object.
Obviously there actually is some difference between the box parts of the two images, but the key thing is that the non-object parts of the images are as similar as possible for both pictures. If this is the case, then we can use the above method but with a threshold test as the 3rd step. Provided the threshold is set reasonably, it should give a decent prediction of whether the box is empty or not.
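A minimal sketch of that subtract-and-threshold check, using plain nested lists in place of a real image type (the threshold value is an arbitrary assumption; in practice you would use NumPy/PIL and tune it):

```python
def box_has_object(empty_box, query, threshold=10):
    """Sum of absolute per-pixel differences between the reference
    (empty-box) image and the query image; above the threshold we
    assume an object is present."""
    diff = sum(
        abs(e - q)
        for e_row, q_row in zip(empty_box, query)
        for e, q in zip(e_row, q_row)
    )
    return diff > threshold

empty = [[0, 0], [0, 0]]          # reference: the empty box
noisy = [[1, 0], [0, 0]]          # tiny noise only -> still "empty"
occupied = [[9, 9], [9, 9]]       # large difference -> object present

print(box_has_object(empty, noisy))     # False
print(box_has_object(empty, occupied))  # True
```

The threshold absorbs the small blur/shift differences mentioned in the question, at the cost of having to calibrate it for your images.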
| 1 | 0 | 0 |
I have an image find- and "blur-compare"-task. I could not figure out which methods I should use.
The setup is this: A, say, 100x100 box either is mostly filled by an object or not. To the human eye this object is always almost the same, but it might change by blur, slight rescaling, tilting 3-dimensionally, moving to the side or up/down by a pixel or two, or other very small graphical changes.
What is a simple quick robust and reliable way to check if the transformed object is there or not? Points to python packages as well as code would be nice.
|
Simple quick robust image comparison
| 1.2 | 0 | 0 | 341 |
11,065,308 |
2012-06-16T16:49:00.000
| 0 | 0 | 0 | 0 |
python,django,django-models,tags,django-admin
| 11,065,539 | 1 | false | 1 | 0 |
Both are good. I have used django-tagging in over 30 projects, and have yet to find an issue. Is there a specific challenge?
| 1 | 1 | 0 |
I am working on a django app and need to add a tags field to one of my models.
In the admin interface I need it to work like WordPress tagging (comma-separated entry, auto-creation of new tags, and autocomplete).
There are two tagging libraries I found, django-tagging and django-taggit, both also have an autcomplete extension.
The problem is that both of them are very old (last update was 2 years ago), unmaintained and need some work to bring them up to speed.
Is there any good, recent tagging module I missed?
|
Django model tags
| 0 | 0 | 0 | 404 |
11,065,582 |
2012-06-16T17:28:00.000
| 1 | 0 | 1 | 0 |
java,scheduling,python-stackless,stackless
| 11,284,527 | 3 | false | 1 | 0 |
Scala actor frameworks like Akka do this. Each thread handles many actors; that's how they are made so efficient. I recommend taking a look at their source code.
| 2 | 3 | 0 |
For some academic research I need to simulate several threads running on a single processor.
I want to be able to insert *call_scheduler()* calls inside my code, in which the current "thread" will pause (remembering in which code line it is) and some scheduling function will decide which thread to let go.
In python, this could be implemented neatly using stackless python. Is there a java alternative?
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution.
|
Simulating threads scheduling in java (stackless java?)
| 0.066568 | 0 | 0 | 865 |
11,065,582 |
2012-06-16T17:28:00.000
| 0 | 0 | 1 | 0 |
java,scheduling,python-stackless,stackless
| 11,065,738 | 3 | false | 1 | 0 |
Your question:
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution
Well, if you want only a single thread to run at a time, controlling the threads' access to the object in a cleaner way, then use the Semaphore class from the java.util.concurrent package.
Semaphore sem = new Semaphore(1); // the 1 here means that only one thread can have access at a time
Use sem.acquire() to obtain a permit for the object, and when it's done, use sem.release(); only then will another thread get access to the object.
| 2 | 3 | 0 |
For some academic research I need to simulate several threads running on a single processor.
I want to be able to insert *call_scheduler()* calls inside my code, in which the current "thread" will pause (remembering in which code line it is) and some scheduling function will decide which thread to let go.
In python, this could be implemented neatly using stackless python. Is there a java alternative?
I could implement it using real threads and some messaging queues (or pipes) that will force only one thread to run at a time - but this is an ugly and problematic solution.
|
Simulating threads scheduling in java (stackless java?)
| 0 | 0 | 0 | 865 |
11,065,607 |
2012-06-16T17:31:00.000
| 1 | 0 | 0 | 1 |
python,installation,tornado
| 21,272,652 | 2 | false | 0 | 0 |
Try running it like this:
sudo easy_install tornado
When you are using stock python on OSX the easy_install command will install tornado system wide and it therefore needs admin rights. When using homebrew python (e.g. installed brew and python with "brew install python") then you can install python packages without having to do the sudo.
One word of advice: when working on a lot of python projects it's better to use virtualenv for installing python deps; that way you can have multiple isolated python environments AND you don't need the sudo.
| 2 | 0 | 0 |
I'm new to using Mac and Tornado. I have installed easy_install and tried installing Tornado, but I keep getting "Permission denied"
easy_install tornado
Searching for tornado
Best match: tornado 2.3
Processing tornado-2.3-py2.7.egg
Adding tornado 2.3 to easy-install.pth file
error: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/easy-install.pth: Permission denied
What is going wrong?
|
Setting up tornado in mac
| 0.099668 | 0 | 0 | 5,106 |
11,065,607 |
2012-06-16T17:31:00.000
| 0 | 0 | 0 | 1 |
python,installation,tornado
| 11,065,625 | 2 | false | 0 | 0 |
You might want to try running that command as root if you want to install tornado system-wide or take a look at virtualenv for installing python packages in a sandboxed environment. Also, I recommend pythonbrew if you want to experiment with various versions of Python.
| 2 | 0 | 0 |
I'm new to using Mac and Tornado. I have installed easy_install and tried installing Tornado, but I keep getting "Permission denied"
easy_install tornado
Searching for tornado
Best match: tornado 2.3
Processing tornado-2.3-py2.7.egg
Adding tornado 2.3 to easy-install.pth file
error: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/easy-install.pth: Permission denied
What is going wrong?
|
Setting up tornado in mac
| 0 | 0 | 0 | 5,106 |
11,065,828 |
2012-06-16T18:03:00.000
| 3 | 0 | 0 | 0 |
javascript,jquery,python,django
| 11,065,909 | 2 | true | 1 | 0 |
Here's how I'd do it. But note that there will be no right answer to this question.
Let's call the page where you click on foods index.php.
On index, I'd have all the foods that could be clicked, as well as an empty div with the id "ingredients" -- eg. <div id="ingredients"></div>
When foods were clicked, they'd call jQuery handlers in main.js.
The onClick handlers would:
Keep track of the currently selected foods
Send a json or otherwise formatted list to a get_ingredients.php page via AJAX.
(This page would return the HTML I'd want to display in the "Ingredients" section)
Set the content of the ingredients div with the html returned by the AJAX call.
More explicitly, get_ingredients.php would:
Parse the GET/POST list of foods that was sent (via AJAX) by index
Query the database to see what ingredients were necessary for the selected foods
Construct HTML corresponding to the query results that should be put into the "ingredients" div on the user-facing page.
"display it" via echo/print/printf/etc, this isn't really displayed with AJAX, but rather sent as the AJAX response.
This way, you only have to keep track of the selected foods, and not deal with adding/subtracting individual ingredient quantities when foods are selected and deselected.
This has the downside of re-transmitting things "you already know", namely the ingredients of foods that were previously selected, however it eliminates a lot of the work that would otherwise be required.
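The server-side lookup step can be sketched framework-agnostically; the food-to-ingredient mapping below is hypothetical stand-in data for the database query, and the function returns what get_ingredients would render into the div:

```python
# Hypothetical lookup table standing in for the database query
FOOD_INGREDIENTS = {
    "pancakes": ["flour", "milk", "eggs"],
    "omelette": ["eggs", "butter"],
}

def ingredients_for(selected_foods):
    """Return the de-duplicated, sorted shopping list for the
    currently selected foods."""
    needed = set()
    for food in selected_foods:
        needed.update(FOOD_INGREDIENTS.get(food, []))
    return sorted(needed)

print(ingredients_for(["pancakes", "omelette"]))
# ['butter', 'eggs', 'flour', 'milk']
```

Note that recomputing the whole list from the selected foods, as here, is exactly what avoids the add/subtract bookkeeping described above.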
| 2 | 0 | 0 |
I'm learning Python, Django, Javascript and jQuery by trying to build a simple shopping list web application. I'd like to hear from experienced developers if my approach is correct:
Functionality:
Web page will display a list of foods. When I click on a certain food I want a list of ingredients to get displayed (this will be my shopping list). Clicking another food will add more ingredients to the shopping list.
My approach:
I have built a simple page with django. I'm able to display a list of all foods that have been entered to the database. What I need now is to start building a list of ingredients as I click on food items.
I was thinking I would load a list of ingredients from the database on the first page load but hide them with css. When I click on a food item I would un-hide the ingredients associated with the food item by using jquery/css.
This approach seems quite clumsy to me. Could you give me some advice how I could create my shopping list application using the technologies mentioned above? Is my approach correct?
|
Creating shopping list web application
| 1.2 | 0 | 0 | 1,212 |
11,065,828 |
2012-06-16T18:03:00.000
| 0 | 0 | 0 | 0 |
javascript,jquery,python,django
| 11,065,889 | 2 | false | 1 | 0 |
@jedwards' comment is correct. Instead of loading everything for every item at the beginning (which could cause slow initial page loads), you should do an on-click event in jQuery that makes an AJAX call back to the server, passes the name or ID of the object in your DB, and returns its ingredients; then you populate the div that contains the ingredients with them.
| 2 | 0 | 0 |
I'm learning Python, Django, Javascript and jQuery by trying to build a simple shopping list web application. I'd like to hear from experienced developers if my approach is correct:
Functionality:
Web page will display a list of foods. When I click on a certain food I want a list of ingredients to get displayed (this will be my shopping list). Clicking another food will add more ingredients to the shopping list.
My approach:
I have built a simple page with django. I'm able to display a list of all foods that have been entered to the database. What I need now is to start building a list of ingredients as I click on food items.
I was thinking I would load a list of ingredients from the database on the first page load but hide them with css. When I click on a food item I would un-hide the ingredients associated with the food item by using jquery/css.
This approach seems quite clumsy to me. Could you give me some advice how I could create my shopping list application using the technologies mentioned above? Is my approach correct?
|
Creating shopping list web application
| 0 | 0 | 0 | 1,212 |
11,066,646 |
2012-06-16T20:05:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,concurrency,process,threadpool
| 11,067,104 | 1 | false | 0 | 0 |
A common approach to this is to not allocate resources to threads and queue the appropriate resource in with the data, though I appreciate that this is not always possible if a resource is bound to a particular thread.
The idea of using a queue per resource with threads only popping objects from the queues containing objects it can handle may work.
It may be possible to use a semaphore+concurrentQueue array, indexed by resource, for signaling such threads and also providing a priority system, so eliminating most of the polling and wasteful requeueing. I will have to think a bit more about that - it kinda depends on how the resources map to the threads.
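A minimal sketch of the queue-per-resource idea with the standard library — the resource names and tasks are made up, and a real version would block or sleep instead of busy-polling:

```python
import queue
import threading

# One work queue per resource; each worker only ever pops from the
# queues for the resources it actually holds.
queues = {"conn_a": queue.Queue(), "conn_b": queue.Queue()}
results = []  # list.append is atomic, so this is safe across threads

def worker(my_resources):
    while True:
        for name in my_resources:
            try:
                task = queues[name].get_nowait()
            except queue.Empty:
                continue
            if task is None:        # sentinel: shut this worker down
                return
            results.append((name, task))
            queues[name].task_done()

# Farm tasks out to the matching resource queues, then sentinels.
queues["conn_a"].put("fetch-1")
queues["conn_b"].put("fetch-2")
queues["conn_a"].put(None)
queues["conn_b"].put(None)

t1 = threading.Thread(target=worker, args=(["conn_a"],))
t2 = threading.Thread(target=worker, args=(["conn_b"],))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # [('conn_a', 'fetch-1'), ('conn_b', 'fetch-2')]
```

Each task lands only on queues a capable worker watches, so none of the ~2N wasted shared-queue accesses from the naive approach occur.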
| 1 | 1 | 0 |
I have a lot of tasks that I'd like to execute a few at a time. The normal solution for this is a thread pool. However, my tasks need resources that only certain threads have. So I can't just farm a task out to any old thread; the thread has to have the resource the task needs.
It seems like there should be a concurrency pattern for this, but I can't seem to find it. I'm implementing this in Python 2 with multiprocessing, so answers in those terms would be great, but a generic solution is fine. In my case the "threads" are actually separate OS processes and the resources are network connections (and no, it's not a server, so (e)poll/select is not going to help). In general, a thread/process can hold several resources.
Here is a naive solution: put the tasks in a work queue and turn my thread pool loose on it. Have each thread check, "Can I do this task?" If yes, do it; if no, put it back in the queue. However, if each task can only be done by one of N threads, then I'm doing ~2N expensive, wasted accesses to a shared queue just to get one unit of work.
Here is my current thought: have a shared work queue for each resource. Farm out tasks to the matching queue. Each thread checks the queue(s) it can handle.
Ideas?
|
Worker pool where certain tasks can only be done by certain workers
| 0 | 0 | 0 | 171 |
11,066,684 |
2012-06-16T20:11:00.000
| 2 | 0 | 0 | 1 |
emacs,python-mode
| 11,602,461 | 2 | false | 0 | 1 |
python-mode.el comes with a command, py-execute-buffer-dedicated, which opens a new, reserved process for the buffer.
| 1 | 1 | 0 |
I have a small GTK python application that imports a package (Twisted) that may not be loaded twice.
If I run my application in emacs with python-mode.el and press C-c C-c, the application gets executed in a python shell window.
If I now close the application, the python shell stays up and running. If I now press C-c C-c again, emacs "reuses" the old python process and thus I run into problems because I'm installing a Twisted reactor twice.
Is it possible to have python-mode.el open a new shell window each time I execute a buffer?
|
Open new python shell on C-c C-c in python-mode.el
| 0.197375 | 0 | 0 | 1,178 |
11,070,805 |
2012-06-17T11:02:00.000
| 1 | 1 | 0 | 0 |
python
| 11,070,833 | 1 | false | 1 | 0 |
I'd stick with one server side language for now - and Python (or any of the other languages you listed) is a perfectly good choice for that.
Basic notions of JavaScript would be important, I think, along with how Ajax-type technology works and what it can do.
However, stretching the definition of "language" a little, I think you should develop a reasonable understanding of HTML and CSS, as these are integral to web development.
| 1 | 0 | 0 |
I'm a liberal arts major and have a few ideas for some web apps. I've saved up enough money to hire someone to do the coding for me, but I want to pick up at least basic coding skills on my own. I'd rather not be the clueless founder.
I've started off with Python. So far, so good. Despite my liberal arts background, I've always been pretty mathematically inclined, even taking some advanced calculus classes in college.
My question is: if my goal is to make web apps and not actually land a job, is it really necessary to learn more than one programming language? I'm starting off with Python and I've found it flexible and powerful enough to meet most of my needs. Do I need to expand my oeuvre to PHP, Ruby, Java, etc.?
|
Do I really need to know more than one language if i want to make webapps?
| 0.197375 | 0 | 0 | 96 |
11,070,842 |
2012-06-17T11:09:00.000
| 2 | 0 | 0 | 0 |
python,web-services,monitoring,status,cherrypy
| 11,071,410 | 2 | false | 1 | 0 |
I would break it apart further:
script A, on port a
script B, on port b
web script C which checks on A and B (by making simple requests to them)
and returns the results in a machine-friendly format, ie JSON or XML
web page D which calls C and formats the results for people, ie an HTML table
There are existing programs which do this - Nagios springs to mind.
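Script C's check can be as simple as attempting a TCP connection to each script's port and reporting the results as JSON; this is a generic sketch (service names, hosts, and ports are placeholders):

```python
import json
import socket

def port_is_up(host, port, timeout=1.0):
    """True if something is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def status_report(services):
    """services: mapping of name -> (host, port). Returns JSON text."""
    return json.dumps(
        {name: port_is_up(host, port) for name, (host, port) in services.items()}
    )

# Example (ports are made up):
# print(status_report({"script_a": ("127.0.0.1", 9001),
#                      "script_b": ("127.0.0.1", 9002)}))
```

Page D can then fetch this JSON and render the up/down states as an HTML table.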
| 1 | 0 | 0 |
I would like to have a web server displaying the status of 2 of my python scripts.
These scripts listen for incoming data on a specific port. What I would like is for the web server to return an HTTP 200 when the script is running and a 500 when it is not. I have had a look at CherryPy and other such Python web servers, but I could not get them to run first and then continue with the rest of my code while the web server is running. I would like it so that if the script crashes, so does the web server. Alternatively, the web server could display a blank web page with just a 1 in the HTML if the script is running, or a 0 if it is not.
Any suggestions?
Thanks in advance.
|
Calling a python web server within a script
| 0.197375 | 0 | 1 | 268 |
11,071,287 |
2012-06-17T12:28:00.000
| 0 | 0 | 0 | 0 |
python,tornado,pyjamas
| 11,453,702 | 2 | true | 1 | 1 |
The route I have chosen is to combine pyjs (old pyjamas) with web2py, via JSONRPC. So far it is working fine.
| 1 | 0 | 0 |
Is it possible to write an application using the pyjamas widgets, together with the tornado server model? What I have in mind is to provide a desktop-like frontend for my web application with pyjamas, and do server side logic with tornado.
Specifically, I want to trigger events generated on the server side, and be able to display those events using the pyjamas widgets.
Does somebody have a working example of this?
|
Integration of pyjamas and tornado
| 1.2 | 0 | 0 | 178 |
11,071,701 |
2012-06-17T13:38:00.000
| 1 | 0 | 1 | 0 |
python,emacs,python-3.x,emacs23
| 11,617,685 | 7 | false | 0 | 0 |
Start a Python interpreter:
M-x python RET
(the default interpreter)
M-x pythonVERSION
where VERSION is any installed version
| 2 | 16 | 0 |
I switched to Emacs 23 two days ago, which has lately given me a lot of headaches, especially as I have two Python versions installed: the older 2.7 and 3.
Besides, I could not find a module which helps to highlight python3 syntax. I am currently using python-mode.el for highlighting.
Also, if somebody had a good tip for which module would be best to show the pydoc, I would be very thankful.
Thanks in advance!
|
python 3 in emacs
| 0.028564 | 0 | 0 | 14,288 |
11,071,701 |
2012-06-17T13:38:00.000
| 0 | 0 | 1 | 0 |
python,emacs,python-3.x,emacs23
| 11,611,901 | 7 | false | 0 | 0 |
Note that python-mode.el follows a hierarchy when detecting the version needed:
a shebang takes precedence over the setting of py-shell-name,
while py-execute-THING-PYTHONVERSION
takes precedence even over the shebang for that command;
see the PyExec menu.
| 2 | 16 | 0 |
I switched to Emacs 23 two days ago, which has lately given me a lot of headaches, especially as I have two Python versions installed: the older 2.7 and 3.
Besides, I could not find a module which helps to highlight python3 syntax. I am currently using python-mode.el for highlighting.
Also, if somebody had a good tip for which module would be best to show the pydoc, I would be very thankful.
Thanks in advance!
|
python 3 in emacs
| 0 | 0 | 0 | 14,288 |
11,072,138 |
2012-06-17T14:42:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine
| 11,094,276 | 1 | true | 1 | 0 |
No, this is not due to the HRD -- auto_now is implemented purely in the client library. After you write the entity, the property's value does not correspond to what's written to the datastore, but to what was last read. I'm not sure what you'll see for a brand new entity but it's probably still not the same as what was written.
If you switch to NDB you'll find that auto_now behaves much more reasonably. :-)
| 1 | 1 | 0 |
I use a last_touch_date DateTimeProperty as a means for revisioning entities in my application's datastore using the auto_now=True flag.
When a user posts an entity it receives its last_touch_date as a reference for future updates.
However, when I check the entity's last_touch_date afterwards I always find a slight delta between this property as read right after writing and soon afterwards. I have a feeling this is a result of the high consistency model.
Is this known behavior? Is there a workaround besides managing this property my self?
|
appengine DateTimeProperty auto_now=True unexpected behavior
| 1.2 | 0 | 0 | 327 |
11,072,145 |
2012-06-17T14:43:00.000
| 0 | 0 | 1 | 0 |
python,python-3.x,set
| 11,072,213 | 3 | false | 0 | 0 |
If you only have myset and b, then from that perspective, you won't have access to a because it's not there. If you create multiple mutable objects and add one of them to myset then the others are not 'known' when you're dealing with just myset or the object that you added.
If you want to modify a and b then you need to keep track of both objects somewhere.
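One common way to "keep track of both objects" is a dict that maps each object to itself, which effectively gives you the missing myset[b] lookup. A minimal sketch (the Item class is hypothetical, just to demonstrate equality by key rather than identity):

```python
class Item:
    """Mutable object: equal by key, not by identity."""
    def __init__(self, key, payload):
        self.key = key
        self.payload = payload
    def __eq__(self, other):
        return isinstance(other, Item) and self.key == other.key
    def __hash__(self):
        return hash(self.key)

a = Item("x", payload=1)
b = Item("x", payload=2)   # a == b, but a is not b

# A dict mapping each member to itself acts like "myset[b]":
registry = {a: a}
member = registry[b]       # hash lookup by equality, returns the stored a
assert member is a

member.payload = 99        # mutate the actual stored member
assert a.payload == 99
```

Like set membership, the dict lookup is an average O(1) hash lookup, so this avoids iterating over all members.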
| 1 | 6 | 0 |
Say I have a set myset of custom objects that may be equal although their references are different (a == b and a is not b). Now if I add(a) to the set, Python correctly assumes that a in myset and b in myset even though there is only len(myset) == 1 object in the set.
That is clear. But is it now possible to extract the value of a somehow out from the set, using b only? Suppose that the objects are mutable and I want to change them both, having forgotten the direct reference to a. Put differently, I am looking for the myset[b] operation, which would return exactly the member a of the set.
It seems to me that the type set cannot do this (faster than iterating through all its members). If so, is there at least an effective work-around?
|
Python: Access members of a set
| 0 | 0 | 0 | 3,012 |
11,073,553 |
2012-06-17T17:56:00.000
| 3 | 0 | 1 | 0 |
python
| 11,073,562 | 9 | false | 0 | 0 |
The default location is the CWD (Current Working Directory), so if you have your Python script in c:\directory and run it from there, calling open() with a relative filename will attempt to open that file in that location.
| 4 | 12 | 0 |
I'm new and I have no idea where the default directory for the open() function is.
For example open('whereisthisdirectory.txt','r')
Can someone advise me? I've tried googling it (and looking on Stack Overflow) and even putting a random txt file in many folders, but I still can't figure it out. Since I'm just beginning, I want to learn this right away rather than type "c:/directory/whatever.txt" every time I want to open a file. Thanks!
P.S. My Python directory has been installed to C:\Python32 and I'm using 3.2
|
open() function python default directory
| 0.066568 | 0 | 0 | 35,890 |
11,073,553 |
2012-06-17T17:56:00.000
| 1 | 0 | 1 | 0 |
python
| 11,077,614 | 9 | false | 0 | 0 |
Create the .txt file in the directory where you have kept the .py file (the CWD) and run the .py file.
| 4 | 12 | 0 |
I'm new and I have no idea where the default directory for the open() function is.
For example open('whereisthisdirectory.txt','r')
Can someone advise me? I've tried googling it (and looking on Stack Overflow) and even putting a random txt file in many folders, but I still can't figure it out. Since I'm just beginning, I want to learn this right away rather than type "c:/directory/whatever.txt" every time I want to open a file. Thanks!
P.S. My Python directory has been installed to C:\Python32 and I'm using 3.2
|
open() function python default directory
| 0.022219 | 0 | 0 | 35,890 |
11,073,553 |
2012-06-17T17:56:00.000
| 1 | 0 | 1 | 0 |
python
| 69,862,470 | 9 | false | 0 | 0 |
If you're running your script through an editor or IDE (e.g. PyCharm, VS Code, etc.), your Python file will most likely be saved in My Documents (at least in VS Code, in my personal experience) unless you manually save it to a directory of your choosing before you run it. Once it is saved, the interpreter will use that location as your current directory, so any files your Python script creates will also automatically go there unless you state otherwise.
| 4 | 12 | 0 |
I'm new and I have no idea where the default directory for the open() function is.
For example open('whereisthisdirectory.txt','r')
Can someone advise me? I've tried googling it (and looking on Stack Overflow) and even putting a random txt file in many folders, but I still can't figure it out. Since I'm just beginning, I want to learn this right away rather than type "c:/directory/whatever.txt" every time I want to open a file. Thanks!
P.S. My Python directory has been installed to C:\Python32 and I'm using 3.2
|
open() function python default directory
| 0.022219 | 0 | 0 | 35,890 |
11,073,553 |
2012-06-17T17:56:00.000
| 27 | 0 | 1 | 0 |
python
| 11,073,565 | 9 | true | 0 | 0 |
os.getcwd()
shows the current working directory; that's what open() uses for relative paths.
You can change it with os.chdir().
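A quick sketch of how the CWD affects relative open() calls; it uses a temporary directory so it doesn't touch your real files:

```python
import os
import tempfile

print(os.getcwd())  # the directory relative paths are resolved against

with tempfile.TemporaryDirectory() as d:
    old = os.getcwd()
    os.chdir(d)          # change the CWD
    try:
        with open("whereisthisdirectory.txt", "w") as f:
            f.write("hello")
        # The relative name now resolves inside the new CWD:
        with open("whereisthisdirectory.txt") as f:
            assert f.read() == "hello"
    finally:
        os.chdir(old)    # always restore the original CWD
```

Restoring the original directory in a finally block matters, because the CWD is process-wide state that affects every later relative path.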
| 4 | 12 | 0 |
I'm new and I have no idea where the default directory for the open() function is.
For example open('whereisthisdirectory.txt','r')
Can someone advise me? I've tried googling it (and looking on Stack Overflow) and even putting a random txt file in many folders, but I still can't figure it out. Since I'm just beginning, I want to learn this right away rather than type "c:/directory/whatever.txt" every time I want to open a file. Thanks!
P.S. My Python directory has been installed to C:\Python32 and I'm using 3.2
|
open() function python default directory
| 1.2 | 0 | 0 | 35,890 |