Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,492,337 |
2012-09-19T09:50:00.000
| 0 | 0 | 1 | 0 |
python
| 12,492,724 | 3 | false | 0 | 0 |
I think the most important thing is to denote your hexagons on the map in a way which makes checking neighbours easy...
One sensible choice could be to index hexagons with 2D tuples, so that hexagon (1,1)'s 6 neighbours are (1,0), (2,0), (2,2), (1,2), (0,2) and (0,1) - starting from north/up and going clockwise.
To populate the map you could then just iterate over all cells, picking a random choice from the set of allowable tiles (based on its current neighbours).
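A minimal sketch of that idea. The tile names, the adjacency rules, and the axial-coordinate neighbour convention below are all hypothetical choices for illustration, not the only way to do it:

```python
import random

# Hypothetical tile types and compatibility rules: e.g. mountains
# never touch water directly. 'land' is compatible with everything,
# so the greedy fill below can never paint itself into a corner.
TILES = ['water', 'land', 'mountain']
ALLOWED = {
    'water':    {'water', 'land'},
    'land':     {'water', 'land', 'mountain'},
    'mountain': {'land', 'mountain'},
}

def neighbours(q, r):
    """The six neighbour offsets of an axial hex grid (one common convention)."""
    return [(q + 1, r), (q - 1, r), (q, r + 1),
            (q, r - 1), (q + 1, r - 1), (q - 1, r + 1)]

def generate(width, height):
    grid = {}
    for r in range(height):
        for q in range(width):
            # Only already-filled neighbours constrain the choice.
            placed = [grid[n] for n in neighbours(q, r) if n in grid]
            choices = [t for t in TILES
                       if all(t in ALLOWED[p] for p in placed)]
            grid[(q, r)] = random.choice(choices)
    return grid
```

The grid is stored as a dict keyed by tuples, which keeps neighbour lookups trivial; swapping in integer tile codes instead of strings, as the question suggests, is a one-line change.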
| 2 | 0 | 0 |
I would like to write a simple map generator for myself, but I do not know how to approach it. The map will be a field of many hexagonal tiles.
When I generate a random tile I must consider its neighbour; then I have to take the two already-placed neighbours into account, and so on. Recursion? I decided that a tile may be water, earth, or mountains - but a single tile may also contain a transition from water to land along one of its sides.
The map will be an array of numbers, each specifying the type of a tile.
I want to do it in Python - for learning.
Some advice, please.
|
How to write own map generator?
| 0 | 0 | 0 | 296 |
12,493,934 |
2012-09-19T11:37:00.000
| 0 | 0 | 1 | 1 |
python,string,input,copy-paste
| 12,494,019 | 3 | false | 0 | 0 |
Use:
input = raw_input("Enter text")
This reads everything entered as a single string, so if you paste a whole text, all of it will be in the input variable.
EDIT: Apparently, this works only with the Python shell on Windows.
| 1 | 6 | 0 |
Is there any easy way to handle multiple lines user input in command-line Python application?
I was looking for an answer without any result, because I don't want to:
read data from a file (I know, it's the easiest way);
create any GUI (let's stay with just a command line, OK?);
load text line by line (it should be pasted at once, not typed and not pasted line by line);
work with each of lines separately (I'd like to have whole text as a string).
What I would like to achieve is to allow the user to paste a whole text (containing multiple lines) and capture the input as one string in an entirely command-line tool. Is this possible in Python?
It would be great, if the solution worked both in Linux and Windows environments (I've heard that e.g. some solutions may cause problems due to the way cmd.exe works).
|
Multiple lines user input in command-line Python application
| 0 | 0 | 0 | 6,926 |
12,494,465 |
2012-09-19T12:12:00.000
| 1 | 0 | 0 | 1 |
python,ubuntu
| 12,494,482 | 1 | true | 0 | 0 |
It looks like you need the Python 2.5 header files. You might be able to find them in Synaptic under python2.5-dev or something similar.
| 1 | 0 | 0 |
easy_install-2.5 python-ldap
Searching for python-ldap
Reading http://pypi.python.org/simple/python-ldap/
Reading http://www.python-ldap.org/
Best match: python-ldap 2.4.10
Downloading http://pypi.python.org/packages/source/p/python-ldap/python-ldap-2.4.10.tar.gz#md5=a15827ca13c90e9101e5e9405c1d83be
Processing python-ldap-2.4.10.tar.gz
Running python-ldap-2.4.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-dplmGE/python-ldap-2.4.10/egg-dist-tmp-ZlXBub
defines: HAVE_SASL HAVE_TLS HAVE_LIBLDAP_R
extra_compile_args:
extra_objects:
include_dirs: /opt/openldap-RE24/include /usr/include/sasl /usr/include
library_dirs: /opt/openldap-RE24/lib /usr/lib
libs: ldap_r
file Lib/ldap.py (for module ldap) not found
file Lib/ldap/controls.py (for module ldap.controls) not found
file Lib/ldap/extop.py (for module ldap.extop) not found
file Lib/ldap/schema.py (for module ldap.schema) not found
warning: no files found matching 'Makefile'
warning: no files found matching 'Modules/LICENSE'
file Lib/ldap.py (for module ldap) not found
file Lib/ldap/controls.py (for module ldap.controls) not found
file Lib/ldap/extop.py (for module ldap.extop) not found
file Lib/ldap/schema.py (for module ldap.schema) not found
file Lib/ldap.py (for module ldap) not found
file Lib/ldap/controls.py (for module ldap.controls) not found
file Lib/ldap/extop.py (for module ldap.extop) not found
file Lib/ldap/schema.py (for module ldap.schema) not found
In file included from Modules/LDAPObject.c:4:0:
Modules/common.h:10:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: Setup script exited with error: command 'gcc' failed with exit status 1
aptitude search python2.5
i python2.5 - An interactive high-level object-oriented language (version 2.5)
v python2.5-celementtree -
v python2.5-cjkcodecs -
v python2.5-ctypes -
v python2.5-dialog -
v python2.5-elementtree -
v python2.5-iplib -
i A python2.5-minimal - A minimal subset of the Python language (version 2.5)
v python2.5-plistlib -
v python2.5-profiler -
v python2.5-reverend -
v python2.5-wsgiref
|
easy_install2.5 fails to install python-ldap on Ubuntu 12.04
| 1.2 | 0 | 0 | 1,640 |
12,497,545 |
2012-09-19T15:09:00.000
| 3 | 0 | 0 | 0 |
python,numpy,pyramid
| 12,497,790 | 2 | false | 0 | 0 |
If the array is something that can be shared between threads then you can store it in the registry at application startup (config.registry['my_big_array'] = ??). If it cannot be shared then I'd suggest using a queuing system with workers that can always have the data loaded, probably in another process. You can hack this by making the value in the registry be a threadlocal and then storing a new array in the variable if one is not there already, but then you will have a copy of the array per thread and that's really not a great idea for something that large.
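A sketch of the shared-registry idea. Pyramid's registry behaves like a dict for this purpose, so a plain dict stands in for it here; the file name and view name are hypothetical:

```python
import numpy as np

def configure(registry):
    # In a real Pyramid app this would run once in the Configurator setup:
    #   config.registry['my_big_array'] = np.load('big_array.npy')
    registry['my_big_array'] = np.zeros((3500, 3500))

def my_view(registry):
    # Every request thread reads the same object; nothing is copied.
    arr = registry['my_big_array']
    return arr.shape

registry = {}
configure(registry)
print(my_view(registry))  # (3500, 3500)
```

Since the question says the array is loaded from disk and never changes, this read-only sharing is safe across threads without locking.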
| 2 | 1 | 1 |
I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.
Right now my application is a single page and I am using a single view callable.
The array will be loaded from disk and will not change.
|
Using NumPy in Pyramid
| 0.291313 | 0 | 0 | 409 |
12,497,545 |
2012-09-19T15:09:00.000
| 2 | 0 | 0 | 0 |
python,numpy,pyramid
| 12,497,850 | 2 | false | 0 | 0 |
I would just load it in the obvious place in the code, where you need to use it (in your view, I guess?) and see if you have performance problems. It's better to work with actual numbers than try to guess what's going to be a problem. You'll usually be surprised by the reality.
If you do see performance problems, assuming you don't need a copy for each of multiple threads, try just loading it in the global scope after your imports. If that doesn't work, try moving it into its own module and importing that. If that still doesn't help... I don't know what then.
| 2 | 1 | 1 |
I'd like to perform some array calculations using NumPy for a view callable in Pyramid. The array I'm using is quite large (3500x3500), so I'm wondering where the best place to load it is for repeated use.
Right now my application is a single page and I am using a single view callable.
The array will be loaded from disk and will not change.
|
Using NumPy in Pyramid
| 0.197375 | 0 | 0 | 409 |
12,498,011 |
2012-09-19T15:34:00.000
| 2 | 0 | 1 | 0 |
python-2.7,virtualenv,pip,ubuntu-12.04
| 12,583,628 | 1 | true | 1 | 0 |
Following @HugoTavares's suggestion I found I needed to install python-dev. I don't know why this helped but it seems to have solved this particular problem. I'm putting this answer on for now but Hugo, if you read this, please post an identical one and I'll remove acceptance on this and accept yours, since you deserve the credit.
| 1 | 2 | 0 |
Whenever I try to pip install anything in my virtualenvs I am told it is Downloading/Unpacking. My terminal then stays on that line indefinitely. The longest I have left this running was 2 hours (trying to install iPython) without success.
Most recently, I tried installing django in one virtualenv using pip. Once it said Downloading/Unpacking I created another virtualenv in another terminal window and used easy-install to install django and mezzanine. Both installed with their dependencies before there was any movement on the terminal using pip. I left the pip window running for an hour before giving up. I have tried pip install, pip install -v --use-mirrors and their sudo equivalents without much change in the results (-v --use-mirrors spews out a list of urls before stalling at Downloading/Unpacking).
I am using Python 2.7 on Ubuntu 12.04.1 64-bit. I use Virtualenvwrapper to create and manage my virtualenvs, if that helps.
I can't find any references to other people having this problem so I expect it's a mistake of mine. Does anyone have any idea what I'm doing wrong?
|
Why won't pip install anything?
| 1.2 | 0 | 0 | 2,355 |
12,498,694 |
2012-09-19T16:11:00.000
| 1 | 0 | 0 | 0 |
php,javascript,python
| 12,498,821 | 2 | false | 1 | 0 |
I personally love Socket.IO and I would do it with that, because it would be the simpler way. But it may be too much work to set up just for this, especially since it is not that simple in Python, from what I have heard, compared to Node, where it really is about ten lines server-side.
Without Socket.IO you could do long polling to get the status of the image processing, and at the end get the URL of the image (or the image itself in base64, if that is what you want).
| 1 | 0 | 0 |
Scenario: User loads a page, image is being generated, show loading bar, notification event sent to browser.
I am using python code to generate the image. Would it be ideal to have a web server that launches the script or embed a webserver code into the python script? Once the image is finished rendering, the client should receive a message saying it's successful and display the image.
How can this be architected to support concurrent users as well? Would simply launching the python script for each new user that navigates to the web page suffice?
Would it be overkill to have real-time web application for this scenario? Trying to decide whether simple jQuery AJAX will suffice or Socket.io should be used to have a persistent connection between server and client.
Any libraries out there that fit my needs?
|
Javascript-Python: serve dynamically generated images to client browser?
| 0.099668 | 0 | 1 | 88 |
12,499,465 |
2012-09-19T17:00:00.000
| 0 | 0 | 0 | 0 |
python-2.7,tkinter,tix
| 38,350,158 | 3 | false | 0 | 1 |
Re the roles of Tix and ttk. They are different things. Tix adds widgets to the standard Tkinter set. ttk adds widgets that duplicate existing widgets but with a configurable look 'n feel. So if you want to make your GUI look like the native OS apps use ttk, but you only get a limited set of widgets. If you want more powerful widgets and don't care about the look as much, then use Tix.
| 1 | 6 | 0 |
I recently have become interested in GUI programming in Python.
I have already had plenty of experience with Pygame, but find that it would be easier just to use the interface that Tkinter, Tix, etc... provide.
However, I'm having difficulty finding any decent documentation or tutorials on Tix for Python. (Unlike Pygame, which there are several guides/tutorials that I find quite nice)
Where can I find a nice tutorial? (That only assumes knowledge of Python, and hopefully no knowledge of Tk)
|
Tix Tutorials in Python
| 0 | 0 | 0 | 8,495 |
12,500,623 |
2012-09-19T18:22:00.000
| 1 | 0 | 0 | 0 |
python,django,file-upload,django-file-upload
| 12,623,702 | 2 | false | 1 | 0 |
You can't actually do both of these at once:
I'd like to upload directly these files to the file server.
I'd like to make the file server simple as possible. The file server just serve files.
Under your requirements, the file server needs to both Serves Files and Accepts Uploads of files.
There are a few ways to get the files onto the FileServer
The easiest way is just to upload to the AppServer and then have it upload to the other server; this is what most Amazon S3 implementations are like.
If the two machines are on the same LAN, you can mount a volume of the FileServer onto the AppServer using NFS or something similar. Users upload to the AppServer, but the data is saved to a partition that is really on the FileServer.
You could have a file upload script work on the FileServer. However, you'd need to do a few complex things:
Have a mechanism to authenticate the ability to upload the file. You couldn't just use an authtkt; you'd need something that allows one-and-only-one file upload, along with some sort of identifier and privilege token. I'd probably opt for an encrypted payload that is timestamped and has the upload-permission credentials plus an id for the file.
Have a callback on successful upload from the FileServer to the AppServer, letting it know that the id in the payload has been successfully received.
| 1 | 2 | 0 |
I'm using Django 1.4.
There are two servers (app server and file server).
The app server provide a web service using django, wsgi, and apache.
User can upload files via the web service.
I'd like to upload directly these files to the file server.
"directly" means that the files aren't uploaded via the app server.
I'd like to make the file server simple as possible. The file server just serve files.
Ideally, transfer costs between the app server and the file server are zero.
Could somebody tell me how to do this?
|
Django: How to upload directly files submitted by a user to another server?
| 0.099668 | 0 | 0 | 1,077 |
12,502,187 |
2012-09-19T20:13:00.000
| 2 | 0 | 1 | 0 |
ipython,ipython-notebook
| 47,238,955 | 5 | false | 1 | 0 |
Click on File > Download > HTML
| 1 | 37 | 0 |
What is the best way to get an ipython notebook into html format for use in a blog post?
It is easy to turn an ipython notebook into a PDF, but I'd rather publish as an html notebook.
I've found that if I download the notebook as a .ipynb file, then load it onto gist, then look at it with the ipython notebook viewer (nbviewer.ipython.org), THEN grab the html source, I can paste it into a blog post (or just load it as html anywhere) and it looks about right. However, if I use the "print view" option directly from ipython, the source contains a bunch of javascript rather than the processed html, which is not useful since the images and text are not directly included.
The %pastebin magic is also not particularly helpful for this task, since it pastes the python code and not the ipython notebook formatted code.
EDIT: Note that this is under development; see the comments under the accepted answer.
EDIT May 2 2014: As per Nathaniel's comment, a new answer is needed for ipython 2.0
|
How to export an IPython notebook to HTML for a blog post?
| 0.07983 | 0 | 0 | 14,387 |
12,504,096 |
2012-09-19T23:06:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 71,479,624 | 5 | false | 0 | 0 |
As of Python 3 you need to use items() instead of iteritems(), since that method was removed. Just leaving this here, because I got an error using the above-mentioned code.
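For illustration, the Python 3 way to drop entries by value (rebuilding the dict with a comprehension, which is usually the fast approach for removing about half the items):

```python
# items() replaces Python 2's iteritems(); the comprehension builds a
# new dict containing only the entries that pass the condition.
d = {'a': 10, 'b': 25, 'c': 5, 'd': 40}
kept = {k: v for k, v in d.items() if v <= 20}
print(kept)  # {'a': 10, 'c': 5}
```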
| 1 | 3 | 0 |
Given a large dictionary, (actually, a defaultdict) with tens of millions of key-value pairs (strings : integers).
I want to remove about half of the key/value pairs, based on a simple condition (e.g. value > 20) on the values.
What is the fastest way to do this?
|
Fastest way to remove a lot of keys from a dictionary
| 0 | 0 | 0 | 1,482 |
12,504,351 |
2012-09-19T23:41:00.000
| 3 | 0 | 1 | 0 |
python,arguments,maintainability
| 12,504,392 | 5 | false | 0 | 0 |
Using *args in every function is a bad idea because it can mask errors in your code (calling a function with the wrong number of arguments). I suppose the rule of thumb should be to only use *args if you need *args.
| 4 | 5 | 0 |
I've been looking at the source code for an open source package that I'm using. Nearly every function uses *args instead of named arguments. I'm finding it hard to follow and use the code, because every time I want to call a function I have to go back, pick through the source code, and identify what the arguments should be, and what order they should be in. The question I have is this: Is there a compelling reason to use *args in every function, or is this an abuse of the concept?
Thanks,
-b
|
Proper use vs. over use of *args in Python
| 0.119427 | 0 | 0 | 610 |
12,504,351 |
2012-09-19T23:41:00.000
| 2 | 0 | 1 | 0 |
python,arguments,maintainability
| 12,504,396 | 5 | false | 0 | 0 |
When there's no compelling reason to use *args, it's an abuse of the concept. Usually there are good names for arguments, which help comprehension. Even when there aren't, (x, y, z) tells you more than (*args).
And beyond making code more readable, it also helps to catch errors (e.g., if you call a (x, y, z) function with (2, 3) you'll get an error at the call, rather than deep inside the function), it's usually more concise, and it can even be more efficient.
But there are sometimes compelling reasons for widespread use of *args.
For example, if you're wrapping a lower-level (C or otherwise) module and want to do perfect forwarding, it's easier with *args. Even more so if you're automatically generating the wrapper code rather than writing it manually. Of course this is still a tradeoff—it's much easier for the developer of the wrapper module, but more difficult for the users—but sometimes the tradeoff is worth taking.
Without knowing the particular package you're referring to, it's impossible to guess whether it's a compelling use case or an abuse.
| 4 | 5 | 0 |
I've been looking at the source code for an open source package that I'm using. Nearly every function uses *args instead of named arguments. I'm finding it hard to follow and use the code, because every time I want to call a function I have to go back, pick through the source code, and identify what the arguments should be, and what order they should be in. The question I have is this: Is there a compelling reason to use *args in every function, or is this an abuse of the concept?
Thanks,
-b
|
Proper use vs. over use of *args in Python
| 0.07983 | 0 | 0 | 610 |
12,504,351 |
2012-09-19T23:41:00.000
| 4 | 0 | 1 | 0 |
python,arguments,maintainability
| 12,504,415 | 5 | true | 0 | 0 |
This might be a personal reasoning but I only use *args and **kwargs when I have a lot of optional fields where some fields are only used in a certain context.
The only other occasion I used *args was when I was building an XML-RPC client for a cloud API. Since I was just passing parameters to an underlying layer, and since the functions were generated dynamically, I had no choice but to use *args, as I had no way to know all the parameters in advance.
In most cases you won't even need it. I consider it to be mostly laziness more than anything else.
Some people who come from Java and C# might use this as a replacement for "params", but there are so many ways to pass optional parameters in Python.
And I agree that even if you use *args, you should have very good documentation.
| 4 | 5 | 0 |
I've been looking at the source code for an open source package that I'm using. Nearly every function uses *args instead of named arguments. I'm finding it hard to follow and use the code, because every time I want to call a function I have to go back, pick through the source code, and identify what the arguments should be, and what order they should be in. The question I have is this: Is there a compelling reason to use *args in every function, or is this an abuse of the concept?
Thanks,
-b
|
Proper use vs. over use of *args in Python
| 1.2 | 0 | 0 | 610 |
12,504,351 |
2012-09-19T23:41:00.000
| 2 | 0 | 1 | 0 |
python,arguments,maintainability
| 12,506,414 | 5 | false | 0 | 0 |
If you're wrapping an unknown function (the functions returned by decorators often do this), then you often need to use (*args, **kwargs).
Some class hierarchies use (*args, **kwargs) in methods that can need different signatures at different classes in the hierarchies (__init__ is a prime culprit). It's really helpful if you can avoid that, but can be necessary to work with multiple inheritance hierarchies in a sane manner (or as sane as is possible with multiple inheritance, at least).
I sometimes end up using **kwargs when I have a large number of optional arguments, but this requires a lot of documentation.
In a function that's consuming the *args itself (rather than passing them to some other function with an unknown signature, as in the decorator or class inheritance cases), then I tend to think that *args should almost never be used except to mean "zero or more of the same kind of thing". And in that case you should be naming it *websites or *spaceships or *watermelons, not *args. A bunch of unrelated parameters shouldn't be squashed into *args. Even worse would be for the function to use *args to take "an x, a y, and a z, or else an x and a z", where the second parameter does different things depending on how many parameters are passed. At that point they should clearly all have names and defaults (even if it's just the standard None default then see in the function which ones are non-None pattern) and be passed by keyword rather than by position.
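Following that naming advice, a toy example (the function and name are hypothetical) of *args used properly, for "zero or more of the same kind of thing":

```python
def total_weight(*watermelons):
    """Accepts zero or more weights of the same kind of thing;
    the descriptive name documents what the varargs mean."""
    return sum(watermelons)

print(total_weight(3, 4, 2))  # 9
print(total_weight())         # 0
```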
| 4 | 5 | 0 |
I've been looking at the source code for an open source package that I'm using. Nearly every function uses *args instead of named arguments. I'm finding it hard to follow and use the code, because every time I want to call a function I have to go back, pick through the source code, and identify what the arguments should be, and what order they should be in. The question I have is this: Is there a compelling reason to use *args in every function, or is this an abuse of the concept?
Thanks,
-b
|
Proper use vs. over use of *args in Python
| 0.07983 | 0 | 0 | 610 |
12,504,951 |
2012-09-20T01:20:00.000
| 1 | 0 | 1 | 0 |
python,ipython,pandas
| 42,903,054 | 5 | false | 0 | 0 |
There is also a magic command, history, that can be used to write all the commands/statements given by user.
Syntax : %history -f file_name.
Also %save file_name start_line-end_line, where star_line is the starting line number and end_line is ending line number. Useful in case of selective save.
%run can be used to execute the commands in the saved file
| 1 | 22 | 1 |
It would be useful to save the session variables which could be loaded easily into memory at a later stage.
|
Save session in IPython like in MATLAB?
| 0.039979 | 0 | 0 | 10,066 |
12,506,222 |
2012-09-20T04:50:00.000
| -1 | 0 | 1 | 0 |
python,variables,maya
| 12,507,015 | 2 | false | 0 | 0 |
Good suggestion. Something user1090427 should watch for is how they're rounding, if that's an issue at all. Removing the sign before/after rounding can have unexpected results: floor(abs(-29.29)) is 29, which is not the same as abs(floor(-29.29)), which is 30.
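Those identities, plus abs() as the direct way to strip the sign, can be checked directly:

```python
from math import floor

x = -29.29
print(floor(abs(x)))  # 29: strip the sign first, then round down
print(abs(floor(x)))  # 30: round down first, then strip the sign
print(abs(x))         # 29.29: abs() alone removes the '-'
```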
| 1 | 0 | 0 |
I'm writing a script for maya, it's in python, but this should relate to most things.
If I run my script on the left side of a setup the values for translation etc will be something like 29.292 or 68.215.
The problem is that the right side is a mirror, so it's -29.292 or -68.215.
I need to check if the symbol '-' exists within the variable and if so remove it.
How do I do this?
Thank you
|
Find a character in a variable and remove it
| -0.099668 | 0 | 0 | 99 |
12,508,243 |
2012-09-20T07:53:00.000
| 3 | 0 | 1 | 1 |
python,multithreading
| 12,508,287 | 5 | false | 0 | 0 |
Check whether you can import _posixsubprocess manually. subprocess tries to import it in its code; if that import raises an exception, this warning is produced.
| 3 | 7 | 0 |
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module.
"The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads."
What does this mean?
How can I get rid of it?
|
Python Error The _posixsubprocess module is not being used
| 0.119427 | 0 | 0 | 12,868 |
12,508,243 |
2012-09-20T07:53:00.000
| 3 | 0 | 1 | 1 |
python,multithreading
| 51,902,409 | 5 | false | 0 | 0 |
unsetting PYTHONHOME has fixed this issue for me.
| 3 | 7 | 0 |
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module.
"The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads."
What does this mean?
How can I get rid of it?
|
Python Error The _posixsubprocess module is not being used
| 0.119427 | 0 | 0 | 12,868 |
12,508,243 |
2012-09-20T07:53:00.000
| 0 | 0 | 1 | 1 |
python,multithreading
| 62,709,088 | 5 | false | 0 | 0 |
This can happen if you have more than one version of Python in use; you need to specify the correct version of Python for each program.
For example, I need Python 3.7 for Miniconda, but mendeleydesktop has trouble with that version, and also a problem with _posixsubprocess and its location.
So instead of running the program in a Python 3 environment only, I use Python 2.7, and that solves the problem.
Hope it helps.
Cheers,
Flor
| 3 | 7 | 0 |
Hi, I'm running a subprocess with threads through a Python wrapper and I get the following warning when I use the subprocess module.
"The _posixsubprocess module is not being used, Child process reliability may suffer if your program uses threads."
What does this mean?
How can I get rid of it?
|
Python Error The _posixsubprocess module is not being used
| 0 | 0 | 0 | 12,868 |
12,508,796 |
2012-09-20T08:28:00.000
| 5 | 0 | 1 | 0 |
python,logging
| 12,508,963 | 1 | true | 0 | 0 |
The logger classes defined are stored in logging.Logger.manager.loggerDict.
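For example, a loop like this walks every logger created so far and forces a level on it (note that loggerDict can also contain PlaceHolder entries, which are not real loggers):

```python
import logging

def set_all_levels(level):
    # loggerDict maps names to Logger or PlaceHolder objects; only the
    # real Logger instances get their level changed.
    for obj in logging.Logger.manager.loggerDict.values():
        if isinstance(obj, logging.Logger):
            obj.setLevel(level)

logging.getLogger('some.library.module')  # a logger a library might create
set_all_levels(logging.WARNING)
print(logging.getLogger('some.library.module').level)  # 30
```

Since the question asks for a dynamic solution, this can be run at any point after the libraries have been imported, regardless of which loggers they chose to define.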
| 1 | 3 | 0 |
I have several libraries which use the logging library. I want to access all loggers defined in those libraries and set the logging level I want, without touching the code of those libraries. I need to do that dynamically, because I don't know in advance which loggers those libraries will define.
How would you do that?
|
Access all defined loggers with logging library in Python
| 1.2 | 0 | 0 | 79 |
12,509,420 |
2012-09-20T09:07:00.000
| 1 | 0 | 1 | 0 |
python,regex
| 12,509,479 | 4 | false | 0 | 0 |
Something like this? r"(?<=XYZ)((?:ABC)+)". This will match only the occurrences of ABC when they follow XYZ, but will not include XYZ itself.
EDIT
Looks like I misunderstood OP's original question. The easiest way to do this would be to first find the string XYZ. Save the starting position of XYZ. Use the starting position as extra argument to p.finditer(string, startpos). Please note that this will only work with compiled regular expressions, so you need to compile your pattern first.
The pattern you need is simply r"(ABC)".
Alternatively, you can use p.sub(), which will also do the substitution, but for this to work on only a part of the string, you will need to create a substring first. p.sub() does not have a startpos parameter.
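Applied to the example string from the question, the compiled-pattern-with-start-position approach looks like this:

```python
import re

text = ("Some ABC text followed by XYZ followed by "
        "multiple ABC, more ABC, more ABC")
p = re.compile(r"ABC")        # must be compiled to pass a start position
start = text.find("XYZ")      # only search from XYZ onward
matches = [m.group() for m in p.finditer(text, start)]
print(matches)  # ['ABC', 'ABC', 'ABC']
```

The ABC before XYZ is skipped because the search starts at XYZ's index, so only the three occurrences after it are matched.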
| 1 | 2 | 0 |
I am trying to write a regular expression which would match any occurrence of ABC following XYZ anywhere in the string :
Ex. text - "Some ABC text followed by XYZ followed by multiple ABC, more ABC, more ABC"
i.e., the regex should match three ABC's coming after XYZ.
Any clues?
|
Regex for matching any occurrence of ABC following XYZ anywhere in the string
| 0.049958 | 0 | 0 | 1,674 |
12,511,801 |
2012-09-20T11:37:00.000
| 2 | 0 | 1 | 0 |
python
| 12,511,861 | 5 | false | 0 | 0 |
What you are asking is not possible, since you cannot have the same key twice in a Python dictionary.
The closest answer to your question is:
D3 = dict( D1.items() + D2.items() )
Note: if you have different values for the same key, the ones from D2 will be the ones in D3.
Example:
D1 = { 'a':1, 'b':2 }
D2 = { 'c':3, 'b':3}
Then, D3 will be:
D3= { 'a':1, 'b':3, 'c':3 }
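One caveat for later Python versions: dict.items() views cannot be added with + in Python 3, so the same merge is written with unpacking instead (D2's values still win on duplicate keys):

```python
D1 = {'a': 1, 'b': 2, 'c': 3}
D2 = {'b': 2, 'c': 3, 'd': 1}
D3 = {**D1, **D2}  # Python 3 replacement for dict(D1.items() + D2.items())
print(D3)  # {'a': 1, 'b': 2, 'c': 3, 'd': 1}
```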
| 1 | 19 | 0 |
I have two dictionaries as follows:
D1={'a':1,'b':2,'c':3}
and
D2={'b':2,'c':3,'d':1}
I want to merge these two dictionaries and the result should be as follows:
D3={'a':1,'b':2,'c':3,'b':2,'c':3,'d':1}
how can I achieve this in python?
|
Merging of two dictionaries
| 0.07983 | 0 | 0 | 3,838 |
12,513,403 |
2012-09-20T13:13:00.000
| 10 | 0 | 1 | 0 |
python,regex,string
| 12,513,455 | 2 | true | 0 | 0 |
find only matches an exact sequence of characters, while a regular expression matches a pattern. Naturally only looking an for exact sequence is faster (even if your regex pattern is also an exact sequence, there is still some overhead involved).
As a consequence of the above, you should use find if you know the exact sequence, and a regular expression (or something else) when you don't. The exact approach you should use really depends on the complexity of the problem you face.
As a side note, the python re module provides a compile method that allows you to pre-compile a regex if you are going to be using it repeatedly. This can substantially improve speed if you are using the same pattern many times.
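Applied to the question's "From"-field use case, a hypothetical cleanup helper that precompiles once and reuses the pattern per message might look like:

```python
import re

# Compiled once at module level, reused for every message.
# The pattern and function are illustrative, not a full RFC 2822 parser.
addr_re = re.compile(r'<([^>]+)>')

def extract_address(from_field):
    m = addr_re.search(from_field)
    return m.group(1) if m else from_field.strip()

print(extract_address('Alice Example <alice@example.com>'))
# alice@example.com
print(extract_address('  bob@example.com  '))
# bob@example.com
```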
| 2 | 1 | 0 |
I am learning Python, and need to format "From" fields received from IMAP. I tried it using str.find() and str.strip(), and also using regex. With find(), etc. my function runs quite a bit faster than with re (I timed it). So, when is it better to use re? Does anybody have any good links/articles related to that? Python documentation obviously doesn't mention that...
|
Python: regex vs find(), strip()
| 1.2 | 0 | 0 | 2,085 |
12,513,403 |
2012-09-20T13:13:00.000
| 3 | 0 | 1 | 0 |
python,regex,string
| 12,513,461 | 2 | false | 0 | 0 |
If you intend to do something complex, you should use re; it is more scalable than string methods.
String methods are good for simple things that aren't worth bothering with regular expressions.
So it depends on what you are doing, but usually you should use regular expressions, since they are more powerful.
| 2 | 1 | 0 |
I am learning Python, and need to format "From" fields received from IMAP. I tried it using str.find() and str.strip(), and also using regex. With find(), etc. my function runs quite a bit faster than with re (I timed it). So, when is it better to use re? Does anybody have any good links/articles related to that? Python documentation obviously doesn't mention that...
|
Python: regex vs find(), strip()
| 0.291313 | 0 | 0 | 2,085 |
12,519,074 |
2012-09-20T18:56:00.000
| 1 | 0 | 0 | 0 |
python,screen-scraping,scraper
| 12,529,766 | 3 | false | 1 | 0 |
Finding the URL of the AJAX source will be the best option, but it can be cumbersome for certain sites. Alternatively you could use a headless browser like QWebKit from PyQt and send keyboard events while reading the data from the DOM tree. QWebKit has a nice and simple API.
| 1 | 31 | 0 |
I have written many scrapers but I am not really sure how to handle infinite scrollers. These days most websites (Facebook, Pinterest, etc.) have infinite scrollers.
|
scrape websites with infinite scrolling
| 0.066568 | 0 | 1 | 29,452 |
12,519,610 |
2012-09-20T19:29:00.000
| 0 | 0 | 1 | 1 |
python,file,io,exe,.app
| 12,520,441 | 2 | false | 0 | 0 |
Use the __file__ variable. This will give you the filename of your module. Using the functions in os.path you can determine the full path of the parent directory of your module. The os.path module is in the standard python documentation, you should be able to find that.
Then you can combine the module path with your filename to open it, using os.path.join.
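Putting both steps together (the data.txt name comes from the question; everything else is standard library):

```python
import os

# Resolve data.txt next to this script/module rather than relative to
# whatever working directory the bundled .app/.exe happens to be
# launched from (e.g. '/' when double-clicked on macOS).
here = os.path.dirname(os.path.abspath(__file__))
data_path = os.path.join(here, 'data.txt')
out_path = os.path.join(here, 'data.json')
print(data_path)
```

open(data_path) and open(out_path, 'w') then work the same whether the script is run from a terminal or as a bundled app.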
| 1 | 2 | 0 |
I have a Python script which reads a text file in its current working directory called "data.txt", then converts the data inside it into a JSON format for another, separate program to handle.
The problem I'm having is that I'm not sure how to read the .txt file (and write a new one) in the same directory as the .app when the Python script is all bundled up. The current method I'm using doesn't work, apparently because of something to do with the fact that it's run from the terminal instead of executed as a .app.
Any help is appreciated!
|
Python app which reads and writes into its current working directory as a .app/exe
| 0 | 0 | 0 | 1,641 |
12,521,189 |
2012-09-20T21:29:00.000
| 11 | 0 | 1 | 0 |
python,arrays
| 12,521,210 | 3 | false | 0 | 0 |
It just means a one element list containing just a 0. Multiplying by memloadsize gives you a list of memloadsize zeros.
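A few examples of the idiom, including the one pitfall to watch for:

```python
memloadsize = 5
mem = [0] * memloadsize        # [0, 0, 0, 0, 0]

# The same idiom repeats any element:
row = ['x'] * 3                # ['x', 'x', 'x']

# Caution: with a *mutable* element, every slot is the same object:
grid = [[]] * 3
grid[0].append(1)              # all three slots now show [1]
print(mem, row, grid)
```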
| 1 | 11 | 0 |
I'm familiar with programming but new to python:
mem = [0] * memloadsize
what does the '[0]' represent?
Is it a built-in array?
|
What does '[0]' mean in Python?
| 1 | 0 | 0 | 40,845 |
12,522,080 |
2012-09-20T22:53:00.000
| 0 | 1 | 1 | 0 |
python,logging
| 12,523,536 | 2 | false | 1 | 0 |
I'm assuming you mean import logging imports a different logging module? In this case, there are many special attributes of modules/packages that can help, such as __path__. Printing logging.__path__ should tell you where python is importing it from.
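For example:

```python
import logging

# logging is a package, so __path__ lists the directories it was
# imported from, and __file__ names the module that defines it.
pkg_dirs = list(logging.__path__)
pkg_file = logging.__file__
print(pkg_dirs, pkg_file)
```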
| 1 | 4 | 0 |
How would I be able to find which module is overriding the Python root logger?
My Django project imports from quite a few external packages, and I have tried searching for all instances of logging.basicConfig and logging.root setups, however most of them are in tests and should not be overriding it unless specifically called.
Django's logging config does not specify a root logger.
|
Finding out which module is setting the root logger
| 0 | 0 | 0 | 205 |
12,522,136 |
2012-09-20T23:00:00.000
| 2 | 0 | 0 | 0 |
python,mysql,web
| 12,522,631 | 2 | false | 1 | 0 |
If you have MySQL installed on your machine along with Python, get the MySQLdb library for Python and have fun with it. With that combination you can do almost any data operation. If you want your website to go live (and do not wish to go through web frameworks), just look for a hosting plan that gives you access to a server with Python installed.
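MySQLdb follows the same DB-API 2.0 interface as the standard library's sqlite3 module, so the pattern looks like this (shown with sqlite3 so it runs without a MySQL server; the table and values are made up, and with MySQLdb you would call MySQLdb.connect(...) and use %s placeholders instead):

```python
import sqlite3  # MySQLdb exposes the same DB-API 2.0 interface

conn = sqlite3.connect(':memory:')  # MySQLdb: MySQLdb.connect(host=..., user=..., passwd=..., db=...)
cur = conn.cursor()
cur.execute('CREATE TABLE guests (name TEXT)')
cur.execute('INSERT INTO guests VALUES (?)', ('Ada',))  # MySQLdb uses %s placeholders
conn.commit()
cur.execute('SELECT name FROM guests')
rows = cur.fetchall()
print(rows)  # [('Ada',)]
```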
| 1 | 0 | 0 |
I currently simply have a local website on my Mac. I can view the webpage's HTMl and CSS and run the javascript functions in browser on my computer, but the next step I want to take is incorporating python scripts for accessing a MySQL database and returning results.
I am clearly new to this, and would love some guidance. Right now, on my computer, I have MySQL installed and I can run it in the terminal just fine. What else do I need as far as database and server equipment – if anything – to get some dynamic website running locally? My current, albeit incredibly limited, understanding is that I have a MySQL database stored on my machine that can be accessed through a Python script – also on my machine – and a link to this script in the HTML file. Is this even right, or do you recommend certain tutorials to fill in the gaps or teach me from the ground up?
I am sorry I am asking a lot; the few tutorials I have found have seemed to cover what I am hoping to do. Many thanks in advance.
|
What do I need to successfully run a website in my browser that executes Python scripts?
| 0.197375 | 0 | 0 | 132 |
12,522,160 |
2012-09-20T23:03:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,app-engine-ndb
| 12,522,188 | 3 | true | 1 | 0 |
You could add another field for the date. A ComputedProperty would probably make sense for that.
Or you could fetch from the start of the day, in batches, and stop fetching once you reach the end of your day. I'd imagine you could come up with a sensible default based on how many appointments you'd typically have in one day to keep this reasonably efficient.
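A sketch of the second approach, with the day cutoff applied in plain Python (the appointment dicts stand in for your Appointment entities):

```python
from datetime import datetime, timedelta

def appointments_for_day(appointments, day):
    """Keep only appointments starting on the given day.
    Against the datastore you would query start_time >= day_start
    (the single allowed inequality) and stop fetching once results
    pass day_end; here the cutoff is applied in plain Python."""
    day_start = datetime(day.year, day.month, day.day)
    day_end = day_start + timedelta(days=1)
    return [a for a in appointments
            if day_start <= a['start_time'] < day_end]
```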
| 2 | 1 | 0 |
I have an application on app engine, and this application has an entity called Appointment. An Appointment has a start_time and a end_time. I want to fetch appointments based on the time, so I want to get all appointments for a given day, for example.
Since app engine doesn't support inequality query based on two fields, what can I do?
|
How to fetch time-based entity from datastore in app engine
| 1.2 | 0 | 0 | 391 |
12,522,160 |
2012-09-20T23:03:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,app-engine-ndb
| 12,535,660 | 3 | false | 1 | 0 |
The biggest problem is that a "date" means a different start and end "time" depending on the time zone of a user, and you cannot force all of your users to stick to one time zone all their lives, not to mention DST changes twice a year. So you cannot simply create a new property in your entity to store a "date" object, as was suggested. (This is why GAE does not have a "date" type property.)
I built a scheduling app. When a user specifies the desired range for events (it can be a day, a week or a month), I retrieve all events that have an end time larger than the start time of the requested range, and then I loop through them until I find the last event which has a start time smaller than the end time of the range.
You can specify how many entities you want to fetch in one request depending on the requested range (more for a month than for a day). For example, if a given calendar is likely to have 5-10 events per day, it's enough to fetch the first 10 entities (you can always fetch more if the condition is not met). For a month you can set a batch size of 100, for example. This is a micro-optimization, however.
| 2 | 1 | 0 |
I have an application on app engine, and this application has an entity called Appointment. An Appointment has a start_time and a end_time. I want to fetch appointments based on the time, so I want to get all appointments for a given day, for example.
Since app engine doesn't support inequality query based on two fields, what can I do?
|
How to fetch time-based entity from datastore in app engine
| 0.066568 | 0 | 0 | 391 |
12,522,844 |
2012-09-21T00:38:00.000
| 0 | 1 | 0 | 0 |
python-3.x
| 12,522,892 | 3 | false | 0 | 0 |
Maybe you could create a temporary directory using tempfile.mkdtemp and generate the filenames manually, such as file1, file2, ..., fileN. That way you easily avoid "_" characters, and you can just delete the temporary directory after you are finished with it.
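A sketch of that idea (the .png names are made up):

```python
import os
import shutil
import tempfile

tmpdir = tempfile.mkdtemp()            # private scratch directory

# Generate underscore-free names ourselves: file1.png, file2.png, ...
paths = []
for i in range(1, 4):
    path = os.path.join(tmpdir, 'file%d.png' % i)
    open(path, 'w').close()            # stand-in for writing a graphic
    paths.append(path)

# ... run asciidoc over the report referencing these files ...

shutil.rmtree(tmpdir)                  # one call cleans everything up
```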
| 1 | 3 | 0 |
I'm writing a python3 program that generates a text file that is post-processed with asciidoc for the final report in html and pdf.
The python program generates thousands files with graphics to be included in the final report. The filenames for the files are generated with tempfile.NamedTemporaryFile
The problem it that the character set used by tempfile is defined as:
characters = "abcdefghijklmnopqrstuvwxyz0123456789_"
then I end with some files with names like "_6456_" and asciidoc interprets the "_" as formatting and inserts some html that breaks the report.
I need to either find a way to "escape" the filenames in asciidoc or control the characters in the temporary file.
My current solution is to rename the temporary file after I close it to replace the "_" with some other character (not in the list of characters used by tempfile to avoid a collision) but i have the feeling that there is a better way to do it.
I will appreciate any ideas. I'm not very proficient with Python yet; I think overloading _RandomNameSequence in tempfile will work, but I'm not sure how to do it.
regards.
|
change character set for tempfile.NamedTemporaryFile
| 0 | 0 | 0 | 491 |
12,527,309 |
2012-09-21T08:57:00.000
| 0 | 1 | 0 | 0 |
python,port,gsm
| 12,527,528 | 1 | false | 0 | 0 |
Sorry, I do not know the Python syntax; this is just an idea to follow. You can use SerialPort.GetPortNames(); to get the list of available ports on your system.
Then send an AT command to each port. Whichever port responds with an OK is the one your modem is connected to.
| 1 | 0 | 0 |
Is there any way to read GSM modem port number programmatically using Python, when I connect mobile to Windows XP machine?
|
Programmatically read GSM modem port number
| 0 | 0 | 0 | 894 |
12,532,465 |
2012-09-21T14:24:00.000
| 1 | 1 | 0 | 0 |
python,django,pydev
| 12,535,217 | 1 | true | 1 | 0 |
The Django test runner can be accessed by creating a new (Run or Debug) configuration for your project using the Django template. Set your main module as manage.py and under the Arguments tab enter "test" (or any other manage.py arguments you need).
| 1 | 0 | 0 |
It is a django project. I am using pydev 2.6. How do I make it to use the Django test runner?
|
pydev with eclipse does create test database when running test
| 1.2 | 0 | 0 | 234 |
12,532,631 |
2012-09-21T14:34:00.000
| 0 | 1 | 0 | 0 |
python,audio,loops,playback
| 31,437,197 | 1 | false | 1 | 0 |
I am using PyAudio for a lot of things and am quite happy with it. I do not know for certain whether it can do this, but I think it can.
One solution is to feed the sound buffer manually and control/set the needed latency. I have done this and it works quite well, provided the latency is high enough.
Another, similar solution is to manage the latency yourself. You can queue up and/or mix your small sound files manually into chunks of, e.g., 0.5 to 1 second. This greatly reduces the real-time requirements and allows you to do some pretty cool transitions between "speeds".
I do not know what sort of latency you can cope with, but if we are talking about train speeds, I guess they do not change instantaneously; hence a latency of 500 ms to several seconds is most likely acceptable.
| 1 | 4 | 0 |
I'm writing an application to simulate train sounds. I got very short (0.2s) audio samples for every speed of the train and I need to be able to loop up to 20 of them (one for every train) without gaps at the same time.
Gapless changing of audio samples (train speed) is also a Must-Have.
I've been searching for possible python-audio-solutions, including
PyAudio
PyMedia
pyaudiere
but I'm not sure which one suits best my use-case, so I do really appreciate any propositions and experiences!
PS: I did already try out gstreamer but since the 1.0 release is not there yet and I cant figure out how to get gapless playback to work with pygi, i thought there might be a better choice. I also tried pygame, but it seems like it's limited to 8 audio channels??
|
Python Audio library for fast, gapless looping of many short audio tracks
| 0 | 0 | 0 | 1,335 |
12,533,013 |
2012-09-21T14:56:00.000
| 0 | 0 | 1 | 0 |
python,search,queue
| 12,535,201 | 2 | false | 0 | 0 |
There is no built-in means of doing that. The point of a queue is that you just do puts and gets on it.
If you need to search a queue, you could get the first element, save a reference to it, put it back into the queue, and then keep getting and putting until you get back to that first element (assuming that you only have one thread putting things into the queue).
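A sketch of that rotation trick using queue.Queue (with the single-thread assumption noted above):

```python
import queue

def find_in_queue(q, predicate):
    """Rotate every message through the queue exactly once and
    return the first one matching predicate (or None).  Each item
    is put back, so the queue's contents and order are preserved.
    Only safe while no other thread gets/puts on the queue."""
    found = None
    for _ in range(q.qsize()):
        item = q.get()
        if found is None and predicate(item):
            found = item
        q.put(item)
    return found
```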
| 1 | 0 | 0 |
How one would search or browse messages stored in a queue.Queue instance?
Is it possible to do so without actually getting each message, checking its content, and putting it back?
|
python: search and browse messages in the queue
| 0 | 0 | 0 | 213 |
12,533,745 |
2012-09-21T15:40:00.000
| 3 | 0 | 0 | 0 |
python,web-services,web-applications,flask,blueprint
| 24,216,134 | 4 | false | 1 | 0 |
I wish the Blueprint object had a register_blueprint function just as the Flask object does. It would automatically place registered blueprints under the current blueprint's URL.
| 1 | 6 | 0 |
I have a series of blueprints I'm using, and I want to be able to bundle them further into a package I can use as seamlessly as possible with any number of other applications. A bundle of blueprints that provides an entire engine to an application. I sort of created my own solution, but it is manual and requires too much effort to be effective. It doesn't seem like an extension, and it is more than one blueprint (several that provide a common functionality).
Is this done? How?
(Application dispatching methods of tying together several programs might work isn't what I'm looking for)
|
blueprint of blueprints (Flask)
| 0.148885 | 0 | 0 | 6,797 |
12,534,813 |
2012-09-21T16:54:00.000
| 1 | 0 | 0 | 0 |
python,r,3d,interpolation,splines
| 12,536,067 | 2 | true | 0 | 0 |
By "compact manifold" do you mean a lower dimensional function like a trajectory or a surface that is embedded in 3d? You have several alternatives for the surface-problem in R depending on how "parametric" or "non-parametric" you want to be. Regression splines of various sorts could be applied within the framework of estimating mean f(x,y) and if these values were "tightly" spaced you may get a relatively accurate and simple summary estimate. There are several non-parametric methods such as found in packages 'locfit', 'akima' and 'mgcv'. (I'm not really sure how I would go about statistically estimating a 1-d manifold in 3-space.)
Edit: But if I did want to see a 3D distribution and get an idea of whether is was a parametric curve or trajectory, I would reach for package:rgl and just plot it in a rotatable 3D frame.
If you are instead trying to form the convex hull (for which the word interpolate is probably the wrong choice), then I know there are 2-d solutions and suspect that searching would find 3-d solutions as well. Constructing the right search strategy will depend on specifics whose absence the 2 comments so far reflect. I'm speculating that modeling lower- and higher-order statistics, like the 1st and 99th percentiles as a function of (x,y), could be attempted if you wanted to use a regression effort to create boundaries. There is a well-supported quantile regression package by Roger Koenker, 'quantreg' (with its rq function).
| 1 | 2 | 1 |
I have data points in x,y,z format. They form a point cloud of a closed manifold. How can I interpolate them using R-Project or Python? (Like polynomial splines)
|
How interpolate 3D coordinates
| 1.2 | 0 | 0 | 1,960 |
12,538,723 |
2012-09-21T22:00:00.000
| 0 | 0 | 1 | 0 |
python
| 12,538,960 | 3 | false | 0 | 0 |
A hash is a way of calculating a unique code for an object; this code is always the same for the same object. hash('test'), for example, is 2314058222102390712, and so with a = 'test', hash(a) is 2314058222102390712 as well.
Internally, a dictionary value is looked up by the key's hash, not by the variable you specify. A list is mutable: a hash for a list, if it were defined, would change whenever the list changes. Therefore Python's design does not hash lists, and lists cannot be used as dictionary keys.
Tuples are immutable, so tuples do have hashes, e.g. hash((1,2)) = 3713081631934410656. One can compare whether a tuple a is equal to the tuple (1,2) by comparing the hashes rather than the values. This is more efficient, as we have to compare only one value instead of two.
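A quick demonstration:

```python
key_tuple = (1, 2)
d = {key_tuple: 'ok'}          # tuples are hashable, so valid keys
print(d[(1, 2)])               # found via the hash: 'ok'

try:
    d[[1, 2]] = 'boom'         # lists are mutable, hence unhashable
except TypeError as err:
    print(err)                 # unhashable type: 'list'

# A tuple *containing* a mutable value is unhashable as well:
try:
    hash((1, [2]))
except TypeError:
    print('a tuple holding a list cannot be hashed either')
```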
| 1 | 2 | 0 |
What does this mean?
The only types of values not acceptable as dictionary keys are values containing lists or dictionaries or other mutable types that are compared by value rather than by object identity, the reason being that the efficient implementation of dictionaries requires a key’s hash value to remain constant.
I think even for tuples, comparison will happen by value.
|
when compare by id is used in Python? Dictionary key comparison?
| 0 | 0 | 0 | 365 |
12,540,395 |
2012-09-22T03:19:00.000
| 1 | 0 | 1 | 0 |
python,string
| 12,540,618 | 1 | true | 0 | 0 |
You mean something like this?
re.sub(r'([aeoiu])', r'ab\1', 'program') -> 'prabograbam'
re.sub(r'([aeoiu])', r'\1b\1', 'dog') -> 'dobog'
or
re.sub(r'([aeoiu]+)', r'ab\1', 'tooth') -> 'tabooth'
re.sub(r'(([aeoiu])[aeoiu]*)', r'\2b\1', 'boat') -> 'boboat'
| 1 | 0 | 0 |
I'm trying to replace vowels/syllables in words with other text..for example:
Word entered: program
Text to replace syllables/vowels with: ab
Result: pr**ab**ogr**ab**am
AND if there is a wildcard (*) entered such as:
Word entered: dog
Text to replace syllables/vowels with: *b
Result: d**ob**og, where * is replaced with the the first vowel in the word, in this case being "o" and then it is replaced after that with the word entered, in this case "b" making "ob" put in before the vowel "o" in dog.
Any ideas? I am trying to accomplish this with for, if, and while loops only.
|
How to Add a Character to an Existing String in Python
| 1.2 | 0 | 0 | 606 |
12,540,435 |
2012-09-22T03:28:00.000
| 3 | 0 | 1 | 1 |
python,python-2.7
| 14,970,445 | 3 | false | 0 | 0 |
For what it's worth, the answer to the fundamental problem here is that the pytz installation process didn't actually extract the ".egg" file (at least, this is what I noticed with a very similar issue.)
You may consider going into the site-packages folder and extracting it yourself.
| 2 | 6 | 0 |
Today is my first day at Python and have been going through problems. One that I was working on was, "Write a short program which extracts the current date and time from the operating system and prints it on screen in the following format: day, month, year, current time in GMT.
Demonstrate that it works."
I was going to use pytz, so used easy_install pytz
This installed it in my site-packages (pytz-2012d-py2.7.egg)
Is this the correct directory for me to be able to import the module?
In my python shell i use from pytz import timezone I get,
"ImportError: No module named pytz"
Any ideas? Thanks in advance
|
Import Error: No module named pytz after using easy_install
| 0.197375 | 0 | 0 | 21,691 |
12,540,435 |
2012-09-22T03:28:00.000
| 3 | 0 | 1 | 1 |
python,python-2.7
| 27,397,683 | 3 | false | 0 | 0 |
It matters whether you are using Python 2 or Python 3 - each has a separate easy_install package!
In debian there are:
python-pip
python3-pip
and then
easy_install
easy_install3
If you use wrong version of easy_install you will be updating wrong libraries.
| 2 | 6 | 0 |
Today is my first day at Python and have been going through problems. One that I was working on was, "Write a short program which extracts the current date and time from the operating system and prints it on screen in the following format: day, month, year, current time in GMT.
Demonstrate that it works."
I was going to use pytz, so used easy_install pytz
This installed it in my site-packages (pytz-2012d-py2.7.egg)
Is this the correct directory for me to be able to import the module?
In my python shell i use from pytz import timezone I get,
"ImportError: No module named pytz"
Any ideas? Thanks in advance
|
Import Error: No module named pytz after using easy_install
| 0.197375 | 0 | 0 | 21,691 |
12,542,111 |
2012-09-22T08:22:00.000
| 1 | 0 | 1 | 0 |
python
| 12,543,014 | 4 | false | 0 | 0 |
For some reason, many Python programmers combine the class and its implementation in the same file; I like to separate them, unless it is absolutely necessary to do so.
That's easy. Just create the implementation file, import the module in which the class is defined, and you can call it directly.
So, if the class - ShowMeTheMoney - is defined inside class1_file.py, and the file structure is:
/project
/classes
/__init__.py
/class1_file.py
/class2_file.py
/class1_imp_.py
(BTW, it is best to keep the file and class names different; if they are the same, it is easy to confuse the module with the class when importing.)
You can implement it in the class1_imp_.py using:
# class1_imp_.py
import classes.class1_file as any_name
class1_obj = any_name.ShowMeTheMoney()
#continue the remaining processes
Hope this helps.
| 1 | 7 | 0 |
I am a Python beginner and my main language is C++. You know, in C++ it is very common to separate the definition and implementation of a class. (How) Does Python do that? If not, how do I get a clean profile of the interfaces of a class?
|
Separating class definition and implementation in python
| 0.049958 | 0 | 0 | 3,065 |
12,545,418 |
2012-09-22T16:07:00.000
| 3 | 0 | 0 | 0 |
python,django,apache,webserver,openshift
| 12,546,511 | 1 | false | 1 | 0 |
How about this:
Use nginx to serve static files
Keep the files in some kind of predefined directory structure, and build a Django app as the dashboard with the filesystem as the backend. That is, moving, adding or deleting files from the dashboard changes them on the filesystem, and nginx doesn't have to be aware of this dashboard.
Do not use dynamic routing. Just layout and maintain the proper directory structure using the databoard.
Optionally, keep the directory structure and file metadata in some database server for faster searches and manipulation.
This should result in a very low overhead static file server.
| 1 | 0 | 0 |
I am thinking to design my own web app to serve static files. I just don't want to use Amazon services..
So, can anyone tell me how to start the project? I am thinking to develop in Python - Django on Openshift (Redhat's).
This is how ideas are going through in my mind:
A dashboard helps me to add/ delete/ manage static files
To setup API kind of thing (end point: JSON objects) so that I can use this project to serve my web apps!
As OpenShift uses Apache, I am thinking to dynamically edit .htaccess and serve the files, but I'm not sure whether that would be possible or not
Or, I can use Django's urls.py to serve the files, but I don't think that is what Django is actually made for.
Any ideas and suggestion?
|
A web application to serve static files
| 0.53705 | 0 | 0 | 402 |
12,550,810 |
2012-09-23T08:27:00.000
| 5 | 0 | 1 | 0 |
python,flask,python-babel,flask-babel
| 12,559,877 | 1 | false | 1 | 0 |
Babel does support Japanese and indeed, the error comes because 'jp' is not a valid locale.
Babel uses language codes from CLDR (which I believe are the standardized language codes from ISO et al). In your case the confusion comes from the language/territory split ('de' for German language, 'AT' for Austrian territory, 'DE' for Germany, ...).
The language code for Japanese is 'ja', territory is 'JP'. So you should use just 'ja' or 'ja_JP'.
| 1 | 0 | 0 |
I am having issues with Flask-babel. I cant create a translation for Japanese.
pybabel: error: unknown locale 'jp'
Is this a Flask-Babel issue?
That is the same error as when a language does not exist. But German works. So... does Babel not support Japanese?
Is there an alternative to Babel that supports a major language like Japanese?
|
Flask-Babel -0 pybabel: error: unknown locale 'jp'
| 0.761594 | 0 | 0 | 1,091 |
12,552,890 |
2012-09-23T13:51:00.000
| 0 | 0 | 0 | 1 |
python,hadoop,hadoop-streaming
| 12,556,901 | 1 | true | 0 | 0 |
After reviewing sent_tokenize's source code, it looks like the nltk.sent_tokenize AND the nltk.tokenize.sent_tokenize methods/functions rely on a pickle file (one used to do punkt tokenization) to operate.
Since this is Hadoop-streaming, you'd have to figure out where/how to place that pickle file into the zip'd code module that is added into the hadoop job's jar.
Bottom line? I recommend using the RegexpTokenizer class to do sentence and word level tokenization.
| 1 | 0 | 0 |
I've seen a technique (on stackoverflow) for executing a hadoop streaming job using zip files to store referenced python modules.
I'm having some errors during the mapping phase of my job's execution. I'm fairly certain it's related to the zip'd module loading.
To debug the script, I have run my data set through sys.stdin/sys.stdout using command line pipes into my mapper and reducer so something like this:
head inputdatafile.txt | ./mapper.py | sort -k1,1 | ./reducer.py
the results look great.
When I run this through hadoop though, I start hitting some problems. ie: the mapper and reducer fail and the entire hadoop job fails completely.
My zip'd module file contains *.pyc files - is that going to impact this thing?
Also where can I find the errors generated during the map/reduction process using hadoop streaming?
I've used the -file command line argument to tell hadoop where the zip'd module is located and where my mapper and reducer scripts are located.
i'm not doing any crazy configuration options to increase the number of mappers and reducers used in the job.
any help would be greatly appreciated! thanks!
|
hadoop streaming with python modules
| 1.2 | 0 | 0 | 1,087 |
12,553,197 |
2012-09-23T14:36:00.000
| 0 | 0 | 0 | 0 |
python,perl,language-agnostic
| 12,553,211 | 2 | false | 1 | 0 |
If you know what each field is supposed to be, perhaps you could write a regular expression which would match that field type only (ignoring tildes) and capture the match, then replace the original string in the file?
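For example, if each record should be text ~ integer ~ float, a greedy leading group lets the text field absorb stray tildes, because the numeric tail pins down the genuine last two separators (this three-field layout is an assumption for illustration):

```python
import re

# Expected layout: text ~ integer ~ float (an assumption).
# The greedy (.*) swallows embedded tildes; the numeric fields
# anchor the real last two separators.
RECORD = re.compile(r'^(.*)~(\d+)~(\d+\.\d+)$')

line = 'free~style text~42~3.14'
m = RECORD.match(line)
text, count, score = m.group(1), int(m.group(2)), float(m.group(3))
print(text, count, score)  # free~style text 42 3.14
```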
| 1 | 1 | 0 |
I am provided with text files containing data that I need to load into a postgres database.
The files are structured in records (one per line) with fields separated by a tilde (~). Unfortunately it happens that every now and then a field content will include a tilde.
As the files are not tidy CSV, and the tilde's not escaped, this results in records containing too many fields, which cause the database to throw an exception and stop loading.
I know what the record should look like (text, integer, float fields).
Does anyone have suggestions on how to fix the overlong records? I code in Perl, but I am happy with suggestions in Python, JavaScript, or plain English.
|
Messed up records - separator inside field content
| 0 | 1 | 0 | 111 |
12,556,309 |
2012-09-23T21:14:00.000
| 4 | 0 | 0 | 1 |
python,asynchronous,task,celery,aggregation
| 12,705,857 | 2 | false | 1 | 0 |
An easy way to accomplish this is to write all the actions a task should take to persistent storage (e.g. a database) and let a periodic job do the actual processing in one batch (with a single connection).
Note: make sure you have some locking in place to prevent the queue from being processed twice!
There is a nice example on how to do something similar at kombu level (http://ask.github.com/celery/tutorials/clickcounter.html)
Personally I like the way sentry does something like this to batch increments at db level (sentry.buffers module)
| 1 | 13 | 0 |
I'm planning to use Celery to handle sending push notifications and emails triggered by events from my primary server.
These tasks require opening a connection to an external server (GCM, APS, email server, etc). They can be processed one at a time, or handled in bulk with a single connection for much better performance.
Often there will be several instances of these tasks triggered separately in a short period of time. For example, in the space of a minute, there might be several dozen push notifications that need to go out to different users with different messages.
What's the best way of handling this in Celery? It seems like the naïve way is to simply have a different task for each message, but that requires opening a connection for each instance.
I was hoping there would be some sort of task aggregator allowing me to process e.g. 'all outstanding push notification tasks'.
Does such a thing exist? Is there a better way to go about it, for example like appending to an active task group?
Am I missing something?
Robert
|
Celery Task Grouping/Aggregation
| 0.379949 | 0 | 0 | 3,185 |
12,556,629 |
2012-09-23T22:00:00.000
| 1 | 0 | 1 | 0 |
python
| 12,556,648 | 3 | false | 0 | 0 |
Creating new local variables will not overwrite the local variables from previous calls. Every time you call the function, you get new local variables. If the function calls itself recursively, each call will get its own local variables. It's difficult to tell from your explanation if this is the answer to your question. You really need to post some code.
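A common pattern for the accumulation the question describes is to pass the containers down as parameters, so each recursive call reuses them instead of re-declaring them; a sketch (the tree shape here is made up):

```python
def collect(node, my_list=None, my_dict=None):
    """Walk a nested structure, accumulating into one shared list
    and dict.  Fresh containers are created only on the outermost
    call; never use mutable default arguments directly, since they
    persist between calls."""
    if my_list is None:
        my_list, my_dict = [], {}
    my_list.append(node['name'])
    my_dict[node['name']] = node.get('value')
    for child in node.get('children', []):
        collect(child, my_list, my_dict)   # reuse the same containers
    return my_list, my_dict
```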
| 1 | 1 | 0 |
I've read that you should never test for the existence of a variable; if your program has to check whether a variable exists, you don't "know your variables" and it is a design error. However, I have a recursive function that adds values to a dictionary and a list during each call to the function. In order to avoid declaring global variables, I am trying to make the variables local to the function. But in order to do that, I have to declare myList and myDict as [] and {} in the beginning of the function. Of course, that erases the changes I made to the dict and list in the previous recursive calls, which I don't want. I thought about instating a try ... catch at the beginning, checking for the existence of the variables, and only declaring them as {} and [] if they do not yet exist, but I've read that is bad design. Is there a better way to approach this? I apologize for not attaching any actual code, but I'm still at the beginning stages of planning this function, so there is nothing much to attach.
|
Declaring empty variables at the beginning of a recursive function
| 0.066568 | 0 | 0 | 1,270 |
12,557,562 |
2012-09-24T00:47:00.000
| 0 | 0 | 1 | 0 |
python,algorithm,optimization,pygame,key-bindings
| 12,557,972 | 1 | true | 0 | 1 |
It is rather unlikely that dictionary search for response to a user event would cause any noticeable delay on the program. There is something going wrong in your code.
Btw, dict and set lookups in Python are O(1) on average - but 105 keys, or even, counting modifiers, about 1000 different keybindings, could be searched linearly (that is, even if the lookup were O(N)) without noticeable delay, even on a 5-year-old (desktop) CPU.
So, just post some of your code if you want a solution for your problem. (Reading the comments, I've noticed you found something else that seems to be responsible already.)
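A sketch of the dispatch pattern described in the question, to show that the lookup itself is cheap (the handlers here are placeholders):

```python
import string

# Build the binding table once, at startup; rebinding a key is just
# assigning a new function to bindings[keycode].
bindings = {ord(c): (lambda ch=c: 'drew ' + ch)  # placeholder handlers
            for c in string.ascii_lowercase}

def handle_keydown(keycode):
    handler = bindings.get(keycode)   # average O(1) dict lookup
    return handler() if handler else None
```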
| 1 | 0 | 0 |
I am making a drawing program in python with pygame right now. The interface is supposed to be vimesque, allowing the user to control most things with key presses and entering commands. I want to allow live binding of the buttons; the user should be able to change which keycode corresponds to which function. In my current structure, all bindings are stored in a dictionary of functions to keycodes, 'bindingsDict.' Whenever the main loop receives a KEY_DOWN event, I execute:
bindingDict[keyCode]()
Where keyCode is stored as an integer.
This works, but it seems to be taking a lot of time and I am having trouble thinking of ways I could optimize.
Does anyone know the big O run time of dict look ups? I assumed because it hashed it would run in ln(n) but there's a huge difference in performance between this solution and just writing a list of if statements in the mainloop (which does not allow for dynamic binding).
|
Dynamic key binding in python
| 1.2 | 0 | 0 | 280 |
12,560,963 |
2012-09-24T07:56:00.000
| 1 | 0 | 0 | 0 |
python,flask,jinja2
| 12,561,340 | 2 | false | 1 | 0 |
If you allow users to upload arbitrary jinja2 templates, you allow them arbitrary html and javascript and thus become a web hosting company, with all the consequences.
You also have to be careful with the variables you give them access to, so that private user data (if any) is kept separate between users.
| 1 | 6 | 0 |
I am writing a web application where users can create their own designs. The easiest way to do this would be by allowing them to upload their own Jinja 2 templates. However, I’m concerned about the security.
What are things I should be cautious for? Should I set a custom Jinja 2 environment for this?
|
User-generated Jinja 2 templates with Flask
| 0.099668 | 0 | 0 | 576 |
12,565,351 |
2012-09-24T12:49:00.000
| 0 | 0 | 1 | 0 |
numpy,python-2.7,scipy
| 12,616,286 | 1 | true | 0 | 0 |
EPD distribution saved the day.
| 1 | 0 | 1 |
I've installed new instance of python-2.7.2 with brew. Installed numpy from pip, then from sources. I keep getting
numpy.distutils.npy_pkg_config.PkgNotFound: Could not find file(s) ['/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/lib/npy-pkg-config/npymath.ini']
when I try to install scipy, either from sources or by pip, and it drives me mad.
Scipy's binary installer tells me, that python 2.7 is required and that I don't have it (I have 2 versions installed).
|
Trouble installing scipy on Mac OSX Lion
| 1.2 | 0 | 0 | 253 |
12,566,049 |
2012-09-24T13:27:00.000
| 1 | 0 | 0 | 0 |
javascript,python,cgi
| 12,566,448 | 2 | false | 1 | 0 |
As Lajos Arpad wrote, you should send a notification to the server when the page is unloaded (an XmlHTTPRequest in the onbeforeunload event, for example); but beware that it will not be bulletproof - for example, if the user resets the machine, or kills the browser process ungracefully (unix kill -9, for example), the browser will cease to exist and will not send any notification to the server. Maybe it would be best to also introduce some heartbeat: the webpage sends an XHR every 10 seconds, and if the server doesn't see any heartbeat for 5 minutes, it's likely that the user is gone and the file should be deleted too.
| 2 | 0 | 0 |
Using a python cgi script and I have a form with both a Submit and Cancel button.
When a user tries to leave the web page by clicking Cancel, closing the window or hitting the back button, I want to delete a file that exists on the server. The file name is dependent on the values in the form.
When the user clicks the Submit button, no file will be deleted. The form action is to take the user to another python cgi script.
I can catch the user leaving the page with javascript onbeforeunload event, but I can't delete the files in javascript. How do I delete the files?
|
Delete Server File When Exiting from Web Page
| 0.099668 | 0 | 0 | 457 |
12,566,049 |
2012-09-24T13:27:00.000
| 0 | 0 | 0 | 0 |
javascript,python,cgi
| 12,566,229 | 2 | true | 1 | 0 |
You can't access server files using Javascript because of security reasons. However, you can make a postback in the onbeforeunload event and on the server-side event (which handles this postback) you can delete the server file(s) to be deleted.
| 2 | 0 | 0 |
Using a python cgi script and I have a form with both a Submit and Cancel button.
When a user tries to leave the web page by clicking Cancel, closing the window or hitting the back button, I want to delete a file that exists on the server. The file name is dependent on the values in the form.
When the user clicks the Submit button, no file will be deleted. The form action is to take the user to another python cgi script.
I can catch the user leaving the page with javascript onbeforeunload event, but I can't delete the files in javascript. How do I delete the files?
|
Delete Server File When Exiting from Web Page
| 1.2 | 0 | 0 | 457 |
12,568,725 |
2012-09-24T16:01:00.000
| 1 | 0 | 0 | 0 |
python,mysql,django
| 12,568,898 | 1 | true | 1 | 0 |
It depends a lot on what you plan on doing with the data. However, thinking long term you're going to have much more flexibility with breaking out the friends into distinct units than just storing them all together.
If the friend creation process is taking too long, you should consider off-loading it to a separate process that can finish it in the background, using something like Celery.
| 1 | 0 | 0 |
right now I think i'm stuck between two main choices for grabbing a user's friends list.
The first is a direct connection with facebook, and the pulling the friends list out and creating a list of friend models with the json. (Takes quite a while whenever I try it out, like 2 seconds?)
The other is whenever a user logs in, the program will store his or her entire friends list inside a big friends model (note that even if two people have the same exact friends, two sets will still be stored, all friend models will have an FK back to the person who has these friends on their list).
Whenever a user needs his or her friends list, I just use django's filter to grab them.
Right now this is pretty fast but that's because it hasn't been tested with many people yet.
Based off of your guys experience, which of these two decisions would make the most sense long term?
Thank you
|
Big mysql query versus an http post connection in terms of long term speed
| 1.2 | 0 | 0 | 34 |
12,569,076 |
2012-09-24T16:24:00.000
| 5 | 0 | 1 | 0 |
python
| 14,501,078 | 1 | false | 0 | 0 |
There is a Python icon on the WingIDE interface; click it to change which Python version is used.
| 1 | 2 | 0 |
Suppose I have both Python 2 and Python 3 installed.
In WingIDE 101, how do I choose whether I am using Python 2 or Python 3?
For example, I was currently working with python 3 and now I need to use the image module which is only supported in python 2. How do I change it?
Thanks.
|
WingIDE -- Python 2 and Python 3
| 0.761594 | 0 | 0 | 609 |
12,570,465 |
2012-09-24T18:09:00.000
| 0 | 0 | 0 | 0 |
python,amazon-s3,amazon
| 56,126,467 | 12 | false | 1 | 0 |
Given that encryption at rest is a much desired data standard now, smart_open does not support this afaik
| 2 | 38 | 0 |
Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks
|
How to upload a file to S3 without creating a temporary local file
| 0 | 1 | 1 | 52,339 |
12,570,465 |
2012-09-24T18:09:00.000
| 2 | 0 | 0 | 0 |
python,amazon-s3,amazon
| 12,570,568 | 12 | false | 1 | 0 |
I assume you're using boto. boto's Bucket.set_contents_from_file() will accept a StringIO object, and any code you have written to write data to a file should be easily adaptable to write to a StringIO object. Or if you generate a string, you can use set_contents_from_string().
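A minimal sketch of that pattern; the bucket and key names are hypothetical, and the actual boto calls are shown only as comments since they need live credentials:

```python
from io import StringIO

# Build the file contents entirely in memory -- no temporary file needed.
buf = StringIO()
buf.write("generated,data\n")
buf.write("1,2\n")
buf.seek(0)  # rewind so the uploader reads from the start

# With boto (credentials assumed to be configured), the upload would be:
#   import boto
#   conn = boto.connect_s3()
#   bucket = conn.get_bucket('my-bucket')       # hypothetical bucket name
#   key = bucket.new_key('reports/out.csv')     # hypothetical key name
#   key.set_contents_from_file(buf)
# or, if you already have the data as one string:
#   key.set_contents_from_string(buf.getvalue())

print(buf.getvalue())
```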
| 2 | 38 | 0 |
Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks
|
How to upload a file to S3 without creating a temporary local file
| 0.033321 | 1 | 1 | 52,339 |
12,573,915 |
2012-09-24T22:40:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,caching,distributed-caching,image-caching
| 12,585,067 | 3 | false | 1 | 0 |
Blobstore is fine.
Just make sure you set the HTTP cache headers in your url handler. This allows your files to be either cached by the browser (in which case you pay nothing) or App Engine's Edge Cache, where you'll pay for bandwidth but not blobstore accesses.
Be very careful with edge caching though. If you set an overly long expiry, users will never see an updated version. Often the solution to this is to change the url when you change the version.
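A rough sketch of the idea; the handler class and blob-serving call are shown only as comments (on App Engine they would come from `blobstore_handlers`), and the small helper just builds the header value:

```python
def cache_headers(max_age, public=True):
    """Build a Cache-Control header for a cacheable response."""
    scope = "public" if public else "private"
    return {"Cache-Control": "%s, max-age=%d" % (scope, max_age)}

# In a webapp2 handler you would apply it roughly like this
# (class name and route are illustrative):
#   class ServeImage(blobstore_handlers.BlobstoreDownloadHandler):
#       def get(self, blob_key):
#           self.response.headers.update(cache_headers(86400))  # 1 day
#           self.send_blob(blob_key)

print(cache_headers(86400))
```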
| 1 | 0 | 0 |
I am running a website on google app engine written in python with jinja2. I have gotten memcached to work for most of my content from the database and I am fuzzy on how I can increase the efficiency of images served from the blobstore. I don't think it will be much different on GAE than any other framework but I wanted to mention it just in case.
Anyway are there any recommended methods for caching images or preventing them from eating up my read and write quotas?
|
Options for Image Caching
| 0.066568 | 0 | 0 | 717 |
12,575,987 |
2012-09-25T03:56:00.000
| 0 | 0 | 1 | 0 |
python,alignment
| 12,576,250 | 2 | false | 0 | 0 |
Try reading in the FASTA file and storing each sequence as a string. You can organize the sequences in a dictionary, using the text in the '>' header line as the key.
If a gene is the same length as the reference sequence (no insertions or deletions), [i for i, a in enumerate(gene) if a != reference[i]] will return a list of mutation positions; its length is the number of mutations. If a mutation involves missing or added residues, it will be much more complicated.
| 1 | 1 | 0 |
I have a FASTA file with an alignment of multiple gene samples. I am trying to develop a program that can count the number of mutations for each sample. What's the best way to do this? Store each gene sample in a dictionary and compare them somehow?
|
Using python to count nucleotide mutations in an alignment
| 0 | 0 | 0 | 1,907 |
12,576,724 |
2012-09-25T05:34:00.000
| 4 | 0 | 1 | 0 |
python,unit-testing,python-3.x,arguments,software-design
| 12,576,771 | 2 | true | 0 | 0 |
There are numerous functions in the python standard library which accept both -- strings which are filenames or open file objects (I assume that's what you're referring to as a "stream"). It's really not hard to create a decorator that you can use to make your functions accept either one.
One serious drawback to using "streams" is that you pass one to your function and then your function reads from it, effectively changing its state. Depending on your program, recovering that state could be messy if it's necessary (e.g. you might need to litter your code with f.tell() and then f.seek()).
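As an illustration of that filename-or-stream pattern (the decorator name is made up), a small decorator can hide the difference, so unit tests can pass an in-memory stream instead of creating a temporary file:

```python
import io
from functools import wraps

def accepts_filename_or_stream(func):
    """Let a stream-taking function also accept a filename (illustrative)."""
    @wraps(func)
    def wrapper(source, *args, **kwargs):
        if isinstance(source, str):           # a filename: open/close it here
            with open(source) as stream:
                return func(stream, *args, **kwargs)
        return func(source, *args, **kwargs)  # already a file-like object
    return wrapper

@accepts_filename_or_stream
def count_lines(stream):
    return sum(1 for _ in stream)

# In a unit test, no temporary file is needed:
print(count_lines(io.StringIO("a\nb\nc\n")))
```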
| 1 | 14 | 0 |
If a function takes as an input the name of a text file, I can refactor it to instead take a file object (I call it "stream"; is there a better word?). The advantages are obvious - a function that takes a stream as an argument is:
much easier to write a unit test for, since I don't need to create a temporary file just for the test
more flexible, since I can use it in situations where I somehow already have the contents of the file in a variable
Are there any disadvantages to streams? Or should I always refactor a function from a file name argument to a stream argument (assuming, of course, the file is text-only)?
|
file name vs file object as a function argument
| 1.2 | 0 | 0 | 3,700 |
12,577,045 |
2012-09-25T06:05:00.000
| 8 | 0 | 0 | 0 |
python,ios,django,api,django-socialauth
| 12,587,020 | 1 | true | 1 | 0 |
You cannot use django-social-auth directly.
To do Facebook login, you need to use the Facebook SDK for iOS (https://developers.facebook.com/docs/reference/iossdk/).
It will return you the access token which you would send to your API created using TastyPie.
When you have the access token, you can register a new user based on that. Using the Facebook Graph API, you can get the user's name and other info. Make sure to save the access token so you can identify a returning user.
After you register or login a user, return a "token" that is specific to that user. Your site generates the token. You'll use that token to communicate with your site.
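As a rough illustration of the server side, here is how one might build the Graph API request that looks up the token's owner (no network call is made here, and the field list is just an example):

```python
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:                      # Python 2
    from urllib import urlencode

GRAPH_ME = "https://graph.facebook.com/me"

def graph_me_url(access_token, fields=("id", "name")):
    """URL the API server would fetch to identify the token's owner.

    The access token itself comes from the iOS client (Facebook SDK);
    this only sketches how the server might verify it against the
    Graph API before creating or logging in the user.
    """
    query = urlencode({"access_token": access_token,
                       "fields": ",".join(fields)})
    return "%s?%s" % (GRAPH_ME, query)

print(graph_me_url("TOKEN123"))
```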
| 1 | 2 | 0 |
I'm building out an API using tastypie for an iOS app.
I can handle normal authentication / authorization just fine but I'm a bit confused when it comes to using django-social-auth to register / login / link THROUGH Tastypie.
If I'd, for example like to authenticate or register users on an iOS app using django-social-auth and tastypie, how would I go about that? Any suggestions? Am I looking at this the wrong way?
|
Log in with Django-social-auth & tastypie on iOS
| 1.2 | 0 | 0 | 1,488 |
12,578,021 |
2012-09-25T07:20:00.000
| 0 | 1 | 0 | 1 |
android,python,centos,monkeyrunner,rpyc
| 12,593,620 | 2 | false | 0 | 0 |
Using this call to run monkeyrunner doesn't work, although running ls or pwd works fine:
conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL /opt/android-sdk/tools/MYSCRIPT.py", shell=True)
The chunk of code below solved my problem:
import rpyc
import subprocess, os
conn = rpyc.classic.connect("192.XXX.XXX.XXX", XXXXX)
conn.execute("print 'Hello'")
conn.modules.os.popen("monkeyrunner -v ALL MYSCRIPT.py")
Hope this helps to those who are experiencing the same problem as mine.
| 1 | 2 | 0 |
I need to run a monkeyrunner script in a remote machine. I'm using python to to automate it and RPyC so that I could connect to other machines, everything is running in CentOS.
written below is the command that I used:
import rpyc
import subprocess
conn = rpyc.classic.connect("192.XXX.XXX.XXX",XXXXX)
conn.execute ("print 'Hello'")
subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL
/opt/android-sdk/tools/MYSCRIPT.py", shell=True)
and this is the result:
can't open specified script file
Usage : monkeyrunner [option] script_file
-s MonkeyServer IP Address
-p MonkeyServer TCP Port
-v MonkeyServer Logging level
And then I realized that if you use the command below, it is running the command in your machine. (example: the command inside the Popen is "ls" the result that it will give you is the list of files and directories in the current directory of the LOCALHOST) hence, the command is wrong.
subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL
/opt/android-sdk/tools/MYSCRIPT.py", shell=True)
and so I replaced the code with this
conn.modules.subprocess.Popen("/opt/android-sdk/tools/monkeyrunner -v ALL
/opt/android-sdk/tools/MYSCRIPT.py", shell=True)
And give me this error message
======= Remote traceback ======= Traceback (most recent call last): File
"/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py",
line 300, in _dispatch_request
res = self._HANDLERS[handler](self, *args) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py",
line 532, in _handle_call
return self._local_objects[oid](*args, **dict(kwargs)) File "/usr/lib/python2.4/subprocess.py", line 542, in init
errread, errwrite) File "/usr/lib/python2.4/subprocess.py", line 975, in _execute_child
raise child_exception OSError: [Errno 2] No such file or directory
======= Local exception ======== Traceback (most recent call last): File "", line 1, in ? File
"/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py",
line 196, in call
return syncreq(_self, consts.HANDLE_CALL, args, kwargs) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/netref.py",
line 71, in syncreq
return conn.sync_request(handler, oid, *args) File "/usr/lib/python2.4/site-packages/rpyc-3.2.2-py2.4.egg/rpyc/core/protocol.py",
line 438, in sync_request
raise obj OSError: [Errno 2] No such file or directory
I am thinking that it cannot run the file because I don't have administrator access (since I didn't supply the username and password of the remote machine)?
Help!
|
why is monkeyrunner not working when run from a remote machine?
| 0 | 0 | 0 | 1,078 |
12,580,198 |
2012-09-25T09:37:00.000
| 2 | 0 | 0 | 0 |
python,arrays,numpy,gtk,glade
| 12,638,921 | 2 | true | 0 | 1 |
In the end i decided to create a buffer for the pixels using:
self.pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,0,8,1280,1024)
I then set the image from the pixel buffer:
self.liveImage.set_from_pixbuf(self.pixbuf)
| 1 | 2 | 1 |
I'm making live video GUI using Python and Glade-3, but I'm finding it hard to convert the Numpy array that I have into something that can be displayed in Glade. The images are in black and white with just a single value giving the brightness of each pixel. I would like to be able to draw over the images in the GUI so I don't know whether there is a specific format I should use (bitmap/pixmap etc) ?
Any help would be much appreciated!
|
How do you display a 2D numpy array in glade-3 ?
| 1.2 | 0 | 0 | 760 |
12,581,463 |
2012-09-25T10:51:00.000
| 0 | 0 | 1 | 1 |
python
| 12,581,615 | 2 | false | 0 | 0 |
If you have "Wake On Lan" enabled you could potentially run a python script on a different PC and trigger the wake up after your specific period of time.
The scripts would probably need to talk to each other, unless you just do it all at times set in advance.
| 1 | 2 | 0 |
While the copy is in progress, can we put a PC into sleep mode for a specific period of time, then wake up and continue copy using python script? Can you please share the code?
Actually this is possible using shell script.
|
How to put a PC into sleep mode using python?
| 0 | 0 | 0 | 1,160 |
12,581,638 |
2012-09-25T11:03:00.000
| 6 | 1 | 1 | 0 |
python,python-interactive
| 12,581,642 | 1 | true | 0 | 0 |
That can be done using the -i option. Quoting the interpreter help text:
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
So the interpreter runs the script, then makes the interactive prompt available after execution.
Example:
$ python -i boilerplate.py
>>> print mymodule.__doc__
I'm a module!
>>>
This can also be done using the environment variable PYTHONSTARTUP. Example:
$ PYTHONSTARTUP=boilerplate.py python
Python 2.7.3 (default, Sep 4 2012, 10:30:34)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print mymodule.__doc__
I'm a module!
>>>
I personally prefer the former method since it doesn't show the three lines of information, but either will get the job done.
| 1 | 6 | 0 |
When working on a project my scripts often have some boiler-plate code, like adding paths to sys.path and importing my project's modules. It gets tedious to run this boiler-plate code every time I start up the interactive interpreter to quickly check something, so I'm wondering if it's possible to pass a script to the interpreter that it will run before it becomes "interactive".
|
Is it possible to get the Python Interactive Interpreter to run a script on load?
| 1.2 | 0 | 0 | 422 |
12,582,140 |
2012-09-25T11:36:00.000
| 1 | 0 | 0 | 0 |
python,arrayfire
| 12,584,253 | 2 | false | 0 | 0 |
bottleneck is worth looking into. It implements optimized versions of the numpy.nanxxx functions which, in my experience, are around 5x faster than numpy's.
| 1 | 3 | 1 |
I'm using Arrayfire on Python and I can't use the af.sum() function since my input array has NaNs in it and it would return NAN as sum.
Using numpy.nansum/numpy.nan_to_num is not an option due to speed problems.
I just need a way to convert those NaNs to floating point zeros in arrayfire.
|
Check Arrayfire Array against NaNs
| 0.099668 | 0 | 0 | 299 |
12,585,286 |
2012-09-25T14:35:00.000
| 0 | 0 | 0 | 0 |
java,python,hadoop,hbase,thrift
| 21,502,085 | 3 | false | 1 | 0 |
Phoenix is a better solution than Hive for getting low-latency results from HBase tables.
It is also better for range scans than raw HBase scanners because it uses secondary indexes and skip scans.
In your case you use Python, but the Phoenix API only has JDBC connectors.
Alternatively, try HBase coprocessors, which provide SUM, MAX, COUNT and AVG functions.
You can enable a coprocessor while creating the table and then use its aggregation functions.
You can also try Impala, which provides both ODBC and JDBC connectors. Impala uses the Hive metastore and executes queries as massively parallel batch jobs.
You need to create a Hive metastore table for your HBase table.
| 1 | 0 | 0 |
I have the below setup
2 node hadoop/hbase cluster with thirft server running on hbase.
Hbase has a table with 10 million rows.
I need to run aggregate queries like sum() on the hbase table
to show it on the web(charting purpose).
For now I am using python(thrift client) to get the dataset and display.
I am looking for database(hbase) level aggregation function to use in the web.
Any thoughts?
|
Hadoop Hbase query
| 0 | 1 | 0 | 1,019 |
12,590,058 |
2012-09-25T19:49:00.000
| 9 | 0 | 1 | 0 |
python,performance
| 12,590,177 | 4 | false | 0 | 0 |
The time that you are not including is the programmer time spent tracking down the bugs created when a global is modified as a side effect somewhere else in your program. That time is many times greater than the time spent creating and freeing local variables.
| 1 | 31 | 0 |
I am still new to Python, and I have been trying to improve the performance of my Python script, so I tested it with and without global variables. I timed it, and to my surprise, it ran faster with global variables declared rather than passing local vars to functions. What's going on? I thought execution speed was faster with local variables? (I know globals are not safe, I am still curious.)
|
Performance with global variables vs local
| 1 | 0 | 0 | 23,896 |
12,590,492 |
2012-09-25T20:24:00.000
| 1 | 0 | 0 | 0 |
python,django
| 12,590,919 | 2 | false | 1 | 0 |
There's no similar "themes" for Django like you'd find for Wordpress. Wordpress is a CMS -- a full application -- whereas Django is a framework -- i.e., you could use it to build a Wordpress, but it is not a Wordpress. You could start off with a pure HTML/CSS template and use that to build in functionality, but you won't find anything Django-specific, because it would inherently depend on what you build.
| 1 | 2 | 0 |
Where can I find a list of (preferably curated) DJango based, simple-website, templates (or equivalent, prewritten, boilerplate DJango code and layout)?
Context
I'm considering a project which requires me to deploy a large number of (fairly simple) personal/vanity websites. They are simple enough that, most likely, I should be able to deploy them as Wordpress based websites, using a few existing templates.
However, I'm not a fan of Wordpress, and I'd like to see if I can get roughly the same result by working with Python/DJango.
|
Is there a curated repository for DJango based website templates?
| 0.099668 | 0 | 0 | 1,017 |
12,591,456 |
2012-09-25T21:40:00.000
| 0 | 0 | 0 | 0 |
python,pyqt,kde-plasma
| 12,647,323 | 1 | true | 0 | 1 |
Unfortunately the API of KActionSelector is a bit cumbersome to use, so I resorted to implementing the functionality I needed with two QListWidgets and two QPushButtons, marked ">>" and "<<" respectively.
On a button click I remove the selected element from the current list and add it to the list on the other side.
A click on any element in either list causes the selected item to be checked against the dictionary, where the element is supposed to be a dictionary key. If the element is not present in the dictionary, the KeyError thrown is caught in a try/except block.
| 1 | 1 | 0 |
I'm trying to create a dialog that uses two GUI elements: KDEUI KActionSelector and QTExtEdit.
I want to have an ability to populate some additional information about the objects in any of the two windows of ActionSelector upon the mouse click.
Under the hood: i have a python dictionary. The keys of the dictionary are the entries presented in the ActionSelector. When any of the entries on either side is clicked, i want to be able to catch that signal, understand which key was clicked and show corresponding value in the QTextEdit. That should help user to make a decision about moving or not moving the selected item.
If this is not easy than the alternative solution is probably to use two list widgets instead of kActionSelector and reimplement the whole management shebang, but I of course would like to avoid that;)
I'm also worried if the usage of KDE element is safe to be used on different machines that might have different versions of Linux running...
Thanks!
|
KDEUI KAactionSelector widget additional signals (PyQt)
| 1.2 | 0 | 0 | 97 |
12,593,541 |
2012-09-26T01:58:00.000
| 0 | 0 | 0 | 0 |
javascript,python,templates,web-applications
| 12,594,044 | 1 | true | 1 | 0 |
There are obviously many ways this can all work together, but it sounds like you have the (a) right idea. Generally the frontend deals with JSON, and the server provides JSON. What's consuming or providing those responses is irrelevant; you shouldn't need to worry that Mongo is your database, or that underscore is handling your templates.
Think of your frontend and backend as two totally separate applications (this is pretty much true). Ignore the fact that your frontend code and templates are probably delivered from the same machine that's handling the backend. Your backend is in the business of persisting data, and your frontend in the business of displaying it.
Re: Mongo using JSON/BSON: the fact that it uses the same format as your frontend to communicate is a red herring. Your DB layer should abstract this away anyway, so you're just using Python dicts/tuples/etc. to talk to the database.
I'm guessing i'd have to have a python script that pulls the information from the DB, create a JSON object, and use it to populate the fields in the html template.
Spot on :)
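A minimal stdlib-only sketch of that flow, with a made-up user record standing in for what pymongo would return from the database:

```python
import json
from string import Template

# Pretend this dict came out of MongoDB (pymongo returns plain dicts).
user = {"name": "Alice", "email": "alice@example.com"}

# For the frontend/JS side, the server hands it over as JSON...
payload = json.dumps(user)

# ...or renders it server-side into an HTML template.
page = Template("<h1>$name</h1><p>$email</p>").substitute(user)

print(payload)
print(page)
```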
| 1 | 1 | 0 |
I apologize if this question is not specific enough, but I do need some help understanding this concept. I've been researching many Javascript libraries including JQuery, MooTools, Backbone, Underscore, Handlebars, Mustache, etc - also Node.js and Meteor (I know all those serve different purposes). I have a basic idea of what each does, but my question is mainly focused on the templating libraries.
I think the general idea is that the template will be filled by a JSON object that's retrieved from the server. However, i'm confused by how that JSON object is formed, and if it can go the other way to the backend to update the database. Please correct me if this is incorrect.
For a more solid example, let's say I have Apache running on Linux, and am using MongoDB as the database and python as my primary language. How do all these components interact with the templating library and each other?
For example, if I have an HTML file with a form in it and the action will be set to some python script; will that script have to retrieve the fields, validate them, and then update them in the DB? If it's MySQL I'd have to write a SQL statement to update it, but with Mongo wouldn't it be different/easier since it's BSON/JSON based?
And for the other example, let's say I have a view-account.html page that will need to pull up user information from the DB, in what form will it pull the information out and how will it fill it into the template? I'm guessing i'd have to have a python script that pulls the information from the DB, create a JSON object, and use it to populate the fields in the html template.
I am aware there are web frameworks that will ease this process, and please suggest any that you would recommend; however, I'm really interested in understanding the concepts of how these components interact.
Thanks!!
|
How does HTML templating fit in with the backend language and database?
| 1.2 | 0 | 0 | 160 |
12,593,759 |
2012-09-26T02:30:00.000
| 0 | 0 | 0 | 0 |
python,pandas
| 12,594,030 | 1 | true | 0 | 0 |
Try DataFrame.duplicated and DataFrame.drop_duplicates
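For example, on a small frame with one repeated row (column names are made up):

```python
import pandas as pd

# A frame keyed by a unique id column, with one exact duplicate row.
a = pd.DataFrame({"id": [1, 2, 2, 3], "value": [10, 20, 20, 30]})

# Rows that are exact repeats of an earlier row:
dupes = a[a.duplicated()]
print(dupes)

# Export them for inspection, then continue with a de-duplicated frame:
# dupes.to_csv("duplicates.csv", index=False)   # path is illustrative
clean = a.drop_duplicates()
print(len(clean))
```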
| 1 | 1 | 1 |
I need to reconcile two separate dataframes. Each row within the two dataframes has a unique id that I am using to match the two dataframes. Without using a loop, how can I reconcile one dataframe against another and vice-versa?
I tried merging the two dataframes on an index (unique id) but the problem I run into when I do this is when there are duplicate rows of data. Is there a way to identify duplicate rows of data and put that data into an array or export it to a CSV?
Your help is much appreciated. Thanks.
|
Pandas Data Reconcilation
| 1.2 | 0 | 0 | 745 |
12,596,557 |
2012-09-26T07:20:00.000
| 1 | 1 | 1 | 0 |
python,coding-style
| 12,596,686 | 2 | false | 0 | 0 |
No, it is not. At the very minimum the framework should provide its own exception class, and probably should have several (depending on the variety of things that could go wrong).
As you said, except Exception will catch way too much and is not good practice.
| 1 | 2 | 0 |
I'm working with a framework and the source code is raising exceptions using the Exception class (and not a subclass, either framework specific or from the stdlib) in a few places, which is is not a good idea in my opinion.
The main argument against this idiom is that it forces the caller to use except Exception: which can catch more than what is meant, and therefore hide problems at lower stack levels.
However, a quick search in the Python documentation did not come up with arguments against this practice, and there are even examples of this in the tutorial (although things which are OK in Python scripts may not be OK at all in a Python framework in my opinion).
So is raise Exception considered pythonic?
|
arguments for / against `raise Exception(message)` in Python
| 0.099668 | 0 | 0 | 89 |
12,597,394 |
2012-09-26T08:15:00.000
| 2 | 1 | 0 | 0 |
python,python-3.x,python-2.x
| 12,599,590 | 1 | false | 0 | 0 |
The best way? Write everything in Python 2.x. It's a simple question: can I do everything in Python 2.x? Yes! Can I do everything in Python 3.x? No. So what's the problem?
But if you really, really have to use two different Python versions (why not two different languages while you're at it?), then you will probably have to create two different servers (each of which will also be a client) that communicate via TCP/UDP or whatever protocol you want. This might actually be quite handy if you think about scaling the application in the future, although let me warn you: it won't be easy at all.
| 1 | 3 | 0 |
What is the best way to communicate between a Python 3.x and a Python 2.x program?
We're writing a web app whose front end servers will be written in Python 3 (CherryPy + uWSGI) primarily because it is unicode heavy app and Python 3.x has a cleaner support for unicode.
But we need to use systems like Redis and Boto (AWS client) which don't yet have Python 3 support.
Hence we need to create a system in which we can communicate between Python 3.x and 2.x programs.
What do you think is the best way to do this?
|
communication between Python 3 and Python 2
| 0.379949 | 0 | 1 | 1,491 |
12,603,482 |
2012-09-26T14:00:00.000
| 2 | 0 | 1 | 0 |
python
| 12,603,545 | 4 | false | 0 | 0 |
Create a new file that imports these files and run that file.
| 1 | 5 | 0 |
I have multiple python files, each with different classes and methods in it. I want to execute all those files with a main function I have separately outside all of them.
For example:
I have three files say one.py, two.py, three.py
I have no main method in any of them, but when I execute them then I want them to pass through the main function that I have separately. Is this possible, how?
Thanks.
|
Execute multiple python files using a single main
| 0.099668 | 0 | 0 | 33,676 |
12,603,678 |
2012-09-26T14:11:00.000
| 2 | 0 | 1 | 0 |
python,django,concurrency,webserver
| 12,604,317 | 3 | false | 0 | 0 |
You usually have many workers(i.e. gunicorn), each being dispatched with independent requests. Everything else(concurrency related) is handled by the database so it is abstracted from you.
You don't need IPC, you just need a "single source of truth", which will be the RDBMS, a cache server(redis, memcached), etc.
| 2 | 16 | 0 |
Any web server might have to handle a lot of requests at the same time. As python interpreter actually has GIL constraint, how concurrency is implemented?
Do they use multiple processes and use IPC for state sharing?
|
How does a python web server overcomes GIL
| 0.132549 | 0 | 1 | 2,640 |
12,603,678 |
2012-09-26T14:11:00.000
| 1 | 0 | 1 | 0 |
python,django,concurrency,webserver
| 12,603,848 | 3 | false | 0 | 0 |
As normal. Web serving is mostly I/O-bound, and the GIL is released during I/O operations. So either threading is used without any special accommodations, or an event loop (such as Twisted) is used.
| 2 | 16 | 0 |
Any web server might have to handle a lot of requests at the same time. As python interpreter actually has GIL constraint, how concurrency is implemented?
Do they use multiple processes and use IPC for state sharing?
|
How does a python web server overcomes GIL
| 0.066568 | 0 | 1 | 2,640 |
12,605,139 |
2012-09-26T15:23:00.000
| 4 | 0 | 0 | 0 |
python,debugging,pyqt
| 12,605,679 | 1 | true | 0 | 1 |
Using assert is the wrong way. For one thing, if Python is run with -O (or -OO) asserts are turned off; for another, the error message is not very helpful. That library needs to be redesigned to properly use exceptions.
As far as using the library as it stands: what do you want to have happen? Should your app quit? If so, you could create your own AssertionError class, replace the one in __builtins__ with yours, and have it do whatever you want in its __init__. Note that you are completely on your own if you do this.
| 1 | 0 | 0 |
My app use QT for the gui layer, and many other lib I made.
One of this other lib is quite complex (it's a type system) and full of asserts to make it as solid as possible.
But when an assert is triggered in this lib, the Qt mainloop simply continue.
I have a qt_debug() that works well (with pyqtRemoveInputHook) for the Qt part but nothing for the rest of python libraries.
And, obviously I would avoid to change code in the library as it should useable without Qt.
The best solution would be an assert hook, but despite googling around I didn't any obvious way to do it. Any idea ?
|
How to globally overload assert so I don't have to change lib code in my PyQT app?
| 1.2 | 0 | 0 | 275 |
12,606,027 |
2012-09-26T16:12:00.000
| 1 | 0 | 0 | 0 |
python,performance,numpy,f2py
| 12,606,715 | 2 | false | 0 | 0 |
There shouldn't be any slow-down. Since NumPy 1.6, most ufuncs (i.e., the basic 'universal' functions) take an optional order argument letting you specify the memory layout of the output; by default it's 'K', meaning that "the element ordering of the inputs is matched as closely as possible".
So, everything should be taken care of below the hood.
At worst, you could always switch from one order to another with the order parameter of np.array (but that will copy your data and is probably not worth it).
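A quick sketch of that behavior (assuming NumPy >= 1.6):

```python
import numpy as np

c = np.ones((4, 3), order='C')   # C (row-major) layout
f = np.asfortranarray(c)         # same values, Fortran (column-major) layout

# Ufuncs default to order='K', so the output matches the input's layout:
out = np.add(f, f)
print(f.flags['F_CONTIGUOUS'], out.flags['F_CONTIGUOUS'])
```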
| 1 | 1 | 1 |
I'm writing some code in Fortran (f2py) in order to gain some speed, because of a large amount of calculations that would be quite bothersome to do in pure Python.
I was wondering whether setting NumPy arrays in Python as order='F' (Fortran) will slow down
the main Python code with respect to the classical C-style order.
|
f2py speed with array ordering
| 0.099668 | 0 | 0 | 502 |
12,606,333 |
2012-09-26T16:31:00.000
| 0 | 0 | 0 | 1 |
python,linux,shell,unix,python-2.7
| 12,606,395 | 3 | false | 0 | 0 |
If you must run on Python 2, you'd best also call the interpreter as python2. I think most UNIX releases have symlinks from /usr/bin/python and /usr/bin/python2 to the appropriate binary.
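If symlink names can't be relied on, a well-known sh/Python polyglot trick lets the script probe for interpreters itself: sh sees exec commands, while Python sees no-op triple-quoted strings. A sketch (the python3 line is a last-ditch fallback so the demo runs anywhere; drop it if falling back to Python 3 is unacceptable):

```shell
cat > /tmp/pick_py <<'EOF'
#!/bin/sh
''''command -v python2 >/dev/null 2>&1 && exec python2 "$0" "$@" # '''
''''command -v python  >/dev/null 2>&1 && exec python  "$0" "$@" # '''
''''exec python3 "$0" "$@" # '''
print("ok")
EOF
chmod +x /tmp/pick_py
/tmp/pick_py   # prints: ok
```

Each `''''…'''` line is a shell command terminated by a comment, but Python parses it as a harmless string literal, so no wrapper needs to be installed on the PATH.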
| 1 | 3 | 0 |
I'm writing scripts that have to run on a number of different UNIX-like releases.
These are written in python 2.x.
Unfortunately, some newer releases have taken to calling this flavor binary "python2" instead of "python." Thus, "#!/usr/bin/env python" doesn't work to look for the proper installed python interpreter. Either I get the version 3 interpreter (bad) or no interpreter at all (worse!)
Is there a clever way to write a python script such that it will load with the python2 interpreter if installed, else the python interpreter if it's installed? I'd have to use other mechanisms to detect when "python" is a python3, but as I'm inside a python-like environment at that point, I can live with that.
I imagine I can write a ripple launcher, call it "findpython2," and use that as the #! interpreter for the script, but that means I have to install findpython2 in the search path, which is decidedly sub-optimal (these scripts are often called by absolute reference, so they're not in the path.)
|
Scripting Hashbang: How to get the python 2 interpreter?
| 0 | 0 | 0 | 739 |
12,607,326 |
2012-09-26T17:38:00.000
| 0 | 0 | 0 | 0 |
wxpython
| 12,607,701 | 1 | true | 0 | 1 |
So I found a solution for this.
If I call event.Skip() in my handlers for EVT_LEFT_DOWN and EVT_LEFT_UP it seems to propagate the event such that the default button appearance behavior is restored.
| 1 | 0 | 0 |
I have a button in wxpython for which I have bound EVT_LEFT_DOWN and EVT_LEFT_UP. I need to know explicitly when it is pressed and released, that's why I'm not using EVT_BUTTON.
The events work fine, the problem is just aesthetic: when I bind EVT_LEFT_DOWN and EVT_LEFT_UP the button no longer exhibits the normal appearance of being pressed (shaded and indented).
Is there any way to explicitly know when a button is pressed and released but also preserve its default appearance behavior?
|
How to preserve the default button appearance behavior when binding EVT_LEFT_DOWN
| 1.2 | 0 | 0 | 76 |
12,610,170 |
2012-09-26T20:49:00.000
| 0 | 0 | 1 | 0 |
python
| 12,642,397 | 3 | false | 0 | 0 |
The way the encoders I've messed with have done this is to read whatever's there, or a particular chunk size, note the position of the last newline (.rfind('\n')), process the data up to that newline, and then store everything from that newline to the end of the chunk. When reading the next block, you read from the same position you stopped at before and prepend the leftover string from last time. The performance was reasonable, and it's stable. Of course, this was for network sockets, where you can't seek backwards; I'm not sure which method would actually perform better on files.
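A sketch of that chunk-and-rfind approach in Python (the generator name and chunk size are illustrative, not from the answer):

```python
import io

def iter_lines(f, chunk_size=64 * 1024):
    """Yield complete lines from f, reading in large chunks."""
    leftover = ""
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            if leftover:
                yield leftover       # final line without a trailing newline
            return
        chunk = leftover + chunk
        cut = chunk.rfind("\n")      # position of the last complete line
        if cut == -1:
            leftover = chunk         # no newline yet: keep accumulating
            continue
        for line in chunk[:cut].split("\n"):
            yield line
        leftover = chunk[cut + 1:]   # partial tail carried to the next read

f = io.StringIO("short\na much longer line\nend")
print(list(iter_lines(f, chunk_size=8)))  # ['short', 'a much longer line', 'end']
```

Each read pulls a big block, but the caller still sees one guaranteed-complete line at a time.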
| 1 | 2 | 0 |
I'm reading a text file, line by line, using Python. Each line is of a variable length. The first line could be 10 characters, the next one could be 100; there's no way of telling. Presently, I issue a file.readline() method for each line, process it, and then save it to a database. This method guarantees me one full line of input. I'd like to do this faster however. Is there a way to do a bulk read using the Python file.read() method such that I can guarantee an end-of-line read character when the buffer stops in the middle of a line? What's the best way to handle this?
|
Python file.read() method
| 0 | 0 | 0 | 3,634 |
12,612,229 |
2012-09-27T00:04:00.000
| 2 | 0 | 0 | 0 |
python,xml,xml-parsing,large-files
| 12,613,046 | 2 | false | 0 | 0 |
The best solution will depend in part on what you are trying to do, and how free your system resources are. Converting it to a postgresql or similar database might not be a bad first goal; on the other hand, if you just need to pull data out once, it's probably not needed. When I have to parse large XML files, especially when the goal is to process the data for graphs or the like, I usually convert the xml to S-expressions, and then use an S-expression interpreter (implemented in python) to analyse the tags in order and build the tabulated data. Since it can read the file in a line at a time, the length of the file doesn't matter, so long as the resulting tabulated data all fits in memory.
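If you'd rather stay in the standard library than convert to S-expressions, xml.etree.ElementTree.iterparse streams the file and hands you one element at a time; clearing each element keeps memory flat no matter the file size. A minimal sketch:

```python
import io
import xml.etree.ElementTree as ET

data = b"<root><item>1</item><item>2</item></root>"

count = 0
# iterparse yields each element once its end tag has been read,
# so only a small window of the tree is ever in memory
for event, elem in ET.iterparse(io.BytesIO(data)):
    if elem.tag == "item":
        count += 1
        elem.clear()  # drop the element's children/text to free memory

print(count)  # 2
```

For a real 40GB file you would pass the filename instead of a BytesIO, and the loop body would tabulate whatever you need from each element before clearing it.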
| 1 | 3 | 0 |
I've got an XML file I want to parse with python. What is best way to do this? Taking into memory the entire document would be disastrous, I need to somehow read it a single node at a time.
Existing XML solutions I know of:
element tree
minixml
but I'm afraid they aren't quite going to work because of the problem I mentioned. Also I can't open it in a text editor - any good tips in general for working with giant text files?
|
Parsing a large (~40GB) XML text file in python
| 0.197375 | 0 | 1 | 3,511 |
12,612,648 |
2012-09-27T01:06:00.000
| 6 | 0 | 0 | 0 |
python,xml,elementtree
| 19,738,566 | 2 | false | 0 | 0 |
There are different versions of ElementTree.
Some of them accept the xml_declaration argument, some do not.
The one I happen to have does not. It emits the declaration if and only if encoding != 'utf-8'. So, to get the declaration, I call write(filename, encoding='UTF-8').
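On Python 3 the reliable way is to pass xml_declaration=True explicitly, since the encoding-based behaviour described above varies between versions:

```python
import io
import xml.etree.ElementTree as ET

root = ET.Element("note")
buf = io.BytesIO()
# xml_declaration=True forces the <?xml ...?> header regardless of encoding
ET.ElementTree(root).write(buf, encoding="UTF-8", xml_declaration=True)
print(buf.getvalue())  # starts with b"<?xml"
```

Passing a filename instead of the BytesIO buffer works the same way.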
| 1 | 10 | 0 |
I'm writing some XML with element tree.
I'm giving the code an empty template file that starts with the XML declaration <?xml version="1.0"?>. When ET has finished making its changes and writes the completed XML, it's stripping out the declaration and starting with the root tag. How can I stop this?
Write call:
ET.ElementTree(root).write(noteFile)
|
Python - Element Tree is removing the XML declaration
| 1 | 0 | 1 | 13,577 |
12,613,552 |
2012-09-27T03:14:00.000
| 0 | 0 | 0 | 1 |
python,shell,terminal,centos
| 12,613,729 | 3 | false | 0 | 0 |
In my experience, using sftp for the first time will prompt the user to accept the host public key, such as:
The authenticity of host 'xxxx' can't be established.
RSA key fingerprint is xxxx. Are you sure you want to continue connecting
(yes/no)?
Once you input yes, the public key is saved in ~/.ssh/known_hosts, and the next time you will not get such a prompt/alert.
To avoid this prompt/alert in a batch script, you can turn strict host checking off with
scp -Bqpo StrictHostKeyChecking=no
but then you are vulnerable to a man-in-the-middle attack.
You can also choose to connect to the target server manually and save the host public key before deploying your batch script.
| 1 | 0 | 0 |
Here's what I need to do:
I need to copy files over the network. The files to be copied are on one machine, and I need to send them to remote machines. It should be automated and it should be done using Python. I am quite familiar with Python's os.popen and subprocess.Popen. I could use these to copy the files, BUT, the problem is that once I have run the one-liner command (like the one shown below)
scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt
it will definitely ask for something like
Are you sure you want to connect (yes/no)?
Password :
And if im not mistaken., once I have sent this command (assuming that I code this in python)
conn.modules.os.popen("scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt")
and followed by this command
conn.modules.os.popen("yes")
The output (and I'm quite sure that it would give me errors) would be different compared to the output when I type the command manually in the terminal.
Do you know how to code this in python? Or could you tell me something (a command etc.) that would solve my problem
Note: I am using RPyC to connect to other remote machines and all machines are running on CentOS
|
How to automate the sending of files over the network using python?
| 0 | 0 | 1 | 1,825 |
12,614,131 |
2012-09-27T04:32:00.000
| 10 | 1 | 1 | 0 |
python,code-analysis
| 12,663,047 | 6 | false | 0 | 0 |
I'm afraid you are mostly on your own.
If you have decent set of tests, look at code coverage and dead code.
If you have a decent profiling setup, use that to get a glimpse of what's used more.
In the end, it seems you are more interested in fan-in/fan-out analysis, I'm not aware of any good tools for Python, primarily because static analysis is horribly unreliable against a dynamic language, and so far I didn't see any statistical analysis tools.
I reckon that this information is sort of available in JIT compilers -- whatever (function, argument types) is in cache (compiled) those are used the most. Whether or not you can get this data out of e.g. PyPy I really don't have a clue.
| 1 | 21 | 0 |
Looking to improve quality of a fairly large Python project. I am happy with the types of warnings PyLint gives me. However, they are just too numerous and hard to enforce across a large organization. Also I believe that some code is more critical/sensitive than others with respect to where the next bug may come. For example I would like to spend more time validating a library method that is used by 100 modules rather than a script that was last touched 2 years ago and may not be used in production. Also it would be interesting to know modules that are frequently updated.
Is anyone familiar with tools for Python or otherwise that help with this type of analysis?
|
Identifying "sensitive" code in your application
| 1 | 0 | 0 | 1,381 |
12,616,816 |
2012-09-27T08:10:00.000
| 2 | 0 | 0 | 0 |
keyboard,python-2.7,hook
| 12,640,731 | 1 | true | 0 | 0 |
I found the solutions and finished the script. Here are my findings.
1. I got the global keyboard/mouse inputs from PyHook. Installing it on Python 2.7 AMD64 might be a bit tricky.
2. To send keystrokes/input to an application I used sendkeys-ctypes, which works well with Python 2.7.
| 1 | 2 | 0 |
I am trying to write a script that duplicates a little bit of what Autohotkey does, because it doesn't work very well for me. I need the script to detect keyboard/mouse-click input to a program, and send different strings/sequence of keystrokes based on the original key pressed. For example if I press mouse middle button, I want to send the three keystrokes 8,9 and 0 in place of the click. All this while some other application is being used. i.e. Torchlight II. Can anyone tell me
what to use to get the global keyboard input and
How to send keystrokes to an application?
I would have used Autohotkey for this but it is acting very unreliably with random unacceptable bugs. I am using python 2.7 64bit, windows 7.
|
How do I get python to detect keystrokes from the keyboard and send different strings based on the input key(s) in windows
| 1.2 | 0 | 0 | 893 |
12,617,594 |
2012-09-27T08:56:00.000
| 2 | 0 | 1 | 0 |
python,regex
| 12,617,649 | 3 | false | 0 | 0 |
I don't know Python, but with all the regexp engines I know, that would be /[^,]*/. Or if Python has a built-in function to split a string on a regexp, then you could just split on /,/.
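In Python's re module that looks like the following (using + rather than * so empty matches are skipped):

```python
import re

s = "abc,5 * de"
parts = re.findall(r"[^,]+", s)  # every maximal run of non-comma characters
print(parts)  # ['abc', '5 * de']

# splitting on the comma gives the same result here:
assert s.split(",") == parts
```

For this particular task, str.split is the simpler tool; the character-class pattern becomes useful when the delimiter logic grows more complex.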
| 1 | 0 | 0 |
Which regular expression pattern will match a substring not containing a specific character in Python? For example, I have the string "abc,5 * de", and I want to match "abc" and "5 * de" as two substrings, but not the ,.
|
How do I match a string without a specific character in Python?
| 0.132549 | 0 | 0 | 226 |
12,619,220 |
2012-09-27T10:25:00.000
| -1 | 0 | 1 | 0 |
python,programming-languages
| 12,619,301 | 9 | false | 0 | 0 |
To answer your second question, Python is an interpreted language so you don't need a compiler. So long as you have Python installed, just run the script.
You can use whatever IDE you prefer to write the code.
| 1 | 0 | 0 |
I am quite comfortable with C/C++ but I felt that another language would surely help me. So, I decided that Python would be good language to start as I have heard many people talking about Python. I have the following questions :
Where do I start for Python ?
Do I have a compiler like Visual Studio for Python ? I use VS2010 for C/C++
Thanks in Advance.
|
Python - Where do I start?
| -0.022219 | 0 | 0 | 333 |
12,620,695 |
2012-09-27T11:48:00.000
| 4 | 0 | 0 | 0 |
python,flask,wsgi
| 12,620,810 | 3 | true | 1 | 0 |
I use uWSGI with the gevent loop. That is the ticket. In fact, this is how I use py-redis (which is blocking) without blocking.
Also, I use uWSGI to write requests after the response while still accepting more requests.
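A minimal uWSGI configuration along those lines might look like this (the module path and the worker/greenlet counts are illustrative):

```ini
[uwsgi]
# import path of the Flask application object
module = myapp:app
http = :8080
master = true
processes = 4
# async cores: number of concurrent greenlets per worker
gevent = 100
```

With this setup each worker process handles many requests concurrently via greenlets, so slow clients no longer serialize the whole app.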
| 1 | 6 | 0 |
When I test my new Flask application with the built in web server, everything is "single threaded" and blocking. The server can not serve one request without finishing another. It can only process one request at a time.
When deploying a web service, this is obviously not desirable. How do you deploy Flask applications so that things can move in parallel?
Are there different things to consider regarding thread safety and concurrency inside the code (protect objects with locks and so on) or are all the offerings equivalent?
|
Deploying Flask, parallel requests
| 1.2 | 0 | 0 | 2,708 |
12,623,155 |
2012-09-27T13:59:00.000
| 3 | 0 | 1 | 1 |
python,macports,homebrew
| 12,623,496 | 1 | true | 0 | 0 |
I wouldn't use the MacPorts packages in Homebrew. I'd reinstall them all. A lot of Python packages are compiled, or at least have compiled elements. You're asking for a lot of potential trouble mixing them up.
| 1 | 1 | 0 |
I recently removed Macports and all its packages and installed Python, Gphoto and some other bits using Homebrew. However python is crashing when looking for libraries as it is looking for them in a MacPorts path. My PATH is correct and the python config show the right path /usr/local/Cellar etc.
Can someone tell me how to point Python at the libraries installed via Homebrew, presumably by changing the path?
|
Python libraries after removing MacPorts and installing homebrew
| 1.2 | 0 | 0 | 215 |
12,625,951 |
2012-09-27T16:28:00.000
| 3 | 0 | 0 | 1 |
python,subprocess,popen,kill,kill-process
| 12,680,641 | 2 | true | 0 | 0 |
By changing the process group id at the beginning of script2.py's execution, the subsequent processes belong to script2's process group. So calling os.killpg() from script1.py with script2's pid (which equals the new pgid) does it well.
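A POSIX-only sketch of the same idea from the parent's side, using preexec_fn to give the child its own process group (the sleep child stands in for script2.py):

```python
import os
import signal
import subprocess

# the child runs os.setpgrp() before exec, so its pgid == its pid;
# anything it spawns inherits that process group
p = subprocess.Popen(["sleep", "60"], preexec_fn=os.setpgrp)

# kill the whole group: the child and any sub-subprocesses it started
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
p.wait()
print(p.returncode)  # -15: terminated by SIGTERM
```

On Python 3.2+, start_new_session=True is a safer alternative to the preexec_fn call and has the same grouping effect (via setsid).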
| 1 | 3 | 0 |
I have a few python scripts who are opening themselves in cascade by subprocess.Popen().
(I call script1.py who make a popen of script2.py who makes popen of script3.py, etc)
Is there any way to terminate/kill all subprocesses of script1.py from the script1.py PID.
os.killpg() doesn't work.
Thanks for your help.
|
python terminate/kill subprocess group
| 1.2 | 0 | 0 | 1,294 |
12,627,401 |
2012-09-27T17:59:00.000
| 0 | 1 | 1 | 0 |
python,symbols,decompiling
| 15,160,831 | 2 | false | 0 | 0 |
This post makes me recall my pain once with Telit GM862-GPS modules. My code was exactly at the point where the number of variables, strings, etc. added up to the limit. Of course, I didn't know this fact back then. I added one innocent line and my program did not work any more. It drove me really crazy for two days until I looked at the datasheet and found this fact.
What you are looking for might not have a good answer, because the Python interpreter is not a full-fledged version. What I did was to reuse the same local variable names as much as possible. Also I deleted doc strings for functions (those count too) and replaced them with #comments.
In the end, I want to say that this module is good for small applications. The python interpreter does not support threads or interrupts so your program must be a super loop. When your application gets bigger, each iteration will take longer. Eventually, you might want to switch to a faster platform.
| 1 | 2 | 0 |
I have a Telit module which runs [Python 1.5.2+] (http://www.roundsolutions.com/techdocs/python/Easy_Script_Python_r13.pdf)!. There are certain restrictions in the number of variable, module and method names I can use (< 500), the size of each variable (16k) and amount of RAM (~ 1MB). Refer pg 113&114 for details. I would like to know how to get the number of symbols being generated, size in RAM of each variable, memory usage (stack and heap usage).
I need something similar to a map file that gets generated with gcc after the linking process which shows me each constant / variable, symbol, its address and size allocated.
|
Counting number of symbols in Python script
| 0 | 0 | 0 | 403 |
12,628,054 |
2012-09-27T18:42:00.000
| 0 | 0 | 1 | 0 |
python,csv
| 12,628,748 | 1 | false | 0 | 0 |
The problem is, files on the disk are just a sequence of bytes. And not an intelligently stored sequence of lines.
That is, you can alter the n-th line in your file if, and only if, the number of bytes it contains remains the same after that operation.
If your csv format allows to contain arbitrary amounts of insignificant whitespace as part of those lines, you could make them large enough to hold any data you may ever write to it (say a fixed large-enough-line-size). Then you can update single lines without a rewrite of the entire file. You need to make sure that you overwrite the previous content (if any), though.
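A sketch of that fixed-width-record idea (the 32-byte record size is arbitrary; lines are space-padded so any line can be overwritten in place without touching the rest of the file):

```python
import os
import tempfile

RECORD = 32  # fixed bytes per line, newline included

def encode(line):
    # pad/truncate to RECORD-1 bytes, then add the newline
    return line.encode().ljust(RECORD - 1)[: RECORD - 1] + b"\n"

def update_record(path, n, text):
    with open(path, "r+b") as f:
        f.seek(n * RECORD)  # jump straight to line n, no rewrite needed
        f.write(encode(text))

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for line in ["alpha", "beta", "gamma"]:
        f.write(encode(line))

update_record(path, 1, "BETA2")

with open(path, "rb") as f:
    lines = [l.rstrip().decode() for l in f]
print(lines)  # ['alpha', 'BETA2', 'gamma']
os.remove(path)
```

The seek is O(1), so updating 5 records in a 4k-line file touches only those 5 records.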
| 1 | 0 | 0 |
I am trying to update a file, but only certain lines. There are a lot of lines and I don't want to rewrite the whole file to update it.
E.g., out of 4k lines I need to change 5 items, each at some known n-th line.
Most answers on this question use two files or rewrite it completely. I am wondering if there is a more effective command for this, one that can attack one line at a time without writing the whole file at the end of the process. If no possible way, what would be most efficient way to do so.
I am using Python 2.7.
|
updating a line in file without using 2nd file
| 0 | 0 | 0 | 102 |
12,629,091 |
2012-09-27T19:55:00.000
| 3 | 0 | 0 | 1 |
python,python-2.7
| 12,629,604 | 1 | false | 0 | 0 |
os.listdir is very likely compiled C that calls the same underlying libc system calls that ls does.
In contrast, subprocess.Popen forks a whole new process, which is an expensive system operation, and requires new file handles to deal with pipe/tty operations.
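For reference, a recursive walk built purely on os.listdir looks like this (whether it beats spawning ls on a given system is something to benchmark, not assume):

```python
import os
import tempfile

def list_files(root):
    """Recursively collect file paths using only os.listdir."""
    out = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            out.extend(list_files(path))   # descend into subdirectories
        else:
            out.append(path)
    return out

# tiny demo tree: a.txt at the top, b.txt one level down
d = tempfile.mkdtemp()
os.mkdir(os.path.join(d, "sub"))
open(os.path.join(d, "a.txt"), "w").close()
open(os.path.join(d, "sub", "b.txt"), "w").close()

rel = [os.path.relpath(p, d) for p in list_files(d)]
print(rel)  # ['a.txt', 'sub/b.txt'] on POSIX
```

Since it never forks, it avoids the per-call process-creation cost that an ls subprocess pays.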
| 1 | 0 | 0 |
I want to retrieve the list of files in a directory. What would be the fastest way to do so
using subprocess.Popen or using os.listdir. The directory contain 10000 of files. and this has to be done recursively to retrieve the list from the directory and its sub directories. I know we can use os.walk to retrieve the contents of directories but os.walk just not work for what I am suppose to do.
Thanks
|
Which one is the faster way to get the list of directories subprocess.Popen or os.listdir
| 0.53705 | 0 | 0 | 692 |
12,631,577 |
2012-09-27T23:32:00.000
| 3 | 1 | 0 | 1 |
python,embedded
| 12,632,227 | 3 | false | 0 | 0 |
There may be ways you can cram it down a little more just by configuring, but not much more.
Also, the actual interactive-mode code is pretty trivial, so I doubt you're going to save much there.
I'm sure there are more substantial features you're not using that you could hack out of the interpreter to get the size down. For example, you can probably throw out a big chunk of the parser and compiler and just deal with nothing but bytecode. The problem is that the only way to do that is to hack the interpreter source. (And it's not the most beautiful code in the world, so you're going to have to dedicate a good amount of time to learning your way around.) And you'll have to know what features you can actually hack out.
The only other real alternative would be to write a smaller interpreter for a Python-like language—e.g., by picking up the tinypy project. But from your comments, it doesn't sound as if "Python-like" is sufficient for you unless it's very close.
Well, I suppose there's one more alternative: Hack up a different, nicer Python implementation than CPython. The problem is that Jython and IronPython aren't native code (although maybe you can use a JVM->native compiler, or possibly cram enough of Jython into a J2ME JVM?), and PyPy really isn't ready for prime time on embedded systems. (Can you wait a couple years?) So, you're probably stuck with CPython.
| 1 | 17 | 0 |
I spent the last 3 hours trying to find out if it is possible to disable, or to build Python without, the interactive mode, or how I can make the Python executable smaller for Linux.
As you can guess it's for an embedded device and after the cross compilation Python is approximately 1MB big and that is too much for me.
Now the questions:
Are there possibilities to shrink the Python executable? Maybe disabling the interactive mode (starting Python programs on the command line).
I looked for the configure options and tried some of them but it doesn't produce any change for my executable.
I compile it with optimized options from gcc and it's already stripped.
|
Optimizing the size of embedded Python interpreter
| 0.197375 | 0 | 0 | 8,769 |
12,632,323 |
2012-09-28T01:17:00.000
| 0 | 0 | 1 | 0 |
python,windows,ide,pyglet
| 62,126,569 | 5 | false | 0 | 1 |
PyCharm, I would highly recommend it. The free version has all you need and is very well organised. I use it all the time. You'll need to copy your pyglet file into every project though. Still, it's easy to use and functional, and comes with a lot of good tools.
| 1 | 0 | 0 |
I'm starting to code with pyglet in Windows. I usually rely on Spyder as an IDE, but it seems not to like pyglet. So what would be a convenient way to code / run pyglet in Windows? What is your minimal development setup? Short of obvious minimal setups such as running code on Console2 or using IDLE.
|
IDE for pyglet?
| 0 | 0 | 0 | 1,228 |
12,633,100 |
2012-09-28T03:09:00.000
| 5 | 0 | 0 | 0 |
python,wxpython,.app
| 12,755,571 | 2 | true | 0 | 1 |
Be sure to use py2app, and make sure the .plist is filled out correctly, i.e. CFBundleName and CFBundleDisplayName.
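A sketch of the relevant part of a py2app setup.py (the app name and entry script are hypothetical; the commented-out setup() call is what `python setup.py py2app` would execute):

```python
# setup.py fragment for a py2app build
APP = ["main.py"]  # hypothetical entry script

PLIST = {
    "CFBundleName": "MyApp",         # the title shown in the Mac menu bar
    "CFBundleDisplayName": "MyApp",
}
OPTIONS = {"plist": PLIST}

# from setuptools import setup
# setup(app=APP, options={"py2app": OPTIONS}, setup_requires=["py2app"])
print(PLIST["CFBundleName"])  # MyApp
```

Once the bundle is rebuilt with these keys, the menu bar shows the bundle name instead of "Python".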
| 1 | 3 | 0 |
I've made a .app with my WxPython script, and it's just about finished. The problem is, the menu bar title reads "Python". How can this be changed? Would I use wx.Menu()/wx.MenuBar(), or is this a problem with the .app file itself?
|
Changing WxPython app Mac menu bar title?
| 1.2 | 0 | 0 | 1,492 |