Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64 0/1) | Python Basics and Environment (int64 0/1) | System Administration and DevOps (int64 0/1) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64 0/1) | GUI and Desktop Applications (int64 0/1) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64 0/1) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64 0/1) | Networking and APIs (int64 0/1) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,304,363 |
2012-04-24T19:01:00.000
| 1 | 0 | 0 | 0 |
python,django,web-deployment,source-code-protection
| 10,305,580 | 4 | false | 1 | 0 |
While your source code's probably fine where it is, I'd recommend not storing your configuration passwords in plaintext, whether the code file is compiled or not. Rather, have a hash of the appropriate password on the server, have the server generate a hash of the password submitted during login and compare those instead. Standard security practice.
Then again I could just be talking out my rear end since I haven't fussed about with Django yet.
| 4 | 3 | 0 |
I picked up Python/Django barely a year ago. Deployment of a Django site is still a subject that I have many questions about, though I have successfully deployed my site manually. One of my biggest questions around deployment is what measures I can take to safeguard the source code of my apps, including passwords in Django's settings.py, from others, especially when my site runs on virtual hosting provided by some 3rd party. Call me paranoid, but the fact that my source code is running on a third-party server, where someone has the privileges to access anything/anywhere on the server, makes me feel uneasy.
|
What measures can I take to safeguard the source code of my django site from others?
| 0.049958 | 0 | 0 | 2,223 |
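The hash-and-compare approach from the answer above can be sketched in a few lines of standard-library Python. This is only an illustration of the idea, not code from the original answer; the function names and the salted PBKDF2 scheme are my own choices:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) to store in place of the plaintext password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Hash the submitted password the same way and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

The server keeps only `(salt, digest)`; at login it recomputes the digest of the submitted password and compares, so the plaintext never needs to be stored.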
10,305,327 |
2012-04-24T20:12:00.000
| 8 | 0 | 1 | 0 |
python,python-c-extension
| 10,305,714 | 1 | true | 0 | 1 |
PyList_SET_ITEM is an unsafe macro that basically sticks an object into the list's internal pointer array without any bounds checks. If anything non-NULL is in the ith position of the list, a reference leak will occur. PyList_SET_ITEM steals the reference to the object you put in the list. PyList_SetItem also steals the reference, but it checks bounds and decrefs anything that may be in the ith position. The rule of thumb is: use PyList_SET_ITEM to initialize lists you've just created and PyList_SetItem otherwise. It's also completely safe to use PyList_SetItem everywhere; PyList_SET_ITEM is basically a speed hack.
| 1 | 8 | 0 |
From what I can tell, the difference between PyList_SetItem and PyList_SET_ITEM is that PyList_SetItem lowers the reference count of the list item it overwrites and PyList_SET_ITEM does not.
Is there any reason why I shouldn't just use PyList_SetItem all the time? Or would I get into trouble if I used PyList_SetItem to initialize an index position in a list?
|
PyList_SetItem vs. PyList_SET_ITEM
| 1.2 | 0 | 0 | 2,163 |
10,305,964 |
2012-04-24T21:04:00.000
| 3 | 0 | 0 | 0 |
python,numpy,statistics,scipy
| 27,016,762 | 4 | false | 0 | 0 |
I'm just trying to do this myself, and it sounds like you want scipy.stats.binned_statistic_2d: it can find the mean, median, standard deviation or any user-defined function of the third parameter, given the bins.
I realise this question has already been answered but I believe this is a good built in solution.
| 1 | 7 | 1 |
Do you know a quick/elegant Python/Scipy/Numpy solution for the following problem:
You have a set of x, y coordinates with associated values w (all 1D arrays). Now bin x and y onto a 2D grid (size BINSxBINS) and calculate quantiles (like the median) of the w values for each bin, which should at the end result in a BINSxBINS 2D array with the required quantiles.
This is easy to do with some nested loops, but I am sure there is a more elegant solution.
Thanks,
Mark
|
Quantile/Median/2D binning in Python
| 0.148885 | 0 | 0 | 5,849 |
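The nested-loop version the asker mentions is short enough to sketch in pure Python; `scipy.stats.binned_statistic_2d` replaces exactly this kind of loop (the function and variable names below are illustrative, not from either post):

```python
from statistics import median

def binned_median(x, y, w, bins, x_range, y_range):
    """Bin (x, y) points onto a bins x bins grid; return the median of w per bin."""
    (x0, x1), (y0, y1) = x_range, y_range
    # one list of w values per grid cell
    grid = [[[] for _ in range(bins)] for _ in range(bins)]
    for xi, yi, wi in zip(x, y, w):
        i = min(int((xi - x0) / (x1 - x0) * bins), bins - 1)
        j = min(int((yi - y0) / (y1 - y0) * bins), bins - 1)
        grid[i][j].append(wi)
    # empty cells have no defined quantile; mark them None
    return [[median(cell) if cell else None for cell in row] for row in grid]
```

Swapping `median` for any other reducer gives other per-bin statistics, which is essentially what the `statistic` argument of `binned_statistic_2d` does.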
10,307,173 |
2012-04-24T22:58:00.000
| 0 | 0 | 0 | 0 |
python,c++,mysql,django,large-data
| 10,327,841 | 2 | false | 0 | 0 |
Python is just fine. I am a Python person; I don't know C++ personally. However, during my research of Python, the creator of Mathematica himself stated that Python is equally as powerful as Mathematica. Python is used in many highly accurate calculations (i.e. engineering software, architecture work, etc.).
| 1 | 1 | 1 |
I have implemented Tensor Factorization Algorithm in Matlab. But, actually, I need to use it in Web Application.
So I implemented web site on Django framework, now I need to merge it with my Tensor Factorization algorithm.
For those who are not familiar with tensor factorization, you can think there are bunch of multiplication, addition and division on large matrices of size, for example 10 000 x 8 000. In tensor factorization case we do not have matrices, instead we have 3-dimensional(for my purpose) arrays.
By the way, I'm using MySQL as my database.
I am considering implementing this algorithm in Python or in C++, but I can't be sure which one is better.
Do you have any idea about efficiency of Python and C++ when processing on huge data set? Which one is better? Why?
|
Most efficient language to implement tensor factorization for Web Application
| 0 | 0 | 0 | 291 |
10,307,953 |
2012-04-25T00:45:00.000
| 0 | 0 | 0 | 0 |
python,download
| 10,308,010 | 1 | false | 0 | 0 |
It depends on what type of connection you expect to use. I think 64 KB could be just enough.
| 1 | 0 | 0 |
I need to download a file that could be potentially quite large (300 MB) and will later be saved locally. To avoid excessive memory usage, I don't want to read the remote file in one go; I intend to read it in small chunks.
Is there an optimum size for these chunks?
|
Downloading a file in chunks -- is there an optimal sized chunk?
| 0 | 0 | 1 | 404 |
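The chunked read the asker describes is a standard loop; here is a minimal standard-library sketch using the 64 KB chunk size the answer suggests (the function name and the use of in-memory buffers for the demo are my own):

```python
def copy_in_chunks(src, dst, chunk_size=64 * 1024):
    """Read from src and write to dst, chunk_size bytes at a time.

    src and dst are binary file-like objects; for a real download, src
    could be the response object from urllib.request.urlopen(url).
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty read means end of stream
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

Because only one chunk is in memory at a time, peak memory stays near `chunk_size` regardless of the file's total size.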
10,309,579 |
2012-04-25T05:00:00.000
| 0 | 1 | 1 | 0 |
python,perl,rpc,zeromq,msgpack
| 10,339,941 | 2 | false | 0 | 0 |
After studying this for a couple days I'm going with ZeroMQ + messagepack. The ZeroMQ docs show how to use messagepack, and I can implement an RPC server or client in only a few lines. The ZeroMQ modules for perl and python both have JSON serialization built in, so it's possible to implement RPC with ZeroMQ alone, but messagepack will give a nice boost to my data heavy calls. Thrift looks nice too, but it adds an extra configuration file and is fairly high level. I am sure to get max performance with ZeroMQ and it leaves a lot more options open.
| 1 | 1 | 0 |
I'm currently using JSON and HTTP to call Perl functions from Python, but it's slow. Based on some research, MessagePack is best for serialization and ZeroMQ is the best transport. Both have cross-platform bindings, but before I dig in, I would like to know what others are using for fast cross-language RPC (preferably with persistent TCP connections).
|
RPC between python and perl
| 0 | 0 | 0 | 523 |
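As a stand-in for the ZeroMQ + MessagePack pair (neither is in the standard library), the request/reply shape of such an RPC call can be sketched with length-prefixed JSON over a socket; the framing and function names here are illustrative, not the API of either library:

```python
import json
import socket

def serve_one(conn, handlers):
    """Read one length-prefixed JSON request, dispatch it, send the reply."""
    size = int.from_bytes(conn.recv(4), "big")
    request = json.loads(conn.recv(size))
    result = handlers[request["method"]](*request["params"])
    payload = json.dumps({"result": result}).encode()
    conn.sendall(len(payload).to_bytes(4, "big") + payload)

def call(conn, method, *params):
    """Send one RPC request and block for the reply."""
    payload = json.dumps({"method": method, "params": list(params)}).encode()
    conn.sendall(len(payload).to_bytes(4, "big") + payload)
    size = int.from_bytes(conn.recv(4), "big")
    return json.loads(conn.recv(size))["result"]
```

In the real setup, ZeroMQ's REQ/REP sockets replace the manual framing and MessagePack replaces `json`, which is where the data-heavy calls gain speed.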
10,309,956 |
2012-04-25T05:44:00.000
| 0 | 0 | 1 | 0 |
python,scheduling,pyramid
| 10,327,650 | 1 | true | 1 | 0 |
I would avoid running your Controller in the same process as the web application. It is common practice to run web applications with lowered permissions, for example, or in some multi-threaded/multi-process environment which may spawn multiple workers and then possibly kill/recycle them whenever it feels like doing so. So having your Controller running in a separate process with some kind of RPC mechanism seems like a much better idea.
Regarding code duplication - there are 2 options:
you can extract the common code (models) into a separate module/egg which is used by both applications
if you're finding that you need to share a lot of code - nothing forces you to have separate projects for those applications at all. You can have a single code base with two or more "entry points" - one of which would start a Pyramid WSGI application and another would start your Controller process.
| 1 | 0 | 0 |
I have a multi-stage process that needs to be run at some intervals.
I also have a Controller program which starts the process at the right times, chains together the stages of the process, and checks that each stage has executed correctly.
The Controller accesses a database which stores information about past runs of the process, parameters for future executions of the process, etc.
Now, I want to use Pyramid to build a web interface to the Controller, so that I can view information about the process and affect the operation of the Controller.
This will mean that actions in the web interface must effect changes in the controller database.
Naturally, the web interface will use the exact same data models as the Controller.
What's the best way for the Controller and Web Server to interact?
I've considered two possibilities:
Combine the controller and web server by calling sched in Pyramid's initialisation routine
Have the web server make RPCs to the controller, e.g. using Pyro.
How should I proceed here? And how can I avoid code duplication (of the data models) when using the second option?
|
Creating web interface to a controller process in python
| 1.2 | 0 | 0 | 182 |
10,310,068 |
2012-04-25T05:53:00.000
| 0 | 0 | 0 | 0 |
python,pygame
| 10,647,216 | 2 | true | 0 | 1 |
Pygame is as good as they get for 2D CPU graphics. All the graphics are implemented in C (PyGame wraps SDL), so the code is nearly as fast as an equivalent C software renderer.
That said, it's still (basically) a software renderer, and there's this interesting device in every modern computer called a GPU which is designed to do that. PyOpenGL/OpenGL will take advantage of it, so yes, absolutely PyOpenGL will render faster than PyGame.
Bottom line:
PyGame is fast, but not as fast as PyOpenGL. For hundreds of onscreen sprites, that will mainly be a logic problem (Python logic is slow, even by interpreted language standards). Rewriting it in SDL would make it faster (because C/C++ is faster than Python). You could also use PyOpenGL, which I predict in this case would improve performance significantly, though not dramatically (but it's much harder to use).
Like I said, though, it will be primarily a logic issue, I think. There is something to be said for using PyOpenGL, but as they say, the greatest optimization you will ever make is when your code works for the first time.
| 2 | 0 | 0 |
I'm starting to work on a 2D scrolling shoot-em-up game, and I was wondering if pygame is suitable. I would like to hit close to 60 fps while animating a scrolling background with hundreds of sprites (mostly bullets, of course); is this feasible with pygame? From what I've read, I'm leaning toward no, but I'd like another opinion from someone with more experience with pygame.
I'm also looking at using PyOpenGL with pygame, but I have absolutely no experience with OpenGL. Will OpenGL work better in this case than native pygame graphics, and are there any good tutorials for OpenGL/PyOpenGL/using PyOpenGL with pygame?
|
Using Pygame to make scrolling shoot-em-up
| 1.2 | 0 | 0 | 1,238 |
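The answer's claim that per-frame logic, not rendering, will dominate can be checked with a quick timing sketch (the dict-based sprite structure is illustrative; actual numbers depend on hardware). At 60 fps each frame has roughly 16.7 ms of budget, so the update loop must fit well inside that:

```python
import timeit

def update_naive(sprites):
    # per-frame logic: move every sprite (plain attribute access in a loop)
    for s in sprites:
        s["x"] += s["vx"]
        s["y"] += s["vy"]

# a few hundred bullets, as in the question
sprites = [{"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.5} for _ in range(500)]

# average seconds per frame of pure-Python update logic
per_frame = timeit.timeit(lambda: update_naive(sprites), number=100) / 100
```

Comparing `per_frame` against the 0.0167 s frame budget shows how much headroom remains for rendering, collision checks, and the rest of the game loop.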
10,310,068 |
2012-04-25T05:53:00.000
| 0 | 0 | 0 | 0 |
python,pygame
| 10,315,791 | 2 | false | 0 | 1 |
Pygame is the best solution for 2D games in Python, in my opinion. You can cache Surfaces and use its optimized sprite animation, so I think it's the fastest solution, both for the development process and for code execution.
| 2 | 0 | 0 |
I'm starting to work on a 2D scrolling shoot-em-up game, and I was wondering if pygame is suitable. I would like to hit close to 60 fps while animating a scrolling background with hundreds of sprites (mostly bullets, of course); is this feasible with pygame? From what I've read, I'm leaning toward no, but I'd like another opinion from someone with more experience with pygame.
I'm also looking at using PyOpenGL with pygame, but I have absolutely no experience with OpenGL. Will OpenGL work better in this case than native pygame graphics, and are there any good tutorials for OpenGL/PyOpenGL/using PyOpenGL with pygame?
|
Using Pygame to make scrolling shoot-em-up
| 0 | 0 | 0 | 1,238 |
10,315,069 |
2012-04-25T11:55:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,deployment
| 12,338,986 | 6 | false | 1 | 0 |
Add the --oauth2 flag to appcfg.py update for an easier fix
| 4 | 11 | 0 |
When I try to deploy my app I get the following error:
Starting update of app: flyingbat123, version: 0-1
Getting current resource limits.
Password for avigmati: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion
self.options.max_size)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload
resource_limits = GetResourceLimits(self.rpcserver, self.config)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits
resource_limits.update(GetRemoteResourceLimits(rpcserver, config))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits
version=config.version)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate
auth_token = self._GetAuthToken(credentials[0], credentials[1])
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken
e.headers, response_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__
self.reason = args["Error"]
AttributeError: can't set attribute
2012-04-25 19:30:15 (Process exited with code 1)
The following is my app.yaml:
application: flyingbat123
version: 0-1
runtime: python
api_version: 1
threadsafe: no
It seems like an authentication error, but I'm entering a valid email and password.
What am I doing wrong?
|
GAE - Deployment Error: "AttributeError: can't set attribute"
| 0.033321 | 0 | 0 | 4,377 |
10,315,069 |
2012-04-25T11:55:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,deployment
| 12,912,373 | 6 | false | 1 | 0 |
This also happens if your default_error value overlaps with your static_dirs in app.yaml.
| 4 | 11 | 0 |
When I try to deploy my app I get the following error:
Starting update of app: flyingbat123, version: 0-1
Getting current resource limits.
Password for avigmati: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion
self.options.max_size)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload
resource_limits = GetResourceLimits(self.rpcserver, self.config)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits
resource_limits.update(GetRemoteResourceLimits(rpcserver, config))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits
version=config.version)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate
auth_token = self._GetAuthToken(credentials[0], credentials[1])
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken
e.headers, response_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__
self.reason = args["Error"]
AttributeError: can't set attribute
2012-04-25 19:30:15 (Process exited with code 1)
The following is my app.yaml:
application: flyingbat123
version: 0-1
runtime: python
api_version: 1
threadsafe: no
It seems like an authentication error, but I'm entering a valid email and password.
What am I doing wrong?
|
GAE - Deployment Error: "AttributeError: can't set attribute"
| 0 | 0 | 0 | 4,377 |
10,315,069 |
2012-04-25T11:55:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,deployment
| 10,871,690 | 6 | false | 1 | 0 |
I had the same problem and after inserting logger.warn(body), I get this:
WARNING appengine_rpc.py:231 Error=BadAuthentication
Info=InvalidSecondFactor
The standard error message could have been more helpful, but this makes me wonder whether I should be using an application-specific password.
| 4 | 11 | 0 |
When I try to deploy my app I get the following error:
Starting update of app: flyingbat123, version: 0-1
Getting current resource limits.
Password for avigmati: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion
self.options.max_size)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload
resource_limits = GetResourceLimits(self.rpcserver, self.config)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits
resource_limits.update(GetRemoteResourceLimits(rpcserver, config))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits
version=config.version)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate
auth_token = self._GetAuthToken(credentials[0], credentials[1])
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken
e.headers, response_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__
self.reason = args["Error"]
AttributeError: can't set attribute
2012-04-25 19:30:15 (Process exited with code 1)
The following is my app.yaml:
application: flyingbat123
version: 0-1
runtime: python
api_version: 1
threadsafe: no
It seems like an authentication error, but I'm entering a valid email and password.
What am I doing wrong?
|
GAE - Deployment Error: "AttributeError: can't set attribute"
| 0.033321 | 0 | 0 | 4,377 |
10,315,069 |
2012-04-25T11:55:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,deployment
| 12,750,238 | 6 | false | 1 | 0 |
I know this doesn't answer the OP question, but it may help others who experience problems using --oauth2 mentioned by others in this question.
I have 2-step verification enabled, and I had been using the application-specific password, but found it tedious to look up and paste the long string every day or so. I found that using --oauth2 returns
This application does not exist (app_id=u'my-app-id')
but by adding the --no_cookies option
appcfg.py --oauth2 --no_cookies update my-app-folder\
I can now authenticate each time by just clicking [Allow access] in the browser window that is opened.
I'm using Python SDK 1.7.2 on Windows 7
NOTE: I found this solution elsewhere, but I can't remember where, so I can't properly attribute it. Sorry.
| 4 | 11 | 0 |
When I try to deploy my app I get the following error:
Starting update of app: flyingbat123, version: 0-1
Getting current resource limits.
Password for avigmati: Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 125, in
run_file(__file__, globals())
File "C:\Program Files (x86)\Google\google_appengine\appcfg.py", line 121, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4062, in
main(sys.argv)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 4053, in main
result = AppCfgApp(argv).Run()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2543, in Run
self.action(self)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3810, in __call__
return method()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 3006, in Update
self.UpdateVersion(rpcserver, self.basepath, appyaml)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2995, in UpdateVersion
self.options.max_size)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 2122, in DoUpload
resource_limits = GetResourceLimits(self.rpcserver, self.config)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 355, in GetResourceLimits
resource_limits.update(GetRemoteResourceLimits(rpcserver, config))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appcfg.py", line 326, in GetRemoteResourceLimits
version=config.version)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 379, in Send
self._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 437, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 281, in _Authenticate
auth_token = self._GetAuthToken(credentials[0], credentials[1])
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 233, in _GetAuthToken
e.headers, response_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 94, in __init__
self.reason = args["Error"]
AttributeError: can't set attribute
2012-04-25 19:30:15 (Process exited with code 1)
The following is my app.yaml:
application: flyingbat123
version: 0-1
runtime: python
api_version: 1
threadsafe: no
It seems like an authentication error, but I'm entering a valid email and password.
What am I doing wrong?
|
GAE - Deployment Error: "AttributeError: can't set attribute"
| 0.066568 | 0 | 0 | 4,377 |
10,315,232 |
2012-04-25T12:05:00.000
| 0 | 1 | 0 | 1 |
python,linux,eclipse,pydev
| 10,343,117 | 1 | true | 1 | 0 |
I don't really think there's anything that can be done on the PyDev side... it seems @sys is resolved based on the kind of process you're running (not your system), so, if you use a 64 bit vm (I think) it should work...
Other than that, you may have to provide the actual path instead of using @sys...
| 1 | 2 | 0 |
I'm working in a multiuser environment with the following setup:
Linux 64bits environment (users can login in to different servers).
Eclipse (IBM Eclipse RSA-RTE) 32bits. So Java VM, Eclipse and PyDev is 32bits.
Python 3 interpreter is only available for 64bits at this moment.
In the preferences for PyDev, I want to set the path to the Python interpreter like this:
/app/python/@sys/3.2.2/bin/python
In Eclipse/PyDev, @sys points to i386_linux26 even if the system actually is amd64_linux26. So if I do not explicitly write amd64_linux26 instead of @sys, PyDev will not be able to find the Python 3 interpreter which is only available for 64bits. The link works as expected outside Eclipse/PyDev, e.g. in the terminal.
Any ideas how to force Eclipse/PyDev to use the real value of @sys?
Thanks in advance!
|
Eclipse / PyDev overrides @sys, cannot find Python 64bits interpreter
| 1.2 | 0 | 0 | 321 |
10,315,257 |
2012-04-25T12:07:00.000
| 0 | 0 | 0 | 0 |
python,openerp
| 10,324,086 | 2 | false | 1 | 0 |
If your purpose is to debug, the simplest solution is to add print statements in your code and then run the server in a console.
| 1 | 1 | 0 |
In OpenERP, I'm working on a dummy function that (for example) returns the sum of a certain field on selected records.
For instance, you select 3 invoices and it returns the sum of the quantity in the invoice lines. I think the function to perform the sum is correct, and even if it wasn't, I just need help in displaying the result of the function, when called, in a popup box. For that, I've added an action similar to "Confirm Invoices" found in the invoice object.
To make myself clearer: when Confirm Invoices is pressed, its function is called and the popup previously opened is of course closed because of this line found in the function: return {'type': 'ir.actions.act_window_close'}
How can I tell my function instead (of closing) to display the result stored after executing the function?
|
openerp echo the return result of a function
| 0 | 0 | 0 | 517 |
10,315,662 |
2012-04-25T12:31:00.000
| 4 | 1 | 0 | 1 |
python,gdb,debug-symbols,activepython,mingw-w64
| 10,323,635 | 3 | false | 0 | 0 |
The best way to create a debug version of Python under Windows is to use the Debug build in the Visual Studio projects that come with the Python source, using the compiler version needed for the specific Python release, i.e. VS 2008.
There may be other ways, but this is certainly the best way.
If you really need a 64-bit debug build also, the best way is to buy a copy of VS 2008 (i.e. not use the Express version). It may be possible to create an AMD64 debug build using the SDK 64-bit compiler, but again, using the officially-supported procedures is the best way.
| 1 | 12 | 0 |
Firstly, I should state that my current development environment is MSYS + mingw-w64 + ActivePython under Windows 7 and that on a normal day I am primarily a Linux developer. I am having no joy obtaining, or compiling, a version of the Python library with debug symbols.
I need both 32bit and 64bit debug versions of the Python27.dll file, ideally. I want to be able to embed Python and implement Python extensions in C++, and be able to call upon a seamless debugging facility using the gdb-7.4 I have built for mingw-w64, and WingIDE for the pure Python side of things.
Building Python 2.7.3 from source with my mingw-w64 toolchain is proving too problematic -- and before anyone flames me for trying: I acknowledge that this environment is unsupported, but I thought I might be able to get this working with a few judicious patches (hacks) and:
make OPT='-g -DMS_WIN32 -DWIN32 -DNDEBUG -D_WINDOWS -DUSE_DL_EXPORT'
I was wrong... I gave up at posixmodule.c since the impact of my changes became uncertain; ymmv.
I have tried building with Visual C++ 2010 Express but being primarily a Linux developer the culture-shock is too much for me to bear today; the Python project does not even import successfully. Apparently, I need Visual C++ 2008, yet I am already convinced I don't want to go down this road if at all possible...
It's really surprising to me that there is not a zip-file providing the requisite .dlls somewhere on the Internet. ActiveState should really provide these as an optional download with each release of ActivePython that they make -- perhaps that's where the paid support comes in ;-).
What is the best way to obtain the Python debug library files given my environment?
|
How to obtain pre-built *debug* version of Python library (e.g. Python27_d.dll) for Windows
| 0.26052 | 0 | 0 | 12,543 |
10,317,114 |
2012-04-25T13:50:00.000
| 5 | 0 | 0 | 0 |
python,database,django,postgresql
| 19,072,541 | 8 | false | 0 | 0 |
Had the same problem.
There were not any locks on the table.
Reboot helped.
| 4 | 42 | 0 |
I'm trying to drop a few tables with the "DROP TABLE" command, but for an unknown reason the program just "sits" and doesn't delete the table that I want it to in the database.
I have 3 tables in the database:
Product, Bill and Bill_Products which is used for referencing products in bills.
I managed to delete/drop Product, but I can't do the same for bill and Bill_Products.
I'm issuing the same "DROP TABLE Bill CASCADE;" command but the command line just stalls. I've also used the simple version without the CASCADE option.
Do you have any idea why this is happening?
Update:
I've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.
So, for that matter, I issued a simple SELECT * FROM Bill_Products and after a few (10-15) seconds (strange, because I don't think it's normal for it to take that long on an empty table) it printed out the table and its contents, which are none. (So apparently there are no references left from Products to Bill.)
|
Postgresql DROP TABLE doesn't work
| 0.124353 | 1 | 0 | 55,426 |
10,317,114 |
2012-04-25T13:50:00.000
| 2 | 0 | 0 | 0 |
python,database,django,postgresql
| 40,749,694 | 8 | false | 0 | 0 |
Old question, but I ran into a similar issue. I could not reboot the database, so I tested a few things until this sequence worked:
truncate table foo;
drop index concurrently foo_something; times 4-5x
alter table foo drop column whatever_foreign_key; times 3x
alter table foo drop column id;
drop table foo;
| 4 | 42 | 0 |
I'm trying to drop a few tables with the "DROP TABLE" command but for an unknown reason, the program just "sits" and doesn't delete the table that I want it to in the database.
I have 3 tables in the database:
Product, Bill and Bill_Products which is used for referencing products in bills.
I managed to delete/drop Product, but I can't do the same for bill and Bill_Products.
I'm issuing the same "DROP TABLE Bill CASCADE;" command but the command line just stalls. I've also used the simple version without the CASCADE option.
Do you have any idea why this is happening?
Update:
I've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.
So, for that matter I issued a simple SELECT * FROM Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
|
Postgresql DROP TABLE doesn't work
| 0.049958 | 1 | 0 | 55,426 |
10,317,114 |
2012-04-25T13:50:00.000
| 0 | 0 | 0 | 0 |
python,database,django,postgresql
| 69,412,889 | 8 | false | 0 | 0 |
The same thing happened for me--except that it was because I forgot the semicolon. face palm
| 4 | 42 | 0 |
I'm trying to drop a few tables with the "DROP TABLE" command but for an unknown reason, the program just "sits" and doesn't delete the table that I want it to in the database.
I have 3 tables in the database:
Product, Bill and Bill_Products which is used for referencing products in bills.
I managed to delete/drop Product, but I can't do the same for bill and Bill_Products.
I'm issuing the same "DROP TABLE Bill CASCADE;" command but the command line just stalls. I've also used the simple version without the CASCADE option.
Do you have any idea why this is happening?
Update:
I've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.
So, for that matter I issued a simple SELECT * FROM Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
|
Postgresql DROP TABLE doesn't work
| 0 | 1 | 0 | 55,426 |
10,317,114 |
2012-04-25T13:50:00.000
| 4 | 0 | 0 | 0 |
python,database,django,postgresql
| 60,367,779 | 8 | false | 0 | 0 |
I ran into this today, I was issuing a:
DROP TABLE TableNameHere
and getting ERROR: table "tablenamehere" does not exist. I realized that for case-sensitive tables (as was mine), you need to quote the table name:
DROP TABLE "TableNameHere"
| 4 | 42 | 0 |
I'm trying to drop a few tables with the "DROP TABLE" command but for an unknown reason, the program just "sits" and doesn't delete the table that I want it to in the database.
I have 3 tables in the database:
Product, Bill and Bill_Products which is used for referencing products in bills.
I managed to delete/drop Product, but I can't do the same for bill and Bill_Products.
I'm issuing the same "DROP TABLE Bill CASCADE;" command but the command line just stalls. I've also used the simple version without the CASCADE option.
Do you have any idea why this is happening?
Update:
I've been thinking that it is possible for the databases to keep some references from products to bills and maybe that's why it won't delete the Bill table.
So, for that matter I issued a simple SELECT * FROM Bill_Products and after a few (10-15) seconds (strangely, because I don't think it's normal for it to take such a long time on an empty table) it printed out the table and its contents, which are none (so apparently there are no references left from Products to Bill).
|
Postgresql DROP TABLE doesn't work
| 0.099668 | 1 | 0 | 55,426 |
10,317,570 |
2012-04-25T14:15:00.000
| 3 | 0 | 1 | 1 |
python,multithreading,terminal,cpu,cpu-usage
| 10,323,602 | 1 | true | 0 | 0 |
One reason for this might be the use of hyper-threading. HT logical CPUs appear to the operating system as separate CPUs, but really are not. So if two threads run on the same core in separate logical (HT) CPUs, performance would be smaller than if they ran on separate cores.
The easiest solution might be to disable hyper-threading. If that is not an option, use processor affinity to pin each Python process to its separate CPU.
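On Linux, pinning a process can be sketched with `os.sched_setaffinity` (a Linux-only API; the CPU number here is illustrative, and in practice each worker would get a distinct core):

```python
import os

# Pin the calling process (pid 0 means "self") to logical CPU 0.
os.sched_setaffinity(0, {0})

# Confirm the new affinity mask of this process.
print(os.sched_getaffinity(0))  # {0}
```

On Windows or macOS this API is unavailable, so a tool such as the Task Manager's affinity dialog or a third-party library would be needed instead.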
| 1 | 2 | 0 |
I just bought a new machine to run Python scripts for large-scale modeling. It has two CPUs with 4 cores each (Xeon, 2.8 GHz). Each core has hyper-threading enabled for 4 logical CPU cores.
Now for the problem: When I run identical Python processes in 8 separate terminals, the top command shows that each process is taking 100% of the CPU. However, the process in terminal 1 is running about 4 times slower than the process in terminal 8. This seems odd to me...
I wonder if it has something to do with how the processes are scheduled on the various (logical?) cores? Does anyone have an idea of how I could get all of them to run at about the same speed?
EDIT (in response to larsmans): Good point. The script is a giant loop that runs about 10,000 times. Each loop reads in a text file (500 lines) and runs some basic calculations on the quantities read in. While the loop runs, it uses about 0.2% of memory. There is no writing to disk during the loop. I could understand that read access could be a limiting factor, but I am perplexed by the fact that the first process would be the slowest if that were the case. I would have expected the processes to get slower as I start more of them...
I timed the processes a couple of times using the time command in the terminal.
EDIT2: I just found out that sometimes a single core is designated to handle all reading and writing - so multiple processes (even if they run on separate cores) will use one single core for all the I/O... This would however only affect one of the cores, not cause all to have various processing speeds...
|
Large Variations in Identical Python Process Run Times
| 1.2 | 0 | 0 | 152 |
10,317,632 |
2012-04-25T14:18:00.000
| 1 | 1 | 0 | 0 |
c++,python,architecture,thrift
| 10,328,853 | 2 | true | 1 | 0 |
you could consider:
already mentioned CORBA solution: built in marshaling, compact binary protocol
REST http and based json server: simple, a bit chatty on the network, you need to serialize your data to json
AMQP messaging + JSON or some other serializer: you need to serialize your data to JSON or something else like Google Protocol Buffers; a plus is that scaling will be simpler if you need more servers.
| 2 | 0 | 0 |
I am building an application, which has an application based front-end in C++/Qt and a web based front-end in Python (using Django) framework. I'm trying to migrate the architecture to services-based, as both these front-ends have business logic embedded in them, which makes it hard to maintain.
I'm thinking of choosing Thrift to write the RPC services, which can be consumed by the other modules in the system and by Python code. However, it seems Thrift does not work well on Windows, so I'm left with the option of converting the Thrift output to some C++ structures, which then need to be serialized/de-serialized again so that the services can be consumed by Qt/C++. Python code can consume these Thrift services easily.
In this process, I need to convert/serialize the structure, first according to the Thrift IDL and then some custom code. Any suggestions to change the architecture, so as to
keep it simple
works with multiple languages
quick to implement?
|
migrating business logic to services: alternatives to Thrift
| 1.2 | 0 | 0 | 254 |
10,317,632 |
2012-04-25T14:18:00.000
| 1 | 1 | 0 | 0 |
c++,python,architecture,thrift
| 10,317,981 | 2 | false | 1 | 0 |
I've implemented something similar using omniORB. It has bindings for python and for C++. It's really easy in python and performs very well.
| 2 | 0 | 0 |
I am building an application, which has an application based front-end in C++/Qt and a web based front-end in Python (using Django) framework. I'm trying to migrate the architecture to services-based, as both these front-ends have business logic embedded in them, which makes it hard to maintain.
I'm thinking of choosing Thrift to write the RPC services, which can be consumed by the other modules in the system and by Python code. However, it seems Thrift does not work well on Windows, so I'm left with the option of converting the Thrift output to some C++ structures, which then need to be serialized/de-serialized again so that the services can be consumed by Qt/C++. Python code can consume these Thrift services easily.
In this process, I need to convert/serialize the structure, first according to the Thrift IDL and then some custom code. Any suggestions to change the architecture, so as to
keep it simple
works with multiple languages
quick to implement?
|
migrating business logic to services: alternatives to Thrift
| 0.099668 | 0 | 0 | 254 |
10,319,478 |
2012-04-25T16:08:00.000
| 0 | 0 | 1 | 0 |
python,arcmap
| 12,737,728 | 3 | false | 0 | 0 |
Try
len(!Name!.split(" "))
If that doesn't work...let us know which feature it fails on and maybe more sample data?
| 2 | 1 | 0 |
I am a Python newbie and I am trying to count the number of words in a column (Name) in ArcMap by using
!NAME!.count(' ') + 1
but I run into problems with strings like:
First N' Infant Care Center "Baby World"
which raise exceptions.SyntaxError,
even if I use " ". I encounter the same problem when I use other methods like split, strip, etc.
|
Python- ArcMap - Calculate Fields
| 0 | 0 | 0 | 448 |
10,319,478 |
2012-04-25T16:08:00.000
| 0 | 0 | 1 | 0 |
python,arcmap
| 71,880,005 | 3 | false | 0 | 0 |
Python expressions in the field calculator cannot easily handle mixed double and single quotes, so it's best if you first remove them.
One way to do this is to add another field (say newName, calculate it to have the same values as "Name" field, by doing just !NAME!. I am assuming you don't want to alter the Name field.
Then within editing mode, use find and replace to replace all quotes " or ' in that new column with nothing (just don't type anything in the replace and run replace all).
Now if you use that same approach you used with this new column/field, the problem won't occur.
!newName!.count(' ') + 1
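Outside ArcMap, the counting itself is one line of Python; `str.split()` with no argument splits on any run of whitespace and is indifferent to embedded quote characters (the sample string is the one from the question):

```python
def word_count(name):
    # split() with no argument splits on runs of whitespace,
    # so quotes inside the string never confuse the count.
    return len(name.split())

print(word_count('First N\' Infant Care Center "Baby World"'))  # 7
```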
| 2 | 1 | 0 |
I am python newbie and I am trying to count the number of words in a column (Name) in ArcMap by using
!NAME!.count(' ') +
1
but I run into problems with strings like :
First N' Infant Care Center "Baby World"
type.exceptions.Syntaxerror,
even if I use " ",same problem I encounter when I am using other methods like split, strip etc.
|
Python- ArcMap - Calculate Fields
| 0 | 0 | 0 | 448 |
10,319,696 |
2012-04-25T16:19:00.000
| 6 | 0 | 1 | 0 |
java,.net,python,regex,perl
| 10,319,727 | 8 | false | 0 | 0 |
I suspect you want to be using negative lookahead: (.)\1{N-1}(?!\1).
But that said...I suspect the simplest cross-language solution is just write it yourself without using regexes.
UPDATE:
^(.)\\1{3}(?!\\1)|(.)(?<!(?=\\2)..)\\2{3}(?!\\2) works for me more generally, including matches starting at the beginning of the string.
| 1 | 8 | 0 |
How do I write an expression that matches exactly N repetitions of the same character (or, ideally, the same group)? Basically, what (.)\1{N-1} does, but with one important limitation: the expression should fail if the subject is repeated more than N times. For example, given N=4 and the string xxaaaayyybbbbbzzccccxx, the expressions should match aaaa and cccc and not bbbb.
I'm not focused on any specific dialect, feel free to use any language. Please do not post code that works for this specific example only, I'm looking for a general solution.
|
Match exactly N repetitions of the same character
| 1 | 0 | 0 | 832 |
10,319,920 |
2012-04-25T16:32:00.000
| 0 | 0 | 0 | 0 |
python,r,open-source
| 10,335,341 | 2 | false | 0 | 0 |
I think you mean IPython with pylab. This is a real alternative, but to Matlab rather than to R.
| 1 | 2 | 0 |
I remember reading about an alternative to GNU R (statistical research system) using Python. I've googled around a bit to find it, but can't seem to. Can you point me in the right direction?
|
Python alternative for GNU R
| 0 | 0 | 0 | 754 |
10,321,568 |
2012-04-25T18:25:00.000
| -1 | 0 | 0 | 0 |
python,django
| 20,124,244 | 4 | false | 1 | 0 |
sudo apt-get install python-psycopg2 should work fine since it worked solution for me as well.
| 2 | 6 | 0 |
I've been reading the Django Book and it's great so far, except when something doesn't work properly. I have been trying for two days to install the psycopg2 plugin with no luck.
I navigate to the unzipped directory and run setup.py install and it returns "You must have postgresql dev for building a serverside extension or libpq-dev for client side."
I don't know what any of this means, and Google returns results tossing around a lot of terms I don't really understand.
I've been trying to learn Django for about a week now, plus Linux, so any help would be great. Thanks.
By the way, I have installed PostgreSQL and pgAdmin III from the installer pack.
I also tried sudo apt-get post.... and some stuff happens... but I'm lost.
|
Django with psycopg2 plugin
| -0.049958 | 1 | 0 | 4,157 |
10,321,568 |
2012-04-25T18:25:00.000
| 3 | 0 | 0 | 0 |
python,django
| 22,528,687 | 4 | false | 1 | 0 |
I'm working on Xubuntu (12.04) and encountered the same error when I wanted to install django-toolbelt. I solved it with the following operations:
sudo apt-get install python-dev
sudo apt-get install libpq-dev
sudo apt-get install python-psycopg2
I hope this information may be helpful for someone else.
| 2 | 6 | 0 |
Ive been reading the Django Book and its great so far, unless something doesn't work properly. I have been trying for two days to install the psycogp2 plugin with no luck.
i navigate to the unzipped directory and run setup.py install and it returns "You must have postgresql dev for building a serverside extension or libpq-dev for client side."
I don't know what any of this means, and google returns results tossing a lot of terms I don't really understand.
Ive been trying to learn django for abut a week now plus linux so any help would be great. Thanks
Btw, I have installed postgresql and pgadminIII from installer pack.
I also tried sudo apt-get post.... and some stuff happens...but Im lost.
|
Django with psycopg2 plugin
| 0.148885 | 1 | 0 | 4,157 |
10,322,422 |
2012-04-25T19:27:00.000
| 5 | 0 | 0 | 1 |
python,database,multiprocessing,signals
| 10,322,481 | 1 | true | 0 | 0 |
Store all the open files/connections/etc. in a global structure, and close them all and exit in your SIGTERM handler.
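A minimal sketch of that idea (all names are mine; `open_resources` stands in for whatever files and DB connections each worker opens):

```python
import signal
import sys

open_resources = []  # global registry: open files, DB connections, etc.

def handle_sigterm(signum, frame):
    # Close everything we know about, then exit the worker cleanly.
    for res in open_resources:
        try:
            res.close()  # file objects and DB-API connections both expose close()
        except Exception:
            pass  # best effort: never let cleanup block shutdown
    sys.exit(0)

# Each child process installs the handler right after it starts.
signal.signal(signal.SIGTERM, handle_sigterm)
```

With DB-API connections you may want a `rollback()` before `close()` so half-finished transactions are discarded rather than left dangling.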
| 1 | 3 | 0 |
I have a daemon process which spawns child processes using multiprocessing to do some work; each child process opens its own connection handle to the DB (Postgres in my case). Jobs are passed to the processes via a Queue, and if the queue is empty the processes sleep for some time and then recheck the queue.
How can I implement a "graceful shutdown" on SIGTERM? Each subprocess should terminate as fast as possible, while properly closing/terminating its current cursor/transaction and DB connection, and any open files.
|
Graceful shutdown, close db connections, opened files, stop work on SIGTERM, in multiprocessing
| 1.2 | 1 | 0 | 403 |
10,322,424 |
2012-04-25T19:27:00.000
| 2 | 0 | 1 | 0 |
python,configuration,pycharm
| 51,472,500 | 6 | false | 0 | 0 |
Quick Answer:
File --> Settings
In the left-hand Project section --> Project Interpreter
Select the desired project interpreter
Apply + OK
[NOTE]:
Tested on Pycharm 2018 and 2017.
| 2 | 124 | 0 |
I have PyCharm 1.5.4 and have used the "Open Directory" option to open the contents of a folder in the IDE.
I have Python version 3.2 selected (it shows up under the "External Libraries" node).
How can I select another version of Python (that I already have installed on my machine) so that PyCharm uses that version instead?
|
How to select Python version in PyCharm?
| 0.066568 | 0 | 0 | 203,082 |
10,322,424 |
2012-04-25T19:27:00.000
| 4 | 0 | 1 | 0 |
python,configuration,pycharm
| 26,644,056 | 6 | false | 0 | 0 |
This can also happen in Intellij Ultimate, which has PyCharm integrated. The issue is as diagnosed above, you have the wrong interpreter selected.
The exact method to fix this for any given project is to go to Project Settings...Project and adjust the Project SDK. You can add a New Project SDK if you don't have Python 3 added by navigating to the python3 binary. This will fix the errors listed above. A shortcut to Project Settings is the blue checkerboard-type icon.
You can also add Python 3 as the default interpreter for Python projects. On OSX this is in File..Other Settings...Default Project Structure. There you can set the Project SDK which will now apply on each new project. It can be different on other platforms, but still similar.
| 2 | 124 | 0 |
I have PyCharm 1.5.4 and have used the "Open Directory" option to open the contents of a folder in the IDE.
I have Python version 3.2 selected (it shows up under the "External Libraries" node).
How can I select another version of Python (that I already have installed on my machine) so that PyCharm uses that version instead?
|
How to select Python version in PyCharm?
| 0.132549 | 0 | 0 | 203,082 |
10,322,632 |
2012-04-25T19:43:00.000
| 0 | 0 | 1 | 0 |
django,virtualenv,documentation-generation,python-sphinx
| 10,322,795 | 1 | true | 1 | 0 |
The API documentation for your code can only be generated with proper access to your code, so the answer will be "no, you'll need to have them both in the same virtualenv".
Some extra thoughts:
If your code virtualenv isn't isolated from the system's python packages, you could install sphinx globally, but you probably don't and shouldn't want that.
I'd just add sphinx to your code's virtualenv. I don't think you'll have to worry about extra overhead of a few extra kilobytes.
| 1 | 1 | 0 |
I have multiple Django projects running different Django versions, each in its own virtualenv. I want to use the sphinx-api-doc command to generate API docs for the Django projects. However, I don't want to install Sphinx directly in the system and would like to install it in a separate virtualenv.
Since only one virtualenv can be activated at a time, I am not able to use sphinx-api-doc. Is there a way to use sphinx-api-doc with Sphinx and Django in independent virtualenvs, or is installing Sphinx directly in the system the only way to go?
|
Using sphinx-api-doc when both sphinx and django are in multiple virtualenv
| 1.2 | 0 | 0 | 388 |
10,322,938 |
2012-04-25T20:05:00.000
| 3 | 0 | 1 | 0 |
python
| 10,322,992 | 4 | true | 0 | 0 |
If it's exactly that format, you could just print out line[3:7]
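A sketch of that slice, plus a slightly more defensive regex variant (the regex is my addition), assuming every line really starts with `/MM` followed by four digits:

```python
import re

line = "/MM0001 (Table(12,))"

# Fixed-format approach: the digits are always characters 4-7.
print(line[3:7])  # 0001

# Defensive variant: anchor on the /MM prefix so malformed lines yield None.
m = re.match(r"/MM(\d{4})", line)
print(m.group(1) if m else None)  # 0001
```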
| 2 | 0 | 0 |
I have a long text file where each line looks something like /MM0001 (Table(12,)) or /MM0015 (Table(11,)). I want to keep only the four-digit number next to /MM. If it weren't for the "Table(12,)" part I could just strip all the non-numeric characters, but I don't know how to extract only the four-digit numbers. Any advice on getting started?
|
Removing selected characters from text file
| 1.2 | 0 | 0 | 460 |
10,322,938 |
2012-04-25T20:05:00.000
| 2 | 0 | 1 | 0 |
python
| 10,322,983 | 4 | false | 0 | 0 |
You could parse the text line by line and then use the 4th to 7th characters of each line:
line[3:7]
| 2 | 0 | 0 |
I have long a text file where each line looks something like /MM0001 (Table(12,)) or /MM0015 (Table(11,)). I want to keep only the four-digit number next to /MM. If it weren't for the "table(12,)" part I could just strip all the non-numeric characters, but I don't know how to extract the four-digit numbers only. Any advice on getting started?
|
Removing selected characters from text file
| 0.099668 | 0 | 0 | 460 |
10,325,072 |
2012-04-25T23:09:00.000
| 1 | 1 | 0 | 0 |
python,graph,hadoop,graph-theory
| 11,112,245 | 2 | false | 0 | 0 |
Streaming out to a scripting language is not yet supported but certainly would be a good addition. Patches welcome.
| 1 | 6 | 0 |
Is Python supported on Giraph, and if it is, is it as well supported as Python is on Hadoop, or will it lead to considerably worse performance than using raw Java?
|
Can I use python with giraph?
| 0.099668 | 0 | 0 | 2,017 |
10,325,418 |
2012-04-25T23:52:00.000
| 1 | 0 | 1 | 0 |
python,collision-detection,pygame
| 11,217,539 | 1 | false | 0 | 1 |
You could make it so that when your character hits a block, they move up at the current speed until they are no longer colliding with the polygon. That way, when you hit the ground from above, you don't go downward through it, but when you hit the bottom, you do. I would recommend a while loop conditioned on the collide function.
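That while loop can be sketched without pygame; the `(x, y, w, h)` tuples below are hypothetical stand-ins for pygame `Rect` objects:

```python
def overlaps(a, b):
    # Axis-aligned box intersection for (x, y, w, h) tuples.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def resolve_landing(player, floor, step=1):
    # Push the player upward one step at a time until it no longer
    # intersects the floor, so it lands on top instead of sinking through.
    px, py, pw, ph = player
    while overlaps((px, py, pw, ph), floor):
        py -= step
    return (px, py, pw, ph)

print(resolve_landing((0, 95, 10, 10), (0, 100, 100, 5)))  # (0, 90, 10, 10)
```

Only run this resolution when the player is moving downward; skipping it while the player is moving upward is what lets them jump up through the platform.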
| 1 | 0 | 0 |
I am trying to implement a Mario type plat-former in pyGame. I have Collision detection working with Polygons no problem. I am curious how I can get the player to be able to jump through the floor above him, which is a polygon floating in air.
What is the theory on how to handle that?
|
Jump Through Polygon/Floor Collision Detection
| 0.197375 | 0 | 0 | 228 |
10,325,600 |
2012-04-26T00:15:00.000
| 0 | 0 | 1 | 0 |
python,pywin32,robot
| 10,326,072 | 1 | false | 0 | 1 |
Does this problem occur with every Python script you run? Based on your problem description it is not certain whether this problem is specific to the script you are trying to run or whether it is a general issue.
What I would do is try running a built-in demo script first, to test and verify that you have the full set of libraries installed and that it runs fine. Once you have verified the demo script runs as expected, you can look at your own file to see what is wrong. If you cannot run the demo script successfully, you will need to completely re-install PyWin to get it fully functional.
| 1 | 0 | 0 |
So when I try to run a file in PyWin, it opens an edit window instead. The first couple of times it did this, I assumed it was due to some syntax errors in the file, but after I fixed them, it continued to open an edit window each time. This particular file has a lot of defined functions in it, if that helps at all.
|
Trouble Running a File in PyWin
| 0 | 0 | 0 | 118 |
10,327,804 |
2012-04-26T05:40:00.000
| 1 | 1 | 0 | 1 |
python,macos,unix,filesystems,osx-lion
| 10,327,842 | 2 | false | 0 | 0 |
os.link claims to work on all Unix platforms. Are there any OS X specific issues with it?
| 1 | 0 | 0 |
I'm assuming with a call to a UNIX shell, but I was wondering if there are other options from within Python.
|
How to create a hard link from within a Python script on a Mac?
| 0.099668 | 0 | 0 | 2,229 |
10,328,943 |
2012-04-26T07:24:00.000
| 0 | 0 | 0 | 0 |
python-3.x,pixels
| 10,331,335 | 1 | false | 0 | 1 |
You need to use some sort of cross-platform GUI toolkit, such as GTK or KDE, maybe Tk or wx will work as well, I don't know.
How you then do it depends on what toolkit you choose.
| 1 | 1 | 0 |
I am using Python 3 on Windows 7. I want to grab all the attributes, like color and color intensity, of all the pixels of the screen area that I select with the mouse. The selection can be of any shape, but right now rectangular and square will do.
I want to do it in any area of the screen.
Can you guys please guide me on how to do that in Python?
PS: If the method can work across all platforms, that would be much appreciated.
Thanks,
Aashiq
|
Grabbing pixel attributes in Python
| 0 | 0 | 0 | 185 |
10,329,486 |
2012-04-26T08:09:00.000
| 1 | 0 | 0 | 0 |
python,sql,sql-injection
| 10,329,694 | 4 | false | 0 | 0 |
I don't know if this is in any way applicable, but I am just putting it up here for completeness and experts can downvote me at will... not to mention I have concerns about its performance in some cases.
I was once tasked with protecting an aging web app written in classic ASP against SQL injection (they were getting hit pretty badly at the time).
I didn't have time to go through all the code (not my choice), so I added a method to one of our standard include files that looked at everything being submitted by the user (iterated through request params) and checked it for blacklisted HTML tags (e.g. script tags) and signs of SQL injection (e.g. ";--" and "';shutdown").
If it found one, it redirected the user and told them their submission was suspicious and that if they had an issue they should call or email... blah blah.
It also recorded the injection attempt in a table (once it had been escaped), along with details such as the IP address and time of the attack.
Overall it worked a treat... at least the attacks stopped.
Every web technology I have used has some way of fudging something like this in, and it only took me about a day to develop and test.
hope it helps, I would not call it an industry standard or anything
tl;dr?:
Check all request params against a blacklist of strings
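A rough Python sketch of such a parameter scan (the blacklist entries and function name are illustrative only; treat this as a tripwire, not a substitute for parameter binding):

```python
BLACKLIST = [";--", "';shutdown", "<script"]  # illustrative signatures only

def looks_suspicious(params):
    """params: dict mapping request parameter names to submitted values."""
    for value in params.values():
        lowered = str(value).lower()
        # Flag the request if any blacklisted signature appears anywhere.
        if any(sig in lowered for sig in BLACKLIST):
            return True
    return False

print(looks_suspicious({"q": "1';shutdown"}))  # True
print(looks_suspicious({"name": "alice"}))     # False
```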
| 4 | 0 | 0 |
I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:
I already do this, thank you.
I know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.
Even if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once
So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.
|
protecting against sql injection attacks beyond parameter binding
| 0.049958 | 1 | 0 | 621 |
10,329,486 |
2012-04-26T08:09:00.000
| 2 | 0 | 0 | 0 |
python,sql,sql-injection
| 10,336,420 | 4 | true | 0 | 0 |
I already do this, thank you.
Good; with just this, you can be totally sure (yes, totally sure) that user inputs are being interpreted only as values. You should direct your energies toward securing your site against other kinds of vulnerabilities (XSS and CSRF come to mind; make sure you're using SSL properly, et-cetera).
I know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.
Well, there's no such thing as foolproof because fools are so ingenious. If your audience is determined to undermine all of your hard work securing their data, you can't really do anything about it. What you can do is determine which drivers you believe are secure, and generate a big scary warning when you detect that your users are using something else.
Even if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once
So don't do that!
During development, log every sql statement sent to your driver. check, on a regular basis, that user data is never in this log (or logged as a separate event, for the parameters).
SQL injection is basically string formatting. You can usually follow each database transaction backwards to the original sql; if user data is formatted into that somewhere along the way, you have a problem. When scanning over projects, I find that I'm able to locate these at a rate of about one per minute, with effective use of grep and my editor of choice. unless you have tens of thousands of different sql statements, going over each one shouldn't really be prohibitively difficult.
Try to keep your database interactions well isolated from the rest of your application. Mixing sql in with the rest of your code makes it hard to maintain, or to do the checks I've described above. Ideally, you should go through some sort of database abstraction (a full ORM or maybe something thinner), so that you can work on just your database related code when that's the task at hand.
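The core point, that a bound parameter is handed to the driver as a value and never parsed as SQL, can be demonstrated with the stdlib `sqlite3` driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

malicious = "x'); DROP TABLE users; --"
# The ? placeholder sends the value out-of-band, so the payload
# is stored as an ordinary string instead of being executed.
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious,))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
print(stored == malicious)  # True: the table survives and the payload is inert
```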
| 4 | 0 | 0 |
I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:
I already do this, thank you.
I know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.
Even if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once
So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.
|
protecting against sql injection attacks beyond parameter binding
| 1.2 | 1 | 0 | 621 |
10,329,486 |
2012-04-26T08:09:00.000
| 0 | 0 | 0 | 0 |
python,sql,sql-injection
| 10,336,013 | 4 | false | 0 | 0 |
So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input.
This strategy is doomed to fail.
| 4 | 0 | 0 |
I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:
I already do this, thank you.
I know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.
Even if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once
So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.
|
protecting against sql injection attacks beyond parameter binding
| 0 | 1 | 0 | 621 |
10,329,486 |
2012-04-26T08:09:00.000
| -1 | 0 | 0 | 0 |
python,sql,sql-injection
| 10,329,550 | 4 | false | 0 | 0 |
Well, in PHP I use preg_replace to protect my website from being attacked by SQL injection; preg_match can also be used. Try searching for an equivalent function in Python.
| 4 | 0 | 0 |
I have done my homework in reading about protection against sql injection attacks: I know that I need to use parameter binding but:
I already do this, thank you.
I know that some of the db drivers my users use implement parameter binding in the most stupid possible way. i.e., they are prone to sql injection attacks. I could try to restrict which db driver they can use but, this strategy is doomed to fail.
Even if I use a decent db driver, I do not trust myself to not forget to use parameter binding at least once
So, I would like to add an extra layer of protection by adding extra sanitization of http-facing user input. The trick is that I know that this is hard to do in general so I would rather use a well-audited well-designed third-party library that was written by security professionals to escape input strings into less dangerous content but I could not find any obvious candidate. I use python so, I would be interested in python-based solutions but other suggestions are fine if I can bind them to python.
|
protecting against sql injection attacks beyond parameter binding
| -0.049958 | 1 | 0 | 621 |
10,331,518 |
2012-04-26T10:21:00.000
| 0 | 0 | 0 | 1 |
django,wxpython,sql-server-2008-r2,vmware,python-2.7
| 10,331,810 | 2 | true | 1 | 0 |
Maybe this could help you a bit, although my set-up is slightly different. I am running an ASP.NET web app developed on Windows7 via VMware fusion on OS X. I access the web app from outside the VM (browser of Mac or other computers/phones within the network).
Here are the needed settings:
Network adapter set to (Bridged), so that the VM has its own IP address
Configure the VM to have a static IP
At this point, the VM is acting as its own machine, so you can access it as if it were another server sitting on the network.
| 1 | 0 | 0 |
We have developed an application using Django 1.3.1 and Python 2.7.2, with SQL Server 2008 as the database. All of these are hosted on a VM running the Windows 2008 R2 operating system. The clients run Windows 7.
We developed the application without a VM in mind; all of a sudden the client has come back saying they can only host the application on a VM. Now the challenge is to access the application on the VM from the client machines.
If anyone has built this kind of application, please share the steps to access the application on the VM.
I am experienced with standalone systems but have no knowledge of VM accessibility.
We have finished the whole project and are waiting for someone to respond ASAP.
Thanks in advance for your guidance.
Regards,
Shiva.
|
Steps to access Django application hosted in VM from Windows 7 client
| 1.2 | 1 | 0 | 422 |
10,332,337 |
2012-04-26T11:18:00.000
| 4 | 0 | 0 | 0 |
python,eclipse,pydev,webfaction
| 10,332,409 | 1 | true | 1 | 0 |
Don't do that. Your host is for hosting. Your personal machine is for developing.
Edit and run your code locally. When it's ready, upload it to Webfaction. Don't edit code on your server.
| 1 | 2 | 0 |
This is my first time purchasing a hosting and I opted for Webfaction.com to host my Django application. So far, i've been using Eclipse to write all my code and manage my Django application and I'm not ready to use VIM as a text editor yet. Now my question is, how can I use Eclipse to write my code and manage all my files while being connected to my webfaction account?
|
Eclipse with Webfaction and Django
| 1.2 | 0 | 0 | 146 |
10,335,259 |
2012-04-26T14:12:00.000
| 1 | 1 | 0 | 1 |
python
| 10,335,348 | 1 | false | 0 | 0 |
The one in /usr/bin is in your PATH and can be executed by calling its filename in a shell.
The second one is in a library directory referenced by PYTHONPATH or sys.path and can be used as a module in Python scripts.
They are probably hard links or symlinks if they have the same content.
| 1 | 0 | 0 |
After installing Python on Linux, smtpd.py is installed under the /usr/bin directory. Why does this module exist here? What about the other one under the directory /usr/lib/python2.x? What's the difference?
|
Why two smtpd.py are installed?
| 0.197375 | 0 | 0 | 181 |
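The split the answer describes — executables found via PATH versus importable modules found via sys.path — can be inspected from Python itself. Note that smtpd was removed from the standard library in Python 3.12, so the sketch below uses the os module to show where importable copies live:

```python
import os
import shutil
import sysconfig

# Scripts such as /usr/bin/smtpd.py sit in a directory on PATH and are
# found by the shell; shutil.which performs the same PATH lookup.
print(shutil.which("python3"))           # e.g. /usr/bin/python3

# Importable copies live in the standard-library directory, which is on
# sys.path; any imported stdlib module reports its file location.
print(sysconfig.get_paths()["stdlib"])   # e.g. /usr/lib/python3.x
print(os.__file__)                       # the module actually imported
```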
10,336,582 |
2012-04-26T15:28:00.000
| 0 | 0 | 0 | 0 |
python,django
| 10,336,728 | 1 | true | 1 | 0 |
If I'm reading your question correctly, the first part asks how to make a stylesheet dynamic:

I am unable to figure out how to make my stylesheet dynamic for front
end

For that you could use template blocks.
Django's admin follows the convention of adding {% block extra_head %} (or something similarly named — sorry, I don't remember the specifics),
which is exactly what it sounds like: a block inside the <head> tag. This lets you load a stylesheet from any template. Just define that block in your base_site.html and implement it wherever you extend base_site.html.
But then at the end of your question it seems you want to define the stylesheet in one place and include it for every request:

My only aim is to how i define my app's stylesheet on one place and
applicable through out my application.

Perhaps you could set up a directive in your settings.py, e.g. DEFAULT_STYLESHEET, and include that in your base_site.html template inside the extra_head block. If you need to override it, just implement that block and voilà!
| 1 | 0 | 0 |
I am new to the Django framework, so kindly bear with me if my question is novice.
I have created a polls application using the Django framework. I am unable to figure out how to make my stylesheet dynamic for the front end. I don't want to call it in my base_site.html or index.html files, as I also have multiple views rendering different template files. My only aim is to define my app's stylesheet in one place and have it apply throughout my application.
|
Django Application Assign Stylesheet -- don't want to add it to app's index file? Can it be dynamic?
| 1.2 | 0 | 0 | 76 |
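The block-override approach from the answer can be sketched as two templates. The file names and the STATIC_URL variable below are illustrative, not taken from the asker's app:

```html
{# base_site.html -- every view's template extends this #}
<html>
<head>
    <title>{% block title %}Polls{% endblock %}</title>
    {% block extra_head %}
        {# default stylesheet, defined exactly once #}
        <link rel="stylesheet" href="{{ STATIC_URL }}css/site.css">
    {% endblock %}
</head>
<body>{% block content %}{% endblock %}</body>
</html>

{# index.html -- inherits the default stylesheet automatically #}
{% extends "base_site.html" %}
{% block content %}<h1>Latest polls</h1>{% endblock %}
```

A child template that needs a different stylesheet overrides {% block extra_head %}; all others get the default without repeating the link tag.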
10,337,029 |
2012-04-26T15:56:00.000
| 3 | 0 | 1 | 0 |
python,pip,pymssql,yolk
| 10,344,015 | 3 | true | 0 | 0 |
yolk only searches through the PyPI XMLRPC API as far as I know, while pip crawls the web looking for the "best" package that fits - the seed page is http://pypi.python.org/simple/<PACKAGE_NAME>.
| 2 | 3 | 0 |
I need some help understanding how pip and yolk work
I ran pip install pymssql, which installed pymssql version 2.0.0b1-dev-20111019, but then decided that I'd like to revert to an older version.
I ran yolk -V pymssql to check which versions I have available, but it only returns pymssql 1.0.2. Shouldn't the version that I installed appear too?
Searching pypi through the website reveals that 1.0.2 is the only version available. Does this mean pip is using sources other than pypi?
|
Packages as seen through yolk or pip
| 1.2 | 0 | 0 | 949 |
10,337,029 |
2012-04-26T15:56:00.000
| 1 | 0 | 1 | 0 |
python,pip,pymssql,yolk
| 10,337,504 | 3 | false | 0 | 0 |
Yeah, look at the pip.log file to see where it's searching for packages. I think yolk is just looking at what's registered on pypi, but pip is looking all over the place for the most up to date versions it can find.
| 2 | 3 | 0 |
I need some help understanding how pip and yolk work
I ran pip install pymssql, which installed pymssql version 2.0.0b1-dev-20111019, but then decided that I'd like to revert to an older version.
I ran yolk -V pymssql to check which versions I have available, but it only returns pymssql 1.0.2. Shouldn't the version that I installed appear too?
Searching pypi through the website reveals that 1.0.2 is the only version available. Does this mean pip is using sources other than pypi?
|
Packages as seen through yolk or pip
| 0.066568 | 0 | 0 | 949 |
10,337,451 |
2012-04-26T16:24:00.000
| 2 | 0 | 1 | 1 |
python,shell,loops,subprocess
| 10,337,484 | 2 | true | 0 | 0 |
Your subprocess.call probably blocked on whatever your command was. I doubt it's your Python script; rather, it's whatever the shell command might be (taking too long).
You can tell whether your command is completing by checking the return code:
print subprocess.call(["command","param"])
It should print 0 if it was successful, or raise an exception if the command has problems. But if you never see consecutive prints, then it's never returning from the call.
| 2 | 0 | 0 |
The following little script is supposed to run a shell command with a parameter every 10 minutes. It ran correctly once (30 minutes ago) but isn't playing ball now (it should have run another 2 times since). Have I made an error?
while(True):
subprocess.call(["command","param"])
time.sleep(600)
|
Python invoking shell command with params loop
| 1.2 | 0 | 0 | 488 |
10,337,451 |
2012-04-26T16:24:00.000
| 1 | 0 | 1 | 1 |
python,shell,loops,subprocess
| 10,337,506 | 2 | false | 0 | 0 |
Try subprocess.Popen if you don't need to wait for the command to complete.
From the docs,
subprocess.call: Run the command described by args. Wait for command to complete, then return the returncode attribute.
| 2 | 0 | 0 |
The following little script is supposed to run a shell command with a parameter every 10 minutes. It ran correctly once (30 minutes ago) but isn't playing ball now (it should have run another 2 times since). Have I made an error?
while(True):
subprocess.call(["command","param"])
time.sleep(600)
|
Python invoking shell command with params loop
| 0.099668 | 0 | 0 | 488 |
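Both answers can be seen side by side in a short sketch: subprocess.call blocks until the child exits and hands back the return code, while Popen returns immediately. The sleeping child here stands in for the asker's command:

```python
import subprocess
import sys
import time

# call() blocks: the child's sleep finishes before the next line runs.
start = time.time()
rc = subprocess.call([sys.executable, "-c", "import time; time.sleep(1)"])
print(rc, round(time.time() - start))   # 0 1 -- blocked for the full second

# Popen() does not wait: the child runs while the parent carries on.
start = time.time()
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"])
print(round(time.time() - start))       # 0 -- returned immediately
proc.wait()                             # reap the child once we do care
```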
10,340,141 |
2012-04-26T19:33:00.000
| 5 | 0 | 1 | 1 |
python,macos,emacs
| 10,340,315 | 1 | true | 0 | 0 |
In general I would do M-x describe-function RET python-mode--by default bound to C-h f-- and the first line in the info window is: python-mode is an interactive compiled Lisp function in ``python.el'.
And that python.el is clickable, for me, and takes me to the file that it was defined in, at which point M-x pwd works.
| 1 | 1 | 0 |
I'm looking to play with python mode for emacs on mac os x, but I can't seem to find the source files for the mode.
What are the standard locations, where a default installation of emacs might have put its modes when installed on Mac OS X?
(I'm using GNU Emacs 24.0.95.1 (i386-apple-darwin11.3.0, NS apple-appkit-1138.32))
|
Where can I find the source for "python mode" when editing emacs configuration for mac os x?
| 1.2 | 0 | 0 | 108 |
10,341,707 |
2012-04-26T21:30:00.000
| 13 | 0 | 0 | 0 |
python,django,virtualenv
| 10,341,733 | 1 | true | 1 | 0 |
In case you are using pip for package management, you can easily recreate the virtualenv on another system:
On system1, run pip freeze --local > requirements.txt and copy that file to system2. Over there, create and activate the virtualenv and use pip install -r requirements.txt to install all packages that were installed in the previous virtualenv.
Your python code can be simply copied to the new system; I'd find -name '*.pyc' -delete though since you usually do not want to move compiled code (even if it's just python bytecode) between machines.
| 1 | 5 | 0 |
I would like to know how to set up a complex Python website, currently running in a production environment, on a local machine for development.
Currently the site uses python combined with Django apps (registration + cms modules) in a virtual environment.
|
How to migrate a python site to another machine?
| 1.2 | 0 | 0 | 2,657 |
10,343,052 |
2012-04-27T00:01:00.000
| 0 | 0 | 0 | 0 |
python,animation,pygame
| 11,236,094 | 3 | false | 0 | 1 |
You could just make something called xscroll that is added to everything that is supposed to scroll across the screen. Then, when you reach a certain distance from the center, instead of adding your player's movespeed to his position, you add or subtract the movespeed from xscroll. This makes everything move very smoothly back at the same speed your character would move. I use this in all of my games and I have never had a problem with it.
| 1 | 1 | 0 |
My game is a platform game. I want the player to move when it is X pixels away from the center, moving left or right.
I understand pygame doesn't have anything that would make a camera move.
When the player has reached the point where it is X pixels away from the center, stop the player movement and have the terrain move in the opposite direction to display the illusion of a movable terrain, acting like camera motion.
|
How would I go about making a camera like movement in pygame?
| 0 | 0 | 0 | 2,011 |
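The xscroll idea from the answer reduces to a little arithmetic that needs no pygame at all. The threshold and speed values below are made up for illustration:

```python
# Scroll the world instead of the player once the player strays more than
# THRESHOLD pixels from the screen's horizontal centre.
SCREEN_W, THRESHOLD, SPEED = 800, 100, 5

def step(player_x, xscroll, direction):
    """direction is -1 (left) or +1 (right); returns new (player_x, xscroll)."""
    centre = SCREEN_W // 2
    if abs(player_x + direction * SPEED - centre) <= THRESHOLD:
        player_x += direction * SPEED          # player moves on screen
    else:
        xscroll -= direction * SPEED           # world slides the other way
    return player_x, xscroll

# Walk right until the camera takes over.
x, scroll = SCREEN_W // 2, 0
for _ in range(30):
    x, scroll = step(x, scroll, +1)
print(x, scroll)   # 500 -50
```

In the real game loop, xscroll is added to every terrain object's draw position, which produces the illusion of camera motion.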
10,345,327 |
2012-04-27T05:41:00.000
| 34 | 0 | 0 | 0 |
python,sqlalchemy
| 12,837,029 | 4 | false | 0 | 0 |
If you need the proper return type, just return session.query(MyObject).filter(sqlalchemy.sql.false()).
When evaluated, this will still hit the DB, but it should be fast.
If you don't have an ORM class to "query", you can use false() for that as well:
session.query(sqlalchemy.false()).filter(sqlalchemy.false())
| 1 | 36 | 0 |
What's the best way to create an intentionally empty query in SQLAlchemy?
For example, I've got a few functions which build up the query (adding WHERE clauses, for example), and at some points I know that the result will be empty.
What's the best way to create a query that won't return any rows? Something like Django's QuerySet.none().
|
SQLAlchemy: create an intentionally empty query?
| 1 | 1 | 0 | 9,094 |
10,345,821 |
2012-04-27T06:31:00.000
| 4 | 0 | 0 | 0 |
python,database,sqlite
| 10,345,847 | 3 | true | 1 | 0 |
Consider just doing a commit after every 1000 records or so
| 3 | 1 | 0 |
I must parse HTML files, which can contain up to 500,000 links.
Of those, about 400,000 will be of interest to me.
Should I first collect all the links that satisfy the condition into a new list and then insert the elements of that list into the database?
Or should I add each link to the database (sqlite) as soon as I find one that satisfies the condition (and commit it)?
Is such a large number of commits a problem?
I do not want to lose data in case of a failure such as a power outage. That's why I want to commit after each insert into the database.
What is the best way to place a large number of items in the database?
|
Python big list and input to database
| 1.2 | 0 | 0 | 122 |
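The commit-every-N-records idea can be sketched with the stdlib sqlite3 module; the batch size and table layout here are illustrative (a real run would use a file path, not :memory:, so committed batches survive a crash):

```python
import sqlite3

BATCH = 1000
conn = sqlite3.connect(":memory:")   # a file path would persist across crashes
conn.execute("CREATE TABLE links (url TEXT)")

links = ("http://example.com/%d" % i for i in range(4500))
pending = 0
for url in links:
    conn.execute("INSERT INTO links VALUES (?)", (url,))
    pending += 1
    if pending >= BATCH:             # one commit per batch, not per row
        conn.commit()
        pending = 0
conn.commit()                        # flush the final partial batch

print(conn.execute("SELECT COUNT(*) FROM links").fetchone()[0])  # 4500
```

At worst, a power failure loses the last uncommitted batch (under 1000 rows) while avoiding the overhead of 400,000 individual commits.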
10,345,821 |
2012-04-27T06:31:00.000
| 1 | 0 | 0 | 0 |
python,database,sqlite
| 10,345,975 | 3 | false | 1 | 0 |
If these links are spread across several files, what about a commit after processing each file? Then you could also remember which files you have already processed.
In the case of a single file, record the file offset after each commit so you can continue cleanly.
| 3 | 1 | 0 |
I must parse HTML files, which can contain up to 500,000 links.
Of those, about 400,000 will be of interest to me.
Should I first collect all the links that satisfy the condition into a new list and then insert the elements of that list into the database?
Or should I add each link to the database (sqlite) as soon as I find one that satisfies the condition (and commit it)?
Is such a large number of commits a problem?
I do not want to lose data in case of a failure such as a power outage. That's why I want to commit after each insert into the database.
What is the best way to place a large number of items in the database?
|
Python big list and input to database
| 0.066568 | 0 | 0 | 122 |
10,345,821 |
2012-04-27T06:31:00.000
| 0 | 0 | 0 | 0 |
python,database,sqlite
| 10,346,693 | 3 | false | 1 | 0 |
You can try a NoSQL database like Mongo. With Mongo I added 500,000 documents with 6 fields each in about 15 seconds (on my old laptop), and simple queries take about 0.023 sec.
| 3 | 1 | 0 |
I must parse HTML files, which can contain up to 500,000 links.
Of those, about 400,000 will be of interest to me.
Should I first collect all the links that satisfy the condition into a new list and then insert the elements of that list into the database?
Or should I add each link to the database (sqlite) as soon as I find one that satisfies the condition (and commit it)?
Is such a large number of commits a problem?
I do not want to lose data in case of a failure such as a power outage. That's why I want to commit after each insert into the database.
What is the best way to place a large number of items in the database?
|
Python big list and input to database
| 0 | 0 | 0 | 122 |
10,346,394 |
2012-04-27T07:20:00.000
| 2 | 0 | 1 | 0 |
python,parsing,exception-handling
| 10,346,480 | 3 | true | 0 | 0 |
Since you seem to need quite a lot of validation, I would simply wrap all validation 'errors' on a given input in a ValidationSummary object/instance (which contains a list of everything that went wrong) and pass that around instead.
On a more general note: contrary to other languages, structuring program flow with exception handling is common and accepted in Python. A Python idiom related to this is called EAFP (it's easier to ask forgiveness than permission).
| 2 | 0 | 0 |
Let's say I have some isolated Python code that processes data produced by some other entity (e.g. a client).
The data I receive may be in incorrect form (e.g. due to the client's sloppiness, data corruption, you name it), so that processing in my Python code will somehow fail, which will lead to some exception being raised. Let's assume that the code downstream is just interested in knowing whether processing was correct or wrong, and not why it was wrong.
My concern is the following: what is the best practice for raising such an exception on complex bad input? How should the exceptions be organized in this case?
Data can turn out to be incorrect in a lot of ways, especially if the correct formatting of the data is complex. In some cases I may easily catch the error myself (e.g. if I find an incorrect magic value, in which case I may raise my FancyCustomizedException), but in others some generic exception could get raised as well (e.g. some ValueException).
Is it OK to say that processing was wrong if any exception is raised (in which case the code downstream will use a very generic (and ugly) try: ... except: ...)?
Is it better to catch all generic exceptions and hide them inside my FancyCustomizedException (in which case the code downstream will use a less generic try: ... except FancyCustomizedException, e: ..., but I will litter my code with try: ... except: ...)?
|
Python exceptions and parsing of complex data
| 1.2 | 0 | 0 | 253 |
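One way to combine the wrapping idea with the asker's FancyCustomizedException is to catch whatever the parser raises at a single boundary and re-wrap it, keeping the original as the cause. The names below are the asker's hypothetical ones, sketched in modern (Python 3) syntax:

```python
class FancyCustomizedException(Exception):
    """Single exception type the downstream code has to know about."""

def process(record):
    # The real parsing would live here; int() stands in for it.
    return int(record["value"])

def safe_process(record):
    try:
        return process(record)
    except FancyCustomizedException:
        raise                         # already the right type, pass through
    except Exception as exc:          # ValueError, KeyError, TypeError, ...
        # Wrap, keeping the original as __cause__ for debugging.
        raise FancyCustomizedException("bad input: %r" % (record,)) from exc

try:
    safe_process({"value": "not-a-number"})
except FancyCustomizedException as e:
    print(type(e.__cause__).__name__)   # ValueError
```

Downstream code then catches exactly one exception type, while the traceback still shows which generic exception originally fired.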
10,346,394 |
2012-04-27T07:20:00.000
| 0 | 0 | 1 | 0 |
python,parsing,exception-handling
| 10,346,983 | 3 | false | 0 | 0 |
In my opinion, using exceptions is more reliable than just returning error codes from functions. The reason is that testing the returned value makes the code around the calls less readable, and one tends not to check it. So, go for exceptions.
If you are not sure, use only the generic exceptions -- i.e. do not create your own exception classes. You can always add them later. It is probably better to think carefully about what kind the exception should be (among the existing ones) than to create a lot of new ones.
You can write the code so that it has a core that complains about everything that goes wrong (via exceptions). Then you can write a layer that is more forgiving and tries to be clever about resolving some of the situations. (However, sometimes the cleverness of the software may drive the user crazy :)
| 2 | 0 | 0 |
Let's say I have some isolated Python code that processes data produced by some other entity (e.g. a client).
The data I receive may be in incorrect form (e.g. due to the client's sloppiness, data corruption, you name it), so that processing in my Python code will somehow fail, which will lead to some exception being raised. Let's assume that the code downstream is just interested in knowing whether processing was correct or wrong, and not why it was wrong.
My concern is the following: what is the best practice for raising such an exception on complex bad input? How should the exceptions be organized in this case?
Data can turn out to be incorrect in a lot of ways, especially if the correct formatting of the data is complex. In some cases I may easily catch the error myself (e.g. if I find an incorrect magic value, in which case I may raise my FancyCustomizedException), but in others some generic exception could get raised as well (e.g. some ValueException).
Is it OK to say that processing was wrong if any exception is raised (in which case the code downstream will use a very generic (and ugly) try: ... except: ...)?
Is it better to catch all generic exceptions and hide them inside my FancyCustomizedException (in which case the code downstream will use a less generic try: ... except FancyCustomizedException, e: ..., but I will litter my code with try: ... except: ...)?
|
Python exceptions and parsing of complex data
| 0 | 0 | 0 | 253 |
10,348,653 |
2012-04-27T10:02:00.000
| 1 | 0 | 1 | 0 |
python
| 10,349,053 | 4 | false | 0 | 0 |
Recently our class at school used all of the above programs. About a handful of students had trouble installing like you described. Fortunately I did not have this problem, but I can suggest you use administrator privileges.
Make sure you download the correct version.
Go to your download folder and look at the file you have downloaded (do this via My Computer, not from your web browser).
Right click on the file and then click "Run as administrator".
| 2 | 3 | 0 |
I have a problem with the following command:
setup.py install.
I know it should work, I have tried it on a laptop but I don't have access to it at the moment. I need to complete a homework so I tried the same on my PC. And when I type the same command into cmd it just runs pyscripter as if I would use right click on setup.py and click edit with pyscripter. It does nothing else. I am sure that I am in the right folder in cmd.
My python version is 2.7 and my pyscripter version is v2.5.3. My OS is win7. I have tried to install other modules but I get the same response.
Has anyone encountered the same problem? I have searched the internet but I haven't found any answers to this problem.
|
Can't install python modules
| 0.049958 | 0 | 0 | 12,880 |
10,348,653 |
2012-04-27T10:02:00.000
| 1 | 0 | 1 | 0 |
python
| 10,348,797 | 4 | false | 0 | 0 |
Do python setup.py install instead.
Windows is probably not set up to recognize .py files as executable.
| 2 | 3 | 0 |
I have a problem with the following command:
setup.py install.
I know it should work, I have tried it on a laptop but I don't have access to it at the moment. I need to complete a homework so I tried the same on my PC. And when I type the same command into cmd it just runs pyscripter as if I would use right click on setup.py and click edit with pyscripter. It does nothing else. I am sure that I am in the right folder in cmd.
My python version is 2.7 and my pyscripter version is v2.5.3. My OS is win7. I have tried to install other modules but I get the same response.
Has anyone encountered the same problem? I have searched the internet but I haven't found any answers to this problem.
|
Can't install python modules
| 0.049958 | 0 | 0 | 12,880 |
10,348,770 |
2012-04-27T10:10:00.000
| 0 | 0 | 1 | 1 |
python,mechanize,lxml
| 10,475,706 | 6 | false | 0 | 0 |
I'm not happy with any of the solutions offered. Hopefully someone will come along with a better one but for now it seems safe to say that there is no simple solution for installing python + mechanize + lxml + arbitrary other libraries on windows.
| 1 | 3 | 0 |
What is the easiest way to install python 2 plus lxml plus mechanize on windows? I'm looking for a solution that is easy to follow and also makes it easy to install other libraries (eggs?) in the future.
Edit
I want to be able to install libraries which require a compiler. Ruby for windows has a dev kit which allows you to easily install gems that require a compiler. I'm looking for a similar setup for Python.
|
install python + mechanize + lxml on windows
| 0 | 0 | 0 | 8,933 |
10,349,093 |
2012-04-27T10:35:00.000
| 1 | 0 | 0 | 0 |
authentication,ldap,splunk,python-ldap
| 10,397,477 | 3 | false | 0 | 0 |
Typically you would search for the provided username against the uid or cn values within the LDAP tree.
-jim
| 2 | 1 | 0 |
I have set up an LDAP server somewhere. I can bind to it, and can add, modify, and delete entries in the database. Now when it comes to authentication, isn't it as simple as giving the username and password to the server and asking it to search for an entry matching the two? And furthermore, isn't it the 'userPassword' field that contains the password for a user there?
Now,
I tried to set up Splunk to authenticate from my LDAP server. I provided the username and password, but it failed authentication. Isn't it that 'userPassword' field that Splunk checks? What could be the reason?
|
how to do Ldap Server Authentication?
| 0.066568 | 0 | 0 | 1,711 |
10,349,093 |
2012-04-27T10:35:00.000
| 2 | 0 | 0 | 0 |
authentication,ldap,splunk,python-ldap
| 10,349,171 | 3 | true | 0 | 0 |
LDAP servers are generally not going to allow you to search on the userPassword attribute, for obvious security reasons. (and the password attribute is likely stored in hashed form anyway, so a straight search would not work.)
Instead, the usual way to do LDAP authentication is:
prompt for username & password
Bind to LDAP with your application's account, search for username to get the full distinguished name (dn) of the user's LDAP entry
Make a new LDAP connection, and attempt to bind using the user's dn & password
(If you know how to construct the dn from the username, you can skip step 2, but it's generally a good idea to search first - that way you're less sensitive to things like changes in the OU structure of the LDAP directory)
| 2 | 1 | 0 |
I have set up an LDAP server somewhere. I can bind to it, and can add, modify, and delete entries in the database. Now when it comes to authentication, isn't it as simple as giving the username and password to the server and asking it to search for an entry matching the two? And furthermore, isn't it the 'userPassword' field that contains the password for a user there?
Now,
I tried to set up Splunk to authenticate from my LDAP server. I provided the username and password, but it failed authentication. Isn't it that 'userPassword' field that Splunk checks? What could be the reason?
|
how to do Ldap Server Authentication?
| 1.2 | 0 | 0 | 1,711 |
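The search-then-bind flow from the answer reads roughly like the sketch below. To keep it runnable without a live LDAP server or the python-ldap package, the directory is faked with a dict; a real implementation would replace find_dn and bind with conn.search_s(...) and a fresh conn.simple_bind_s(...) respectively:

```python
# Stand-in for the directory: dn -> attributes. Entirely made up.
DIRECTORY = {
    "uid=alice,ou=people,dc=example,dc=com": {"uid": "alice", "pw": "s3cret"},
}

def find_dn(username):
    """Step 2: bind as the app account and search for the user's full dn."""
    for dn, attrs in DIRECTORY.items():
        if attrs["uid"] == username:
            return dn
    return None

def bind(dn, password):
    """Step 3: a fresh bind as the user; success means the password is right."""
    entry = DIRECTORY.get(dn)
    return entry is not None and entry["pw"] == password

def authenticate(username, password):
    dn = find_dn(username)
    return dn is not None and bind(dn, password)

print(authenticate("alice", "s3cret"))   # True
print(authenticate("alice", "wrong"))    # False
```

The point of the two steps is that the server, not the client, verifies the (hashed) userPassword during the bind, so the password attribute never needs to be readable or searchable.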
10,351,450 |
2012-04-27T13:23:00.000
| 3 | 0 | 0 | 0 |
python,c,numpy
| 10,352,335 | 2 | true | 0 | 0 |
I would say it depends on your skills/experience and your project.
If this is a one-off and you are proficient in C/C++ and have already written Python wrappers, then write your own extension and interface it.
If you are going to work with NumPy on other projects, then go for the NumPy C-API; it's extensive and rather well documented, but it is also quite a lot of documentation to process.
At least I had a lot of difficulty processing it, but then again I suck at C.
If you're not really sure, go with Cython: far less time consuming, and the performance is in most cases very good. (My choice.)
From my point of view you need to be a good C coder to do better than Cython, and the two previous approaches will be much more complex and time consuming.
So, are you a great C coder?
Also it might be worth your while to look into PyCUDA or some other GPGPU stuff if you're looking for performance, depending on your hardware of course.
| 1 | 0 | 1 |
As there are a multitude of ways to write binary modules for Python, I was hoping those of you with experience could advise on the best approach if I wish to improve the performance of some segments of the code as much as possible.
As I understand it, one can either write an extension using the Python/NumPy C-API, or wrap an already-written pure C/C++/Fortran function to be called from the Python code.
Naturally, tools like Cython are the easiest way to go, but I assume that writing the code by hand gives better control and provides better performance.
The question, and it may be too general, is which approach to use. Write a C or C++ extension? Wrap external C/C++ functions or use callbacks to Python functions?
I ask this question after reading chapter 10 in Langtangen's "Python Scripting for Computational Science", where there is a comparison of several methods to interface between Python and C.
|
best way to extend python / numpy performancewise
| 1.2 | 0 | 0 | 1,000 |
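Besides the C-API and Cython, the cheapest way to call already-compiled C from Python is ctypes, which needs no compiler at all. A minimal sketch that borrows abs from the C library already loaded into the process (POSIX-specific; on Windows you would load a DLL by name instead):

```python
import ctypes

# CDLL(None) exposes symbols already linked into the interpreter,
# which on Linux/macOS includes the C library.
libc = ctypes.CDLL(None)

# Declare the C signature so ctypes converts arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))   # 42
```

The same pattern wraps your own shared library (ctypes.CDLL("./mylib.so")), which is the "wrap existing C" route the question mentions; the per-call overhead is higher than a hand-written extension, so it pays off when each C call does substantial work.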
10,355,953 |
2012-04-27T18:30:00.000
| 6 | 1 | 0 | 0 |
python,plc,siemens,s7-1200
| 10,782,983 | 7 | true | 1 | 0 |
After failing with libnodave and OPC, I created a TCON, TSEND and TRECV communication setup. It transmits a byte over TCP and it works.
| 3 | 10 | 0 |
I am running a process on an S7-1200 PLC and I need it to send a start signal to my Python script; after the script is done running, it needs to send something back to the PLC to initiate the next phase. Oh, and it has to be done in ladder.
Is there a quick and dirty way to send things over Profibus, or am I better off just using an RS232 connection?
|
How can I communicate between a Siemens S7-1200 and python?
| 1.2 | 0 | 0 | 38,421 |
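On the Python side, the TCON/TSEND/TRECV scheme from the accepted answer only needs a plain TCP socket. A sketch — the port number is arbitrary (it must match the TCON block's configuration), and the loopback client below merely stands in for the PLC:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 2000    # PORT must match the TCON configuration

ready = threading.Event()

def serve_one():
    """Accept one connection, read one byte (PLC's TSEND), echo it back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                  # safe for the client to connect now
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1)      # the start signal from the PLC
            conn.sendall(data)       # reply: the script has finished

t = threading.Thread(target=serve_one)
t.start()
ready.wait()

# Stand-in for the PLC: connect, send the start byte, read the reply.
with socket.create_connection((HOST, PORT)) as plc:
    plc.sendall(b"\x01")
    reply = plc.recv(1)
t.join()
print(reply)   # b'\x01'
```

In the real setup the script listens on the machine's LAN address, the PLC's TCON block connects to it, TSEND delivers the start byte, and TRCV receives the completion byte.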
10,355,953 |
2012-04-27T18:30:00.000
| 2 | 1 | 0 | 0 |
python,plc,siemens,s7-1200
| 24,056,273 | 7 | false | 1 | 0 |
There is a commercial library called "S7connector" by Rothenbacher GmbH (obviously it's not the "s7connector" on sourceforge).
It is for the .NET framework, so could be used with IronPython.
It does work with S7-1200 PLCs. You just have to make sure a DB you want to read from / write to is not an optimized S7-1200 style DB, but a S7-300/400 compatible one, an option which you can set when creating a DB in TIA portal.
This lib also allows to read and write all I/O ports - the "shadow registers" (not sure what they're called officially) and directly as well, overriding the former.
| 3 | 10 | 0 |
I am running a process on an S7-1200 PLC and I need it to send a start signal to my Python script; after the script is done running, it needs to send something back to the PLC to initiate the next phase. Oh, and it has to be done in ladder.
Is there a quick and dirty way to send things over Profibus, or am I better off just using an RS232 connection?
|
How can I communicate between a Siemens S7-1200 and python?
| 0.057081 | 0 | 0 | 38,421 |
10,355,953 |
2012-04-27T18:30:00.000
| 1 | 1 | 0 | 0 |
python,plc,siemens,s7-1200
| 10,773,413 | 7 | false | 1 | 0 |
The best way to communicate with S7-1200 PLC CPUs is with OPC UA or Classic OPC (commonly known as OPC DA). Libnodave is made for the S7-300 and S7-400, not for the S7-1200 (2.x firmware).
If you use a third-party solution to communicate with the S7-1200 (or S7-1500), you have to decrease the security level on the PLC by allowing the put and get mechanism. Put and get are pure evil to use: you open the memory of the CPU to every process. Don't use them anymore. Siemens should actually block this.
This applies to all firmware releases for the S7-1200.
Siemens pushes people to use OPC UA as the default communication from the PLC, which makes sense, because OPC UA is the protocol for Industry 4.0 and IIoT.
Edit: rewrote everything. The info was heavily outdated.
If you use a firmware 2 or 3 S7-1200, consider replacement or upgrade. These versions are no longer supported and contain the worm issue.
| 3 | 10 | 0 |
I am running a process on an S7-1200 PLC and I need it to send a start signal to my Python script; after the script is done running, it needs to send something back to the PLC to initiate the next phase. Oh, and it has to be done in ladder.
Is there a quick and dirty way to send things over Profibus, or am I better off just using an RS232 connection?
|
How can I communicate between a Siemens S7-1200 and python?
| 0.028564 | 0 | 0 | 38,421 |
10,356,581 |
2012-04-27T19:17:00.000
| 1 | 0 | 0 | 0 |
python,django,facebook,sqlite
| 10,486,708 | 1 | true | 1 | 0 |
You should use django-facebook instead, it does that and more and it is actively supported :)
| 1 | 0 | 0 |
I'm using django-1.4, sqlite3, and django-facebookconnect.
Following the instructions in the wiki to set it up,
"python manage.py syncdb" throws an error.
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table blog_post
Creating table blog_comment
Creating table django_admin_log
Traceback (most recent call last):
File "manage.py", line 10, in
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/init.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/init.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/syncdb.py", line 91, in handle_noargs
sql, references = connection.creation.sql_create_model(model, self.style, seen_models)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/creation.py", line 44, in sql_create_model
col_type = f.db_type(connection=self.connection)
TypeError: db_type() got an unexpected keyword argument 'connection'
Is there any solution?
|
Getting db_type() error while using django-facebook connect for DjangoApp
| 1.2 | 1 | 0 | 282 |
10,356,870 |
2012-04-27T19:42:00.000
| 3 | 0 | 1 | 0 |
python,api,dropbox-api,api-key
| 10,357,811 | 3 | false | 0 | 0 |
Plain text. Any obfuscation attempt is futile if the code gets distributed.
| 3 | 8 | 0 |
In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details.
|
How should I store API keys in a Python app?
| 0.197375 | 0 | 1 | 5,852 |
10,356,870 |
2012-04-27T19:42:00.000
| 2 | 0 | 1 | 0 |
python,api,dropbox-api,api-key
| 10,360,373 | 3 | false | 0 | 0 |
Don't know if this is feasible in your case. But you can access the API via a proxy that you host.
The requests from the Python app go to the proxy, and the proxy makes the requests to the Dropbox API and returns the response to the Python app. This way your API key stays on the proxy that you're hosting. Access to the proxy can be controlled by any means you prefer (for example, a username and password).
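A toy sketch of that flow, with plain functions standing in for the hosted proxy and the Dropbox API (all names and the username/password check here are hypothetical):

```python
# The key lives only on the proxy host; the distributed app never sees it.
API_KEY = "secret-key-stored-only-on-the-proxy-host"

def upstream_api(path, key):
    # Stand-in for the real Dropbox API endpoint
    if key != API_KEY:
        return {"status": 403}
    return {"status": 200, "path": path}

def proxy_handler(path, username, password):
    # The proxy authenticates the client its own way (here: username/password),
    # then forwards the request with the real API key attached server-side
    if (username, password) != ("alice", "s3cret"):
        return {"status": 401}
    return upstream_api(path, API_KEY)

print(proxy_handler("/files/list", "alice", "s3cret")["status"])   # 200
print(proxy_handler("/files/list", "mallory", "guess")["status"])  # 401
```

The point of the design is that revoking a misbehaving client only requires changing its proxy credentials; the upstream API key never has to rotate.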
| 3 | 8 | 0 |
In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details.
|
How should I store API keys in a Python app?
| 0.132549 | 0 | 1 | 5,852 |
10,356,870 |
2012-04-27T19:42:00.000
| 2 | 0 | 1 | 0 |
python,api,dropbox-api,api-key
| 11,080,881 | 3 | false | 0 | 0 |
There are two ways depending on your scenario:
If you are developing a web application for end users, just host it in a way that your API key does not come to disclosure. So keeping it gitignored in a separate file and only upload it to your server should be fine (as long there is no breach to your server). Any obfuscation will not add any practical benefit, it will just give a false feeling of security.
If you are developing a framework/library for developers or a client application for end users, ask them to generate an API key on their own.
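For case 1, a common pattern is to load the key from an environment variable or a gitignored config file. A sketch (the file name, key names, and env-var names are made up for the demo):

```python
import json
import os
import tempfile

# Hypothetical gitignored secrets file, created here only so the demo runs
cfg_path = os.path.join(tempfile.mkdtemp(), "secrets.json")
with open(cfg_path, "w") as f:
    json.dump({"app_key": "abc123", "app_secret": "shh"}, f)

def load_credentials(path):
    # Prefer environment variables (handy on hosted platforms),
    # fall back to the gitignored JSON file
    if "DROPBOX_APP_KEY" in os.environ:
        return os.environ["DROPBOX_APP_KEY"], os.environ["DROPBOX_APP_SECRET"]
    with open(path) as f:
        cfg = json.load(f)
    return cfg["app_key"], cfg["app_secret"]

print(load_credentials(cfg_path))  # ('abc123', 'shh')
```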
| 3 | 8 | 0 |
In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details.
|
How should I store API keys in a Python app?
| 0.132549 | 0 | 1 | 5,852 |
10,359,617 |
2012-04-28T00:57:00.000
| 1 | 0 | 0 | 0 |
python,postgresql,sqlsoup
| 10,360,094 | 1 | false | 0 | 0 |
After talking with some folks, it's pretty clear the better answer is to use Pig to process and aggregate my data locally. At the scale I'm operating at, it wasn't clear Hadoop was the appropriate tool to be reaching for. One person I talked to about this suggested Pig would be orders of magnitude faster than in-DB operations at the scale I'm operating at, which is about 10^7 records.
| 1 | 0 | 0 |
I have a large dataset of events in a Postgres database that is too large to analyze in memory. Therefore I would like to quantize the datetimes to a regular interval and perform group by operations within the database prior to returning results. I thought I would use SqlSoup to iterate through the records in the appropriate table and make the necessary transformations. Unfortunately I can't figure out how to perform the iteration in such a way that I'm not loading references to every record into memory at once. Is there some way of getting one record reference at a time in order to access the data and update each record as needed?
Any suggestions would be most appreciated!
Chris
|
Data Transformation in Postgres Using SqlSoup
| 0.197375 | 1 | 0 | 333 |
10,363,438 |
2012-04-28T12:20:00.000
| 0 | 0 | 1 | 0 |
python,cherrypy,kill-process
| 10,370,708 | 2 | false | 0 | 0 |
If your process is using CherryPy to block (via quickstart or engine.block), then you could simply call: cherrypy.engine.exit() from your page handler. That would be the cleanest option since it would properly terminate CherryPy and plugins you may have subscribed to.
| 1 | 0 | 0 |
I am using CherryPy in a Python script. I think I have to register a callback method from the main application so that I can stop the CherryPy main process from a worker thread, but how do I kill the main process from within that process?
So I want to know how to stop CherryPy from within the main process.
|
How to kill the cherrypy process?
| 0 | 0 | 0 | 1,787 |
10,364,032 |
2012-04-28T13:41:00.000
| 2 | 0 | 1 | 0 |
python,memory
| 10,364,138 | 3 | false | 0 | 0 |
Have you tried the built in ctypes module?
It provides a memset function which should give you what you need.
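A hedged sketch of the idea with ctypes. This only works safely for memory your own process owns (you cannot blindly poke an arbitrary address like 0A7F03E4); and since memset writes individual bytes, a multi-byte value like 300 is easier to store via `c_int.from_address`:

```python
import ctypes

# Make an int we own, then rewrite it through its raw address
value = ctypes.c_int(124)
address = ctypes.addressof(value)

# from_address builds a ctypes view over the memory at that address;
# assigning .value writes the new integer in place
ctypes.c_int.from_address(address).value = 300

print(value.value)  # 300
```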
| 2 | 8 | 0 |
Let's say that memory address 0A7F03E4 stores the value 124. How do you change it to 300 using Python? Is there a module supplied for this kind of task?
|
How to change the value stored in memory address?
| 0.132549 | 0 | 0 | 9,238 |
10,364,032 |
2012-04-28T13:41:00.000
| 1 | 0 | 1 | 0 |
python,memory
| 10,364,171 | 3 | false | 0 | 0 |
On UNIX, you can try to open() the /dev/mem device to access physical memory, and then use the seek() method of the file object to set the file pointer to the right location. Then use the write() method to change the value. Don't forget to encode the number as a raw byte string!
Generally, only the root user has access to this device, though.
| 2 | 8 | 0 |
Let's say that memory address 0A7F03E4 stores the value 124. How do you change it to 300 using Python? Is there a module supplied for this kind of task?
|
How to change the value stored in memory address?
| 0.066568 | 0 | 0 | 9,238 |
10,364,900 |
2012-04-28T15:25:00.000
| 0 | 0 | 0 | 0 |
python,events,wxpython,wxwidgets
| 10,385,658 | 4 | false | 0 | 1 |
I don't think there is such an event, but you can try wx.EVT_SET_CURSOR. Alternatively, you can catch wx.EVT_CHAR or one of the EVT_KEY_* events and use the TextCtrl's GetInsertionPoint() method to know where the cursor is. You may need to call the method when you click around in the text control using mouse events as well.
| 1 | 1 | 0 |
What event is called when the caret inside a TextCtrl / Styled TextCtrl has its position changed? I need to bind the event to show in the status bar, the current position of the caret.
|
wxPython caret move event
| 0 | 0 | 0 | 1,551 |
10,366,424 |
2012-04-28T18:36:00.000
| 0 | 0 | 0 | 0 |
python,python-2.7
| 10,366,467 | 5 | false | 0 | 0 |
Just derive a new class and override the insert function. In the overriding function, check the last insert time and call the parent's insert method only if it has been more than five minutes, and of course update the most recent insert time.
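A minimal Python 3 sketch of that subclass idea (the `Database` base class here is a hypothetical stand-in for the existing MySQL class):

```python
import time

class Database:
    # Hypothetical stand-in for the existing MySQL class
    def insert(self, row):
        return "inserted %r" % (row,)

class ThrottledDatabase(Database):
    MIN_INTERVAL = 300.0  # five minutes, in seconds

    def __init__(self):
        self._last_insert = float("-inf")  # so the first call always runs

    def insert(self, row):
        now = time.time()
        if now - self._last_insert < self.MIN_INTERVAL:
            return None  # ran within the last five minutes: quit
        self._last_insert = now
        return super().insert(row)  # call the parent's insert

db = ThrottledDatabase()
print(db.insert("23.4C"))  # inserted '23.4C'
print(db.insert("23.5C"))  # None
```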
| 2 | 1 | 0 |
I have a Python script that gets data from a USB weather station; it puts the data into MySQL whenever data is received from the station.
I have a MySQL class with an insert function. What I want is for the function to check whether it has been run in the last 5 minutes, and if it has, quit.
Could not find any code on the internet that does this.
Maybe I need to have a sub-process, but I am not familiar with that at all.
Does anyone have an example that I can use?
|
Python, function quit if it has been run the last 5 minutes
| 0 | 1 | 0 | 489 |
10,366,424 |
2012-04-28T18:36:00.000
| 0 | 0 | 0 | 0 |
python,python-2.7
| 10,366,452 | 5 | false | 0 | 0 |
Each time the function is run, save a file with the current time. When the function is run again, check the time stored in the file and make sure it is old enough.
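A sketch of that timestamp-file approach (the file location here is a temporary path just so the example is self-contained):

```python
import os
import tempfile
import time

# Hypothetical timestamp file; a temp dir keeps the demo self-contained
stamp = os.path.join(tempfile.mkdtemp(), "last_insert.stamp")

def ran_recently(path, interval=300):
    # If the file is missing or unreadable, the function has never run
    try:
        with open(path) as f:
            last = float(f.read())
    except (OSError, ValueError):
        return False
    return time.time() - last < interval

def record_run(path):
    with open(path, "w") as f:
        f.write(str(time.time()))

print(ran_recently(stamp))  # False (no file yet)
record_run(stamp)
print(ran_recently(stamp))  # True (just ran)
```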
| 2 | 1 | 0 |
I have a Python script that gets data from a USB weather station; it puts the data into MySQL whenever data is received from the station.
I have a MySQL class with an insert function. What I want is for the function to check whether it has been run in the last 5 minutes, and if it has, quit.
Could not find any code on the internet that does this.
Maybe I need to have a sub-process, but I am not familiar with that at all.
Does anyone have an example that I can use?
|
Python, function quit if it has been run the last 5 minutes
| 0 | 1 | 0 | 489 |
10,366,687 |
2012-04-28T19:11:00.000
| 1 | 0 | 0 | 0 |
python,django,literals,base
| 10,366,811 | 2 | false | 1 | 0 |
What kind of field is assignedTo? I'd guess it's a foreign key, in which case it's trying to do the filter on an id, when you're passing a string. Am I right?
The problem here is that assignedTo is being treated as an int (either it is an int, or is being compared on the basis of an int, such as a foreign key id), and you're passing a string to compare it to, which is invalid.
| 1 | 0 | 0 |
I am using django/python.
Here's where the data comes from:
User selects one or more options from a <select name="op_assignedTo">
I also use <option value="username">
The data is sent via POST and collected in a django view:
op_assignedTo = request.POST.getlist('op_assignedTo')
But the following line is give me the error:
assignedTo_list = Item.objects.filter(assignedTo__in=op_assignedTo)
I got the above line from numerous other answers to other questions on stackoverflow.
I am confused at the error, because even the line
temp = Item.objects.filter(assignedTo='matthew')
gives the same error, "Value Error" - invalid literal for int() with base 10: 'matthew'.
If the first part of my post doesn't quite make sense, please just look at the last line of code I posted.
Thanks all!
|
Why am I getting error: "Value Error" - invalid literal for int() with base 10: 'matthew'
| 0.099668 | 0 | 0 | 965 |
10,367,358 |
2012-04-28T20:41:00.000
| 1 | 0 | 1 | 0 |
python,package,bitbucket,easy-install,pypi
| 10,368,439 | 1 | true | 0 | 0 |
PyPi meta-data changes can be done by simply redoing the python setup.py register step again, so you can simply edit your setup.py to change the download URL and then repeat the registration step.
Keep in mind that automated install tools like easy_install, pip, etc. can generally scan a page linked to by the download url for the latest downloadable distribution (according to version numbering standards) so you don't need to explicitly link to your distribution file.
If you're going to do this, I also recommend you manually go to PyPi and delete or hide the old source distribution you've uploaded so that new users don't get a version containing the old setup.py.
| 1 | 3 | 0 |
I'm developing a Python package on Bitbucket and would like to index it on PyPI. The operations I do whenever I have a new download seem quite inefficient, which triggered this question.
I've uploaded a new package into pypi using
python setup.py register sdist upload
Then I've configured the new package also to appear in bitbucket's downloads.
Now I want to update the pypi download URL to point at bitbucket.
Can (3) be done after I've done (1),(2) without recreating the package?
What is the proper way to do so without generating the package twice?
|
Is it possible just to update the details in the pypi index, without recreating package?
| 1.2 | 0 | 0 | 83 |
10,368,134 |
2012-04-28T22:30:00.000
| 1 | 0 | 1 | 0 |
python,data-structures
| 10,368,172 | 2 | false | 0 | 0 |
The Python dictionary object is one of the most optimized parts of the whole Python language, and the reason is that dictionaries are used everywhere.
For example, normally every object instance of every class uses a dictionary to hold its instance attributes, a class is a dictionary containing the methods, modules use a dictionary to keep their globals, the system uses a dictionary to keep and look up modules, and so on.
For keeping a counter, using a dictionary is a good approach in Python.
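For the visit-count use case in the question, the standard library's collections.Counter (a dict subclass) already implements this pattern:

```python
from collections import Counter

# Counting visits per page ID; lookup by ID is a plain dict access
visits = Counter()
for page_id in ["home", "about", "home", "contact", "home", "about"]:
    visits[page_id] += 1

print(visits["home"])         # 3
print(visits.most_common(1))  # [('home', 3)]
```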
| 1 | 1 | 0 |
I am new to Python. I need a data structure to store counts of some objects. For example, I want to store the most visited webpages. Let's say I have the 100 most visited webpages, and I keep the count of visits to each webpage. I may need to update the list, and I will definitely update the visit counts. It does not have to be ordered. I will look up the associated visit count given the webpage ID. I am planning to use a dictionary. Is there a faster way of doing this in Python?
|
python dictionary structure, speed concerns
| 0.099668 | 0 | 0 | 249 |
10,368,361 |
2012-04-28T23:08:00.000
| 2 | 1 | 0 | 1 |
python,profile,.bash-profile
| 10,368,402 | 2 | true | 0 | 0 |
python works out of the box on OS X (as does ruby, for that matter). The only changes I would recommend for a beginner are:
1) Python likes to be reassured that the terminal can handle UTF-8 before it will print Unicode strings. Add export LANG=en_US.UTF-8 to .profile. (It may be that the .UTF-8 part is already present by default on Lion - I haven't checked since Snow Leopard.) Of course, this is something that will help you in debugging, but you shouldn't rely on it being set this way on other machines.
2) Install pip by doing easy_install pip (add sudo if necessary). After that, install Python packages using pip install; this way, you can easily remove them using pip uninstall.
| 1 | 5 | 0 |
I'm new to Python and to programming in general. I'm a novice and do not work in programming; I'm just trying to teach myself how to program as a hobby. Prior to Python, I worked with Ruby for a bit, and I learned that one of the biggest challenges was actually setting up my computer properly.
Background: I'm on a Macbook with OSX 10.7.
With Ruby, you have to (or rather, you should), edit your ./profile and add PATH info. When you install and use RVM, there are additional items you need to add to your bash_profile.
Do you have to make similar changes with Python? What are the best practices as I'm installing/getting started to ensure I can install modules and packages correctly?
|
Proper Unix (.profile, .bash_profile) changes for Python usage
| 1.2 | 0 | 0 | 479 |
10,369,219 |
2012-04-29T02:26:00.000
| 2 | 0 | 1 | 0 |
python,linux,memory-management
| 10,369,327 | 2 | false | 0 | 0 |
Think of the multiprocessing module as just syntax sugar around os.fork().
Now what is fork? When a process forks, the operating system creates a new child process with a new process ID, duplicating the state of the parent process (memory, environment variables, and more). On Linux this duplication is copy-on-write: parent and child share the same physical pages until one of them writes to a page, which is why top reports the full 16 GB against every child even though far less physical memory is actually in use.
| 1 | 2 | 0 |
I am using the multiprocessing module of Python and have some confusion about it.
Basically, I store some data initially in the main process, and that is around 16 GB (the main memory size) as shown by the top command. I have stored these data as global variables.
Then multiprocessing is done on this data, and it is processed differently in each process.
Now I see that multiprocessing is happening, i.e. all processes have their own CPU utilization, but the memory of each process shows as 16 GB. Why so? Isn't it supposed to use the same memory that I share through global variables? Please share some thoughts.
The output of top command is as follows.:-
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13908 admin 20 0 16.7g 16g 848 R 100.0 17.3 0:32.92 python
13429 admin 20 0 16.7g 16g 3336 S 0.0 17.3 15:06.97 python
13910 admin 20 0 16.7g 16g 848 R 100.3 17.3 0:32.94 python
13911 admin 20 0 16.7g 16g 840 R 100.0 17.3 0:33.02 python
13912 admin 20 0 16.7g 16g 836 R 99.6 17.3 0:33.00 python
13907 admin 20 0 16.7g 16g 796 R 100.0 17.3 0:33.06 python
13909 admin 20 0 16.7g 16g 796 R 99.6 17.3 0:32.93 python
|
Multiprocessing module showing memory for each child process same as Main process.
| 0.197375 | 0 | 0 | 2,566 |
10,369,496 |
2012-04-29T03:36:00.000
| 1 | 0 | 0 | 0 |
python,gtk,gtk3,pygobject
| 11,075,153 | 1 | false | 0 | 1 |
I don't think that is possible. The status icon is not a widget and the icon is going to be scaled by the window manager. Even if you used Cairo or PIL to generate an image on the fly to use as the icon pixbuf, it wouldn't have the effect of an embedded label in the system tray. It would instead be tiny, unreadable text smushed into the size of the other icons.
| 1 | 6 | 0 |
I'd like to create a Gtk.StatusIcon with custom text. Ideally I'd like to append this to an existing image, but text-only is ok, too. How can I achieve this?
I've seen some posts about getting a Gtk.Label's pixbuf but those methods seem to be removed from Gtk3 (pixbuf_get_from_drawable)
|
How do I set a Gtk.StatusIcon as Text
| 0.197375 | 0 | 0 | 377 |
10,372,216 |
2012-04-29T12:44:00.000
| 0 | 0 | 1 | 1 |
python,windows,ubuntu,cross-compiling
| 10,456,295 | 3 | false | 0 | 0 |
I have no experience deploying applications on Linux - but can't you add dependencies when you package the software for apt-get? I install packages that bring in other libraries all the time. Seems like you could do this for wx.
| 1 | 5 | 0 |
I've created a program using Python on Windows. How do I turn it into a Linux executable? To be specific, Linux Ubuntu 9.10.
|
Cross compiling a python script on windows into linux executable
| 0 | 0 | 0 | 7,825 |
10,372,287 |
2012-04-29T12:54:00.000
| 3 | 0 | 0 | 0 |
python,sockets
| 10,372,331 | 1 | true | 0 | 0 |
You can use makefile if you find the file interface of Python convenient. For example, you can then use methods like readlines on the socket (you'd have to implement it manually when using recv). This can be more convenient if sending text data on the socket, but YMMV.
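A small self-contained sketch of the convenience makefile() buys, using socket.socketpair() (available on Unix) so no network peer is needed:

```python
import socket

# A connected pair of sockets within one process
a, b = socket.socketpair()
a.sendall(b"first line\nsecond line\n")
a.shutdown(socket.SHUT_WR)  # signal EOF so readlines() can finish

# makefile() wraps the socket in a file-like object, so line-oriented
# reads work instead of hand-buffering recv() chunks yourself
f = b.makefile("rb")
lines = f.readlines()
print(lines)  # [b'first line\n', b'second line\n']

f.close()
a.close()
b.close()
```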
| 1 | 4 | 0 |
What's the purpose of using makefile() when working with Python sockets?
I can make a program work with just the send() and recv() functions in Python, but I read that it is better to use the makefile() method to buffer the data. I didn't understand this relation and the differences... any help?
Thanks!
|
Differences between makefile() and send() recv() Python
| 1.2 | 0 | 1 | 3,003 |
10,372,355 |
2012-04-29T13:05:00.000
| 6 | 0 | 1 | 0 |
python,performance,printing
| 10,372,392 | 3 | true | 0 | 0 |
Quite a few, but another important (or even the most important) bottleneck is unrelated to the CPU: I/O overhead. Once the bytecode instruction(s) have been dispatched and all arguments have been converted to strings, a function is called to write those strings to sys.stdout. Depending on your system and how you run the program, this may be:
A file on disk
A pipe from a terminal emulator
Some Python objects that capture the output (this is what IDLE does IIRC) to do who-knows-what with it (capture it, put it into a GUI, etc).
In case #1, disk I/O is involved and that's easily an order of magnitude slower than writing to RAM. And RAM is already awfully slow compared to today's CPUs. As noted in a comment, this is less of an issue due to extensive buffering by the OS and by Python, but still takes time to issue the write and (depending on implementation details I don't know much about) it may still take some time if someone flushes any buffers prematurely.
In case #2, everything remains in memory, but it's still a system call, some copying, and the other end has to read it and do something with it for you to notice (e.g. render it in a fancy terminal emulator with an anti-aliased font, which is a complex task itself). Less of an issue as it can happen concurrently, but it nevertheless places load on the CPU.
In case #3, all bets are off. It may hash the output with bcrypt and send it to the moon for all we know. Do you happen to use IDLE? I recall a complaint that IDLE was (is?) very slow with redirecting output, especially with lots of tiny pieces. It has to capture the output, concatenate it with the output so far, and let Tkinter render that.
| 3 | 0 | 0 |
I ask this question because when a loop is debugged with repeated print statements, it slows down the program much more than I would have originally expected. I have gotten used to this, but now I am curious about the technical reasons why this is the case. It seems to me that the various calculations and assignments of variables would be more expensive than outputting strings.
|
How many computer instructions are involved in a Python print statement?
| 1.2 | 0 | 0 | 262 |
10,372,355 |
2012-04-29T13:05:00.000
| 1 | 0 | 1 | 0 |
python,performance,printing
| 10,372,382 | 3 | false | 0 | 0 |
This is not a matter of CPU instructions, at least not the CPU instructions in your Python program. When you do a print with a terminal emulator (command window) as output, the string to be printed is copied into a kernel buffer, then to the terminal process's memory. The overhead is in the context switching (both processes doing system calls, i.e. jumping into kernel mode) and the copying of the string in memory.
| 3 | 0 | 0 |
I ask this question because when a loop is debugged with repeated print statements, it slows down the program much more than I would have originally expected. I have gotten used to this, but now I am curious about the technical reasons why this is the case. It seems to me that the various calculations and assignments of variables would be more expensive than outputting strings.
|
How many computer instructions are involved in a Python print statement?
| 0.066568 | 0 | 0 | 262 |
10,372,355 |
2012-04-29T13:05:00.000
| 2 | 0 | 1 | 0 |
python,performance,printing
| 10,372,745 | 3 | false | 0 | 0 |
A huge, huge, huge number, especially if the output is visible on screen, such as in a terminal emulator window on a modern multitasking system.
Firstly, if you’re outputting numbers in decimal, there’s a divmod for each digit, which is a relatively expensive operation compared to, say, adding. (If you output in hexadecimal, it can be a bit cheaper, as each digit can be extracted using shifting and masking only.) If you output floats, there’s some more calculation involved; with dates and times, there are months of various lengths, leap years, leap seconds, DST, and timezones all to be considered.
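The per-digit divmod cost can be sketched like this (an illustration of the idea, not CPython's actual int-to-string conversion code):

```python
def decimal_digits(n):
    # One divmod per digit: roughly the per-digit work of converting
    # a non-negative int to its decimal representation
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, 10)
        digits.append(d)
    return digits[::-1]

print(decimal_digits(1234))  # [1, 2, 3, 4]
```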
But that’s all just calculation and logic, so it’s dwarfed by what’s to come.
Next Python has to send the output text to the terminal for display, which means the operating system has to step in to transfer the data through a buffer, then wake up the other process. The terminal process scans its input for control sequences to move the cursor or change colours. Then a text renderer scans the text for characters that need special handling: maybe there’s some combining accents to be applied, or some right-to-left script that needs to be rearranged for display.
Once the text is laid out, the terminal tells the window manager which area of its window needs to be redrawn, and the window manager checks whether it’s visible – it might be minimized or hidden behind another window. The terminal is told which area actually needs painting, and finally draws the text, in its proper font and colours, kerning and antialiasing. Does the window have a cool-looking transparent background? That has to be merged in too.
Depending on the windowing system, the pixels could then go on another trip through operating system buffers to a compositing manager, which actually draws the window contents onto the screen, taking into account window transparency.
Finally, the pixels arrive on screen, where they barely have time to be seen before being swept away by their millions of successors, as you watch the output streaming past far too fast to read.
It’s amazing how much work our computers do for us.
| 3 | 0 | 0 |
I ask this question because when a loop is debugged with repeated print statements, it slows down the program much more than I would have originally expected. I have gotten used to this, but now I am curious about the technical reasons why this is the case. It seems to me that the various calculations and assignments of variables would be more expensive than outputting strings.
|
How many computer instructions are involved in a Python print statement?
| 0.132549 | 0 | 0 | 262 |
10,374,645 |
2012-04-29T18:11:00.000
| 3 | 0 | 1 | 0 |
python,xcode
| 10,374,662 | 3 | false | 0 | 0 |
1/6 is integer division, which becomes 0. Try using 1.0/6 instead.
| 2 | 3 | 0 |
Tried in both Objective-C (Xcode) and Python (terminal) and (1/6)*(66.900009-62.852596) evaluates to zero both times. Anyone know why this is? Shouldn't it be 0.26246?
|
Why is "(1/6)*(66.900009-62.852596)" evaluating to zero?
| 0.197375 | 0 | 0 | 216 |
10,374,645 |
2012-04-29T18:11:00.000
| 11 | 0 | 1 | 0 |
python,xcode
| 10,374,661 | 3 | true | 0 | 0 |
You are doing integer arithmetic on 1/6, and the floor of 1/6 is 0. Try 1.0/6 instead.
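A quick demonstration of the difference, shown with `//` (which floor-divides in every Python version; in Python 2 the plain `/` between two ints behaved the same way):

```python
# Integer (floor) division discards the fractional part
print(1 // 6)   # 0
# Making either operand a float gives real division
print(1.0 / 6)  # 0.16666666666666666
# The original expression, fixed
print((1.0 / 6) * (66.900009 - 62.852596))
```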
| 2 | 3 | 0 |
Tried in both Objective-C (Xcode) and Python (terminal) and (1/6)*(66.900009-62.852596) evaluates to zero both times. Anyone know why this is? Shouldn't it be 0.26246?
|
Why is "(1/6)*(66.900009-62.852596)" evaluating to zero?
| 1.2 | 0 | 0 | 216 |
10,376,129 |
2012-04-29T21:27:00.000
| 1 | 0 | 1 | 0 |
python,sage
| 10,376,372 | 3 | false | 0 | 0 |
In the case of Sage, it's easy. Sage has complete control of its own REPL (read-evaluate-print loop), so it can parse the commands you give it and make the parts of your expression into whatever classes it wants. It is not so easy to have standard Python automatically use your integer type for integer literals, however. Simply reassigning the built-in int() to some other type won't do it. You could probably do it with an import filter that scans each imported file for (say) integer literals and replaces them with MyInt(42) or whatever.
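As a toy illustration of that literal-rewriting idea (the `Integer` class and `preparse` function here are hypothetical stand-ins, not Sage's actual implementation):

```python
import re

class Integer(int):
    # Toy stand-in for a custom integer class like sage.rings.integer.Integer
    def is_even(self):
        return int(self) % 2 == 0

def preparse(expr):
    # Crude sketch of what a Sage-style preparser/import filter does:
    # rewrite integer literals so they construct the custom class
    return re.sub(r"\b(\d+)\b", r"Integer(\1)", expr)

print(preparse("6 + 1"))                   # 'Integer(6) + Integer(1)'
print(type(eval(preparse("6"))).__name__)  # 'Integer'
```

A real implementation would need a proper tokenizer rather than a regex (this version would also rewrite digits inside strings), which is part of why the answer says it is hard to do robustly.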
| 1 | 1 | 0 |
I have started playing with Sage recently, and I've come to suspect that the standard Python int is wrapped in a customized class called Integer in Sage. If I type in type(1) in Python, I get <type 'int'>, however, if I type in the same thing in the sage prompt I get <type 'sage.rings.integer.Integer'>.
If I wanted to replace Python int (or list or dict) with my own custom class, how might it be done? How difficult would it be (e.g. could I do it entirely in Python)?
|
Creating a customized language using Python
| 0.066568 | 0 | 0 | 123 |
10,377,131 |
2012-04-30T00:07:00.000
| 1 | 0 | 0 | 0 |
python,openerp
| 10,379,449 | 4 | false | 1 | 0 |
Depending on the logged-in user:
You can use the variable 'uid', but I don't think you can do 'uid.name' or 'uid.groups_id'. So the easier method will be the second.
Depending on the groups:
Example: we have some users who are managers and others who are not. Create a group 'Manager' (in an XML file!!!) and add this group to the managers. Now change the field in the XML like this:
<field name="name" string="this is the string" groups="my_module.my_reference_to_the_group"/>
The field will only be visible to managers.
| 1 | 1 | 0 |
I am using OpenERP 5.16 web.
Is there any way to hide a button depending on the logged-in user?
Or how can I control group visibility depending on the user's group?
|
button visibility in openerp
| 0.049958 | 0 | 0 | 2,413 |
10,378,539 |
2012-04-30T04:34:00.000
| 1 | 0 | 1 | 0 |
python,c,parsing,scala,lisp
| 10,397,259 | 2 | true | 0 | 0 |
In general, this is not possible without having a nearly complete language implementation.
There is a rudimentary preprocessor in C, which can mask function declarations from an ad hoc scan. There is powerful metaprogramming in Lisp, which means you can only extract the definitions using a full-featured Lisp compiler; simple parsing won't help at all.
Scala is the simplest of these three, but its syntax is still bloated enough that you'll need at least a complete parser. Python is not really the right tool for doing this sort of thing anyway.
| 1 | 1 | 0 |
In the title I mention 3 different languages; I would like to find out whether a Python package exists that can give me a list of identifiers for a program in any of those. It doesn't have to be all three of them, as I doubt there would be one like that. So my question is: does a function or class exist in Python that allows me to get a list of identifiers for a specific program in a language, preferably one of the 3 I listed in the title? Any help appreciated.
|
Python package to parse identifiers in a program (C, Scala, Lisp)?
| 1.2 | 0 | 0 | 294 |
10,378,591 |
2012-04-30T04:43:00.000
| 4 | 0 | 0 | 0 |
python,google-app-engine,python-2.7,jinja2
| 10,386,640 | 1 | true | 1 | 0 |
Looks like you're confused between NDB keys and db keys. The db.Key class (here shown as datastore_types.Key) does not have a get() method. However the NDB Key class (which would be google.appengine.ext.ndb.key.Key) does.
| 1 | 0 | 0 |
I have a list of keys and I'm trying to get the object(s) in a Jinja2 template:
{{item.cities[0].get().name}}
UndefinedError: 'google.appengine.api.datastore_types.Key object' has no attribute 'get'
I thought one could use get() on a key even in a template but here I get the error. Is it true that it can't be done?
|
Can I use get on a key in a jinja template?
| 1.2 | 0 | 0 | 1,136 |
10,380,922 |
2012-04-30T08:57:00.000
| 1 | 0 | 0 | 0 |
python,django,apache,heroku,social-networking
| 10,381,862 | 1 | false | 1 | 0 |
Apache Solr for fast indexing,
virtualenv,
a library that provides connection pooling (SQLAlchemy),
django-evolution or South for migrations.
| 1 | 1 | 0 |
I am planning to develop a social networking site in Python/Django. I have decided to use the following technologies to implement it. I have some doubts regarding these technologies; if anyone can help, it would be great. I want to avoid bottlenecks when it scales into the thousands of connections.
Apache as web-server
Mailgun cloud-based email service (Heroku addon)
RabbitMQ as a message queue(Heroku addon)if required
MySQL 5.1 as database system.(Xeround addon)
Git as file content management
Memcache to reduce database load (optional)
Heroku as a cloud based plattform(staging and live)
Which storage should I use for static file delivery, or is there a Heroku addon for static or content delivery?
Please advice.
Thanks in advance.
|
Django social networking site with heroku
| 0.197375 | 0 | 0 | 675 |
10,381,594 |
2012-04-30T09:48:00.000
| 0 | 0 | 0 | 0 |
python,string,random,passwords,md5
| 10,382,646 | 3 | false | 1 | 0 |
Short answer: you can't.
Longer answer:
An md5 hashsum contains 128 bits of information, so to store that you also need 128 bits. The closest you get from that to a human-readable form would probably be to base64-encode it; that will leave you with 22 characters (24 with padding). That's probably as short as it gets.*
Where does the randomness in your md5 hash come from anyway? md5 hashes aren't random, so you're probably hashing something random (what?) to get them (and by doing so you can't increase the entropy in any way, only decrease it).
*You could probably create your own way to encode the checksum using a larger range of characters from the Unicode range... but that would mean you'd have to select a suitable set of characters that anybody will know how to pronounce...
Something like ☺ ⚓ ⚔ ☂ ☏ would seem fairly clear, but some symbols like ♨ not so much...
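For reference, the base64 route from the answer applied to the digest in the question; urlsafe_b64encode uses exactly the a-zA-Z0-9_- alphabet and, with padding stripped, 16 raw bytes come out as 22 characters with no entropy lost:

```python
import base64
import binascii

s = "14966ba801aed57c2771c7487c7b194a"  # the hex digest from the question

raw = binascii.unhexlify(s)                         # 16 raw bytes = 128 bits
short = base64.urlsafe_b64encode(raw).rstrip(b"=")  # a-zA-Z0-9_- alphabet
print(short.decode("ascii"), len(short))            # 22 characters

# Reversible, so the full 128 bits of entropy are preserved
assert base64.urlsafe_b64decode(short + b"==") == raw
```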
| 1 | 0 | 0 |
I have an md5 checksum in Python, like s = '14966ba801aed57c2771c7487c7b194a'.
What I want is to shorten it and make it a string in the form 'a-zA-Z0-9_.-', without losing entropy of my random md5 checksum.
The output has to be pronounceable, so I can't just do binascii.unhexlify(s). Nor can I do base64.encodestring(s) and cut it, because then I will lose entropy.
Any ideas on how to solve this without mapping an insane number (256) of hex pairs (00->FF) to different letters?
The reason I want this is to be able to say a whole md5 checksum over the phone, but use the whole alphabet+numbers+some special characters.
|
Generating random string based on some hex
| 0 | 0 | 0 | 730 |