Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,376,188 |
2012-09-11T18:58:00.000
| 34 | 0 | 1 | 0 |
python,python-3.x,installation
| 12,376,330 | 1 | true | 0 | 0 |
Since the bytecode is unlikely to change regardless of how many times it is compiled, the interpreter can take advantage of the small speedup gain. Unless you are very short of hard drive space, you should select this option.
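As an aside, the same compilation can be triggered by hand with the stdlib compileall module; a minimal sketch (the directory path is only an example):

```python
# Byte-compile every .py file under a directory, as the installer
# option would do; the path is illustrative.
import compileall

compileall.compile_dir('C:/Python32/Lib', quiet=True)
```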
| 1 | 31 | 0 |
I'm installing Python 3.2 32bit on a Win7 machine, and there is the following option:
Compile .py Files to Byte Code after Installation
Should I leave the option unchecked, or is the compilation recommended?
|
Should I "Compile .py Files to Byte Code after Installation"?
| 1.2 | 0 | 0 | 10,315 |
12,379,289 |
2012-09-11T23:26:00.000
| 2 | 0 | 1 | 0 |
ipython
| 12,379,433 | 1 | true | 0 | 0 |
The first parameter to %logstart should be the path to the logfile. For example:
%logstart ~/mylog.log
Try ?%logstart to read about the available options.
| 1 | 1 | 0 |
I run ipython on Ubuntu 10.04 and log sessions by using %logstart.
I would like to define a path to the log file where the session is logged. At the moment I have ipython_log.py in my home directory.
Thanks
|
change path to log file from ipython
| 1.2 | 0 | 0 | 230 |
12,381,692 |
2012-09-12T05:26:00.000
| 10 | 0 | 0 | 0 |
emacs,python-mode
| 37,505,278 | 4 | false | 0 | 0 |
How to uncomment a code block in emacs python-mode?
Select the code, e.g. with Ctrl-Space to set the mark and move the cursor over the desired code.
Then, meta-semicolon: Meta-;
That's Escape followed by ;, or hold down Alt-;
The same method will also comment code.
| 3 | 10 | 0 |
I just started using python-mode in emacs and I noticed that while the major mode has an option for commenting out a region ((py-comment-region), which is bound to (C-c #)) there is no option to uncomment a code block which is already commented. I checked all the active keybinds in python-mode and could not find any relevant key. Am I missing something?
I did think of a couple of workarounds, like using (delete-rectangle) (bound to C-x r d) to delete the comments.
Another method would be to bind (comment-or-uncomment-region) to some key and start using that.
But is there any option provided in python-mode itself by default?
|
How to uncomment code block in emacs python-mode?
| 1 | 0 | 0 | 13,149 |
12,381,692 |
2012-09-12T05:26:00.000
| 17 | 0 | 0 | 0 |
emacs,python-mode
| 18,323,802 | 4 | false | 0 | 0 |
Not sure about your setup but I use M-; and it works for me.
| 3 | 10 | 0 |
I just started using python-mode in emacs and I noticed that while the major mode has an option for commenting out a region ((py-comment-region), which is bound to (C-c #)) there is no option to uncomment a code block which is already commented. I checked all the active keybinds in python-mode and could not find any relevant key. Am I missing something?
I did think of a couple of workarounds, like using (delete-rectangle) (bound to C-x r d) to delete the comments.
Another method would be to bind (comment-or-uncomment-region) to some key and start using that.
But is there any option provided in python-mode itself by default?
|
How to uncomment code block in emacs python-mode?
| 1 | 0 | 0 | 13,149 |
12,381,692 |
2012-09-12T05:26:00.000
| 2 | 0 | 0 | 0 |
emacs,python-mode
| 12,456,968 | 4 | false | 0 | 0 |
Most comment region functions will uncomment a region with C-u comment-region-function
| 3 | 10 | 0 |
I just started using python-mode in emacs and I noticed that while the major mode has an option for commenting out a region ((py-comment-region), which is bound to (C-c #)) there is no option to uncomment a code block which is already commented. I checked all the active keybinds in python-mode and could not find any relevant key. Am I missing something?
I did think of a couple of workarounds, like using (delete-rectangle) (bound to C-x r d) to delete the comments.
Another method would be to bind (comment-or-uncomment-region) to some key and start using that.
But is there any option provided in python-mode itself by default?
|
How to uncomment code block in emacs python-mode?
| 0.099668 | 0 | 0 | 13,149 |
12,382,229 |
2012-09-12T06:19:00.000
| 0 | 0 | 0 | 0 |
python,multithreading,python-2.7,exit,kill
| 12,390,276 | 1 | true | 0 | 0 |
I guess you are launching the threads and then the main thread is waiting to join them on termination.
You should catch the exception generated by Ctrl-C in the main thread in order to signal the spawned threads to terminate (by changing a flag in each thread, for instance). That way, all the child threads will terminate and the main thread will complete the join calls, reaching the bottom of your main.
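A minimal sketch of that flag pattern (the worker body is a placeholder):

```python
# Workers poll a shared Event; the main thread catches Ctrl-C
# (KeyboardInterrupt) and sets the flag so everyone can exit.
import threading
import time

stop = threading.Event()

def worker(n):
    while not stop.is_set():
        time.sleep(0.1)  # placeholder for the real socket/semaphore work

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
try:
    while any(t.is_alive() for t in threads):
        time.sleep(0.5)  # poll instead of a bare join, so Ctrl-C lands here
except KeyboardInterrupt:
    stop.set()           # signal every worker to finish
for t in threads:
    t.join()
```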
| 1 | 0 | 0 |
I would like to know why python2.7 doesn't drop blocking operations when Ctrl+C is pressed. I am unable to kill my threaded application; there are several socket waits, semaphore waits and so on. In python3, Ctrl+C dropped every blocking operation and garbage-collected everything, released all the sockets and whatsoever... Is there a way to accomplish this (I am convinced there is, I just don't know how yet)? A signal handler? Thanks guys
|
Terminate python application waiting on semaphore
| 1.2 | 0 | 1 | 487 |
12,383,270 |
2012-09-12T07:36:00.000
| 0 | 1 | 0 | 0 |
python,c,rabbitmq
| 20,113,247 | 1 | false | 0 | 0 |
To get a good throughput one should monitor:
Flow control: memory-based — ensure alert levels are set correctly to avoid connections blocking; connection-based — check that publisher and consumer rates are appropriate to avoid flow control.
Setting appropriate QoS values for consumers (see the sketch below).
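A minimal pika sketch of the QoS point; the host and prefetch value are examples:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Allow up to 100 unacknowledged messages in flight per consumer;
# too low starves the consumer, too high floods it.
channel.basic_qos(prefetch_count=100)
```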
| 1 | 0 | 0 |
I have installed rabbitmq, and use pika in python and rabbitmq-c in C for testing.
I have done nothing to rabbitmq except that I modified the listener port to my own one.
The producer works the whole night to put enough messages into rabbitmq, about 1000K durable messages.
The consumer is written both in C and python, but its qps is just 80 per queue.
The articles on the internet say that their single queue can reach 15000 qps, so what's wrong with mine? Do I need to configure some essential things about rabbitmq?
Each message is about 100 bytes long, I use consumer acks, and the queue and messages are both durable.
|
My rabbitmq's qps is only 80 in pika, 1000 in rabbitmq-c, what's wrong with it?
| 0 | 0 | 0 | 319 |
12,383,272 |
2012-09-12T07:36:00.000
| 6 | 0 | 0 | 0 |
python,django,websocket
| 12,393,867 | 3 | true | 1 | 0 |
There are a few options for how to handle it:
Create a simple REST API in your Tornado server and post your updates from Django using this API;
Use Redis. Tornado can subscribe to the update key and Django can publish updates to this key when something happens (see the sketch below);
Use ZeroMQ (AMQP, etc.) to send updates from Django to the Tornado backend (a variation of 1 and 2).
In most cases it is either the first or the second option. Some people prefer the 3rd option though.
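A sketch of the second option with redis-py; the channel name and payload shape are assumptions:

```python
import json
import redis

r = redis.Redis(host='localhost', port=6379)

# Django side (e.g. in a post_save signal handler): publish the update.
r.publish('updates', json.dumps({'event': 'saved', 'id': 42}))

# Tornado side (run this loop off the IOLoop thread, since listen() blocks):
sub = r.pubsub()
sub.subscribe('updates')
for message in sub.listen():
    if message['type'] == 'message':
        payload = json.loads(message['data'])
        # ...broadcast payload to the connected sockjs clients...
```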
| 1 | 6 | 0 |
I use https://github.com/mrjoes/sockjs-tornado for a Django app. I can send messages from the javascript console very easily. But I want to create a signal in Django and send a json string once the signal fires.
Could anyone give me a way to send a certain message in Python to the sockjs-tornado socket server?
|
Sockjs - Send message to sockjs-tornado in Python code
| 1.2 | 0 | 0 | 2,233 |
12,383,540 |
2012-09-12T07:53:00.000
| 3 | 0 | 0 | 0 |
python,django,authentication
| 12,383,605 | 6 | false | 1 | 0 |
There's no need to write an authentication backend for the use case you have described. Writing an IP-based dispatcher in the middleware layer will likely be sufficient.
If your app's url(s) is/are matched, process_request should check for an authenticated django user and match that user against a whitelist.
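A rough sketch of such a middleware (old-style Django middleware; the URL prefix and network prefix are assumptions, and behind a proxy you may need X-Forwarded-For instead of REMOTE_ADDR):

```python
from django.contrib.auth.views import redirect_to_login

TRUSTED_PREFIX = '10.1.'  # assumed internal network

class IPOrLoginMiddleware(object):
    # Must run after AuthenticationMiddleware so request.user exists.
    def process_request(self, request):
        if not request.path.startswith('/restricted/'):
            return None  # not the protected view, let it through
        ip = request.META.get('REMOTE_ADDR', '')
        if ip.startswith(TRUSTED_PREFIX):
            return None  # trusted network, no login required
        if request.user.is_authenticated():
            return None  # authenticated via the normal backend
        return redirect_to_login(request.get_full_path())
```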
| 1 | 18 | 0 |
I have a small Django application with a view that I want to restrict to certain users. Anyone from a specific network should be able to see that view without any further authentication, based on IP address alone. Anyone else from outside this IP range should be asked for a password and authenticated against the default Django user management.
I assume I have to write a custom authentication backend for that, but the documentation confuses me as the authenticate() function seems to expect a username/password combination or a token. It is not clear to me how to authenticate using IP addresses here.
What would be the proper way to implement IP address-based authentication in Django? I'd prefer to use as many existing library functions as possible for security-related code instead of writing it all myself.
|
Authenticate by IP address in Django
| 0.099668 | 0 | 0 | 16,527 |
12,383,697 |
2012-09-12T08:04:00.000
| 11 | 0 | 0 | 1 |
python,cookies,tornado
| 12,385,159 | 1 | true | 1 | 0 |
It seems to me that you are really on the right track: you tried lower and lower values, and the cookie got a lower and lower expiration time.
Pass expires_days=None to make it a session cookie (which expires when the browser is closed).
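A minimal sketch inside a handler; the cookie name and value are examples, and the Application must define cookie_secret:

```python
import tornado.web

class LoginHandler(tornado.web.RequestHandler):
    def get(self):
        # expires_days=None -> signed cookie that lasts only for the
        # browser session instead of the default 30 days.
        self.set_secure_cookie('user', 'some-user-id', expires_days=None)
        self.write('ok')
```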
| 1 | 7 | 0 |
How can I set in Tornado a secure cookie that expires when the browser is closed?
If I use set_cookie I can do this without passing extra arguments (I just set the cookie), but what about when I have to use set_secure_cookie?
I tried almost everything:
passing nothing: the expiration is set to its default value, that is 1 month
passing an integer value: the value is interpreted as days, i.e. 1 means 1 day
passing a float value: it works; for example, setting 0.1 means almost one and a half hours
|
Tornado secure cookie expiration (aka secure session cookie)
| 1.2 | 0 | 0 | 4,626 |
12,384,056 |
2012-09-12T08:29:00.000
| 1 | 0 | 0 | 0 |
python,urllib2
| 12,384,339 | 1 | false | 1 | 0 |
You have to figure out the call to that second page, including the parameters sent, so you can make that call yourself from your python code. The best way is to navigate the first page with the Google Chrome page inspector opened, then go to the Network tab where the POST call will be captured, and you can see the parameters sent and all. Then just recreate that same POST call from urllib2.
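A sketch of recreating such a POST; the URL and field names are placeholders — use whatever the Network tab shows:

```python
import urllib
import urllib2

params = urllib.urlencode({'var': '999-999', 'other_field': 'value'})
response = urllib2.urlopen('http://www.abc.com/handler', params)
html = response.read()  # the final page's HTML
```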
| 1 | 0 | 0 |
I am trying to fetch the HTML content of a website using urllib2. The site has a body onload event that submits a form on this site, and hence it goes to a destination site and renders the details I need.
response = urllib2.urlopen('www.xyz.com?var=999-999')
www.xyz.com contains a form that is posted to "www.abc.com"; this
action value varies depending upon the content of the url 'var=999-999',
which means the action value will change if the var value changes to
'888-888'
response.read()
this still gives me the html content of "www.xyz.com", but I want
that of the resulting action url. Any suggestions for fetching the html
content from the final page?
Thanks in advance
|
Fetch html content from a destination url that is on onload of the first site in urllib2
| 0.197375 | 0 | 1 | 296 |
12,387,707 |
2012-09-12T11:59:00.000
| 0 | 0 | 0 | 1 |
python,django,apache,comet,wsgi
| 12,390,045 | 3 | false | 1 | 0 |
If one user is all it takes to bring your webserver down, then the problem is not apache or mod_wsgi.
First you should optimize your tiling routines and check that you really only deliver the data a user actually sees.
After that, a faster cpu, more ram, an ssd and aggressive caching will give you more performance.
Lastly, you may get some extra points for using another webserver, but don't expect too much from that.
| 2 | 4 | 0 |
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down.
Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that I switched the WSGI server to be gevent with Apache used as a proxy. However, I haven't managed to get the push thing working yet and I am not quite sure if this is the right direction anyway. Based on that I have three questions:
Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives?
If I hand over the image processing to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it?
If some kind of pushing to the client is used, would it be better to use one connection or one for each image (and close it when done)?
Background:
The Django application I am working on allows a user to display very large images. This is done by tiling the large images beforehand and showing only the currently relevant tiles in a grid to the user. From what I understand this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z a user can scroll through (biological images).
All this works fine when the tiles are statically served. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, …. This works, but is some heavy load for the web server as one image takes about 0.1s to be generated. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will lead to a hanging web server. I already tried to adjust MaxClients, etc. and turned KeepAlive off. I also tried different thread/processes combinations for mod_wsgi. However, nothing helped enough to allow usage for more than one user. Therefore, I thought a Comet/WebSocket way could help here.
|
Serving many on-the-fly generated images with Django
| 0 | 0 | 0 | 676 |
12,387,707 |
2012-09-12T11:59:00.000
| 1 | 0 | 0 | 1 |
python,django,apache,comet,wsgi
| 12,390,401 | 3 | false | 1 | 0 |
All this works fine when the tiles are statically served. Now I added
the option to generate those tiles on the fly -- different images are
merged, color corrected, …. This works, but is some heavy load for the
web server as one image takes about 0.1s to be generated.
You need a load balancer, with image requests being sent to a front-end server (e.g. NginX) that will multiplex (and cache!) as many requests as needed, provided you supply enough backend servers to do the heavy lifting.
This looks like a classic case for Amazon distributed computing: you could store the tiles in S3 storage (or maybe NFS over EBS). All the image manipulation servers get the data from a single image repository.
At the beginning, you can have both the Web application and one instance of the image manipulation server on the same machine. But basically you have three processes:
Web serving that calculates image URLs (you'll need some way to encode the manipulation as parameters in the URLs, otherwise you'll have to use cookies and session storage, which is ickier)
image server that receives the "image formula" and provides the JPEG tile
file server that allows access to the large images or single original tiles
I have worked at several such architectures, wherein our image layers were stored in a single image file (e.g. five zoom levels, each fifteen channels from FIR to UV, for a total of 75 "images" up to 100K pixels on a side, and the client could request 'Zoom level 2, red channel plus double of difference between UV-1 channel and green, tiles from X=157, Y=195 to X=167,Y=205').
| 2 | 4 | 0 |
Similar to a tiling server for spatial image data, I want to view many on-the-fly generated images in my Django based web application (merge images, color change, etc.). Since one client can easily request many (>100) images in a short time, it is easy to bring the web server (Apache + mod_wsgi) down.
Hence, I am looking for alternative ways. Since we already use Celery, it might be a good idea to do this image processing asynchronously and push the generated data to the client. To get started with that I switched the WSGI server to be gevent with Apache used as a proxy. However, I haven't managed to get the push thing working yet and I am not quite sure if this is the right direction anyway. Based on that I have three questions:
Do you think this (Celery, gevent, Socket.IO) is a sensible way to allow many clients to use the application without bringing the web server down? Do you see alternatives?
If I hand over the image processing to Celery and let it push the image data to the browser when it is done, the connection won't go through Apache, will it?
If some kind of pushing to the client is used, would it be better to use one connection or one for each image (and close it when done)?
Background:
The Django application I am working on allows a user to display very large images. This is done by tiling the large images beforehand and showing only the currently relevant tiles in a grid to the user. From what I understand this is the standard way to serve data in the field of mapping and spatial image data (e.g. OpenStreetMap). But unlike mapping data, we also have many slices in Z a user can scroll through (biological images).
All this works fine when the tiles are statically served. Now I added the option to generate those tiles on the fly -- different images are merged, color corrected, …. This works, but is some heavy load for the web server as one image takes about 0.1s to be generated. Currently we use Apache with mod_wsgi (WSGIRestrictedEmbedded On) and it is easy to bring the server down. Just browsing through the image stack will lead to a hanging web server. I already tried to adjust MaxClients, etc. and turned KeepAlive off. I also tried different thread/processes combinations for mod_wsgi. However, nothing helped enough to allow usage for more than one user. Therefore, I thought a Comet/WebSocket way could help here.
|
Serving many on-the-fly generated images with Django
| 0.066568 | 0 | 0 | 676 |
12,391,736 |
2012-09-12T15:34:00.000
| 0 | 0 | 0 | 0 |
python,session,sessionid,webapp2
| 12,562,691 | 3 | false | 1 | 0 |
Make a modification to what you already do: when the user logs in, create a unique/random token, store it in the user object, and set a cookie in the browser with it. When the user's session is requested, check that the two tokens (from the request cookie and the user object) match and, if not, burn the session.
It's just the same, but instead of remote_addr you use a random token that you generate and set as a cookie on login.
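A minimal sketch of that token idea in a webapp2 handler; the user lookup and storage calls are app-specific assumptions:

```python
import uuid
import webapp2

class LoginHandler(webapp2.RequestHandler):
    def post(self):
        user = self.authenticate()    # hypothetical app-specific lookup
        token = uuid.uuid4().hex      # fresh random token per login
        user.session_token = token    # store it on the user object
        user.put()
        self.response.set_cookie('session_token', token)

    def authenticate(self):
        raise NotImplementedError     # app-specific
```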
| 2 | 1 | 0 |
I have the following requirement in a webapp2 application. When a user leaves his machine or browser, that user's previous authentication session should be terminated.
I am able to do this when a user logs in from a different machine, by storing the remote_addr in the User object at login. When the user's session is requested I check the remote_addr from the request against the user's remote_addr at login.
I am not happy with this solution, as it will not work when the user is behind a proxy server and also, it will not work when the user uses different browsers.
Does webapp2 store a session id somewhere, so I can use that to see if the user has logged on in a new session?
|
Webapp2 - Invalidate user login session, when the user logs in from a different browser
| 0 | 0 | 0 | 891 |
12,391,736 |
2012-09-12T15:34:00.000
| 0 | 0 | 0 | 0 |
python,session,sessionid,webapp2
| 12,605,535 | 3 | false | 1 | 0 |
When you open a website for the first time in a browser session, a site session is created.
When the user logs in you just store the session id in the database.
You need to have a table with active logins.
You can also set a cookie in the browser if you want to keep the user logged in after closing and restarting the browser later.
Obviously, if the cookie exists, change the session id to the one in the cookie.
Cookies are not shared between browsers, so in this case if the user logs in from a new browser you change the session id in the active-logins table.
Also you need to have a small ajax call that checks if the current session is still active every 5 min or so, and logs the user out if not.
| 2 | 1 | 0 |
I have the following requirement in a webapp2 application. When a user leaves his machine or browser, that user's previous authentication session should be terminated.
I am able to do this when a user logs in from a different machine, by storing the remote_addr in the User object at login. When the user's session is requested I check the remote_addr from the request against the user's remote_addr at login.
I am not happy with this solution, as it will not work when the user is behind a proxy server and also, it will not work when the user uses different browsers.
Does webapp2 store a session id somewhere, so I can use that to see if the user has logged on in a new session?
|
Webapp2 - Invalidate user login session, when the user logs in from a different browser
| 0 | 0 | 0 | 891 |
12,392,699 |
2012-09-12T16:34:00.000
| 0 | 0 | 0 | 1 |
python,macos,python-3.x,easy-install
| 18,519,381 | 2 | false | 0 | 0 |
For what it's worth, on my install of python3 (using homebrew), calling the correct binary was all that was required. easy_install3 was already on the system path, as was easy_install-3.3.
| 1 | 2 | 0 |
I am a windows 7 user, so pardon me for my ignorance. I have been trying to help my friend get easy_install working on her Mac OS X laptop. We managed to get everything working for 2.7 with these commands in the terminal:
python distribute_setup.py (which installs "distribute")
easy_install
We tried the same thing for Python 3.2.3:
python3.2 distribute_setup.py
easy_install
But the package gets installed for python 2.7 instead of 3.2.3. From what I know, this is because easy_install only works with 2.7.
On my windows 7, I managed to do all these by going into the command prompt, python32 directory and doing:
python distribute_setup.py
Then going into the python32/script directory and running easy_install.exe directly:
easy_install
This installs the package to python 3.2.3 with no problems.
Question:
What should we be doing for Mac OS X? Is there a Mac equivalent of running "easy_install.exe"?
|
Python 3.2.3, easy_install, Mac OS X
| 0 | 0 | 0 | 5,950 |
12,394,528 |
2012-09-12T18:54:00.000
| 2 | 0 | 0 | 0 |
python,database,python-3.x,redis
| 12,394,712 | 1 | true | 1 | 0 |
The method entirely depends on the requirements.
If there is only one client reading and modifying the properties, this is a rather simple problem. When modifying data, just change the instance attributes in your current Python program and -- at the same time -- keep the DB in sync while keeping your program responsive. To that end, you should outsource blocking calls to another thread or make use of greenlets. If there is only one client, there definitely is no need to fetch a property from the DB on each value lookup.
If there are multiple clients reading the data and only one client modifying the data, you have to think about which level of synchronization you need. If you need 100 % synchronization, you will have to fetch data from the DB on each value lookup.
If there are multiple clients changing the data in the database you better look into a rock-solid industry standard solution rather than writing your own DB cache/mapper.
Your distinction between (2) and (3) does not really make sense. If you fetch data on every lookup, there is no need to 'store' data. You see, if there can be multiple clients involved these things quickly become quite complex and it's really hard to get it right.
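For the single-client case, a sketch of the lazy-load-then-cache variant with redis-py; the key layout is an assumption:

```python
import redis

r = redis.Redis()

class User(object):
    def __init__(self, user_id):
        self.user_id = user_id

    @property
    def nick(self):
        # Hit Redis only on first access, then serve the cached value.
        if not hasattr(self, '_nick'):
            self._nick = r.hget('user:%s' % self.user_id, 'nick')
        return self._nick
```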
| 1 | 0 | 0 |
For example, I have an object user stored in the database (Redis).
It has several fields:
String nick
String password
String email
List posts
List comments
Set followers
and so on...
In my Python program I have a class (User) with the same fields for this object. Instances of this class map to objects in the database. The question is how to get data from the DB for the best performance:
Load values for each field on instance creation and initialize the fields with them.
Load the field value each time the value is requested.
As the second one, but after the value loads, replace the field property with the loaded value.
p.s. redis runs in localhost
|
Which data load method is the best for performance?
| 1.2 | 0 | 0 | 96 |
12,396,721 |
2012-09-12T21:59:00.000
| 2 | 0 | 1 | 0 |
python,import,names
| 12,396,773 | 4 | false | 0 | 0 |
You could always remove the current directory from sys.path, but that's very hackish and not reliable. After testing, I realized that this can work if the file is being run (as __main__), but not if it's being imported, which makes it very unreliable.
I think the best thing you could do is not name your file with a name that is used by a std lib package.
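If renaming really isn't an option, a hedged Python 2 sketch that skips the script's own directory when importing; it assumes sys.path[0] is that directory, which holds when the file is run as a script:

```python
import imp
import sys

def import_stdlib(name):
    # Search everywhere except sys.path[0], i.e. skip our own file.
    f, pathname, desc = imp.find_module(name, sys.path[1:])
    try:
        return imp.load_module('std_' + name, f, pathname, desc)
    finally:
        if f:
            f.close()

logging = import_stdlib('logging')
```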
| 1 | 4 | 0 |
Let's say you're in a file called logging.py. If you try to import the standard logging module, you'll end up importing the file you're in. How can you import the standard logging module from here?
|
Importing a file with the same name as the file you're in
| 0.099668 | 0 | 0 | 512 |
12,398,517 |
2012-09-13T02:12:00.000
| 0 | 0 | 0 | 0 |
python,protocol-buffers,pcap
| 12,399,100 | 1 | false | 0 | 0 |
So you want to reconstruct what .proto messages were being passed over the application-layer protocol?
This isn't as easy as it sounds. First, .proto messages can't be sent raw over the wire, as the receiver needs to know how long they are. They need to be encapsulated somehow, maybe in an HTTP POST or with a raw 4-byte size prepended. I don't know what it would be for your application, but you'll need to deal with that.
Second, you can't reconstruct the full .proto from the messages alone. You only get tag numbers and types, not names. In addition, you will lose information about submessages - submessages and plain strings are encoded identically (you could probably tell which is which by eyeballing them, but I don't think you could do it automatically). You also will never know about optional items that never got sent. But you could parse the buffer without the proto and get some reasonable data (ints, repeated strings, and such).
Third, you need to reconstruct the application byte stream from the pcap log. I'm not sure how to do that, but I suspect there are tools that would do that for you.
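For the framing step, a sketch that assumes one common convention, a 4-byte big-endian length prefix (your application may frame differently):

```python
import struct

def iter_messages(buf):
    # Yield the raw protobuf payloads from a reassembled byte stream.
    offset = 0
    while offset + 4 <= len(buf):
        (size,) = struct.unpack_from('>I', buf, offset)
        offset += 4
        yield buf[offset:offset + size]
        offset += size
```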
| 1 | 0 | 0 |
I want to parse application-layer protocols from a network trace using Google protocol buffers and replay the trace (I am using python). I need suggestions for automatically generating the protocol message description (in a .proto file) from a network trace.
|
Google protocol buffer for parsing Text and Binary protocol messages in network trace (PCAP)
| 0 | 0 | 1 | 455 |
12,407,485 |
2012-09-13T13:31:00.000
| 1 | 1 | 0 | 0 |
python,rabbitmq,messaging,pika
| 12,478,098 | 2 | true | 0 | 0 |
After some researching it seems that this is not possible. If you look at the tutorial on RabbitMQ.com you see that there is an id for the call which, as far as I understand, gets consumed.
I've chosen to go another way, which is reading the log files and aggregating the data.
| 1 | 3 | 0 |
I have a consumer which listens for messages; if the flow of messages is more than the consumer can handle, I want to start another instance of this consumer.
But I also want to be able to poll the consumer(s) for information. My thought was that I could use RPC to request this information from the producers by using a fanout exchange, so all the producers get the RPC call.
My question is, first of all, is this possible, and secondly, is it reasonable?
|
RPC calls to multiple consumers
| 1.2 | 0 | 1 | 2,271 |
12,407,939 |
2012-09-13T13:56:00.000
| 0 | 0 | 0 | 1 |
python,apache,tcp,broken-pipe,sigpipe
| 12,423,190 | 1 | true | 0 | 0 |
I had to set the apache settings to the following:
KeepAlive On
MaxKeepAliveRequests 0
KeepAliveTimeout 5
I will investigate the problem further and see if this is the proper solution.
| 1 | 0 | 0 |
My TCP Server is written in Qt 4.7, works well with TCP Client also written in Qt 4.7.
I am trying to connect and communicate with the Server using a client written in python 2.7.3. I start the Server process via an apache http request with subprocess.call(path_to_server). I am using mod_wsgi 3.3 and django 1.4.
The connection is established without a problem. I receive an [Errno 32] Broken pipe exception on socket.send() randomly (I can spam the same msg 10 times and it will be sent 0-10 times). The same happens with socket.shutdown() & socket.close(): I can spam the disconnect command and it will randomly disconnect; otherwise I receive an [Errno 107] Transport endpoint is not connected exception.
netstat -nap says connection is established.
When I try running same client script using python2.7 shell everything works fine.
What am I missing here?
EDIT
Everything works on Windows 7, running the same apache, mod_wsgi, python, django configuration. The TCP Server is also Windows compatible. The error happens on centos 6.2 32bit.
|
Errno 32 Broken pipe, Errno 107 Transport endpoint is not connected python socket
| 1.2 | 0 | 0 | 2,488 |
12,410,113 |
2012-09-13T15:47:00.000
| 9 | 0 | 1 | 0 |
python,git,github,virtualenv
| 12,410,239 | 4 | false | 1 | 0 |
That's because you're not even supposed to move virtualenvs to different locations on one system (there's relocation support, but it's experimental), let alone from one system to another. Create a new virtualenv:
Install virtualenv on the other system
Get a requirements.txt, either by writing one or by storing the output of pip freeze (and editing the output)
Move the requirements.txt to the other system, create a new virtualenv, and install the libraries via pip install -r requirements.txt.
Clone the git repository on the other system
For more advanced needs, you can create a bootstrapping script which includes virtualenv + custom code to set up anything else.
EDIT: Having the root of the virtualenv and the root of your repository in the same directory seems like a pretty bad idea to me. Put the repository in a directory inside the virtualenv root, or put them into completely separate trees. Not only do you avoid git complaining about existing files (rightfully -- usually, everything not tracked by git is fair game to delete), you can also use the virtualenv for multiple repositories and avoid name collisions.
| 3 | 8 | 0 |
I primarily work these days with Python 2.7 and Django 1.3.3 (hosted on Heroku) and I have multiple projects that I maintain. I've been working on a Desktop with Ubuntu running inside of a VirtualBox, but recently had to take a trip and wanted to get everything loaded up on my notebook. But, what I quickly discovered was that virtualenv + Github is really easy for creating projects, but I struggled to try and get them moved over to my notebook. The approach that I sort of came up with was to create a new virtualenv and then clone the code from github. But, I couldn't do it in the folder that I really wanted because it would say the folder is not empty. So, I would clone it to a tmp folder and then cut/paste everything into where I really wanted it. Not TERRIBLE, but I just feel like I'm missing something here and that it should be easier. Maybe clone first, then mkvirtualenv?
It's not a crushing problem, but I'm thinking about making some more changes (like getting rid of the VirtualBox and just going with a dual-boot system) and it would be great if I could make it a bit smoother. :)
Finally, I found and read a few posts about moving git repos between computers, but I didn't see any dealing with Virtualenv (maybe I just missed it).
EDIT: Just to be clear and avoid confusion, I'm not trying to "move" the virtualenv. I'm just talking about the best way to create a new one, install the packages, and then clone the repo from github.
|
Migrating virtualenv and Github between computers
| 1 | 0 | 0 | 8,279 |
12,410,113 |
2012-09-13T15:47:00.000
| 1 | 0 | 1 | 0 |
python,git,github,virtualenv
| 12,410,172 | 4 | false | 1 | 0 |
The nice thing about a virtualenv is that you can describe how to make one, and you can make it repeatedly on multiple platforms.
So, instead of cloning the whole thing, clone a method to create the virtualenv consistently, and have that in your git repository. This way you avoid platform-specific nasties.
| 3 | 8 | 0 |
I primarily work these days with Python 2.7 and Django 1.3.3 (hosted on Heroku) and I have multiple projects that I maintain. I've been working on a Desktop with Ubuntu running inside of a VirtualBox, but recently had to take a trip and wanted to get everything loaded up on my notebook. But, what I quickly discovered was that virtualenv + Github is really easy for creating projects, but I struggled to try and get them moved over to my notebook. The approach that I sort of came up with was to create a new virtualenv and then clone the code from github. But, I couldn't do it in the folder that I really wanted because it would say the folder is not empty. So, I would clone it to a tmp folder and then cut/paste everything into where I really wanted it. Not TERRIBLE, but I just feel like I'm missing something here and that it should be easier. Maybe clone first, then mkvirtualenv?
It's not a crushing problem, but I'm thinking about making some more changes (like getting rid of the VirtualBox and just going with a dual-boot system) and it would be great if I could make it a bit smoother. :)
Finally, I found and read a few posts about moving git repos between computers, but I didn't see any dealing with Virtualenv (maybe I just missed it).
EDIT: Just to be clear and avoid confusion, I'm not trying to "move" the virtualenv. I'm just talking about the best way to create a new one, install the packages, and then clone the repo from github.
|
Migrating virtualenv and Github between computers
| 0.049958 | 0 | 0 | 8,279 |
12,410,113 |
2012-09-13T15:47:00.000
| 3 | 0 | 1 | 0 |
python,git,github,virtualenv
| 12,410,230 | 4 | false | 1 | 0 |
In addition to scripting the creation of a new virtualenv, you should make a requirements.txt file that has all of your dependencies (e.g. Django==1.3); you can then run pip install -r requirements.txt and this will install all of your dependencies for you.
You can even have pip create this for you by doing pip freeze > stable-req.txt, which will print out your dependencies as they are in your current virtualenv. You can then keep the requirements.txt under version control.
| 3 | 8 | 0 |
I primarily work these days with Python 2.7 and Django 1.3.3 (hosted on Heroku) and I have multiple projects that I maintain. I've been working on a Desktop with Ubuntu running inside of a VirtualBox, but recently had to take a trip and wanted to get everything loaded up on my notebook. But, what I quickly discovered was that virtualenv + Github is really easy for creating projects, but I struggled to try and get them moved over to my notebook. The approach that I sort of came up with was to create a new virtualenv and then clone the code from github. But, I couldn't do it in the folder that I really wanted because it would say the folder is not empty. So, I would clone it to a tmp folder and then cut/paste everything into where I really wanted it. Not TERRIBLE, but I just feel like I'm missing something here and that it should be easier. Maybe clone first, then mkvirtualenv?
It's not a crushing problem, but I'm thinking about making some more changes (like getting rid of the VirtualBox and just going with a dual-boot system) and it would be great if I could make it a bit smoother. :)
Finally, I found and read a few posts about moving git repos between computers, but I didn't see any dealing with Virtualenv (maybe I just missed it).
EDIT: Just to be clear and avoid confusion, I'm not trying to "move" the virtualenv. I'm just talking about the best way to create a new one, install the packages, and then clone the repo from github.
|
Migrating virtualenv and Github between computers
| 0.148885 | 0 | 0 | 8,279 |
12,410,617 |
2012-09-13T16:19:00.000
| 0 | 0 | 1 | 0 |
python,regex
| 12,411,195 | 2 | false | 0 | 0 |
As Kash said, use \.{3} rather than [...]. The problem is that the period is a special character in regular expressions; it means "any character". So, in order to represent a literal period, you must escape it with a backslash. \.{3} looks for occurrences of exactly three periods in a row. \.+ looks for any occurrence of one or more periods in a row (careful, it will also match single periods). \.\.+ will find all instances of two or more periods in a row, which I expect is what you're looking for.
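For example (Python 2):

```python
import re

name = 'my...file..name.txt'
print re.sub(r'\.\.+', '', name)  # -> 'myfilename.txt'
```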
| 2 | 0 | 0 |
I downloaded this tool (http://www3.telus.net/pfrank/) to rename thousands of file names (that I inherited) that have a bunch of illegal characters. I am trying to find ... anywhere in the file name and replace it with nothing.
I tried [...] but it only removes from the end of the file name. Any suggestions?
|
Search pattern (Regex) python
| 0 | 0 | 0 | 96 |
12,410,617 |
2012-09-13T16:19:00.000
| 0 | 0 | 1 | 0 |
python,regex
| 12,411,978 | 2 | true | 0 | 0 |
Use \.+ instead of [...].
If you are looking for specifically 3 dots, then \.{3}
| 2 | 0 | 0 |
I downloaded this tool (http://www3.telus.net/pfrank/) to rename thousands of file names (that I inherited) that have a bunch of illegal characters. I am trying to find ... anywhere in the file name and replace it with nothing.
I tried [...] but it only removes from the end of the file name. Any suggestions?
|
Search pattern (Regex) python
| 1.2 | 0 | 0 | 96 |
12,413,559 |
2012-09-13T19:37:00.000
| 3 | 0 | 1 | 0 |
python,glob
| 12,413,647 | 1 | true | 0 | 0 |
Personally I would copy the regexp. It's not like the definition of glob patterns will ever change, forcing you to change it in your code. The fact that the method is not made available externally by the stdlib means that there is no promise that it won't change in the future. I wouldn't worry about it CHANGING in the future (for the same reason as above: the definition of a glob pattern won't change) but I would be worried that it might be REMOVED if the module's implementation were refactored so that it was not required anymore.
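If you'd rather not rely on the undocumented helper, the whole check is small enough to inline (this mirrors the stdlib's regex):

```python
import re

_magic_check = re.compile('[*?[]')

def has_magic(s):
    return _magic_check.search(s) is not None
```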
| 1 | 5 | 0 |
I have a bit of code where I'd like to know if a path has shell wildcards, which seems like something that would be defined in a central location. I've found that glob.has_magic() provides this (it's just a regex: '[*?[]'). But this method is not listed in the module's __all__ list, and does not appear in the pydoc.
Should I just copy this regex into my code? (I'd prefer not to)
Is there a risk of this method being removed in future versions of python, since it does not show up in the documentation?
|
Is it ok to use glob.has_magic?
| 1.2 | 0 | 0 | 998 |
12,414,570 |
2012-09-13T20:49:00.000
| 3 | 0 | 1 | 0 |
python,search-path
| 12,414,602 | 2 | true | 0 | 0 |
The module search path always exists, even before you import the sys module. The sys module just makes it available for you.
It reflects the contents of the environment variable $PYTHONPATH, or a system default if you have not set that environment variable.
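A quick way to see both (works unchanged on Python 2 and 3):

```python
import os
import sys

print(os.environ.get('PYTHONPATH', '(not set)'))
for entry in sys.path:
    print(entry)
```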
| 1 | 1 | 0 |
I'm new to python, and I find that to see the import search paths you have to import the sys module and then access the list of paths using sys.path. If this list is not available until I explicitly import the sys module, how does the interpreter figure out where this module resides?
thanks for any explanation.
|
how the python interpreter find the modules path?
| 1.2 | 0 | 0 | 1,511 |
12,416,328 |
2012-09-13T23:42:00.000
| 3 | 0 | 0 | 0 |
python,django,e-commerce
| 12,416,400 | 1 | false | 1 | 0 |
Better idea to use sites framework?
Yes.
I don't think so but I could be wrong?
No.
How would django handle many websites?
vhosts.
Could it be solved through some kind of middleware?
As well. Sure.
How to go about that?
Pay a programmer
| 1 | 1 | 0 |
I am currently planning to build a SaaS e-commerce application in python (django).
The application will create new e-commerce websites as requested. Each e-commerce needs its own templates/configuration but the core functions stay the same. Each owner decides what goes where and the page layout.
So, from what I understood, I would have to create apps separately from the projects so that they can be reused across all the sites, but each website has its own project with a different config, since each website has its own database.
My questions are the following:
Would it be a better idea to use the sites framework given by django in this case? I don't think so but I could be wrong?
How would django handle many websites? The web port can only be used once so spawning more than one django server is not a possibility. Could it be solved through some kind of middleware? And in which case, how would I go about that?
I am really interested to learn and would really appreciate all the help I can receive!
Thank you very much for your time. :-)
|
Django - multi site
| 0.53705 | 0 | 0 | 330 |
12,418,384 |
2012-09-14T04:59:00.000
| 0 | 0 | 1 | 0 |
python,celery,gevent
| 12,425,472 | 1 | false | 0 | 0 |
For one, you must make sure that you aren't making any blocking calls in your code, as that will also block everything else from running, slowing the entire system.
Reasons for blocking include tight loops or IO that has not been patched by eventlet's monkey patch (e.g. C extensions).
Celery supports using eventlet & gevent, and that is probably the recommended concurrency option for what you are doing (web request IO). Celery may not make your code run faster, but it enables you to easily distribute the work to many machines.
To optimize, you should always profile your code to find out what the bottleneck is. It could be many things, e.g. a slow network, slow host, slow DNS or something else entirely.
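A minimal gevent sketch of the fetch loop; the endpoint URL is a placeholder:

```python
# The monkey patch makes urllib2's sockets cooperative; without it
# each request would block the whole process.
from gevent import monkey
monkey.patch_all()

import gevent
import urllib2

def fetch(imdb_id):
    url = 'http://api.example.com/title/%s' % imdb_id  # hypothetical API
    return urllib2.urlopen(url, timeout=10).read()

ids = ['tt%07d' % n for n in range(1, 51)]
jobs = [gevent.spawn(fetch, i) for i in ids]
gevent.joinall(jobs, timeout=60)
results = [job.value for job in jobs if job.successful()]
```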
| 1 | 0 | 0 |
The scenario is to save the response of an API request using an IMDb id as a parameter.
I want to grab all the movie info from imdb-id tt0000001 to tt9999999.
Now I'm using gevent to run several threads (gevent.joinall(threads)), but it's not so fast.
Are there other solutions for this kind of problem, like using Celery+RabbitMQ?
|
How to grab contents asynchronously using Python(Gevent)?
| 0 | 0 | 0 | 321 |
12,418,822 |
2012-09-14T05:50:00.000
| 0 | 1 | 0 | 0 |
javascript,python,extjs,embedded
| 12,436,279 | 1 | true | 1 | 0 |
What you need to do, when the "all-classes.js" file is requested, is to return the content of "all-classes.js.gzip" with the additional "Content-Encoding: gzip" HTTP header.
But it's only possible if the request contained the "Accept-Encoding: gzip" HTTP header in the first place...
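A minimal Python 2 sketch of that logic with the stdlib BaseHTTPServer; file names and port are assumptions:

```python
import BaseHTTPServer

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/all-classes.js':
            if 'gzip' in self.headers.get('Accept-Encoding', ''):
                body = open('all-classes.js.gz', 'rb').read()
                self.send_response(200)
                self.send_header('Content-Type', 'application/javascript')
                self.send_header('Content-Encoding', 'gzip')
                self.send_header('Content-Length', str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
            # A real server would fall back to the uncompressed file here.
        self.send_error(404)

BaseHTTPServer.HTTPServer(('', 8080), Handler).serve_forever()
```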
| 1 | 3 | 0 |
I have to deploy a heavily JS-based project to an embedded device. Its disk size is no more than 16Mb. The problem is that the size of my minified js file all-classes.js is about 3Mb. If I compress it using gzip I get a 560k file, which saves about 2.4M. Now I want to store all-classes.js as all-classes.js.gz so I can save space and it can be decompressed by the browser. All I have to do is handle the headers.
Now the question is: how do I include the .gz file so the browser understands and decompresses it? Well, I am aware that a .gz file contains file structure information while the browser accepts only raw gzipped data. In that case I would like to store the raw gzipped data. It'd be some sort of caching!
|
Use compressed JavaScript file (not run time compression)
| 1.2 | 0 | 0 | 332 |
12,420,338 |
2012-09-14T07:59:00.000
| 0 | 0 | 1 | 0 |
python,sqlite
| 12,420,541 | 2 | false | 0 | 0 |
As I understand it, you would like to install python from sources. To make the sqlite module available you have to install the sqlite package and its dev files (for example sqlite-devel on CentOS). That's it. You have to re-configure your sources after installing the required packages.
Btw, you will face the same problem with some other modules.
| 1 | 0 | 0 |
Okay,
I kinda asked this question already, but noticed that I might not have been as clear as I could have been, and might have made some errors myself.
I have also noticed many people having the same or similar problems with sqlite3 in python. So I thought I would ask this as clearly as I could, so it could possibly help others with the same issues as well.
What does python need to find when compiling, so the module is enabled and working?
(In detail, I mean exact files, not just "sqlite dev-files".)
And if it needs a library, it probably needs to be compiled with the right architecture?
|
What does Python need to install sqlite3 module?
| 0 | 1 | 0 | 3,032 |
12,421,574 |
2012-09-14T09:20:00.000
| 2 | 0 | 0 | 1 |
python,macos,macports,osx-mountain-lion
| 12,421,681 | 2 | true | 0 | 0 |
The sudo port select command only switches what /usr/local/bin/python points to, and does not touch the /usr/bin/python path at all.
The /usr/bin/python executable is still the default Apple install. Your $PATH variable may still look in /usr/local/bin before /usr/bin though when you type in python at your Terminal prompt.
| 1 | 4 | 0 |
I installed Python through MacPorts, and then changed the path to that one.
/opt/local/bin/python
using this command
sudo port select python python27
But now i want to revert to the Mac one at this path
/usr/bin/python
How can I go about doing this?
EDIT:
I uninstalled the MacPorts Python, restarted the terminal and everything went back to normal. Strange. But I still don't know why/how.
|
Changing the default python in OSX Mountain Lion
| 1.2 | 0 | 0 | 4,810 |
12,425,407 |
2012-09-14T13:17:00.000
| 0 | 1 | 1 | 1 |
python,ide,text-editor
| 12,425,441 | 4 | false | 0 | 0 |
Komodo is a good commercial IDE. And Eric is a free python IDE which is itself written in python.
| 1 | 0 | 0 |
I am running python on linux and am currently using vim for my single-file programs, and gedit for multi-file programs. I have seen development environments like eclipse and was basically wondering if there's a similar thing on ubuntu designed for python.
|
development environment for python projects
| 0 | 0 | 0 | 210 |
12,425,433 |
2012-09-14T13:20:00.000
| 1 | 1 | 1 | 0 |
python,jsonpickle
| 12,426,079 | 1 | true | 0 | 0 |
OS and python version?
Please use pip. Always.
pydev seems to ignore your package. It should be in /usr/share/pythonX.Y/site-packages/jsonpickle, or, if on Windows, c:\pythonxx[...].
If using Linux, please try to find a distro package for jsonpickle.
| 1 | 0 | 0 |
I am using PyDev via eclipse and have used easy_install to get jsonpickle. No matter what I do I can't seem to get the import to work.
What I have tried thus far:
I have removed it from easy_install.pth and deleted the egg and installed again.
Add my python lib, dll, etc folders to a PYTHONPATH system variable
Restarted eclipse
Other imports are working fine. Not sure what I am doing wrong?
EDIT:
Sorry should have included OS / Python version.
OS: Windows 7
Python: 2.7
Any suggestions greatly appreciated
|
Can't get import to be recognized - jsonpickle
| 1.2 | 0 | 0 | 800 |
12,426,106 |
2012-09-14T14:00:00.000
| 1 | 0 | 1 | 0 |
python,debugging,pudb
| 22,172,912 | 4 | false | 0 | 0 |
You can just get to a python/ipython shell by pressing "!" . Then you can play around with your variables (view them, change them, etc.)
| 3 | 5 | 0 |
How can I inspect a complex variable's (list, dict, object) value with a python debugger? I am new to python. I tried pudb, and it looks like when the variable's type is complex, the debugger only shows the type of the variable, not the value.
Is it possible to inspect the value with pudb? Or is there any other python debugger that can do this?
|
inspect complex variable in python debugger, like pudb
| 0.049958 | 0 | 0 | 2,502 |
12,426,106 |
2012-09-14T14:00:00.000
| 1 | 0 | 1 | 0 |
python,debugging,pudb
| 50,848,102 | 4 | false | 0 | 0 |
To show the contents of all the variables in the variable list by default, you can go to Preferences by pressing Ctrl+P, and under Variable Stringifier select str() or repr() for a Python-interpreter-like display of variables.
Otherwise, you can toggle a selected variable in the variable list (which is accessible with the Right arrow key) by pressing s or r for str() and repr(), and t to get back to displaying its type. With a variable set to show its type, you can expand its contents in an orderly tree fashion by typing '\' (backslash).
If your variable is a global one, or you don't see it for some reason, you will have to explicitly state that you wish to watch it by hitting n and then typing its name.
| 3 | 5 | 0 |
How can I inspect a complex variable's (list, dict, object) value with a python debugger? I am new to python. I tried pudb, and it looks like when the variable's type is complex, the debugger only shows the type of the variable, not the value.
Is it possible to inspect the value with pudb? Or is there any other python debugger that can do this?
|
inspect complex variable in python debugger, like pudb
| 0.049958 | 0 | 0 | 2,502 |
12,426,106 |
2012-09-14T14:00:00.000
| 11 | 0 | 1 | 0 |
python,debugging,pudb
| 12,464,017 | 4 | true | 0 | 0 |
To see the contents of a complex data type in pudb:
Use the right arrow to move the cursor to the Variables box on the right.
Use the up and down arrows to move the cursor to the variable you're interested in.
Use the backslash '\' to show/hide the contents of the data structure.
| 3 | 5 | 0 |
How can I inspect a complex variable's (list, dict, object) value with a python debugger? I am new to python. I tried pudb, and it looks like when the variable's type is complex, the debugger only shows the type of the variable, not the value.
Is it possible to inspect the value with pudb? Or is there any other python debugger that can do this?
|
inspect complex variable in python debugger, like pudb
| 1.2 | 0 | 0 | 2,502 |
12,426,866 |
2012-09-14T14:47:00.000
| 3 | 0 | 1 | 0 |
python
| 12,426,891 | 1 | true | 0 | 0 |
When the Python interpreter exits, it cleans up after itself (and after your program) and will close open files.
| 1 | 0 | 0 |
What are the consequences of ending an interactive Python session without first closing an open file? I understand that closing files is necessary to free up memory and system resources, but would they continue to use such resources after the session has ended?
|
End a Python interactive session without closing an open file?
| 1.2 | 0 | 0 | 360 |
12,427,782 |
2012-09-14T15:40:00.000
| 0 | 0 | 0 | 0 |
python,onchange,openerp
| 12,600,879 | 3 | false | 1 | 0 |
Sometimes an error will be generated, so we have to write the code like this.
In the case of the above example:
lis.append((0, 0, res))
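A hedged sketch of an onchange method in the old OpenERP API; the field names (amount, line_id) and the line values are assumptions about the model:

```python
def onchange_amount(self, cr, uid, ids, amount, context=None):
    # Return one2many commands; (0, 0, vals) creates a new line.
    if not amount:
        return {}
    lines = [
        (0, 0, {'name': 'Auto debit', 'debit': amount, 'credit': 0.0}),
        (0, 0, {'name': 'Auto credit', 'debit': 0.0, 'credit': amount}),
    ]
    return {'value': {'line_id': lines}}
```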
| 1 | 2 | 0 |
I have a field amount on the Journal Entries (account move form) and I need to define an onChange event which automatically inserts the lines once I fill in the amount. But I am not sure how.
|
Openerp: onChange event to create lines on account move
| 0 | 0 | 0 | 2,338 |
12,429,556 |
2012-09-14T17:50:00.000
| 0 | 0 | 0 | 0 |
python,django,redis,multiplayer,gevent
| 12,430,890 | 1 | false | 1 | 0 |
You have described your options well enough. Probably you need to combine both approaches.
Ensure that you have as little shared state as possible.
Use queue for modifications to whatever shared state remains.
| 1 | 2 | 0 |
I'm building a multiplayer card game with Python, gevent and django-socketio and I'm wondering about the best way to maintain state on things, bearing in mind that there'll be multiple clients connecting at once and doing things.
I'm using Redis as a datastore for the in game bits, with light object models on top (Redisco at the mo).
I'm concerned about defending against race conditions and therefore keeping the game state safe and consistent with so many clients trying to do things at once. I'm thinking that my main options are:
(1) - Ensure that all operations are safe with more than one client doing things at once (e.g., a player can only interact with certain properties of their own player model, and there's some objective game state via another thread or something which does anything else.)
(2) - Use a queue with some global lock to ensure client operations all happen in a certain guaranteed order, and one finishes before the next one starts.
I'm using Python, Django, django-socketio, gevent, but think this applies more broadly.
Is this the "threadsafe" thing that people refer to?
I guess in theory I think I prefer the idea of (1), and I think that I can ensure safe operations by just modifying a single Redis key at a time, or safe sets of atomic operations, but I guess I'd either need to throw away the Redisco models or be very careful about understanding when things get saved and written. I think that's fine for just a couple of us working on things but might be dangerous longer term with more people in the codebase.
Thanks!
|
Safe objective multiplayer game state with multiple threads
| 0 | 0 | 0 | 414 |
12,431,755 |
2012-09-14T20:46:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 12,431,831 | 2 | false | 0 | 0 |
Yes, defining a function inside another function has a slight performance penalty over defining functions at module scope.
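A quick sketch for measuring it with timeit; the nested variant pays to recreate inner on every call:

```python
import timeit

nested_setup = (
    'def outer():\n'
    '    def inner():\n'
    '        return 1\n'
    '    return inner()\n')
flat_setup = (
    'def inner():\n'
    '    return 1\n'
    'def outer():\n'
    '    return inner()\n')

print timeit.timeit('outer()', setup=nested_setup, number=1000000)
print timeit.timeit('outer()', setup=flat_setup, number=1000000)
```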
| 1 | 3 | 0 |
I'm writing a function in Python that I'm planning to run 10,000 or more times for each script execution. The function currently contains 3 sub-functions but will probably contain 20 or more when the script is complete. I'm just wondering: will declaring those functions over and over (since the parent function will be run thousands of times) have a recurring performance cost, or is that optimised and not an issue?
Would separating all those sub-functions into a class help with performance?
(I intend to test this and post the results here if nobody knows the answer off the top of their heads.)
|
Does creating functions inside functions have a recurring cost?
| 0 | 0 | 0 | 118 |
12,431,847 |
2012-09-14T20:55:00.000
| 2 | 0 | 1 | 1 |
python,python-3.x,python-2.7,pypy
| 12,432,095 | 2 | false | 0 | 0 |
pypy is a compliant alternative implementation of the python language. This means that there are few (intentional) differences. One of the few differences is that pypy does not use reference counting. This means, for instance, that you have to manually close your files; they will not be automatically closed when your file variable goes out of scope, as in CPython.
| 1 | 8 | 0 |
Is there a difference in python programming between using plain python and using the pypy compiler? I wanted to try using pypy so that my program's execution time becomes faster. Does all the syntax that works in python work in pypy too? If there is no difference, can you tell me how I can install pypy on Debian Linux, and some usage examples of pypy? Google does not contain much info on pypy other than its description.
|
Usage of pypy compiler
| 0.197375 | 0 | 0 | 7,679 |
12,431,871 |
2012-09-14T20:58:00.000
| 3 | 0 | 1 | 1 |
python,twisted
| 12,433,363 | 2 | true | 0 | 0 |
In Python 2,
str → a sequence of bytes, which is sometimes used as ASCII text
bytes → an alias for str (available in python 2.6 and later)
unicode → a sequence of unicode code units (UCS-2 or UCS-4, depending on compile time options, UCS-2 by default)
In Python 3,
str → a sequence of unicode code units (UCS-4)
bytes → a sequence of bytes
unicode → no such thing any more, you mean str
Think of the type passed to dataReceived as bytes. It is bytes in Python 2.x, it will be bytes when Twisted has been ported to Python 3.x.
Therefore, the length in bytes of the received segment is simply len(data).
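A minimal sketch of counting received bytes (Python 2 / Twisted; class and port are illustrative):

```python
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory

class ByteCounter(Protocol):
    def dataReceived(self, data):
        # data is a byte string, so len() is the byte count.
        print 'received %d bytes' % len(data)

factory = Factory()
factory.protocol = ByteCounter
reactor.listenTCP(8000, factory)
reactor.run()
```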
| 2 | 2 | 0 |
I am using Twisted to receive data from a socket.
My protocol class inherits from Protocol.
As there is no bytes type in Python 2.*, the type of the received data is str.
Of course, len(data) gives me the length of the string, but how can I know
the number of bytes received? Is there no sizeof or something equivalent that allows
me to know the number of bytes?
Or should I consider that, whatever the platform, the number of bytes will be 2 * len(data)?
thanks in advance
|
How many bytes are received with dataReceived?
| 1.2 | 0 | 0 | 296 |
12,431,871 |
2012-09-14T20:58:00.000
| 4 | 0 | 1 | 1 |
python,twisted
| 12,431,966 | 2 | false | 0 | 0 |
The length of the string is the length in bytes.
| 2 | 2 | 0 |
I am using Twisted to receive data from a socket.
My protocol class inherits from Protocol.
As there is no bytes type in Python 2.*, the type of the received data is str.
Of course, len(data) gives me the length of the string, but how can I know
the number of bytes received? Is there no sizeof or something equivalent that allows
me to know the number of bytes?
Or should I consider that, whatever the platform, the number of bytes will be 2 * len(data)?
thanks in advance
|
How many bytes are received with dataReceived?
| 0.379949 | 0 | 0 | 296 |
12,432,130 |
2012-09-14T21:18:00.000
| 4 | 1 | 0 | 0 |
python,apache,caching,wsgi
| 12,432,255 | 2 | false | 1 | 0 |
It's a very bad setting from a performance point of view, but what I do in my httpd.conf is set MaxRequestsPerChild to 1. This has the effect that each Apache process handles a single request before dying. It kills throughput (so don't run benchmarks with that setting, or use it on a production site), but it gives Python a clean environment for every request.
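In httpd.conf that is a single directive (again, for development only):
MaxRequestsPerChild 1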
| 1 | 2 | 0 |
Developing in Python using mod_python / mod_wsgi on Apache 2.
All running fine, but if I make any change to my .py file, the changes are not propagated until I restart Apache with /etc/init.d/apache2 restart.
This is annoying since I can't SSH in and restart the Apache service every time during development.
Is there any way to disable Apache caching?
Thank you.
|
Disable caching in Apache 2 for Python Development
| 0.379949 | 0 | 0 | 1,461 |
12,434,403 |
2012-09-15T04:04:00.000
| 0 | 0 | 1 | 0 |
python,python-imaging-library
| 12,960,389 | 2 | false | 0 | 0 |
I hacked around it by creating a symlink named "xv" pointing to "display" somewhere on my $PATH.
| 1 | 0 | 0 |
How does PIL find the viewer to use for imshow() on Ubuntu?
I notice it's trying to use "xv", but I only have "display" available.
In my previous installation of Python it correctly found "display" with no hacking from me. Any idea what environment vars/settings I need to check?
I'm on Python 2.6.5, Ubuntu 10.04, PIL 1.1.6
|
Specifying the viewer used by imshow()
| 0 | 0 | 0 | 330 |
12,435,923 |
2012-09-15T08:46:00.000
| 1 | 1 | 0 | 0 |
python,serial-port,communication
| 12,436,940 | 1 | false | 1 | 0 |
Although part 1 is no direct answer to your question:
There are devices which have an autodetection method (called auto-bauding) built in. That means: send a character using your current settings (9k6, 115k2, ...) to the device, and chances are high that the device will answer with your (!) settings. I've seen this on HP switches.
Second approach: try to re-order the connection possibilities. E.g. chances are high that the other end uses 9k6 with no hardware handshake, but it is less likely that it uses 38k4 with software XON/XOFF.
If you break down your tries into just a few, the "brute force" method will be much more efficient.
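As a hedged sketch of that reordered brute force with pyserial (the port name, probe string, and baud list are assumptions to adapt):
import serial

# Most-likely settings first: plain 8N1 at common baud rates
BAUDS = [9600, 115200, 38400, 19200, 57600, 4800, 2400]

def probe(port='/dev/ttyS0'):
    for baud in BAUDS:
        try:
            s = serial.Serial(port, baudrate=baud, timeout=0.5)
            s.write(b'\r\n')             # nudge the device
            reply = s.read(64)           # whatever arrives within the timeout
            s.close()
            if reply:
                print(baud, repr(reply))  # a human judges whether it makes sense
        except serial.SerialException:
            pass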
| 1 | 4 | 0 |
From time to time I suddenly have a need to connect to a device's console via its serial port. The problem is, I never remember what port settings (baud rate, data bits, stop bits, etc...) to use with each particular device, and documentation never seems to be lying around when it's really needed.
I wrote a Python script, which uses a simple brute-force method (i.e. iterates over all possible settings, sends some test input and displays the response for a human to decide if it makes sense ), but:
it takes a long time to complete
does not always work (perhaps port reset/timeout issues)
just does not seem like a proper way to do this :)
So the question is: does anyone know of a procedure to auto-detect what port settings the remote device is using?
|
Detecting serial port settings
| 0.197375 | 0 | 0 | 2,135 |
12,436,551 |
2012-09-15T10:24:00.000
| 1 | 0 | 0 | 0 |
python,web-scraping
| 12,437,002 | 1 | true | 1 | 0 |
Download the complete webpage, extract the style elements and the stylesheet link elements, and download the files referenced by the latter. That should give you the CSS used on the page.
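A minimal sketch of that with urllib2 and BeautifulSoup (bs4), assuming the page's stylesheets are publicly reachable:
import urllib2
from urlparse import urljoin
from bs4 import BeautifulSoup

def collect_css(url):
    soup = BeautifulSoup(urllib2.urlopen(url).read())
    css = [style.get_text() for style in soup.find_all('style')]   # inline <style>
    for link in soup.find_all('link', rel='stylesheet'):           # linked sheets
        href = link.get('href')
        if href:
            sheet = urllib2.urlopen(urljoin(url, href)).read()
            css.append(sheet.decode('utf-8', 'ignore'))
    return '\n'.join(css)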
| 1 | 0 | 0 |
I am building a screen clipping app.
So far:
I can get the html mark up of the part of the web page the user has selected including images and videos.
I then send them to a server to process the html with BeautifulSoup to sanitize the html and convert all relative paths if any to absolute paths
Now I need to render the part of the page. But I have no way to render the styling. Is there any library to help me in this matter or any other way in python ?
One way would be to fetch the whole webpage with urllib2 and remove the parts of the body I don't need and then render it.
But there must be a more pythonic way :)
Note: I don't want a screenshot. I am trying to render proper html with styling.
Thanks :)
|
Python : Rendering part of webpage with proper styling from server
| 1.2 | 0 | 1 | 118 |
12,438,153 |
2012-09-15T14:20:00.000
| 7 | 0 | 1 | 0 |
c#,python,parameter-passing
| 12,438,226 | 5 | true | 0 | 0 |
C# passes parameters by value unless you specify that you want it differently. If the parameter type is a struct, its value is copied, otherwise the reference to the object is copied. The same goes for return values.
You can modify this behavior using the ref or out modifier, which must be specified both in the method declaration and in the method call. Both change the behavior for that parameter to pass-by-reference. That means you can no longer pass in more complex expressions. The difference between ref and out is that when passing a variable to a ref parameter, it must have been initialized already, while a variable passed to an out parameter doesn't have to be initialized. In the method, the out parameter is treated as uninitialized variable and must be assigned a value before returning.
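For comparison, a sketch of the Python side: Python always passes object references by value, so rebinding a parameter is invisible to the caller while mutating the object is not:
def rebind(x):
    x = [99]         # rebinds the local name only; the caller never sees this

def mutate(x):
    x.append(99)     # mutates the shared object; the caller sees this

a = [1]
rebind(a)
print(a)             # [1]
mutate(a)
print(a)             # [1, 99]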
| 1 | 4 | 0 |
What are the main differences, if any, of Python's argument passing rules vs C#'s argument passing rules?
I'm very familiar with Python and only starting to learn C#. I was wondering if I could think of the rule set as to when an object is passed by reference or by value the same for C# as it is in Python, or if there are some key differences I need to keep in mind.
|
C# vs Python argument passing
| 1.2 | 0 | 0 | 4,158 |
12,440,044 |
2012-09-15T18:39:00.000
| -3 | 0 | 0 | 0 |
python,unit-testing,sqlalchemy,rollback
| 12,443,800 | 1 | true | 0 | 0 |
Postgres does not roll back advances in a sequence, even if the sequence is used in a transaction which is rolled back. (To see why, consider what should happen if, before one transaction is rolled back, another using the same sequence is committed.)
But in any case, an in-memory database (SQLite makes this easy) is the best choice for unit tests.
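A minimal sketch of that setup (Base stands in for your declarative base; everything else is standard SQLAlchemy):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///:memory:')   # fresh, throwaway database
Base.metadata.create_all(engine)               # Base = your declarative base
session = sessionmaker(bind=engine)()
# add / find / update / delete against `session`; nothing persists between runs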
| 1 | 4 | 0 |
For my database project, I am using SQL Alchemy. I have a unit test that adds the object to the table, finds it, updates it, and deletes it. After it goes through that, I assumed I would call the session.rollback method in order to revert the database changes. It does not work because my sequences are not reverted. My plan for the project is to have one database, I do not want to create a test database.
I could not find in the SQLAlchemy documentation how to properly roll back the database changes. Does anyone know how to roll back the database transaction?
|
How to rollback the database in SQL Alchemy?
| 1.2 | 1 | 0 | 3,017 |
12,443,510 |
2012-09-16T00:25:00.000
| -1 | 0 | 1 | 0 |
python,python-2.7,python-idle,python-2.5,coexistence
| 12,443,582 | 2 | false | 0 | 0 |
Unless you need Python 2.7 for some reason, the simplest way to achieve this on Windows is to uninstall Python 2.7 and then reinstall Python 2.5.
| 1 | 2 | 0 |
Ok, so I just installed Python 2.7, but I already had Python 2.5. I realized that because I installed Python 2.7 last, IDLE automatically opens the Python 2.7 IDLE, which I don't want. Is there any way to set the Python 2.5 IDLE to open automatically when I use the right-click option on a Python source file? Thanks.
|
Can I set IDLE to start Python 2.5 by default?
| -0.099668 | 0 | 0 | 2,051 |
12,444,496 |
2012-09-16T04:55:00.000
| 0 | 0 | 0 | 0 |
python,django,nlp,summarization
| 55,256,456 | 2 | false | 1 | 0 |
Regarding papers, I would like to add the following ones to those in the previous answer:
"Text Data Management and Analysis" by ChengXiang Zhai and Sean Massung, chapter 16.
"Texts in Computer Science: Fundamentals of Predictive Text Mining" by Sholom M. Weiss,
Nitin Indurkhya and Tong Zhang (second edition), chapter 9.
| 2 | 2 | 0 |
I have decided to develop an Auto Text Summarization Tool using Python/Django.
Can someone please recommend books or articles on how to get started?
Is there any open-source algorithm or existing project in auto text summarization from which I can get ideas?
Also, would you suggest a new, challenging FYP in Django/Python for me?
|
Auto Text Summarization
| 0 | 0 | 0 | 1,797 |
12,444,496 |
2012-09-16T04:55:00.000
| 2 | 0 | 0 | 0 |
python,django,nlp,summarization
| 43,989,780 | 2 | false | 1 | 0 |
First off, for papers, I recommend:
1- Recent automatic text summarization techniques: a survey by M.Gambhir and V.Gupta
2- A Survey of Text Summarization Techniques, A.Nenkova
As for tools for Python, I suggest taking a look at these tools:
The Conqueror: NLTK
The Prince: TextBlob
The Mercenary: Stanford CoreNLP
The Usurper: spaCy
The Admiral: gensim
First, learn about the different kinds of summarization and which suits you best. Also, remember to make sure you have a proper preprocessing tool for the language you are targeting, as this is very important for the quality of your summarizer.
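To get a feel for it, here is a naive frequency-based extractive summarizer sketch with NLTK (it assumes the punkt and stopwords data have been downloaded; the scoring is deliberately crude):
from collections import defaultdict
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords

def summarize(text, n=3):
    stop = set(stopwords.words('english'))
    freq = defaultdict(int)
    for word in word_tokenize(text.lower()):
        if word.isalpha() and word not in stop:
            freq[word] += 1                      # crude term weights
    scored = []
    for i, sent in enumerate(sent_tokenize(text)):
        score = sum(freq[w] for w in word_tokenize(sent.lower()))
        scored.append((score, i, sent))
    top = sorted(sorted(scored, reverse=True)[:n], key=lambda t: t[1])
    return ' '.join(sent for _, _, sent in top)  # keep original sentence order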
| 2 | 2 | 0 |
I have decided to develop an Auto Text Summarization Tool using Python/Django.
Can someone please recommend books or articles on how to get started?
Is there any open-source algorithm or existing project in auto text summarization from which I can get ideas?
Also, would you suggest a new, challenging FYP in Django/Python for me?
|
Auto Text Summarization
| 0.197375 | 0 | 0 | 1,797 |
12,446,340 |
2012-09-16T11:02:00.000
| 2 | 0 | 1 | 0 |
python,opencv,compilation,cuda,tbb
| 16,136,096 | 1 | true | 0 | 0 |
OpenCV-Python is just a wrapper around the underlying C++ code. So if you compile with IPP and TBB, your Python code should also make use of them.
But regarding CUDA, OpenCV has separate functions for GPU operations, and those functions don't have Python bindings yet. So you won't be able to access them from Python. (They are planning to create wrappers for the GPU functions as well, so in the future you will be able to use them, but not now.)
Now, if you have made all possible optimizations and still find the code slow, you will have to use other methods, like Cython, or write the hot parts in C and call them from Python.
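Recent builds let you confirm what your cv2 was compiled with:
import cv2
info = cv2.getBuildInformation()
print('TBB' in info, 'IPP' in info, 'CUDA' in info)   # crude flags check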
| 1 | 2 | 0 |
I'm using the Python bindings for OpenCV, which is basically done just by compiling the OpenCV package and placing a .pyd file in my Python distribution.
My question is: If I compile the OpenCV package with Intel IPP, TBB and CUDA
, will it affect the Python bindings? And if yes, could I just get the .pyd file from someone who did the compilation (since I'm having some troubles doing this)
|
OpenCV - IPP, TBB and CUDA in Python bindings
| 1.2 | 0 | 0 | 1,658 |
12,447,933 |
2012-09-16T14:57:00.000
| 2 | 0 | 1 | 0 |
python
| 12,447,960 | 3 | false | 0 | 0 |
Use zip to combine the two lists into a list of tuples [(4, 'A'), (3, 'B'), ..., (1, 'E')], sort that list (tuples compare by their first element, the number), and then pull the letters back out.
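A minimal version (in current Python, plain tuple sorting or a key function is preferred over a custom cmp):
letters = ['A', 'B', 'C', 'D', 'E']
numbers = [4, 3, 5, 2, 1]
# tuples sort by their first element (the number), so no cmp is needed
result = [letter for _, letter in sorted(zip(numbers, letters))]
print(result)   # ['E', 'D', 'B', 'A', 'C']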
| 1 | 2 | 0 |
I feel really dumb not being able to solve something like this, but I'm drawing a blank.
The concept is very simple
I have a list with [4,3,5,2,1] and I have five individuals A, B, C, D, E
A=4 B=3 C=5 D=2 E=1
Now, I need to arrange them in ascending order based on their numbers so they become
['E', 'D', 'B', 'A', 'C']
I seriously don't get why I can't figure this one out D:
|
Really easy python standoff
| 0.132549 | 0 | 0 | 118 |
12,450,704 |
2012-09-16T21:02:00.000
| 0 | 0 | 1 | 0 |
python
| 42,498,269 | 15 | false | 0 | 0 |
Sometimes pprint() from the pprint module works wonders, especially for dict variables.
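For example:
from pprint import pprint
pprint({'name': 'Ada', 'langs': ['python', 'c'], 'id': 7}, width=40)
# keys come out sorted, one entry per line once the dict exceeds the width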
| 4 | 10 | 0 |
In C++, \n is used, but what do I use in Python?
I don't want to have to use:
print (" ").
This doesn't seem very elegant.
Any help will be greatly appreciated!
|
How to print spaces in Python?
| 0 | 0 | 0 | 266,436 |
12,450,704 |
2012-09-16T21:02:00.000
| 4 | 0 | 1 | 0 |
python
| 48,596,577 | 15 | false | 0 | 0 |
print("hello" + ' '*50 + "world")
| 4 | 10 | 0 |
In C++, \n is used, but what do I use in Python?
I don't want to have to use:
print (" ").
This doesn't seem very elegant.
Any help will be greatly appreciated!
|
How to print spaces in Python?
| 0.053283 | 0 | 0 | 266,436 |
12,450,704 |
2012-09-16T21:02:00.000
| 0 | 0 | 1 | 0 |
python
| 55,833,510 | 15 | false | 0 | 0 |
To print blank lines between printed text, use:
print("Hello" + '\n' * n + "World!")
where n is the number of newlines to insert; n newlines leave n - 1 blank lines between the two pieces of text.
| 4 | 10 | 0 |
In C++, \n is used, but what do I use in Python?
I don't want to have to use:
print (" ").
This doesn't seem very elegant.
Any help will be greatly appreciated!
|
How to print spaces in Python?
| 0 | 0 | 0 | 266,436 |
12,450,704 |
2012-09-16T21:02:00.000
| 0 | 0 | 1 | 0 |
python
| 71,345,576 | 15 | false | 0 | 0 |
A lot of users gave you answers, but you haven't marked any as an answer.
You add an empty line with print().
You can force a new line inside your string with '\n' like in print('This is one line\nAnd this is another'), therefore you can print 10 empty lines with print('\n'*10)
You can add 50 spaces inside a string by replicating a one-space string 50 times; you can do that with multiplication: 'Before' + ' '*50 + 'after 50 spaces!'
You can pad strings to the left or right, with spaces or a specific character, for that you can use .ljust() or .rjust() for example, you can have 'Hi' and 'Carmen' on new lines, padded with spaces to the left and justified to the right with 'Hi'.rjust(10) + '\n' + 'Carmen'.rjust(10)
I believe these should answer your question.
| 4 | 10 | 0 |
In C++, \n is used, but what do I use in Python?
I don't want to have to use:
print (" ").
This doesn't seem very elegant.
Any help will be greatly appreciated!
|
How to print spaces in Python?
| 0 | 0 | 0 | 266,436 |
12,453,675 |
2012-09-17T05:42:00.000
| 1 | 0 | 0 | 0 |
python,numpy,tkinter,py2app
| 17,708,863 | 2 | false | 0 | 1 |
To pull this back from the void.
I was having a similar issue. The Mac I was developing on was running 10.8.something. The target machine was running 10.6+ and I was getting the classic "classic environment is no longer supported" errors. I looked into architecture flags to no avail. I did find my issue: when emailing the .app (drag and drop into Gmail in Chrome) to the client, the file size was only 1 KB. On the development machine the file size showed 25 MB+. Pulling this 1 KB file from the emails and launching it on the development machine, I got the same error. It turns out drag and drop is not sufficient; I zipped the .app instead and that eliminated the error.
| 2 | 1 | 0 |
So, I created a simple GUI app using Tkinter, py2app, and numpy. When I run it on my computer it works fine. However, I tested it on a few other computers and kept getting the error:
"You can't open the application because the classic environment is no longer supported."
I'm not sure I understand the error. The other computers had the same python versions and OS versions as I do? Is there something additional I need to do to make my app work on other machines?
Thanks!
|
py2app/Tkinter application error: "classic environment is no longer supported"
| 0.099668 | 0 | 0 | 643 |
12,453,675 |
2012-09-17T05:42:00.000
| 1 | 0 | 0 | 0 |
python,numpy,tkinter,py2app
| 12,457,682 | 2 | false | 0 | 1 |
In the Mac world, the Classic environment is a software abstraction layer that allowed old, pre-OS X Mac apps to be executed on newer Mac OS X systems. The Classic environment was supported on pre-10.5 versions of Mac OS X and then dropped in newer versions.
py2app supports command-line arguments for building executables to support different architectures, look closer at --arch parameter of py2app.
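For example, to target 32-bit Intel (the flag value is an assumption; check py2app's help for the exact choices available with your Python build):
python setup.py py2app --arch=i386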
| 2 | 1 | 0 |
So, I created a simple GUI app using Tkinter, py2app, and numpy. When I run it on my computer it works fine. However, I tested it on a few other computers and kept getting the error:
"You can't open the application because the classic environment is no longer supported."
I'm not sure I understand the error. The other computers had the same python versions and OS versions as I do? Is there something additional I need to do to make my app work on other machines?
Thanks!
|
py2app/Tkinter application error: "classic environment is no longer supported"
| 0.099668 | 0 | 0 | 643 |
12,454,913 |
2012-09-17T07:36:00.000
| 2 | 0 | 1 | 0 |
python-3.x,debian
| 12,457,310 | 3 | true | 0 | 0 |
You need to install the readline library before compiling Python.
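On Debian that is something along these lines (the exact package name varies by release):
sudo apt-get install libreadline-dev
# then re-run ./configure && make in the Python 3 source tree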
| 1 | 0 | 0 |
Hello, I am using Python 3 on my Debian Squeeze 6 system. However, I have not been able to access history or use the left and right arrow keys. I see escape sequences like ^[[A^[[B^[[C^[[D when I press left, right, up and down. I don't have this problem in the default Python 2.6 interpreter. How do I fix this?
P.S. I open the interpreter as python3.
|
No history, left and right in Python3 Interactive Shell(Debian)?
| 1.2 | 0 | 0 | 674 |
12,455,069 |
2012-09-17T07:49:00.000
| 0 | 0 | 1 | 0 |
python,speech-recognition
| 18,925,412 | 2 | false | 0 | 0 |
Both PySpeech and Dragonfly are relatively thin wrappers over SAPI. Unfortunately, both of them use the shared recognizer, which doesn't support input selection. While I'm familiar with SAPI, I'm not that familiar with Python, so I haven't been able to assist anyone with moving PySpeech/Dragonfly over to an in-process recognizer.
| 1 | 3 | 0 |
I have seen the documentation of pyspeech and dragonfly, but don't know how to input an audio file to be converted into text. I have tried it with microphone via speaking to it and the speech is converted into text, but If I want to input a previously recorded audio file. Can anyone help with an example?
|
How to input and process audio files to convert to text via pyspeech or dragonfly
| 0 | 0 | 0 | 1,965 |
12,458,228 |
2012-09-17T11:27:00.000
| 1 | 0 | 1 | 0 |
python,ubuntu,emacs,virtualbox,flymake
| 12,458,671 | 1 | true | 0 | 0 |
Install pyflakes. The error says that it is missing.
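On Ubuntu, either of these should do it:
sudo apt-get install pyflakes
# or, per-user:
pip install pyflakes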
| 1 | 0 | 0 |
I'm using emacs 23 on Ubuntu 12.04 (VirtualBox) on Windows 7, and I am getting an error like this:
Flymake:Failed to launch syntax check process 'pyflakes' with args (views_flymake.py): Searching for program: no such file or directory,pyflakes. Flymake will be switched OFF
I couldn't find the solution. Any help will be appreciated.
|
emacs flymake error on ubuntu (virtualbox)
| 1.2 | 0 | 0 | 192 |
12,461,724 |
2012-09-17T14:44:00.000
| 0 | 0 | 0 | 1 |
java,python,web-applications,timer,clock
| 12,462,388 | 2 | false | 1 | 0 |
Single master clock that is always running (no users need be logged in for clock to continue running)
That's not hard.
The variance between what any given viewer sees and the actual time on the master clock can not be greater than 1 second.
That's pretty much impossible. You can take into account network delays and such, but you can't guarantee this.
Any changes made to the master clock/countdown timer/countup timer need to be seen by all viewers near instantly.
You could do that with sockets, or you could just keep polling the server...
Do a web search for "javascript ntp". There are a handful of libraries that will do most of what you want (and, I'd argue, enough of what you want).
Most work like this:
try to calculate the offset of the local clock to the master clock
continually poll master clock for time, trying to figure out the average delay
show the time based on fancy math of local vs master clock.
Years ago I worked on some Flash-based chat rooms; a SWF established a socket connection to a Twisted Python server. That worked well enough for our needs, but we didn't care about latency.
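The "fancy math" is essentially the NTP offset estimate; a sketch in Python (get_server_time is a placeholder for however you fetch the master time):
import time

def estimate_offset(get_server_time, samples=5):
    best = None
    for _ in range(samples):
        t0 = time.time()
        server = get_server_time()          # e.g. an HTTP call returning epoch seconds
        t1 = time.time()
        delay = t1 - t0
        offset = server - (t0 + t1) / 2.0   # assume the reading was taken mid-flight
        if best is None or delay < best[1]:
            best = (offset, delay)          # trust the lowest-latency sample most
    return best                             # display master time as time.time() + offset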
| 1 | 1 | 0 |
I help with streaming video production on a weekly basis. We stream live video to a number of satellite locations in the Dallas area. In order to ensure that all of the receiving locations are on the same schedule as the broadcasting location we use a desktop clock/timer application and the remote locations VNC into that desktop to see the clock.
I would like to replace the current timer application with a web based one so that we can get rid of the inherently fragile VNC solution.
Here are my requirements:
Single master clock that is always running (no users need be logged in for clock to continue running)
The variance between what any given viewer sees and the actual time on the master clock can not be greater than 1 second.
Any changes made to the master clock/countdown timer/countup timer need to be seen by all viewers near instantly.
Here is my question:
I know enough java and python to be dangerous. But I've never written a web app that requires real time syncing between the server and the client like this. I'm looking for some recommendations on how to architect a web application that meets the above requirements. Any suggestions on languages, libraries, articles, or blogs that can point me in the right direction would be appreciated. One caveat though: I would prefer to avoid using Java EE or .Net if possible.
|
Design recommendations for a multi-user web based count down timer
| 0 | 0 | 0 | 414 |
12,462,227 |
2012-09-17T15:13:00.000
| 1 | 1 | 0 | 0 |
php,python,cython
| 12,462,519 | 1 | true | 0 | 0 |
The short answer is no. Cython extensions use the Python C API, so they can't be loaded and called directly from PHP. They will typically take and return PyObject structs as arguments (Python objects). You'll need a Python <-> PHP binding to load the .so and do object conversion.
| 1 | 0 | 0 |
Can this be done?
I have no idea whether the Cython .so extension can be dynamically loaded from a PHP script, or whether it needs any extra handling.
|
Use a Cython extension from a compiled python in php?
| 1.2 | 0 | 0 | 320 |
12,462,796 |
2012-09-17T15:48:00.000
| 0 | 0 | 0 | 0 |
python,qt4,pyqt4
| 12,462,897 | 2 | false | 0 | 1 |
Run a VNC server on the machine; it will start an in-memory X server (in the same vein as Xvfb, the X virtual framebuffer).
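On Debian/Ubuntu, the xvfb package also ships a convenience wrapper, so (assuming your script is called my_pyqt_script.py) you can often just run:
xvfb-run python my_pyqt_script.py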
| 1 | 0 | 0 |
I would like to run a Python script that normally opens a Qt window remotely over a connection with no X11 forwarding. Is there any way to create some kind of virtual display that the window drawing can be sent to? (some x11-equivalent of /dev/null). The purpose of this is to check that a script works with the API of PyQt4 for different versions of PyQt4, but I want to be able to run this remotely on a server with no display. Any ideas?
|
Running a PyQt4 script without a display
| 0 | 0 | 0 | 846 |
12,463,556 |
2012-09-17T16:35:00.000
| 0 | 0 | 1 | 0 |
python,django,virtualenv
| 12,463,788 | 3 | false | 0 | 0 |
Add a directory to $HOME named lib, and inside it a directory named python.
Add this new directory to your PYTHONPATH in both ~/.bashrc and ~/.bash_profile:
export PYTHONPATH=$PYTHONPATH:~/lib/python
(.bashrc sets up your environment when you are not coming in through a login shell, e.g. for a website process; .bash_profile controls your environment when you log in through the shell.)
Log out of the terminal and log back in.
echo $PYTHONPATH to make sure it contains your ~/lib/python folder.
Download the desired Python package using wget <url> or git clone <repo>.
If you downloaded an archive, unpack it, e.g. tar -xvf somefile.tar.bz2.
Change to the directory where you unpacked it:
cd some_package
Run setup.py with the --home flag (note: distutils expects it lowercase):
python setup.py install --home=~
Test it:
python -c "import <package>; print <package>.VERSION"
Congratulations, you just installed custom packages :)
On a side note, I find virtualenv to be a much more robust solution, though occasionally it can be hard to get properly set up; the above is just for when you only need a few custom packages...
| 1 | 2 | 0 |
If I am working on a shared web server with Python and some other packages like virtualenv already installed. Can I use virtualenv to install some additional packages I need in a specific directory while still using the system wide python and packages or better still can I just install the additional python packages in my own directory and use them for my website without requiring sudo permissions?
|
Can I use VirtualEnv to install just some additional packages?
| 0 | 0 | 0 | 473 |
12,466,420 |
2012-09-17T20:11:00.000
| 2 | 0 | 0 | 0 |
python,chess
| 12,471,976 | 2 | false | 0 | 0 |
Basically the idea is that alpha and beta are an upper and lower bound on the optimal result, from what you've already explored of the game tree, so that anything outside those bounds isn't worth exploring.
It's been a while since I understood minimax and alpha-beta pruning in detail, but here's the gist as I remember.
As you said, if we already know that white's move1 has score 10, and while examining move2 we find that black can respond in such a way that white is forced into a best score of 8, then it's not worth examining move2 any further; we already know that the best we can possibly do is worse than another option we know about.
But that's only one half of the minimax algorithm. Say we're now examining white's move3, and looking at all of black's responses. We explore black's moveX, and find that one of white's responses to that can force a score of at least 15. If we then start exploring black's moveY (still a response to white's original move3) and find a response by white to moveY that would force a score of at least 18, then we immediately know that the whole game-tree stemming from black's moveY is pointless; black would never make moveY, since moveX only forces black to allow white to score 15, while moveY forces black to allow white to score 18.
Alpha represents a minimum score we already know white can force by making different choices leading up to the point we're exploring. So it's not worth continuing to explore any path once we know there's no possibility of getting more than alpha, since white wouldn't allow us to reach that path.
Beta represents a maximum score we already know that black can force by making different choices leading up to the point we're exploring. So it's not worth continuing to explore any path once we know there's no possibility of getting less than beta, since black wouldn't allow us to reach that path.
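Putting alpha and beta together, here is a sketch of the whole search (the state object with children(), is_terminal() and evaluate() is hypothetical scaffolding):
def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:                    # white: raises alpha
        value = float('-inf')
        for child in state.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:         # black already has a better option elsewhere
                break
        return value
    else:                             # black: lowers beta
        value = float('inf')
        for child in state.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:         # white already has a better option elsewhere
                break
        return value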
| 1 | 2 | 0 |
Hi! I'm trying to implement an alpha-beta search, but I first want to understand all the logic behind it, not just implement it from some kind of pseudo-code.
What I understand is this: the white player makes a move (let's call it move1). The first move's value is saved as alpha (the minimum value the player is assured of). Now, if we move on to the next possible move by white (move2), and see that black's first response results in a valuation that is worse than alpha, we can skip all of black's remaining counter-moves, as we already know that when white makes move2, the worst possible result is worse than move1's worst possible result.
But what I don't understand is the beta variable. From the chess programming wiki I read: 'the maximum score the minimizing player is assured of'. But I can't really get the idea behind it.
Can somebody please explain it in very simple terms? Thank you very much.
|
beta in alpha beta search
| 0.197375 | 0 | 0 | 502 |
12,467,204 |
2012-09-17T21:15:00.000
| 0 | 0 | 0 | 1 |
python,connection,chat,tornado
| 12,549,017 | 2 | true | 0 | 0 |
I've been having this issue for 5-6 days and finally found out what the problem is. Well, not exactly, but it's solved! I searched the internet but found nothing. I said above that I remember it working when I tried the same script a couple of months ago, but I never mentioned using nginx back then. I've been struggling with Apache + mod_proxy; I don't know what the issue is with Apache, but when I tried nginx this time it just worked!
If you have the same issue (on_connection_close not getting fired), "TRY" nginx. Thanks for your help too, @Nikolay.
| 1 | 4 | 0 |
I've been searching for quite a while for a solution about this but no dice.
Edit: I didn't point out that I'm trying to make a chat server. So people log in, their id gets appended to a users and a listeners list. And they start chatting. But when one of them tries to close the tab or browser the user will never be deleted out of both lists, so he/she stays logged in.
Edit2: I thought that the numbering above was a little confusing so I posted the part in the script as well at the bottom.
So far I've tried the on_connection_close() function (which never gets fired, I don't know why) and the on_finish() function (which gets fired every time finish() is called), so that doesn't fit the bill either.
Now I've come up with a little bit of a solution which involves the on_finish() function:
Whenever the UpdateHandler class' post() function gets called then self.done = 0 is set.
Just before the finish() function gets fired I set self.done = 1.
Now the on_finish() function gets called and I print self.done on the console and it's 1.
In the same on_finish() function I do an IF self.done = 1 statement, as expected it returns TRUE and Tornado's io_loop.add_timeout with the parameters time.time()+3 (so that it sleeps for 3 seconds to make sure if the user navigated to another page within the website or completely went away from the website) and the callback that eventually is going to be called.
After the 3 seconds I want to check whether self.done still equals 1 or if the user is still on the website then sure enough it will be 0.
By the way, every 30 seconds the server finishes the connection and then sends the user a notification to initiate a new connection, so that the connection never times out on its own.
When the client closes the browser and the 30 second long timeout expires then the server tries to send a notification, if the client was still on my website then it would initiate a new connection thus calling the post() function in the UpdateHandler class I mentioned above thus setting the variable self.done back to 0. (That's why I gave the io_loop.add_timeout a margin of 3 seconds.)
Now that that's taken care of I wanted to go ahead and try and see how it works.
I started the server and opened up a browser navigated to the right url and watched how the server responded (by placing a few print statements in the script). When the user stays connected I can see that after the post() call (which shows at that time self.done = 0) it sleeps for 3 seconds, and then the callback function gets called but this one prints self.done = 1 which is strange.
I know this is not the most efficient way but it's the only solution I could come up with, which didn't even work as expected.
Conclusion:
I hope someone has a good alternative or maybe even a point in my theory that I missed which breaks the whole thing.
I really would like to know how to let Tornado know that the client closed the browser without waiting for the 30 second timeout to finish.
Maybe with pinging the open connection or something. I looked into TORNADIO for a little bit but didn't like it that much. I want to do this in pure Tornado if it's possible of course.
I'll submit the code ASAP, I've been trying for like half an hour looking at 'How to Format' etc. but when I try to submit my edit it gives an error.
Your post appears to contain code that is not properly formatted as
code. Please indent all code by 4 spaces using the code toolbar
button or the CTRL+K keyboard shortcut. For more editing help, click
the [?] toolbar icon.
|
How do I notify Python/Tornado that the client has closed the tab/browser?
| 1.2 | 0 | 0 | 1,509 |
12,467,607 |
2012-09-17T21:50:00.000
| 0 | 0 | 0 | 0 |
python,django,django-1.1
| 12,468,227 | 1 | true | 1 | 0 |
I'm still unsure as to where the reference to cmldb.static.views originated from, but I discovered that there was a missing folder in my svn database that solved the problem. The cmldb.static.views module is now in place, and the site is up and running.
| 1 | 0 | 0 |
I'm attempting to import a site from an old server that's using Django 1.1 onto a new server. For compatability reasons, I haven't been able to upgrade to the new version of Django.
When I attempted to view localhost:8080/admin/, I was able to access the login screen, but after that point I ran into a TemplateSyntaxError. The specific error that it is giving me is:
TemplateSyntaxError at /admin/
Caught ViewDoesNotExist while rendering: Could not import cmldb.static.views. Error was: No module named static.views
The error is completely correct - there is no module cmldb.static. There is one reference to cmldb.static.views in the urls.py file, though when I change this value I run across the same error. Furthermore, the site that I am importing from has the same urls.py file, yet there is no cmldb.static module in that project either, though that site runs fine.
The traceback shows all files that are located within Django package, rather than any files located within my cmldb package, so I am not sure what code, if any, to post. My main confusion is over which file is actually causing this error.
Error is:
In template /usr/local/lib/python2.7/dist-packages/django/contrib/admin/templates/admin/base.html, error at line 30
Which reads:
30 {% url django-admindocs-docroot as docsroot %}
|
Django 1.1 TemplateSyntaxError - could not import *.static.views
| 1.2 | 0 | 0 | 118 |
12,468,294 |
2012-09-17T23:07:00.000
| 2 | 0 | 1 | 0 |
python,class,dictionary
| 12,468,309 | 2 | false | 0 | 0 |
Unless you also want to encapsulate behavior with the data, you should use a dictionary. A class is used not only to store data, but also to specify operations performed upon that data.
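A sketch of both approaches side by side (names and fields are illustrative):
# Dictionary version: pure data
people = {'ada': {'age': 36, 'email': '[email protected]'}}

# Class version: the same data plus behavior
class Person(object):
    def __init__(self, name, age, email):
        self.name, self.age, self.email = name, age, email
    def greeting(self):
        return 'Hello, %s' % self.name

people = {'ada': Person('ada', 36, '[email protected]')}
print(people['ada'].greeting())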
| 2 | 5 | 0 |
It seems to me that dictionaries are encouraged over defining and using classes. When should I use a dictionary over a class, and the other way around?
For example if I want to have a dictionary of people and each person has a name and other attributes, two simple ways would be:
Create a dictionary of the people. Each key will be the persons name and the value will be a dictionary of all the attributes that person has.
Create a class Person that contains those attributes and then have a dictionary of Persons, have the name be the key and the Person object the value.
Both solutions seem valid and accomplish the goal, but it still seems as though dictionaries in python would be the way to go. The implementations are different enough that if I wanted to switch back and forth I could run into many changes to go from a class based implementation to a dictionary based implementation and vice versa.
So what am I trading off?
|
Data containers: class vs dictionary
| 0.197375 | 0 | 0 | 480 |
12,468,294 |
2012-09-17T23:07:00.000
| 4 | 0 | 1 | 0 |
python,class,dictionary
| 12,468,481 | 2 | true | 0 | 0 |
A dictionary is a great way to get started or to experiment with approaches to solving a problem. They are not a substitute for well-designed classes. Sometimes it's the right 'final solution' and the most efficient way to handle the 'I only need to carry around some data' need. I find it useful to start with a dictionary sometimes, and I usually wind up writing a few functions to provide additional behaviors that I need for the specific case I'm working on. At some point I usually find that it would be neater and cleaner to switch to using a class instead of a dictionary. This is usually determined by the quantity and complexity of behaviors that are required to meet the needs of the situation. Because defining and using a class is so easily done in Python, I find that I switch from "dictionary plus functions" to a class fairly early on. (One thing I have discovered is that the 'quick throw a solution together' program takes on a life of its own in very many cases, and it's more productive to have real classes that can be expanded and refactored rather than a large amount of code that (ab)uses dictionaries.)
| 2 | 5 | 0 |
It seems to me that dictionaries are encouraged over defining and using classes. When should I use a dictionary over a class, and the other way around?
For example if I want to have a dictionary of people and each person has a name and other attributes, two simple ways would be:
Create a dictionary of the people. Each key will be the persons name and the value will be a dictionary of all the attributes that person has.
Create a class Person that contains those attributes and then have a dictionary of Persons, have the name be the key and the Person object the value.
Both solutions seem valid and accomplish the goal, but it still seems as though dictionaries in python would be the way to go. The implementations are different enough that if I wanted to switch back and forth I could run into many changes to go from a class based implementation to a dictionary based implementation and vice versa.
So what am I trading off?
|
Data containers: class vs dictionary
| 1.2 | 0 | 0 | 480 |
12,468,335 |
2012-09-17T23:12:00.000
| 2 | 0 | 0 | 0 |
javascript,python,html,file,server-side
| 12,468,383 | 5 | false | 1 | 0 |
Browsers have lots of security that prevent this level of control over your computer. This is a good thing. You dont want random websites to be able to do this stuff on anyone's computer that visits them.
They way to do this would be to write a web application that your browser could access. The browser can submit data to this application running on your own computer and your application could manipulate the file system or do lots of other things.
So no, a browser can't do these things. And yes, you would have to use "another language" to create something which runs outside the browser itself. You can use javascript (see node.js) or python to do this, as well nearly any other programming language that exists to create such a thing. Which to choose is up to you.
| 2 | 4 | 0 |
I have an html file on my desktop that takes some input. How would I go about writing that input into a file onto my computer? Would I have to use another language to do it (i.e python or javascript?) and how would I go about doing this? On a related note, is there any way I can have javascript start an application from within an html file (the goal is to write to a file on my computer?
|
How to create a file and append data from html page ?
| 0.07983 | 0 | 0 | 6,771 |
12,468,335 |
2012-09-17T23:12:00.000
| 0 | 0 | 0 | 0 |
javascript,python,html,file,server-side
| 15,305,888 | 5 | false | 1 | 0 |
Short answer:
you can't. Web browser security and sandboxing prevent this.
Longer answer:
Set up a small LAMP stack, Ruby on Rails (localhost server), or Python/Django (localhost server) to host your HTML form. The local daemon can handle appending data to a file as you enter it into the form.
With HTML5 you get special hooks that may allow you to write to the local filesystem, but those may be browser-specific and may break from time to time.
| 2 | 4 | 0 |
I have an html file on my desktop that takes some input. How would I go about writing that input into a file onto my computer? Would I have to use another language to do it (i.e python or javascript?) and how would I go about doing this? On a related note, is there any way I can have javascript start an application from within an html file (the goal is to write to a file on my computer?
|
How to create a file and append data from html page ?
| 0 | 0 | 0 | 6,771 |
12,468,644 |
2012-09-17T23:53:00.000
| 1 | 0 | 0 | 0 |
python,wxpython
| 12,470,183 | 1 | true | 0 | 1 |
Using threads is the standard solution for this type of problem. The wxPython demo under Processes and Events | Threads has a working example of using threads.
There are a few issues when running threads from wxPython (and most other guis), so you might want to read the comments in the example, and maybe the wiki to understand what's going on, etc. In particular, wxPython needs to be run from the main thread, so do your file processing in a different thread, and then your file processing should communicate with the main thread using something like wx.PostEvent or wx.CallAfter.
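A minimal sketch of that pattern (process_file and update_status are placeholders for your own code; the Stop button just sets the event):
import threading
import wx

class Worker(threading.Thread):
    def __init__(self, window, files):
        threading.Thread.__init__(self)
        self.window = window
        self.files = files
        self.stop_requested = threading.Event()   # set this from the Stop button

    def run(self):
        for path in self.files:
            if self.stop_requested.is_set():
                break
            process_file(path)                    # your long-running work
            wx.CallAfter(self.window.update_status, path)   # GUI updates on main thread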
| 1 | 2 | 0 |
I'm creating an application with python 2.7 and wxpython 2.8 that should execute a long loop (some hours) on a list of files.
I programmed a button that should interrupt the loop as I press it, but at the moment I start the application, it freezes and I can't interact in any way until the loop ends.
I also tried to add a small period of sleep with time.sleep, up to 1 second, which is really bad for the speed and doesn't resolve the issue.
Is there a way to run this loop "in the background", so that the user can still modify some parameters and more important stop the loop?
I can say about the loop that it doesn't require a lot of resources, it just requires a lot of time, so I don't understand why it freezes.
Thanks in advance for the help!
|
wxPython app frozen during the execution
| 1.2 | 0 | 0 | 193 |
12,470,094 |
2012-09-18T03:43:00.000
| 0 | 0 | 0 | 0 |
python,sql,dna-sequence,genome
| 12,474,645 | 1 | true | 0 | 0 |
Probably what you want is called "de novo assembly".
An approach would be to calculate N-mers and use these in an index.
N-mers become more important if you need partial matches / mismatches.
If billion := 1E9, plain Python might be too weak.
Also note that 18 bases * 2 bits := 36 bits of information to enumerate them. That is just over 32 bits and fits comfortably into 64 bits, so hashing / bit-fiddling might be an option.
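A sketch of the N-mer index in plain Python (for your case, swap in k=18; at a billion bases you would likely need the bit-packing mentioned above rather than string keys):
def kmer_index(seq, k):
    index = {}
    for i in range(len(seq) - k + 1):
        index.setdefault(seq[i:i+k], []).append(i)   # k-mer -> all start positions
    return index

idx = kmer_index('ACGTACGT', 4)
dups = {kmer: pos for kmer, pos in idx.items() if len(pos) > 1}
print(dups)   # {'ACGT': [0, 4]} -- duplicates located in one linear pass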
| 1 | 1 | 1 |
I have data that needs to stay in the exact sequence it is entered in (genome sequencing) and I want to search approximately one billion nodes of around 18 members each to locate patterns.
Obviously speed is an issue with this large of a data set, and I actually don't have any data that I can currently use as a discrete key, since the basis of the search is to locate and isolate (but not remove) duplicates.
I'm looking for an algorithm that can go through the data in a relatively short amount of time to locate these patterns and similarities, and I can work out the regex expressions for comparison, but I'm not sure how to get a faster search than O(n).
Any help would be appreciated.
Thanks
|
Fast algorithm comparing unsorted data
| 1.2 | 0 | 0 | 386 |
12,470,417 |
2012-09-18T04:33:00.000
| 1 | 0 | 0 | 0 |
python,opengl,pyglet
| 12,471,143 | 2 | true | 0 | 1 |
Can you clarify what sort of starfield? 2D scrolling (for a side or top scrolling game, maybe with different layers) or 3D (like really flying through a starfield in an impossibly fast spaceship)?
In the former, a texture (or layers of textures blended additively) is probably the cleanest and fastest approach. [EDIT: Textures are by far the best approach, but if you really don't want to use textures, you can do the following, which is the next best thing:
Make a static VBO or display list of points that is maybe six times as large as it needs to be in the direction of scroll (e.g. if you're running 800x600 screen and you're scrolling horizontally, generate points on a 4800x600 grid).
Draw these points twice, offset by the width and the scrolling variable. E.g., let x be your scrolling variable (it starts at 0, then is incremented until it reaches 4800 (the width of your points), and then wraps back around and restarts at 0). Each frame, draw your points with a glTranslatef(x,0,0). Then draw them again with a glTranslatef(x+4800,0,0).
In this way your points will scroll past seemingly continuously. The constant (I chose six, above) doesn't really matter to the algorithm. Larger values will have fewer repeats but slower performance.
You can also try doing all of the above several times with different scrolling constants to give the illusion of depth (with multiple layers).
]
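A minimal pyglet sketch of that double-draw trick (pyglet 1.x-era API; star count and scroll speed are arbitrary):
import random
import pyglet
from pyglet import gl

WIDTH, HEIGHT, STRIP = 800, 600, 4800
window = pyglet.window.Window(WIDTH, HEIGHT)

coords = []
for _ in range(2000):                      # static field, built once
    coords += [random.uniform(0, STRIP), random.uniform(0, HEIGHT)]
stars = pyglet.graphics.vertex_list(len(coords) // 2, ('v2f', coords))

scroll = [0.0]

def update(dt):
    scroll[0] = (scroll[0] + 60 * dt) % STRIP   # wraps around seamlessly
pyglet.clock.schedule_interval(update, 1 / 60.0)

@window.event
def on_draw():
    window.clear()
    for offset in (-scroll[0], STRIP - scroll[0]):   # draw the field twice
        gl.glPushMatrix()
        gl.glTranslatef(offset, 0, 0)
        stars.draw(gl.GL_POINTS)
        gl.glPopMatrix()

pyglet.app.run()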
In the latter, there are a bunch of clever algorithms I can think of, but I'd suggest just using the raw points (point sprites if you're feeling fancy); GPUs can handle this if you put them in a static VBO or display list. If your points are small, you can probably throw a few thousand up at a time without any noticeable performance hit.
| 1 | 2 | 0 |
For a game I am working on I would like to implement a scrolling starfield. All the game so far is being drawn from OpenGL primitives and I would like to continue this and gl_points seems appropriate.
I am using pyglet (python) and know I could achieve this storing the positions of a whole bunch or points updating them and moving them manually but I was hoping there was a neater alternative.
EDIT:
In answer to Ian Mallett
I guess what I am trying to ask is if I generate a bunch of points, is there some way I can blit these onto some kind of surface or buffer and scroll this in the background.
Also, as for what kind of starfield: at this stage all I am trying for is a simple single layer for a top-down game, pretty much what you would have in Asteroids.
|
Scrolling starfield with gl_points in pyglet
| 1.2 | 0 | 0 | 1,710 |
12,470,633 |
2012-09-18T04:59:00.000
| 1 | 0 | 0 | 0 |
python,django,web-applications
| 12,470,648 | 3 | false | 1 | 0 |
The game will most probably have to run on the client side. You should take a look into Javascript and AJAX.
| 1 | 11 | 0 |
I'm fairly new to web development and Django so bear with me. I'm planning to make a fairly simple website in Django, that part I can manage.
I'm then looking to build a few basic 2D games into it. I fully appreciate that you could easily manage this in Flash or as a Java web app, but I'm looking to implement them in Python. I've done some research but I'm coming up blank: is there a straightforward way to create 2D Python web games that would easily integrate with Django?
I'm hoping to build these games in Python so that the users can program their own individual AI's for the game, again in Python, and compete against each other. As a bit of a competition/learning exercise.
Thanks in advance and sorry if it turns out to be a stupid question.
|
Django website, basic 2d python game
| 0.066568 | 0 | 0 | 16,058 |
12,471,575 |
2012-09-18T06:37:00.000
| 1 | 0 | 1 | 0 |
python
| 12,472,796 | 1 | true | 0 | 0 |
You could:
use DNS instead of IP addresses
create your own name service (perhaps a REST service providing a register and a lookup function)
use a zeroconf implementation like Bonjour or Avahi (depending on your OS)
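In the zeroconf spirit, a minimal LAN-discovery sketch with UDP broadcast (the port and message format are arbitrary choices, not part of any of the tools above):
import socket

def announce(role, port):
    """Each process broadcasts its role and listening port on the LAN."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(('%s:%d' % (role, port)).encode(), ('<broadcast>', 50000))

def discover():
    """Collect announcements into a role -> (ip, port) table."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('', 50000))
    peers = {}
    while True:
        data, (ip, _) = s.recvfrom(1024)
        role, port = data.decode().split(':')
        peers[role] = (ip, int(port))
        print(peers)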
| 1 | 2 | 0 |
I have several Python processes that talk to each other via sockets; each process has a specific role or function.
These processes initially ran on a wired LAN (where the machines' IPs are static), so I assigned an IP address and port number to each of them to let them find and talk to each other. But now that I've switched to a dynamic environment, where the IP address of each Python process is not static, it's tedious to configure the IP address of each process every time. Currently, I use SSH to log in and start the different processes, and there are two machines with many different processes.
How can I easily deploy these processes in a distributed environment, say on a wireless LAN or across the entire Internet, so that they can find each other by themselves? I will use Twitter's murder to distribute my code onto these machines.
I guess there should be something like a name service, but I am not sure what I should do.
|
How to distribute several python process in a distributed environment
| 1.2 | 0 | 0 | 132 |
12,472,055 |
2012-09-18T07:11:00.000
| 0 | 0 | 1 | 0 |
python
| 12,503,045 | 1 | false | 0 | 0 |
The following doesn't work for you?
from AppKit import NSWorkspace
AppKit should contain NSWorkspace unless you're using a non-standard AppKit module
| 1 | 2 | 0 |
I am trying to import NSWorkspace from AppKit in Python 2.7 on Mac OS 10.7.4.
I checked the PYTHONPATH in Eclipse; it has the PyObjC, pip-package and site-packages paths, but I am still not able to import NSWorkspace, even though I can import AppKit.
|
Unable to import NSWorkspace from Appkit in Python
| 0 | 0 | 0 | 848 |
12,472,432 |
2012-09-18T07:38:00.000
| 1 | 0 | 0 | 1 |
python,xampp,osqa
| 13,022,630 | 1 | true | 0 | 0 |
If you're flexible about xampp, try bitnami native installer:
http://bitnami.org/stack/osqa
It took me about 10 min for the installer to run, and then I had it running on a Win7 localhost.
| 1 | 0 | 0 |
Can anyone tell me how to install OSQA on Windows 7 with a XAMPP localhost? I don't know whether XAMPP supports Python.
Thanks in advance.
|
install OSQA using xampp on windows 7
| 1.2 | 0 | 0 | 201 |
12,472,952 |
2012-09-18T08:17:00.000
| 6 | 0 | 0 | 0 |
python,tic-tac-toe,challenge-response
| 12,472,989 | 1 | true | 0 | 0 |
You can't really win at tic tac toe if both players know how to play the game. If you start in the middle, the other player can block you by playing the top and then the left, or something like that; I can't really remember now, but unless you break the AI it's not possible to win. Sorry :(
| 1 | 4 | 0 |
I want to ask: is it possible to win at the tic tac toe challenge? The judgebot knows each and every trick, and it knows how to defeat the trick moves. I am only able to tie the game, whether I move first or second. If it is possible, can you guys please just give me a hint?
|
Hackerrank.com tic tac toe challenge
| 1.2 | 0 | 0 | 1,119 |
12,473,511 |
2012-09-18T08:52:00.000
| 25 | 0 | 0 | 0 |
python,image-processing,numpy,matplotlib
| 12,473,913 | 1 | true | 0 | 0 |
interpolation='nearest' simply displays an image without trying to interpolate between pixels when the display resolution is not the same as the image resolution (which is most often the case). It results in an image in which each source pixel is displayed as a square of multiple screen pixels.
There is no relation between interpolation='nearest' and the grayscale image being displayed in color. By default imshow uses the jet colormap to display an image. If you want it to be displayed in greyscale, call the gray() method to select the gray colormap.
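For example (img is assumed to be a 2-D numpy array, i.e. a grayscale image):
import matplotlib.pyplot as plt
plt.imshow(img, interpolation='nearest', cmap=plt.cm.gray)
plt.show()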
| 1 | 17 | 1 |
I use the imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result; it looks like it does some sort of color segmentation for me. What exactly is going on there?
I would also like to get something like this for image processing, is there some function on numpy arrays like interpolate('nearest') out there?
EDIT: Please correct me if I'm wrong, it looks like it does simple pixel clustering (clusters are colors of the corresponding colormap) and the word 'nearest' says that it takes the nearest colormap color (probably in the RGB space) to decide to which cluster the pixel belongs.
|
What does matplotlib `imshow(interpolation='nearest')` do?
| 1.2 | 0 | 0 | 22,970 |
12,479,206 |
2012-09-18T14:35:00.000
| 1 | 0 | 0 | 0 |
python,django
| 12,480,077 | 1 | false | 1 | 0 |
What about storing the new levels in a prefix tree? You could make each level a node on a branch of the tree.
When a new user wants to define a new level, the prefix tree is updated starting from the level the user belongs to. If your problem is just about giving the user visibility into the sub-branch below their own level, this should work.
A similar, maybe less intuitive, approach is to give each level a number (or alphanumeric value), so that a user associated with the level "state" in your example gets a level code of 23 (say, country: 2 and state: 3) and can add sub-levels starting with the prefix 23.
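A sketch of the prefix-code check (the codes are illustrative):
def has_access(user_code, obj_code):
    # '2.3' (a state-level user) can see '2.3.7.1' (a classroom under it)
    return obj_code == user_code or obj_code.startswith(user_code + '.')

print(has_access('2.3', '2.3.7.1'))   # True
print(has_access('2.3', '2.4.1'))     # False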
| 1 | 0 | 0 |
I am using django to create a web based app. This app will be used as a service by multiple clients.
It has several models / tables that represent a hierarchical relationship. Users are given access based on this hierarchical relationship - ex County -> Schools -> Divisions -> Classrooms.
So a user having access to a division has access to all classrooms within it etc
My question is: how do I make this permissions system configurable across clients? The application should allow a new client to define arbitrary levels - e.g. country -> state -> city -> schools -> class.
Any ideas on what are good approaches ?
|
creating a configurable permissions system in django
| 0.197375 | 0 | 0 | 167 |
12,482,150 |
2012-09-18T17:36:00.000
| 0 | 0 | 0 | 0 |
python,django,gunicorn
| 12,482,179 | 1 | false | 1 | 0 |
put it in your /media/ folder
then just point to
some.url/media/html/some_static_html.html
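A hedged one-off way to produce that static copy (the URL and paths are assumptions to adapt):
import urllib2
html = urllib2.urlopen('http://localhost:8000/').read()   # render once via Django
open('/srv/site/media/html/front.html', 'w').write(html)  # nginx serves this file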
| 1 | 2 | 0 |
The current website is running on Django, Gunicorn and nginx. I want a way to convert the current front page into a static HTML page and have nginx serve this static page instead of going through the whole web stack. I want the front page to load faster. This can be done manually, but is there a tool integrated with Django or Gunicorn that automatically converts certain pages into static files and serves those pages?
|
How to convert Django dynamic page to static HTML file?
| 0 | 0 | 0 | 380 |
12,485,733 |
2012-09-18T22:02:00.000
| 1 | 0 | 0 | 0 |
python,django
| 12,485,870 | 3 | false | 1 | 0 |
There is a one-to-many relationship between Book and Page. This means that one Book has several Pages and each Page only one Book.
In a database (and therefore in an ORM mapper) you would model this by creating a foreign key in Page. This foreign key would point to the Book instance that contains these Pages.
If you want to find the Pages for a Book, you'd query the Pages table for all Pages with the same foreign key as the primary key of the Book.
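A minimal sketch of that (Django 1.x-era syntax; the field names and page-creation method are illustrative):
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=200)

    def create_pages(self, count):
        # `self` is the Book instance, so the foreign key is simply book=self
        for i in range(count):
            Page.objects.create(book=self, number=i + 1)

class Page(models.Model):
    book = models.ForeignKey(Book, related_name='pages')
    number = models.IntegerField()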
| 1 | 1 | 0 |
Suppose I have two models, Book and Page, such there is a one-to-many relationship between Page to Book. I would like each Book model, to have a function which creates a bunch of Pages for the respective Book model, but I'm not sure how to associate each Page's foreign key back to the Book instance it was created in.
I currently have an associate function, which finds the correct book amongst all the possible books, and I'm sure there's a better way, but I can't seem to find it. Is it possible to set the foreignkey to self?
|
Django: Create Submodel in Model
| 0.066568 | 0 | 0 | 4,720 |
12,486,861 |
2012-09-19T00:12:00.000
| 1 | 0 | 0 | 0 |
python,openerp
| 12,493,719 | 5 | false | 1 | 0 |
Go to the "Settings" module:
Open menu option Customization -> Low Level Objects -> Window Actions.
Search for "SMS" in the Action Name and open it's form.
In the "Security" tab you can set the groups that can view this action. Add the "Administrator / Configuration" group and it will be hidden to regular users.
| 2 | 3 | 0 |
I'm trying to write a module for OpenERP 6.1 that will hide the "Send an SMS" button on the Partner form. I tried overwriting the window action's id with a different name and src_model, but only the name change appeared. I traced through the code, and it looks like the ir_values records from the base module are still linking the action to the res.partner model.
Is there a legitimate way to hide a sidebar button, or am I going to have to modify the base module? I briefly tried restricting permissions on the wizard's table, but that didn't seem to have an effect.
|
Hide sidebar button in OpenERP
| 0.039979 | 0 | 0 | 1,819 |
12,486,861 |
2012-09-19T00:12:00.000
| 0 | 0 | 0 | 0 |
python,openerp
| 12,497,448 | 5 | false | 1 | 0 |
Please try creating a new group and assigning it to your button/link, without adding the group to any user.
| 2 | 3 | 0 |
I'm trying to write a module for OpenERP 6.1 that will hide the "Send an SMS" button on the Partner form. I tried overwriting the window action's id with a different name and src_model, but only the name change appeared. I traced through the code, and it looks like the ir_values records from the base module are still linking the action to the res.partner model.
Is there a legitimate way to hide a sidebar button, or am I going to have to modify the base module? I briefly tried restricting permissions on the wizard's table, but that didn't seem to have an effect.
|
Hide sidebar button in OpenERP
| 0 | 0 | 0 | 1,819 |
12,487,415 |
2012-09-19T01:33:00.000
| 1 | 0 | 1 | 0 |
python,regex
| 12,487,430 | 1 | true | 0 | 0 |
The basic pattern is .+?\?\?\d+. We have made the first .+ non-greedy so it won't try to match the whole string right away. Use a repeated group to capture the subsequent patterns: r'(.+?\?\?\d+)(;.+?\?\?\d+)*'
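Anchored and checked against the examples:
import re

pattern = re.compile(r'^(.+?\?\?\d+)(;.+?\?\?\d+)*$')
for s in ['.??2', '*??5', '--??9;.??50;,??3', '*??5;']:
    print(s, bool(pattern.match(s)))
# the trailing-semicolon case prints False, as required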
| 1 | 1 | 0 |
I would like some thoughts on how to write a regular expression which validates a pattern
ex. .??2
one or more characters, followed by two question marks, followed by one or more digits; if and only if there is another repeating pattern, the separator will be a semicolon.
more examples
--??9;.??50;,??3 - in this example I have the pattern repeating, and that is why the semicolons appear
or
*??5 - a * followed by two question marks followed by a number, and no semicolon as there are no repeating groups
This is what i currently have
.+\?\?\d+(;|)+
|
python regular expression repeating pattern match
| 1.2 | 0 | 0 | 3,006 |
12,487,549 |
2012-09-19T01:58:00.000
| 36 | 0 | 1 | 0 |
python
| 12,487,569 | 2 | true | 0 | 0 |
Yes, you can import a module as many times as you want in one Python program, no matter what module it is. Every subsequent import after the first accesses the cached module instead of re-evaluating it.
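A small demonstration of that caching, using math purely as an example module:

import sys

import math as first
import math as second

print(first is second)        # True: both names point at one module object
print('math' in sys.modules)  # True: the cache that repeat imports hit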
| 1 | 16 | 0 |
I've been wondering this for a while: is it guaranteed to be safe to import a module multiple times? Of course, if the module does operating-system things like writing to files, then probably not, but for the majority of simple modules, is it safe to simply perform imports willy-nilly? Is there a convention that governs the global state of a module?
|
How safe is it to import a module multiple times?
| 1.2 | 0 | 0 | 14,100 |
12,487,889 |
2012-09-19T02:46:00.000
| 1 | 1 | 0 | 1 |
python,python-3.x,game-engine
| 31,010,631 | 2 | false | 0 | 0 |
If you want a game engine in python, I would recommend these:
Kivy (multiplatform)
PyGame (multiplatform)
Blender (multiplatform; its built-in game engine is scripted in Python, and it is also used for modeling)
PyOpenGL (multiplatform OpenGL bindings - lower level than a full engine, but you can build 3D on top of it)
These are some game engines I know. You also might want to try Unity3d.
| 2 | 2 | 0 |
What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")?
|
Game Engine 3D to Python3.x?
| 0.099668 | 0 | 0 | 1,505 |
12,487,889 |
2012-09-19T02:46:00.000
| 2 | 1 | 0 | 1 |
python,python-3.x,game-engine
| 12,488,430 | 2 | false | 0 | 0 |
Not sure if it is the "best" - but, not working in the field, I am aware of few options other than Blender 3D's game engine. Blender moved to Python 3 scripting at version 2.5, so any version newer than that will use Python 3 for BGE (Blender Game Engine) scripts.
Pygame is also available for Python 3.x, and it does feature a somewhat low-level interface to OpenGL - so you could do 3D with it.
Neither should have any major problems running on Debian, but maybe you will have to configure some sort of PPA to get packages installed for Python 3.
Also, be sure that your Debian's python3 is at least 3.2 - this distribution is known to ship surprisingly obsolete packages even when one is running the most recent release.
| 2 | 2 | 0 |
What is the best 3D game engine for Python 3.x that is easy to install on Linux (Debian 7 "wheezy")?
|
Game Engine 3D to Python3.x?
| 0.197375 | 0 | 0 | 1,505 |
12,488,658 |
2012-09-19T04:36:00.000
| 0 | 0 | 1 | 0 |
python,module,compilation,installation,distribution
| 12,489,210 | 2 | false | 0 | 0 |
sdist command creates a source distribution that should not and does not contain pyc files.
install command installs in your local environment. Creating Python distributions is not its purpose.
If your package is pure Python then the source distribution is enough to install it anywhere you like; you don't need pyc files, which depend on the Python version and are thus less general. bdist or bdist_egg (setuptools) generate, among other things, pyc files.
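For reference, a minimal setup.py sketch using distutils (the name, version and module name are placeholders):

from distutils.core import setup

setup(
    name='mypackage',        # placeholder project name
    version='0.1',           # placeholder version
    py_modules=['mymodule'], # the pure-Python module being distributed
)

# python setup.py sdist    -> writes a source archive into dist/ (no pyc files)
# python setup.py install  -> copies the module into site-packages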
| 2 | 1 | 0 |
Friends,
I was trying to create a python distribution. I executed the commands:
python setup.py sdist followed by python setup.py install
but the created build and dist folders contained no .pyc files.
Then I tried to find that file using Windows search and found that it's present in
C:\Python27\Lib\site-packages
Could anybody tell me what mistake I made in the setup, or what I missed?
Thanks in advance,
Saurabh
|
.pyc files generated in different folder
| 0 | 0 | 0 | 1,383 |
12,488,658 |
2012-09-19T04:36:00.000
| 0 | 0 | 1 | 0 |
python,module,compilation,installation,distribution
| 12,490,129 | 2 | false | 0 | 0 |
You won't get any pyc files until the first time the module is imported. A source distribution is just that. Distributing pyc files is not usually useful anyway; they are not portable. If you intended to distribute only pyc files, then you are in for a lovely set of problems when different Python versions and different operating systems are used. Always distribute the source.
For most modules, the time taken to generate them the first time they are used is trivial - it is not worth worrying about.
By the way, when you move to Python 3, the pyc files are stored in a directory called __pycache__ (since 3.2).
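If you really do want byte-compiled files generated up front, you can trigger compilation explicitly; a minimal sketch (the file name is a placeholder):

import py_compile

# Byte-compile one source file; the .pyc appears next to the source on
# Python 2, or under __pycache__/ on Python 3.2+.
py_compile.compile('mymodule.py')

# To compile a whole tree from the shell instead:
#   python -m compileall path\to\package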
| 2 | 1 | 0 |
Friends,
I was trying to create a python distribution. I executed the commands:
python setup.py sdist followed by python setup.py install
but the created build and dist folders contained no .pyc files.
Then I tried to find that file using Windows search and found that it's present in
C:\Python27\Lib\site-packages
Could anybody tell me what mistake I made in the setup, or what I missed?
Thanks in advance,
Saurabh
|
.pyc files generated in different folder
| 0 | 0 | 0 | 1,383 |
12,489,235 |
2012-09-19T05:52:00.000
| 3 | 0 | 0 | 0 |
python,django,api
| 12,489,306 | 1 | true | 1 | 0 |
You need to have an __init__.py inside both the mysite and yelp directories, and you need to import it as mysite.yelp.
Older versions of Python will allow implicit relative imports, but in general there should be one root package that's importable, and anything inside it should be imported with the full name. And the way Django works, your whole site needs to be that importable package.
If there's an existing project available from github, though, it's usually a better idea to install the whole thing on your system instead of copying it into your project. That way if there are updates, you can keep up with what's new and update to the latest.
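A sketch of the layout the first paragraph describes (the module name search.py is a hypothetical stand-in for the downloaded files):

# mysite/
#     __init__.py      (makes mysite a package)
#     settings.py
#     polls/
#         __init__.py
#     yelp/
#         __init__.py  (makes yelp importable as mysite.yelp)
#         search.py    (hypothetical module from the download)
#
# Then, inside a view for example:
from mysite.yelp import search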
| 1 | 0 | 0 |
I downloaded the Yelp GitHub repository for Python and there are three files inside the python folder. I placed these three files inside a folder named yelp, which sits inside mysite (I'm following Django's "creating your first app" documentation). This mysite folder holds the polls folder as well.
When I added yelp to INSTALLED_APPS in settings.py, why do I still get "No module named yelp" when I run python manage.py runserver?
Please help - I'm still trying to learn how Python works and what to do when you bring in a new .py file.
thank you!
|
python/django no module named yelp
| 1.2 | 0 | 0 | 699 |
12,490,468 |
2012-09-19T07:45:00.000
| 0 | 1 | 0 | 0 |
android,python,sl4a
| 12,490,713 | 1 | true | 0 | 1 |
Yes, it is possible to use a toast and a notification at the same time.
Although it may not be the best user experience, in my opinion.
A toast is a low-priority way to let the user know something while he/she is looking at the screen. It goes away after a while.
A notification is a way to let the user know about something of higher priority than a toast. It may arrive at a time when the user's primary focus is not your app, or while the device is sleeping. The user can go to the notification drawer and see what's new with your app.
In most use cases, one of them does the job. I'm not sure why you need both at the same time. Doesn't a single notification cut it?
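If you do want both from one SL4A Python script, a minimal sketch (the message strings are placeholders; makeToast and notify are the facade calls as I understand the SL4A Android API):

import android

droid = android.Android()
droid.makeToast('A short, transient toast message')             # on-screen toast
droid.notify('My app', 'A message in the notification drawer')  # status-bar notification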
| 1 | 0 | 0 |
I want to execute Python scripts (that display a toast and a notification) on Android using SL4A. Can I show a toast message and a notification simultaneously? I'm using an emulator for testing.
|
Toast, notification simultaneously in Android
| 1.2 | 0 | 0 | 694 |
12,491,429 |
2012-09-19T08:57:00.000
| 3 | 0 | 1 | 1 |
python,argparse,optparse
| 12,491,541 | 2 | true | 0 | 0 |
I would stick with optparse as long as it provides the functionality you currently need (and expect to need in future).
optparse works perfectly well; it just won't be developed further. It's still available in Python 3, so even if one day you decide to move to Python 3, it will continue to work.
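For reference, a minimal optparse sketch (the usage string and option are placeholders):

from optparse import OptionParser

parser = OptionParser(usage='usage: %prog [options] FILE')
parser.add_option('-v', '--verbose', action='store_true', default=False,
                  help='print extra output')
options, args = parser.parse_args()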
| 1 | 4 | 0 |
I have a small application that runs on fairly recent Linux distributions with Python 2.7+ but also on CentOS and Scientific Linux boxes that have not yet made the switch to Python 2.7. optparse is deprecated with Python 2.7 and frankly I don't want to support optparse anyway, which is why I developed the application with argparse in mind. However, argparse does not exist on these older distributions. Moreover, the sysadmins are rather suspicious of installing a backport of argparse.
Now, what should I do? Stick with optparse? Write yet another wrapper around both libraries? Convince sysadmins and users (who in most cases are just able to start the application) to install an argparse backport?
|
How to support both argparse and optparse?
| 1.2 | 0 | 0 | 394 |
12,492,337 |
2012-09-19T09:50:00.000
| 0 | 0 | 1 | 0 |
python
| 12,492,718 | 3 | false | 0 | 0 |
There is probably no need to use recursion. Since you are doing this as a learning exercise, I will only provide an outline of how to progress.
The hexagonal grid will need a coordinate system, for the rows and columns.
Create a function neighbours that, given the coordinates x, y of a tile, returns all the neighbours of that tile (a sketch follows after this outline).
Loop through all the tiles using your coordinate system. For each tile, retrieve its neighbours. If a neighbour does not have a type yet, you can ignore it; otherwise, determine the character of the tile based on the character of its neighbours.
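A sketch of the neighbours function from the outline above, using axial coordinates (picking axial coordinates is an assumption; any consistent hex coordinate system works):

# The six direction offsets around any tile in axial (q, r) coordinates.
HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours(q, r):
    """Return the coordinates of the six tiles adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRECTIONS]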
| 2 | 0 | 0 |
I would like to write a simple map generator for myself, but I do not know how to approach it. The field will be drawn with lots of hexagonal tiles.
When I generate a random tile I must look at its neighbour. Then I have to take into account the two neighbours that already exist, and so on. Recursion? I decided that a field may be water, earth, or mountains - but a single field may also be a transition from water to land on one of its sides.
An array will hold a number specifying the type of each tile.
I want to do it in Python - for learning.
Some advice please.
|
How to write own map generator?
| 0 | 0 | 0 | 296 |