Dataset schema (column · dtype · observed range or string length):
Q_Id · int64 · 337 to 49.3M
CreationDate · string · length 23
Users Score · int64 · -42 to 1.15k
Other · int64 · 0 to 1
Python Basics and Environment · int64 · 0 to 1
System Administration and DevOps · int64 · 0 to 1
Tags · string · length 6 to 105
A_Id · int64 · 518 to 72.5M
AnswerCount · int64 · 1 to 64
is_accepted · bool · 2 classes
Web Development · int64 · 0 to 1
GUI and Desktop Applications · int64 · 0 to 1
Answer · string · length 6 to 11.6k
Available Count · int64 · 1 to 31
Q_Score · int64 · 0 to 6.79k
Data Science and Machine Learning · int64 · 0 to 1
Question · string · length 15 to 29k
Title · string · length 11 to 150
Score · float64 · -1 to 1.2
Database and SQL · int64 · 0 to 1
Networking and APIs · int64 · 0 to 1
ViewCount · int64 · 8 to 6.81M
13,487,181
2012-11-21T05:51:00.000
0
0
1
0
python,sockets,shared-memory
13,513,533
2
false
0
0
mmap doesn't take a file name but rather a file descriptor. It performs so-called memory mapping, i.e. it associates pages in the virtual memory space of the process with portions of the file-like object represented by the file descriptor. This is a very powerful operation, since it allows you: to access the content of a file simply as an array in memory; to access the memory of special I/O hardware, e.g. the buffers of a sound card or the framebuffer of a graphics adapter (this is possible since file descriptors in Unix are abstractions and can also refer to device nodes instead of regular files); to share memory between processes by performing shared maps of the same object.

The old pre-POSIX way to use shared memory on Unix was System V IPC shared memory. First a shared memory segment had to be created with shmget(2) and then attached to the process with shmat(2). SysV shared memory segments (as well as other IPC objects) have no names but rather numeric IDs, so the special hash function ftok(3) is provided, which converts the combination of a pathname string and a project ID integer into a numeric key ID, though collisions are possible.

The modern POSIX way to use shared memory is to open a file-like memory object with shm_open(2), resize it to the desired size with ftruncate(2), and then mmap(2) it. Memory mapping in this case acts like the shmat(2) call from the SysV IPC API, and the truncation is necessary since shm_open(2) creates objects with an initial size of zero. (These are part of the C API; what Python modules provide is more or less a set of thin wrappers around those calls, often with nearly the same signatures.)

It is also possible to get shared memory by memory-mapping the same regular file in all processes that need to share memory. As a matter of fact, Linux implements the POSIX shared memory operations by creating files on a special tmpfs file system.
The tmpfs driver implements very lightweight memory mapping by directly mapping the pages that hold the file content into the address space of the process that executes mmap(2). Since tmpfs behaves as a normal filesystem, you can examine its content using ls, cat and other shell tools. You can even create shared memory objects this way, or modify the content of existing ones. The difference between a file in tmpfs and a regular filesystem file is that the latter is persisted to storage media (hard disk, network storage, flash drive, etc.), with changes occasionally flushed to that storage, while the former lives entirely in RAM. Solaris also provides a similar RAM-based filesystem, also called tmpfs.

In modern operating systems memory mapping is used extensively. Executable files are memory-mapped in order to supply the content of the pages that hold the executable code and static data. Shared libraries are memory-mapped too. This saves physical memory, since these mappings are shared: the same physical memory that holds the content of an executable file or a shared library is mapped into the virtual memory space of each process.
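On a Unix system, the regular-file route described above can be sketched in a few lines of Python. The backing file comes from tempfile purely for illustration; a second process would simply open and mmap the same path to see the same pages:

```python
import mmap
import os
import tempfile

# mmap wants a file descriptor, not a file name: create a backing file
# and size it with ftruncate, just like the shm_open/ftruncate dance.
size = mmap.PAGESIZE
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)

# "Writer" mapping: the file content is now addressable like a bytearray.
writer = mmap.mmap(fd, size)
writer[:5] = b"hello"
writer.flush()

# A second mapping of the same file sees the same physical pages,
# which is exactly how two separate processes would share this memory.
fd2 = os.open(path, os.O_RDONLY)
reader = mmap.mmap(fd2, size, prot=mmap.PROT_READ)
print(bytes(reader[:5]))  # b'hello'

reader.close(); writer.close()
os.close(fd); os.close(fd2); os.unlink(path)
```

The prot keyword is Unix-only; on Windows the equivalent is access=mmap.ACCESS_READ.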
2
2
0
I have processes on several servers that send data to my local port 2222 via UDP every second. I want to read this data and write it to shared memory, so that other processes can read the data from shared memory and do things with it. I've been reading about mmap and it seems I have to use a file... which I can't understand why. I have a script a.py that reads the data from the socket, but how can I write it to a shm? Once it's written, I need to write b.py, c.py, d.py, etc., to read the very same shm and do things with it. Any help or code snippet would greatly help.
How to write to shared memory in Python from a stream?
0
0
0
2,659
13,488,266
2012-11-21T07:25:00.000
2
0
0
0
python,forms,httpwebrequest,scrapy
13,490,825
2
false
1
0
If you are using FormRequest.from_response(), then all hidden values are already pre-populated automatically. But in most cases you will need to override some of them as well, depending on the website's functionality and behavior.
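Scrapy's FormRequest.from_response essentially harvests every hidden input from the form and lets your formdata override or extend it. A rough stdlib-only sketch of that collection step, with made-up HTML and field names:

```python
from html.parser import HTMLParser

# Collect every <input type="hidden"> from a form, the way
# FormRequest.from_response does before merging in your formdata.
class HiddenFieldCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a["name"]] = a.get("value", "")

html = """
<form action="/submit.php">
  <input type="hidden" name="csrf" value="abc123">
  <input type="hidden" name="lang" value="en">
  <input type="text" name="searchTerm">
</form>
"""

collector = HiddenFieldCollector()
collector.feed(html)

# The fields you fill in explicitly override/extend the hidden ones.
payload = {**collector.fields, "searchTerm": "python", "area": "NYC"}
print(payload)
```

The POST then goes to the form's action URL with the merged payload, which is why the hidden fields "automatically get posted" when you use from_response.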
2
0
0
I want to know whether, if I need to perform a search on the job site, I have to pass only the variables that are visible on the form or all of them, including hidden fields. The form is at http://www.example.com/search.php. There are two visible fields on the form, searchTerm and area, and there are 5 hidden fields. The form submits to http://www.example.com/submit.php. Now I have these doubts: Do I need to open the form page with Scrapy using the form page URL or the POST URL? Do I need to pass the hidden variables as well, or will they automatically get posted with the form?
Do I need to pass all the hidden fields as well with the form in Scrapy?
0.197375
0
0
341
13,488,266
2012-11-21T07:25:00.000
1
0
0
0
python,forms,httpwebrequest,scrapy
13,492,597
2
false
1
0
Sometimes you can go without some of the hidden fields, other times you cannot. You can't know the server logic; it's up to the website how it handles each of the form fields.
2
0
0
I want to know whether, if I need to perform a search on the job site, I have to pass only the variables that are visible on the form or all of them, including hidden fields. The form is at http://www.example.com/search.php. There are two visible fields on the form, searchTerm and area, and there are 5 hidden fields. The form submits to http://www.example.com/submit.php. Now I have these doubts: Do I need to open the form page with Scrapy using the form page URL or the POST URL? Do I need to pass the hidden variables as well, or will they automatically get posted with the form?
Do I need to pass all the hidden fields as well with the form in Scrapy?
0.099668
0
0
341
13,491,731
2012-11-21T10:55:00.000
2
0
0
0
python,numpy
13,491,927
1
true
0
0
Have you looked at the way floats are represented in text before and after? You might have a line "1.,2.,3." become "1.000000e+0, 2.000000e+0,3.000000e+0" or something like that, the two are both valid and both represent the same numbers. More likely, however, is that if the original file contained floats as values with relatively few significant digits (for example "1.1, 2.2, 3.3"), after you do normalization and scaling, you "create" more digits which are needed to represent the results of your math but do not correspond to real increase in precision (for example, normalizing the sum of values to 1.0 in the last example gives "0.1666666, 0.3333333, 0.5"). I guess the short answer is that there is no guarantee (and no requirement) for floats represented as text to occupy any particular amount of storage space, or less than the maximum possible per float; it can vary a lot even if the data remains the same, and will certainly vary if the data changes.
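To see the effect concretely: numpy.savetxt writes floats with fmt='%.18e' by default, which is why normalized values balloon the file, and passing a compact fmt shrinks it. A small sketch (assuming numpy is installed), using the answer's normalization example:

```python
import io

import numpy as np

# The answer's example: short decimals gain many digits once normalized.
data = np.array([1.1, 2.2, 3.3])
norm = data / data.sum()          # roughly [0.1666..., 0.3333..., 0.5]

default_buf, compact_buf = io.StringIO(), io.StringIO()
np.savetxt(default_buf, [norm], delimiter=",")              # default fmt='%.18e'
np.savetxt(compact_buf, [norm], delimiter=",", fmt="%.6g")  # compact text

print(len(default_buf.getvalue()), "vs", len(compact_buf.getvalue()))
```

Picking a fmt that matches the precision you actually need is the usual fix for the doubled file size.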
1
0
1
I'm extracting a large CSV file (200 MB) that was generated using R, with Python (I'm the one using Python). I do some tinkering with the file (normalization, scaling, removing junk columns, etc.) and then save it again using numpy's savetxt with ',' as the data delimiter to keep the CSV property. The thing is, the new file is almost twice as large as the original (almost 400 MB). The original data, as well as the new one, are only arrays of floats. If it helps, it looks as if the new file has really small values that need exponential notation, which the original did not have. Any idea why this is happening?
Numpy save file is larger than the original
1.2
0
0
280
13,495,135
2012-11-21T14:12:00.000
1
0
0
0
python,django,postgresql
13,495,557
1
true
1
0
Er, not sure how we can help you with that. One is for bash, one is for SQL. No, that's for running the development webserver, as the tutorial explains. There's no need to do that, that's what the virtualenv is for. This has nothing to do with Python versions, you simply don't seem to be in the right directory. Note that, again as the tutorial explains, manage.py isn't created until you've run django-admin.py startproject myprojectname. Have you done that? You presumably created the virtualenv using 3.2. Delete it and recreate it with 2.7. You shouldn't be "reading in a forum" about how to do the Django tutorial. You should just be following the tutorial.
1
1
0
I'm new to web development and I'm trying to get my Mac set up for doing Django tutorials and helping some developers with a project that uses Postgres. I will try to make my questions as specific as possible. However, it seems that there are lots of moving parts to this question and I'm not quite understanding some parts of the connection between an SQL shell, virtual environments, paths, databases and terminals (which all seem to be necessary to get running on this web development project). I will detail what I did and the error messages that appear. If you could help me with the error messages, or simply post links to tutorials that help me better understand how these moving parts work together, I would very much appreciate it. I installed Postgres and pgAdmin III and set it up on the default port. I created a test database. Now when I try to open it on the local server, I get an error message: 'ERROR: column "datconfig" does not exist LINE1:...b.dattablespace AS spcoid, spcname, datallowconn, dataconfig,...' Here is what I did before I closed pgAdmin and then reopened it: Installation: the setup told me that an existing data directory was found at /Library/PostgreSQL/9.2/data set to use port 5433. I loaded an .sql file that I wanted to test (I saved it on my desktop and loaded it into the database from there). I'm not sure whether this is related to the problem or not, but I also have virtual environments in a folder ~/Sites/django_test (i.e. when I tell the bash Terminal to "activate" this folder, it puts me in an (env)). I read in a forum that I need to do the Django tutorials by running "python manage.py runserver" at the bash Terminal command line. When I do this, I get an error message saying "can't open file 'manage.py': [Errno 2] No such file or directory".
Even when I run the command in the (env), I get the error message: /Library/Frameworks/Python.framework/Versions/3.2/Resources/Python.app/Contents/MacOS/Python: can't open file 'manage.py': [Errno 2] No such file or directory (which I presume is telling me that the path is still set to an incorrect version of Python (3.2), even though I want to use version 2.7 and trashed the 3.2 version from my system). I think there are a few gaps in my understanding here: I don't understand the difference between typing commands into my bash Terminal versus my SQL shell. Is running "python manage.py runserver" the same as running Python programs with an IDE like IDLE? How and where do I adjust my $PATH environment variable so that the correct Python occurs first on the path? I think I installed the correct Python version into the virtual environment using pip install; why am I still receiving a "No such file or directory" error? Why does Python version 3.2 still appear in the path indicated by my error message if I trashed it? If you could help me with these questions, or simply list links to any tutorials that explain this, that would be much appreciated. And again, sorry for not being more specific, but I thought it would be more helpful to list the problems I have with these different pieces rather than just one, since it's their interrelatedness that seems to be causing the error messages. Thanks!
postgres installation error on Mac 10.6.8
1.2
1
0
291
13,495,606
2012-11-21T14:39:00.000
0
0
0
0
python,http,session-state
13,495,800
1
false
0
0
If this is through a browser and is using cookies, then this shouldn't be an issue at all. The cookie will, from what I can tell, hold the last session value that it is set to. If the client you are using does not use cookies, then of course it will open a new session for each connection.
1
1
0
I have a problem and can't solve it. Maybe I'm making it too hard or complex, or I'm just going in the wrong direction and thinking of things that don't make sense. Below is a description of what happens (multiple tabs opened in a browser, or a page that requests some other pages at the same time, for example). I have a situation where 3 requests are received by the web application simultaneously and a new user session has to be created. This session is used to store notifications, the XSRF token and login information when the user logs in. The application uses threads to handle requests (CherryPy under Bottle.py). The 3 threads (or processes, in case of multiple application instances) start handling the 3 requests. They check the cookie, no session exists, and each creates a new unique token that is stored in a cookie and in Redis. This all happens at the same time, and they don't know whether a session has already been created by another thread, because all 3 tokens are unique. These unused sessions will expire eventually, but it's not neat. It means every time a client simultaneously does N requests and a new session needs to be created, N-1 sessions are useless. If there were a property that could be used to identify a client, like an IP address, it would be a lot easier, but an IP address is not safe to use in this case. Such a property could be used to atomically store a session in Redis, and other requests would just pick up that session.
What are good ways to create a HTTP session with simultaneous requests?
0
0
1
484
13,496,868
2012-11-21T15:47:00.000
2
0
1
0
python
13,496,967
4
false
0
0
I have no experience with Python, but a simple Google search comes up with py2exe. As for OS-independent, that's impossible; that's the whole point of an interpreter. The code is write once, run anywhere, but once you compile it, you have to compile it for a specific platform.
2
1
0
Suppose I have a main.py for a calculator application. I used different .py files for different functions (add.py for adding, mul.py for multiplication) and all these files are imported in main.py. When I click main.py it executes successfully and does all the functions, like adding, multiplying, etc. What I want to do is make an executable file from main.py, so that I can run it on a computer that does not have Python installed or any of those modules (add.py, ...) on the hard drive. Is that possible? THANKS FOR THE HELP. Finally I got the solution: PyInstaller works well in my case and it is easy to use too. Thank you for your help. (:
Converting .py extension to executable file
0.099668
0
0
1,412
13,496,868
2012-11-21T15:47:00.000
1
0
1
0
python
13,497,152
4
true
0
0
It is impossible to have an independent executable that runs on any OS; that's why Python is an interpreted language. What you can do is compile the Python scripts into an executable on each OS you want it to run on, using PyInstaller, so you have a bunch of different independent programs for different OSs, all built from the same Python script. PyInstaller has an advantage over py2exe because it's easier to compile all the scripts into one file instead of one directory.
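A hypothetical build session might look like this (assuming PyInstaller is installed; the file names come from the question):

```shell
# Bundle main.py (which imports add.py and mul.py) into one
# self-contained executable for the current OS only.
pip install pyinstaller
pyinstaller --onefile main.py
# Output appears in dist/ (dist/main on Unix, dist\main.exe on Windows)
# and runs without Python installed; rebuild on each target OS.
```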
2
1
0
Suppose I have a main.py for a calculator application. I used different .py files for different functions (add.py for adding, mul.py for multiplication) and all these files are imported in main.py. When I click main.py it executes successfully and does all the functions, like adding, multiplying, etc. What I want to do is make an executable file from main.py, so that I can run it on a computer that does not have Python installed or any of those modules (add.py, ...) on the hard drive. Is that possible? THANKS FOR THE HELP. Finally I got the solution: PyInstaller works well in my case and it is easy to use too. Thank you for your help. (:
Converting .py extension to executable file
1.2
0
0
1,412
13,497,607
2012-11-21T16:25:00.000
1
1
1
0
python,search
35,276,856
4
false
0
0
The only reason anyone would want a tool capable of searching 'certain parts' of a file is because what they are trying to do is analyze data that has legal restrictions on which parts of it you can read. For example, Apple has the capability of identifying the GPS location of your iPhone at any moment a text was sent or received. But what they cannot legally do is associate that location data with anything that can be tied to you as an individual. On a broad scale you can use obscure data like this to track and analyze patterns throughout large amounts of data. You could feasibly assign a unique 'Virtual ID' to every cell phone in the USA and log all location movement; afterward you implement a method for detecting patterns of travel. Outliers could be detected through deviations in their normal travel pattern. That metadata could then be combined with data from outside sources, such as the names and locations of retail stores. Think of all the situations you might be able to detect algorithmically, like the soccer dad who for 3 years has driven the same general route between work, home, restaurants and a little league field. Only being able to search part of a file still offers enough data to detect that the soccer dad's phone's unique signature suddenly departed from the normal routine and entered a gun shop. The possibilities are limitless. That data could be shared with local law enforcement to increase street presence in public spaces nearby, all while maintaining the anonymity of the phone's owner. Capabilities like the example above are not legally possible in today's environment without the method IggY is looking for. On the other hand, it could just be that he is only looking for certain types of data in certain file types. If he knows where in the file he wants to search for the data he needs, he can save major CPU time by only reading the last half or first half of a file.
1
1
0
I'm making a search tool in Python. Its objective is to be able to search files by their content (we're mostly talking about source files and text files, not images/binaries, even if searching their METADATA would be a great improvement). For now I don't use regular expressions, just casual plain text. This part of the algorithm works great! The problem is that I realize I'm searching mostly in the same few folders, and I'd like to find a way to build an index of the content of each file in a folder, and be able to know as fast as possible whether the sentence I'm searching for is in xxx.txt or whether it can't be there. The idea for now is to maintain a checksum for each file that lets me know if it contains a particular string. Do you know any algorithm close to this? I don't need a 100% success rate; I prefer a small index over a big one with 100% success. The idea is to provide a generic tool. EDIT: To be clear, I want to search a PART of the content of the file. So making an md5 hash of all its content and comparing it with the hash of what I'm searching for isn't a good idea ;)
Create an index of the content of each file in a folder
0.049958
0
0
3,002
13,498,828
2012-11-21T17:31:00.000
2
0
0
0
python,node.js,heroku,cedar
13,785,484
1
true
1
0
After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options: 1) Use some kind of back-end, that both apps can talk to. Examples would be a DB, Redis, 0mq, etc. 2) Use what I suggested above. I actually went ahead and implemented it, and it works. Just thought I'd share what I've found.
1
3
0
I am trying to build a web app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js part will use socket.io and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients. I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part, and especially how the Python part can talk to the Node.js part. I am assuming that the Node.js part will need to be a new Heroku app, so that it too can run on a 'web' dyno, benefit from the HTTP routing stack, and let clients connect to it. In that case, will my Python dynos be accessing it just like regular clients do? What are the alternatives? How is this usually done?
Heroku Node.js + Python
1.2
0
1
2,109
13,503,553
2012-11-21T23:10:00.000
0
0
1
0
types,input,python-3.x,eval
13,521,828
3
false
0
0
To answer the second part of your question: you can use the isinstance() function in Python to check whether a variable is of a certain type.
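A minimal sketch of that idea, avoiding eval entirely: try the safe constructors on the raw input and then inspect the result with isinstance() (the helper name here is made up):

```python
def parse_number(raw):
    """Interpret raw text as an int, then a float; hypothetical helper."""
    raw = raw.strip()
    if not raw:
        return None               # user just pressed ENTER with no input
    for cast in (int, float):
        try:
            return cast(raw)
        except ValueError:
            pass
    return raw                    # neither number type: keep the string

value = parse_number("42")
print(value, isinstance(value, int))   # 42 True
print(parse_number("3.5"), parse_number(""), parse_number("hello"))
```

In a real prompt loop you would call this on input() and re-prompt whenever it returns None, which handles the empty-ENTER case without any eval.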
1
1
0
I have some functions asking for user input, sometimes a string, an int or whatever. I noticed that if ENTER is pressed with NO INPUT, I get an error. So I did some research and found that the eval function may be what I'm looking for, but then again I read about its dangers. So here are my questions: How can I check/enforce user input, e.g. repeating the input prompt or maybe even warning the user that he didn't enter anything? How do I check for the correct type of input (int, float, string, etc.) against whatever the user types, without having my scripts return errors? I appreciate your feedback. Cheers
"Force" User Input and Checking for correct Input Type
0
0
0
1,011
13,505,194
2012-11-22T02:45:00.000
2
0
0
0
python,http,scrapy,web-crawler
13,585,472
1
false
1
0
Are you sure you are allowed to crawl the destination site at high speed? Many sites implement download thresholds and "after a while" start responding slowly.
1
8
0
I am experiencing slow crawl speeds with Scrapy (around 1 page/sec). I'm crawling a major website from AWS servers, so I don't think it's a network issue. CPU utilization is nowhere near 100%, and if I start multiple Scrapy processes the crawl speed is much faster. Scrapy seems to crawl a bunch of pages, then hang for several seconds, and then repeat. I've tried playing with CONCURRENT_REQUESTS = CONCURRENT_REQUESTS_PER_DOMAIN = 500, but this doesn't really seem to move the needle past about 20.
Scrapy Crawling Speed is Slow (60 pages / min)
0.379949
0
1
4,117
13,505,819
2012-11-22T04:16:00.000
1
0
1
0
python,recursion,path,split,tuples
13,505,834
2
false
0
0
Your error is not in the recursion, but rather in how you concatenate the recursive results. Say you have reached ('E:', 'John', '2012', 'prac') and the next character is 't'; you don't want to append 't' to the recursive result, you want to append it to the last word of the recursive result. Similarly, when you reach a separator, you want to initialise the new word as empty. When you're doing recursion, you will (pretty much) always have two cases: a recursive one and a terminal one. The terminal one is usually easy, and you did it correctly (if there's no string, there are no words). But I find it helps immensely if you work through a specific example of the recursive case, somewhere mid-computation as above, to figure out exactly what needs to happen.
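Following that advice, one possible recursive solution (a sketch, written to mirror str.split's behavior, including the base case of a single empty word):

```python
def split_path(path, sep="/"):
    """Recursively split path on sep, mirroring str.split(sep)."""
    if not path:
        return ("",)                        # terminal case: one empty word
    rest = split_path(path[1:], sep)        # split everything after char 0
    if path[0] == sep:
        return ("",) + rest                 # separator: start a fresh word
    # ordinary character: glue it onto the front of the first word
    return (path[0] + rest[0],) + rest[1:]

print(split_path("E:/John/2012/practice/question11"))
```

The two recursive cases are exactly the ones the answer describes: a separator initialises a new empty word, and any other character is appended to the first word of the recursive result rather than becoming its own element.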
1
7
0
I am trying to recursively split a path given as a string into sub-parts, using "/" as the delimiter and collecting the parts into a tuple. For example, "E:/John/2012/practice/question11" should become ('E:', 'John', '2012', 'practice', 'question11'). So far I've passed every character excluding the "/" into a tuple, but the sub-parts are not joined the way I want, as shown in the example. This is a practice question from homework and I would appreciate help, as I am trying to learn recursion. Thank you so much.
Python Split path recursively
0.099668
0
0
3,076
13,507,434
2012-11-22T07:07:00.000
2
0
0
0
python,scrapy
13,507,452
3
false
1
0
The u symbol is added when displaying strings to indicate that the object is a Unicode string. Similarly, if you want a Unicode string in your code, you can write a Unicode literal by adding a u prefix to the string itself.
1
0
0
I am using Scrapy to scrape a website. My items appear like this: {'company': [u'Resource Agility']}. I am sick of this u. Is that normal? I want to know, if I store my value in a database, does the u also get in there? Is there any way to hide that u?
Why do I see `u` in front of all the text in Python
0.132549
0
0
170
13,508,043
2012-11-22T07:51:00.000
3
0
0
0
python,python-2.7,tkinter,tkmessagebox
13,508,222
1
true
0
1
Short answer: "No!" The message is a simple string; no interpretation like in some widgets of other frameworks is done. Longer answer: you could e.g. subclass and monkeypatch to provide such a feature.
1
6
0
I have a tkMessageBox.showerror showing up in a Python Tkinter application when somebody fails to log in. Is it possible to have a URL link in the tkMessageBox.showerror? I.e. tkMessageBox.showerror("Error", "An error occurred, please visit www.blahblubbbb.com"), and I want the www.blahblubbbb.com to be clickable.
tkinter tkMessageBox html link
1.2
0
0
2,245
13,510,284
2012-11-22T10:09:00.000
3
0
1
0
python
13,510,303
3
false
0
0
No, there normally is no harm such a condition could cause. The in comparison operator doesn't alter anything. If any of the variables a, b, c or d is indeed 0 or False, then the test would succeed, otherwise it would fail (return False). The four variables collected (temporarily) into a list would not be affected themselves. (Why 0? Because bool is a subclass of int and False == 0 is True in python. So False in [0] is True as well. Try it out in your interpreter; issubclass(bool, int) returns True, and then try True + 1 for more surprises). The one exception would be if someone had made the mistake of creating a __eq__ equality hook on a custom class that altered the state of the instance. That would be a big bug in that custom class, not anything specific to the in statement itself.
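A quick interpreter check of those claims:

```python
# Checking the answer's claims: False == 0, bool is a subclass of int,
# and True behaves as the integer 1.
a, b, c, d = 1, "x", 0.0, None
print(False in [a, b, c, d])    # True, because False == 0.0
print(issubclass(bool, int))    # True
print(True + 1)                 # 2
```

Membership testing uses equality, so any falsy zero-like value (0, 0.0, False) makes the `in` test succeed, without mutating the list or the variables.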
1
4
0
Is there any active harm in using if False in [a, b, c, d]? It reduces the size of the code and it works in the interpreter. But is there any active harm such a condition can cause?
Any active harm in `if False in [a, b, c, d]`
0.197375
0
0
149
13,513,103
2012-11-22T12:49:00.000
-1
0
1
0
python,testing,load-testing,performance-testing
13,513,162
4
true
0
0
10,000 per second? You'll need lots of machines to do this. Write a client that can POST requests serially and then replicate it on several machines.
2
3
0
I developed a REST server and hosted it on my virtual machine's nginx server. Now I want to do benchmarking by sending 10,000 concurrent requests per second. Any solution for this?
How to send 10,000 concurrent POST requests using Python?
1.2
0
1
2,276
13,513,103
2012-11-22T12:49:00.000
0
0
1
0
python,testing,load-testing,performance-testing
13,514,261
4
false
0
0
Programmatically you can create threads and do a URL fetch in every thread, but I'm not sure you can create 10,000 requests that way.
2
3
0
I developed a REST server and hosted it on my virtual machine's nginx server. Now I want to do benchmarking by sending 10,000 concurrent requests per second. Any solution for this?
How to send 10,000 concurrent POST requests using Python?
0
0
1
2,276
13,514,119
2012-11-22T13:49:00.000
4
0
1
0
python,virtualenv
13,514,167
1
true
0
0
As long as the module you wish to edit is written in pure Python, changing the source code in the virtualenv's site-packages directory should work just fine. If the module is a C-extension, then you will need to recompile the module before the changes take effect. Edit: Note that if you are working with the module in an interactive session, you will need to reload the module in the session (and re-instantiate any object instances based on that module) each time you make a change.
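For the interactive-session caveat, importlib.reload is the usual tool. A self-contained sketch (the module name and contents are made up; sys.dont_write_bytecode just keeps the demo free of stale .pyc files):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True      # avoid stale bytecode in this demo

# Write a stand-in "installed" pure-Python module.
tmpdir = tempfile.mkdtemp()
mod_path = os.path.join(tmpdir, "mymod.py")
with open(mod_path, "w") as f:
    f.write("VALUE = 1\n")
sys.path.insert(0, tmpdir)

import mymod
print(mymod.VALUE)                  # 1

with open(mod_path, "w") as f:      # the in-place one-line edit
    f.write("VALUE = 2\n")
importlib.invalidate_caches()
importlib.reload(mymod)             # re-execute the changed source
print(mymod.VALUE)                  # 2
```

As the answer notes, objects created before the reload still reference the old code; re-instantiate them after reloading.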
1
5
0
I have a module installed with pip in a virtualenv. I want to experiment with a single change on one line of its code, and I wonder if it will work to go straight to the source file and change that line. If not, what is the easiest way to do so? Download the source, change it, and run python setup.py install inside the virtualenv? But would that install the module inside the virtualenv? And can I still remove it later using pip, or do I need to clean it up manually?
Change installed Python module in virtualenv
1.2
0
0
1,880
13,514,509
2012-11-22T14:11:00.000
5
0
0
0
python,sqlite,search
65,373,519
4
false
0
0
Just dump the db and search it:

% sqlite3 file_name .dump | grep 'my_search_string'

You could instead pipe through less, and then use / to search:

% sqlite3 file_name .dump | less
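The same dump-and-grep idea is available from Python via sqlite3's iterdump(), which emits the whole database as SQL text, covering every table and column. This sketch uses an in-memory database as a stand-in for the file under investigation:

```python
import sqlite3

# iterdump() yields the full SQL dump line by line, so a plain
# substring test effectively searches all tables and columns.
conn = sqlite3.connect(":memory:")  # replace with the forensic file path
conn.execute("CREATE TABLE notes (author TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'contains my_search_string')")
conn.commit()

hits = [line for line in conn.iterdump() if "my_search_string" in line]
print(hits)
```

For a forensics tool this has the advantage of needing no knowledge of the schema in advance.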
2
13
0
Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
Search Sqlite Database - All Tables and Columns
0.244919
1
0
15,981
13,514,509
2012-11-22T14:11:00.000
4
0
0
0
python,sqlite,search
59,407,127
4
false
0
0
@MrWorf's answer didn't work for my sqlite file (an .exb file from Evernote), but this similar method worked:

Open the file with DB Browser for SQLite: sqlitebrowser mynotes.exb

File / Export to SQL file (this creates mynotes.exb.sql)

grep 'STRING I WANT' mynotes.exb.sql
2
13
0
Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
Search Sqlite Database - All Tables and Columns
0.197375
1
0
15,981
13,514,805
2012-11-22T14:29:00.000
3
0
0
0
python,networking,pipe
13,519,364
2
false
0
0
'Broken pipe' means you have written to a connection that has already been closed by the other end. So, you must have done that somehow.
1
1
0
I'm writing a peer-to-peer program that requires the network to be fully connected. However, when I test this locally and bring up about 20 nodes, some nodes successfully create a socket to other nodes, but when writing immediately afterwards a broken pipe error occurs. This only happens when I start all nodes one right after the other; if I sleep for about a second I don't see this problem. I have logic to deal with two nodes that both open sockets to each other, which may be buggy, though I do see it operating properly with fewer nodes. Is this a limitation of testing locally?
Reason for broken pipes
0.291313
0
1
2,008
13,515,048
2012-11-22T14:41:00.000
1
0
1
0
python,debugging,io,refactoring
13,516,870
5
false
0
0
In harsh circumstances (output done in some weird binary libraries) you could also use strace -e write (and more options). If you do not read strace's output, the straced program waits until you do, so you can send it a signal and see where it dies.
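A pure-Python variant of the same idea is to wrap sys.stdout so that every write records where it came from. This is a sketch for Python 3, where print() funnels through sys.stdout.write, and sys._getframe is a CPython-specific detail:

```python
import sys

class TracingStdout:
    """Forward writes to the real stdout, recording each caller's location."""
    def __init__(self, real):
        self.real = real
        self.seen = []                 # (filename, line number) per write
    def write(self, text):
        if text.strip():               # skip the bare "\n" writes of print()
            caller = sys._getframe(1)  # the frame that invoked print
            self.seen.append((caller.f_code.co_filename, caller.f_lineno))
        self.real.write(text)
    def flush(self):
        self.real.flush()

tracer = TracingStdout(sys.stdout)
sys.stdout = tracer
print("who printed this?")             # location recorded as a side effect
sys.stdout = tracer.real               # restore before inspecting
print(tracer.seen)
```

Instead of recording, you could raise an exception inside write() when a known-annoying substring appears, and the traceback will point straight at the offending print.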
1
4
0
I have a program depending on a large code base that prints a lot of irrelevant and annoying messages. I would like to clean them up a bit, but since their content is dynamically generated, I can't just grep for them. Is there a way to place a hook on the print statement? (I use Python 2.4, but I would be interested in results for any version.) Is there another way to find which "print" statement the output comes from?
How do you find where a print statement is located?
0.039979
0
0
724
13,515,813
2012-11-22T15:26:00.000
0
1
1
0
java,python,math
13,515,894
5
false
0
0
What kind of math operations do you need? Java already provides classes like java.math.BigDecimal or java.math.BigInteger that you can use to do basic stuff (addition, multiplication, etc.).
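On the Python side, which the question also allows, integers are arbitrary-precision out of the box, and the decimal module handles big non-integer math:

```python
from decimal import Decimal, getcontext

# Plain Python ints grow as needed: no library required.
n = 2 ** 100_000
print(len(str(n)))              # 30103 digits

# For non-integer math at a chosen precision, use decimal.
getcontext().prec = 50          # 50 significant digits
print(Decimal(1) / Decimal(7))
```

Numbers with tens of millions of digits are still representable this way, though individual operations at that size can take noticeable time.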
1
4
0
I am looking for some way to handle numbers that are tens of millions of digits long and to do math at that scale. I can use Java and a bit of Python, so a library for one of those would be handy, but a program that can handle numbers of this kind would also work. Does anyone have any suggestions? Thanks
Program or library to handle massive numbers
0
0
0
207
13,517,991
2012-11-22T18:02:00.000
3
0
1
0
python-3.x,urllib3
13,520,455
1
true
0
0
You need to install it for each version of Python you have - if pip installs it for Python 2.7, it won't be accessible from 3.2. There doesn't seem to be a pip-3.2 script, but you can try easy_install3 urllib3.
1
1
0
I have Python 2.7.3 installed alongside Python 3.2.3 on an Ubuntu system. I've installed urllib3 using pip and can import it from the python shell. When I open the python3 shell, I get a can't find module error when trying to import urllib3. help('modules') from within the shell also doesn't list urllib3. Any ideas on how to get python3 to recognize urllib3?
Python3 can't find urllib3
1.2
0
1
4,054
13,518,222
2012-11-22T18:21:00.000
0
0
0
0
python,django,django-models,naming-conventions
13,518,421
2
false
1
0
Depending on the structure of other parts of your code, you could use the name of the Django app, a package, or a module to further scope the names.
2
2
0
I have a pretty stupid question about model naming conventions in Django. Imagine a farmstead which has buildings which have rooms. Farmstead --> Buildings --> Rooms With Farmstead it is ok, let's call it a Farmstead. Next one: Building or FarmsteadBuilding? BuildingRoom, Room or FarmsteadBuildingRoom?
Django model naming conventions
0
0
0
931
13,518,222
2012-11-22T18:21:00.000
6
0
0
0
python,django,django-models,naming-conventions
13,518,370
2
true
1
0
If all your instances of Room belong to a Building (and there is no other kind of model, like Apartment), and all your instances of Building belong to a Farmstead (following the same idea), then just use the names Farmstead, Building and Room. It's not necessary to specify something that is already specified by your business logic.
2
2
0
I have a pretty stupid question about model naming conventions in Django. Imagine a farmstead which has buildings which have rooms. Farmstead --> Buildings --> Rooms With Farmstead it is ok, let's call it a Farmstead. Next one: Building or FarmsteadBuilding? BuildingRoom, Room or FarmsteadBuildingRoom?
Django model naming conventions
1.2
0
0
931
13,519,086
2012-11-22T19:38:00.000
1
0
0
0
python,gstreamer,python-gstreamer
13,529,688
3
false
1
0
What does "you'll need to thread your own gst object" mean? And what does "wait until the query succeeds" mean? State changes from NULL to PAUSED or PLAYING state are asynchronous. You will usually only be able to do a successful duration query once the pipeline is prerolled (so state >= PAUSED). When you get an ASYNC_DONE message on the pipeline's (playbin2's) GstBus, then you can query.
3
0
0
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. My player is the playbin2 element. When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definitely playing (the state is not NULL). Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but it gives me the same error.
element playbin2 query_position always returns query failed
0.066568
0
1
700
13,519,086
2012-11-22T19:38:00.000
0
0
0
0
python,gstreamer,python-gstreamer
13,529,066
3
false
1
0
I found it on my own. The problem was with threading. Apparently, you need to run your gst object in its own thread and simply wait until the query succeeds.
3
0
0
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. My player is the playbin2 element. When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definitely playing (the state is not NULL). Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but it gives me the same error.
element playbin2 query_position always returns query failed
0
0
1
700
13,519,086
2012-11-22T19:38:00.000
0
0
0
0
python,gstreamer,python-gstreamer
13,525,709
3
false
1
0
From what source are you streaming? If you query the position from the playbin2, I'd say you are doing everything right. Can you file a bug for GStreamer? Include a minimal Python snippet that exposes the problem and say which source you stream from (ideally a public one).
3
0
0
I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams. My player is the playbin2 element. When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed. The song is definitely playing (the state is not NULL). Does anyone have any experience with this? PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but it gives me the same error.
element playbin2 query_position always returns query failed
0
0
1
700
13,520,279
2012-11-22T21:34:00.000
0
1
0
0
python,unit-testing
13,520,332
3
false
0
0
You haven't explained what the calculations are, but I guess your program should be able to work with a subset of the big file as well. If this is the case, make a unit test which opens up a small file, does the calculations and produces some result, which you can verify is correct.
2
7
0
I've written a script that opens up a file, reads the content and does some operations and calculations and stores them in sets and dictionaries. How would I write a unit test for such a thing? My questions specifically are: Would I test that the file opened? The file is huge (it's the unix dictionary file). How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right? I have a feeling that this defeats the whole purpose of unit testing. I'm not taking any input through stdin.
Unit testing a script that opens a file
0
0
0
3,162
13,520,279
2012-11-22T21:34:00.000
6
1
0
0
python,unit-testing
14,733,487
3
false
0
0
You should refactor your code to be unit-testable. That, off the top of my head, would mean: Take the functionality of the file opening into a separate unit. Make that new unit receive the file name, and return the stream of the contents. Make your unit receive a stream and read it, instead of opening a file and reading it. Write a unit test for your main (calculation) unit. You would need to mock a stream, e.g. from a dictionary. Write several test cases, each time providing your unit with a different stream, and make sure your unit calculates the correct data for each input. Get your code coverage as close to 100% as you can; use nosetests for coverage. Finally, write a test for your 'stream provider'. Feed it with several files (store them in your test folder), and make sure your stream provider reads them correctly. Get the second unit's test coverage as close to 100% as you can. Now, and only now, commit your code.
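A sketch of that refactoring (the compute_stats name and its behaviour are invented for illustration): the calculation unit accepts any iterable of lines, so the test feeds it an in-memory stream instead of the real dictionary file:

```python
import io
import unittest

def compute_stats(stream):
    """Calculation unit: consumes lines, returns a set of words
    and a word -> length dictionary (stand-in for the real logic)."""
    words = set()
    lengths = {}
    for line in stream:
        word = line.strip()
        if word:
            words.add(word)
            lengths[word] = len(word)
    return words, lengths

class TestComputeStats(unittest.TestCase):
    def test_small_stream(self):
        # No file is opened; the "file" is an in-memory stream.
        fake_file = io.StringIO("apple\nbanana\n\ncherry\n")
        words, lengths = compute_stats(fake_file)
        self.assertEqual(words, {"apple", "banana", "cherry"})
        self.assertEqual(lengths["banana"], 6)
```

Because open() returns an iterable of lines too, the production code passes a real file object to the exact same function.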
2
7
0
I've written a script that opens up a file, reads the content and does some operations and calculations and stores them in sets and dictionaries. How would I write a unit test for such a thing? My questions specifically are: Would I test that the file opened? The file is huge (it's the unix dictionary file). How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right? I have a feeling that this defeats the whole purpose of unit testing. I'm not taking any input through stdin.
Unit testing a script that opens a file
1
0
0
3,162
13,520,319
2012-11-22T21:37:00.000
1
0
0
0
python,r,time-series
13,520,565
1
false
0
0
Calculate the derivative of your sample points: for example, for every 5 points (THRESHOLD!) calculate the slope of those five points with the least squares method (look it up on Wikipedia if you don't know it; any linear regression function uses it). When this slope is almost (THRESHOLD!) zero, there is a peak.
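A minimal version of that idea (the window size and threshold here are arbitrary choices, and note that a near-zero slope also flags flat valleys, not just peaks):

```python
def slope(ys):
    """Least-squares slope of ys against x = 0, 1, ..., n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def find_peaks(series, window=5, threshold=0.05):
    """Indices (window centers) where the local slope is near zero."""
    peaks = []
    for i in range(len(series) - window + 1):
        if abs(slope(series[i:i + window])) < threshold:
            peaks.append(i + window // 2)
    return peaks
```

For real data, numpy.polyfit(range(window), ys, 1) gives the same slope with less code.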
1
3
1
In a time series we can find peaks (min and max values). There are algorithms to find peaks. My question is: in Python, are there libraries for peak detection in time series data? Or something in R using RPy?
Peak detection in Python
0.197375
0
0
1,640
13,521,142
2012-11-22T23:08:00.000
0
0
1
0
python,nltk,context-free-grammar,part-of-speech
13,554,998
2
true
0
0
Couldn't find one. Had to implement my own parser using stacks. Honestly, it wasn't too much of a pain.
1
2
0
I have a text corpus that contains sentences represented as trees with their Part of Speech tags. I want to build a system that can probably learn a probabilistic grammar from this tree structure. Are there any inbuilt python modules that can tackle this, or do I have to build a parser?
Building a grammar from trees in python
1.2
0
0
745
13,522,556
2012-11-23T02:53:00.000
7
0
1
1
python,subprocess,pipe,wait,communicate
13,849,245
1
true
0
0
I suspect (the docs don't explicitly state it as of 2.6) that in the case where you don't use PIPEs, communicate() is reduced to wait(). So if you don't use PIPEs it should be OK to replace wait(). In the case where you do use PIPEs, you can overflow the in-memory buffer (see the communicate() note) just as you can fill up the OS pipe buffer, so neither one is going to work if you're dealing with a lot of output. On a practical note, I have had communicate (at least in 2.4) give me one character per line from programs whose output is line-based; that wasn't useful, to put it mildly. Also, what do you mean by "retcode is not needed"? I believe communicate() sets Popen.returncode just as wait() does.
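A short illustration of the safe pattern (Python 3 syntax, while the question cites the 2.x docs): with PIPE, communicate() both drains the pipes and sets returncode, so nothing is lost by preferring it over wait():

```python
import subprocess
import sys

# Child process that produces some output on stdout.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()      # drains both pipes, then reaps the child
assert out.decode().strip() == "hello from child"
assert proc.returncode == 0        # communicate() sets returncode too
```

Calling proc.wait() here instead could deadlock once the child writes more than one pipe buffer's worth of output.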
1
16
0
In the documentation of wait (http://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait), it says: Warning This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. From this, I think communicate could replace all usage of wait() if retcode is not needed. And even when stdout or stdin are not PIPE, I can also replace wait() with communicate(). Is that right? Thanks!
When should I use `wait` instead of `communicate` in subprocess?
1.2
0
0
6,735
13,522,975
2012-11-23T04:10:00.000
2
1
1
0
python-2.7,pygame
13,537,741
1
true
0
0
I found a solution. The latest version of Pygame is still able to play MPEG-1 files. The problem was that there are different encodings of MPEG-1. The ones I found that work so far are Any Video Converter and the Zamzar.com online converter. The downside to Zamzar is that it outputs a really small version of the original video. video.online-convert.com does not work.
1
0
0
I'm making a game using Python with PyGame module. I am trying to make an introduction screen for my game using a video that I made since it was easier to make a video than coding the animation for the intro screen. The Pygame movie module does not work as stated on their site so I cannot use that. I tried using Pymedia but I have no idea how to even get a video running since their documentation weren't that helpful. Do you guys know any sample code that uses Pymedia to play a video? Or any code at all that loads a video using python. Or if there's any other video module out there that is simple, please let me know. I'm totally stumped.
Playing Video Files in Python
1.2
0
0
1,309
13,523,115
2012-11-23T04:29:00.000
0
0
0
0
python,beautifulsoup,scrapy
13,560,416
2
false
1
0
Well, the answer is: you should try to parse a couple of pages with HtmlXPathSelector and then with Beautiful Soup, and gather some stats. Secondly, most people use Beautiful Soup or even lxml for parsing because they are already used to them. Scrapy's basic motive is crawling; if you are not comfortable with XPath you can go with Beautiful Soup or lxml (although the lxml package also supports XPath), or even plain regexes for parsing.
1
0
0
I am doing all my crawling in Scrapy. I have seen that many people use Beautiful Soup for parsing. I just wanted to know whether there is any advantage in terms of speed, efficiency, richer selectors etc. that would help me in creating spiders and crawlers, or whether Scrapy alone should be enough for me.
Can the use of Beautiful Soup with Scrapy increase the performance
0
0
0
1,925
13,523,789
2012-11-23T05:55:00.000
1
0
0
0
python,python-3.x
13,533,315
1
true
0
0
I encountered the same problem, and I solved it by chopping the file up and then sending the parts separately (load the file, send file[0:512], then send file[512:1024] and so on). Before sending the file I sent its length to the receiver so it would know when the transfer is done. I know this probably isn't the best way to do it, but I hope it helps you.
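The framing logic from this answer, pulled out of the socket code so it can be seen and tested on its own (the 4-byte big-endian length header is one possible convention, not anything standard):

```python
import struct

CHUNK = 512

def frame(data: bytes) -> list:
    """Prefix the payload with its length, then split into chunks,
    mimicking what would be written to a socket piece by piece."""
    header = struct.pack("!I", len(data))   # 4-byte big-endian length
    stream = header + data
    return [stream[i:i + CHUNK] for i in range(0, len(stream), CHUNK)]

def unframe(chunks) -> bytes:
    """Receiver side: read the length header, accumulate until done."""
    buf = b"".join(chunks)
    (length,) = struct.unpack("!I", buf[:4])
    payload = buf[4:4 + length]
    assert len(payload) == length, "incomplete transfer"
    return payload
```

On a real socket the receiver would loop on recv() until it has accumulated header-plus-length bytes, since recv() may return fewer bytes than requested.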
1
0
0
Can someone give me a brief idea of how to transfer large files over the internet? I tried with sockets, but it did not work. I am not sure what the size of the receive buffer should be; I tried with 1024 bytes. I send the data from one end and keep receiving it at the other end. Is there any other way apart from sockets I can use in Python?
Transfer a Big File Python
1.2
0
1
1,410
13,525,032
2012-11-23T07:57:00.000
0
0
0
0
python,pygtk
13,541,432
1
false
0
1
Not on PyGTK. If you are using GTK 3 then there is a CSS class, GtkProgressBar:pulse, but I can't find any documentation on how to change the width. It might be possible if you do some digging.
1
1
0
Is there any way to change the width of the block which is moving on gtk.ProgressBar.pulse()?
ProgressBar: set the width of the pulse block
0
0
0
216
13,525,482
2012-11-23T08:39:00.000
1
1
0
0
python,google-cloud-storage,file-exists
13,644,827
13
true
1
0
I guess there is no function to check directly whether a file exists given its path. I have created a function that uses the files.listdir() API function to list all the files in the bucket and match against the file name we want. It returns True if found and False if not.
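The idea reduces to a membership scan over a listing. A generic sketch (the listing callable stands in for whatever bucket-listing API is available; files.listdir belongs to the old App Engine Files API, and the stub names below are made up):

```python
def blob_exists(list_names, target):
    """True when `target` appears among the names yielded by the
    listing callable; short-circuits on the first match."""
    return any(name == target for name in list_names())

# Stub standing in for the real bucket-listing API call:
def fake_listing():
    return ["/gs/testbucket/a.txt", "/gs/testbucket/b.txt"]
```

Note that listing a whole bucket just to test one name scales with bucket size; if the real API supports a prefix filter or a direct stat/metadata call, that is much cheaper.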
1
40
0
I have a script where I want to check if a file exists in a bucket and if it doesn't then create one. I tried using os.path.exists(file_path) where file_path = "/gs/testbucket", but I got a file not found error. I know that I can use the files.listdir() API function to list all the files located at a path and then check if the file I want is one of them. But I was wondering whether there is another way to check whether the file exists.
How to check if file exists in Google Cloud Storage?
1.2
0
1
52,000
13,526,654
2012-11-23T09:57:00.000
0
0
0
0
python,opencv,tracking
13,530,687
2
true
0
0
For a foolproof tracker you need to combine more than one method. Here are some hints: If you have prior knowledge of the object, you can use template matching, but template matching is somewhat processing-intensive (if you are using a GPU you may get some benefit). From your write-up I presume the external light varies only to a lesser extent, so you can use OpenCV's goodFeaturesToTrack function and use optical flow to track only those points found by goodFeaturesToTrack in the following frames of the video. If the background is stable except for some brightness variation, and the object moves more than the background, you can subtract the previous frame from the present frame to get the position of the moving object; this is a fast and easy change-detection technique. Filtering contours based on area is a good idea, but try to add more features to the filtering criteria, e.g. ellipticity, aspect ratio of the bounding box, etc. Lastly, if you have any prior knowledge about the motion path of the object you can use a Kalman filter. If the background is mostly invariant, or variant only to some extent, you can try a Gaussian mixture model to model the background, with the moving ball as your foreground.
2
2
1
I want to track a multicolored object (4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up the filtered images. Then I filter the contours based on area. This method is quite stable most of the time, but when the external light varies a bit, the object is not recognized, as the hue values get messed up and it becomes difficult to track the object. Also, since I am filtering the contours based on area, I often have false positives and the object is sometimes not tracked properly. Do you have any suggestions for getting rid of these problems? Could I use some other method to track it instead of filtering individually on colors, adding up the images, and searching for contours?
Tracking a multicolor object
1.2
0
0
475
13,526,654
2012-11-23T09:57:00.000
0
0
0
0
python,opencv,tracking
13,534,342
2
false
0
0
You might try having multiple or an infinite number of models of the object depending upon the light sources available, and then classifying your object as either the object with one of the light sources or not the object. Note: this is a machine learning-type approach to the problem. Filtering with a Kalman, extended Kalman filter, or particle filter (depending on your application) would be a good idea, so that you can have a "memory" of the recently tracked features and have expectations for the next tracked color/feature in the near term (i.e. if you just saw the object, there is a high likelihood that it hasn't disappeared in the next frame). In general, this is a difficult problem that I have run into a few times doing robotics research. The only robust solution is to learn models and to confirm or deny them with what your system actually sees. Any number of machine learning approaches should work, but the easiest would probably be support vector machines. The most robust would probably be something like Gaussian Processes (if you want to do an infinite number of models). Good luck and don't get too frustrated; this is not an easy problem!
2
2
1
I want to track a multicolored object (4 colors). Currently, I am parsing the image into HSV and applying multiple color filter ranges on the camera feed and finally adding up the filtered images. Then I filter the contours based on area. This method is quite stable most of the time, but when the external light varies a bit, the object is not recognized, as the hue values get messed up and it becomes difficult to track the object. Also, since I am filtering the contours based on area, I often have false positives and the object is sometimes not tracked properly. Do you have any suggestions for getting rid of these problems? Could I use some other method to track it instead of filtering individually on colors, adding up the images, and searching for contours?
Tracking a multicolor object
0
0
0
475
13,532,170
2012-11-23T15:44:00.000
0
0
0
0
java,python,scrapy
13,584,941
2
true
1
0
I doubt you can run Twisted under Jython, and Scrapy is based on Twisted. I'm not sure what you want to do, but I recommend running scrapyd and using its web service interface to communicate with Java. Can you give us more details on what you want to achieve?
1
0
0
Is it possible to use Scrapy from within a Java project? With Jython for example, or maybe "indirect" solutions.
Scrapy inside java?
1.2
0
0
1,443
13,534,541
2012-11-23T19:01:00.000
0
1
0
1
python,linux,bash,ubuntu,sh
13,534,648
6
false
0
0
You can call poweroff from a script, as long as it's running with superuser privileges.
1
2
0
I want to write a script which can shut down a remote Ubuntu system. Actually, I want my VM to shut down safely when I shut down the main machine on which the VM is installed. Is there any way of doing this with sh scripts or a script written in a language like Python?
Script to Shutdown Ubuntu
0
0
0
4,767
13,535,680
2012-11-23T21:00:00.000
1
0
1
0
python,debugging
13,554,896
4
false
0
0
I don't know your case, but if you use threads or multiprocessing then your code is (usually) amenable to parallel processing. In difficult cases I first run everything by calling the function directly without a pool, catch the error there, and only then go back to using pools.
1
7
0
I have a python script that works with threads, processes, and connections to a database. When I run my script, python crashes. I cannot explicitly detect the case in which this happens. Now I am looking for tools to get more information when python crashes, or a viewer to see all my created processes/connections.
python debug tools for multiprocessing
0.049958
0
0
3,774
13,536,952
2012-11-23T23:39:00.000
2
1
0
0
python,winapi,pywin32,importerror,pythoncom
53,436,989
1
false
0
0
If you are using an IDE like I am (PyCharm), you should go to where Python is installed, e.g. C:\Users\***\AppData\Local\Programs\Python\Python37\Lib\site-packages. In this folder check for the folder named pywin32. Copy that folder and paste it into C:\Users\***\PycharmProjects\project\venv\Lib\site-packages. After that, restart your IDE, and it will import pywin32, as it did in my case. I hope this helps.
1
3
0
I am trying to import pythoncom, but it gives me this error: Traceback (most recent call last): File "F:/Documents and Settings/Emery/Desktop/Python 27/Try", line 2, in import pythoncom File "F:\Python27\lib\site-packages\pythoncom.py", line 2, in import pywintypes ImportError: No module named pywintypes I reinstalled Python win32, but it still doesn't fix it. Any help? Also, I am trying to access the pythoncom.PumpMessages() method, an alternative would be nice as well.
win32 Python - pythoncom error - ImportError: No module named pywintypes
0.379949
0
0
5,795
13,538,069
2012-11-24T03:50:00.000
1
0
0
0
python,pygtk
13,541,300
1
true
0
1
The button contains a label, which you can access with get_child() and related functions. You can change the label's font size using the normal method.
1
1
0
I am developing an Ubuntu app using the Unity tools (PyGTK + Python). I can change the label font size with no problem, but I cannot find the same thing for buttons. How can I change a button's font size? The purpose of this is that I am developing an on-screen keypad, so the numerical text needs to be bigger to be easier to see.
Glade + PyGTK Change Button Font Size
1.2
0
0
2,417
13,542,698
2012-11-24T15:38:00.000
0
1
1
0
python,apt
13,552,981
2
false
0
0
Using fork is just one possibility, I think. I've already tried to redirect sys.stdout and even sys.stderr: no joy, it doesn't work.
1
3
0
I use the python apt library and I would like the commit() function not to produce any output. I've searched the web and saw that the fork function can do the trick, but I don't know how to do that or whether another way exists. I don't use any GUI; I work in the terminal.
How to silence the commit function from the python apt library?
0
0
0
256
13,543,069
2012-11-24T16:25:00.000
1
0
0
0
python,random,graph,networkx,bellman-ford
13,544,567
4
true
0
0
I noticed that the generated graphs always have exactly one sink vertex, which is the first vertex. You can reverse the direction of all edges to get a graph with a single source vertex.
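The reversal trick, shown without networkx so the idea is plain: a DAG whose only sink is v becomes a DAG whose only source is v once every edge is flipped (edges here are (u, v, weight) triples, and weights may be negative):

```python
def reverse_edges(edges):
    """Flip every directed edge (u, v, w) -> (v, u, w)."""
    return [(v, u, w) for (u, v, w) in edges]

def sources(nodes, edges):
    """Vertices with no incoming edge."""
    has_incoming = {v for (_u, v, _w) in edges}
    return [n for n in nodes if n not in has_incoming]

# Small DAG where every edge points toward vertex 0 (the single sink):
nodes = [0, 1, 2, 3]
edges = [(1, 0, -2), (2, 0, 5), (3, 1, -1), (3, 2, 4)]
flipped = reverse_edges(edges)
```

With networkx the same flip is G.reverse(); reversing edges cannot create a cycle, so the result is still a DAG.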
1
5
1
I want to do an execution time analysis of the Bellman-Ford algorithm on a large number of graphs, and in order to do that I need to generate a large number of random DAGs with the possibility of negative edge weights. I am using networkx in Python. There are a lot of random graph generators in the networkx library, but which one returns a directed graph with edge weights and a single source vertex? I am using networkx.generators.directed.gnc_graph(), but that does not guarantee a single source vertex. Is there a way to do this, with or without networkx?
how to create random single source random acyclic directed graphs with negative edge weights in python
1.2
0
0
4,642
13,543,977
2012-11-24T17:59:00.000
4
0
1
0
python,math,geometry,trigonometry
13,544,035
2
false
0
0
Sure, the math module has atan2. math.atan2(y, x) is the angle from (0, 0) to (x, y). Also, math.hypot(x, y) is the distance from (0, 0) to (x, y).
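Applied to the two points from the question (a sketch; the angle comes back in radians, measured counterclockwise from the positive x axis):

```python
import math

A = (560023.44957588764, 6362057.3904932579)
B = (560036.44957588764, 6362071.8904932579)

dx = B[0] - A[0]                  # 13.0
dy = B[1] - A[1]                  # 14.5
angle = math.atan2(dy, dx)        # radians; math.degrees() converts
distance = math.hypot(dx, dy)     # same as sqrt(dx*dx + dy*dy)
```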
1
2
0
First of all, I apologize to post this easy question. Probably there is a module to compute angle and distance between two points. A = (560023.44957588764,6362057.3904932579) B = (560036.44957588764,6362071.8904932579)
Python: Does a module exist which already find an angle and the distance between two points?
0.379949
0
0
4,396
13,544,061
2012-11-24T18:10:00.000
1
1
0
0
python,python-2.7,pycrypto
13,544,093
2
false
0
0
The compression method used by tar -z relies on repeating patterns in the input file, replacing these patterns by a count of how many times the pattern repeated (grossly simplified). However, when you encrypt a file, you are basically trying to hide any repeating patterns in as much 'random'-looking noise as possible. That makes your file nearly incompressible. Combine that with the overhead of the archive and compression file format (metadata, etc.) and your file actually ends up slightly larger instead. You should reverse the process; compress first, then encrypt, and you'll increase the chances you end up with a smaller payload significantly.
2
1
0
I created an encrypted file from a text file in Python with beefish; beefish uses PyCrypto. My source text file is 33742 bytes and the encrypted version is 33752. That's OK so far, but when I compress test.enc (the encrypted test file) with tar -czvf, the final file is 33989 bytes. Why does the compression not work when the source file is encrypted? So far the only option seems to be to compress first and then encrypt, because then the file stays small.
compressed encrypted file is bigger then source
0.099668
0
0
494
13,544,061
2012-11-24T18:10:00.000
7
1
0
0
python,python-2.7,pycrypto
13,544,079
2
true
0
0
Compression works by identifying patterns in the data. Since you can't identify patterns in encrypted data (that's the whole point), you can't compress it. For a perfect encryption algorithm that produced a 33,742 byte output, ideally all you would be able to determine about the decrypted original data is that it can fit in 33,742 bytes, but no more than that. If you could compress it to, say, 31,400 bytes, then you would immediately know the input data was not, say, 32,000 bytes of random data since random data is patternless and thus incompressible. That would indicate a failure on the part of the encryption scheme. It's nobody's business whether the decrypted data is random or not.
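The effect is easy to demonstrate with zlib; os.urandom stands in for ciphertext here, since good ciphertext is statistically indistinguishable from random bytes:

```python
import os
import zlib

patterned = b"the quick brown fox " * 2_000   # 40,000 bytes of repetition
randomish = os.urandom(40_000)                # stand-in for ciphertext

small = zlib.compress(patterned)   # collapses to a tiny fraction
big = zlib.compress(randomish)     # slightly LARGER than the input,
                                   # due to the format's own overhead
```

This is why compress-then-encrypt is the usual order: compression needs the patterns that encryption is designed to destroy.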
2
1
0
I created an encrypted file from a text file in Python with beefish; beefish uses PyCrypto. My source text file is 33742 bytes and the encrypted version is 33752. That's OK so far, but when I compress test.enc (the encrypted test file) with tar -czvf, the final file is 33989 bytes. Why does the compression not work when the source file is encrypted? So far the only option seems to be to compress first and then encrypt, because then the file stays small.
compressed encrypted file is bigger then source
1.2
0
0
494
13,544,715
2012-11-24T19:23:00.000
0
0
0
0
python,linux,apache,web2py
13,545,399
1
false
1
0
Try testing the proxy theory by ssh -D tunneling to a server outside the proxy and seeing if that works for you.
1
0
0
I have deployed a web2py application on a server running the Apache web server. All seems to be working fine, except that the web2py modules are not able to connect to an external website. In the web2py admin page, I get the following errors: 1. Unable to check for upgrades 2. Unable to download because: I am using web2py 1.9.9 on CentOS 5. I am also behind an institute proxy. I am guessing that the issue has something to do with the proxy configuration.
Web2py unable to access internet [connection refused]
0
0
1
412
13,546,052
2012-11-24T22:00:00.000
0
0
1
0
java,python,classpath
13,546,113
4
false
0
0
If you are using Linux, you can use the shell command type java and parse its output.
3
6
0
What is the best way to locate the jar containing core Java classes such as java.lang.Object? I have been using JAVA_HOME, but it turns out that that only works if the JDK is installed. How can I just find the standard library location that would be used if java were run from the commandline? Also, I need to do this from a Python script, so please don't respond with Java libraries.
Locate Java standard library
0
0
0
302
13,546,052
2012-11-24T22:00:00.000
0
0
1
0
java,python,classpath
13,546,607
4
false
0
0
Python has the means to find the location of the java or java.exe executable. Follow all symbolic links to reveal the true location. My Python is rusty, but I think shutil.which and os.path.realpath will do the job. The core classes will be in ../lib/rt.jar relative to the java executable.
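A sketch of that approach; note the ../lib layout relative to the launcher is only a convention (and rt.jar was removed in Java 9, replaced by jimage modules), so treat the result as a guess to verify, not a guarantee:

```python
import os
import shutil

def locate_java_runtime():
    """Best-effort: find the `java` launcher on PATH, resolve symlinks
    (e.g. /etc/alternatives on Linux), and guess the runtime library
    directory relative to it. Returns None when java is not installed."""
    exe = shutil.which("java")
    if exe is None:
        return None
    real = os.path.realpath(exe)              # follow all symlink hops
    home = os.path.dirname(os.path.dirname(real))
    return os.path.join(home, "lib")
```

On a machine with rt.jar, you would then check os.path.isfile(os.path.join(result, "rt.jar")) before trusting the guess.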
3
6
0
What is the best way to locate the jar containing core Java classes such as java.lang.Object? I have been using JAVA_HOME, but it turns out that that only works if the JDK is installed. How can I just find the standard library location that would be used if java were run from the commandline? Also, I need to do this from a Python script, so please don't respond with Java libraries.
Locate Java standard library
0
0
0
302
13,546,052
2012-11-24T22:00:00.000
1
0
1
0
java,python,classpath
13,998,264
4
false
0
0
Since there are no explicit restrictions on where the internal classes are placed, you should try to handle all the possible places (or as many as possible). First of all, you should definitely look for java in PATH. Unfortunately, that will sometimes fail; for instance, you can often find java.exe in C:\Windows\System32. Fortunately, in the case of Windows you can use the registry to get a list of installed JREs. If you have found some jars which might contain the basic classes, you can use zip or jar to check that java/lang/Object.class is there.
3
6
0
What is the best way to locate the jar containing core Java classes such as java.lang.Object? I have been using JAVA_HOME, but it turns out that that only works if the JDK is installed. How can I just find the standard library location that would be used if java were run from the commandline? Also, I need to do this from a Python script, so please don't respond with Java libraries.
Locate Java standard library
0.049958
0
0
302
13,547,811
2012-11-25T02:41:00.000
0
0
0
0
python,python-2.7,encryption
33,747,010
2
false
0
0
URL encryption will not save you in this situation. As your software runs in the client machine and somehow decrypts this URL and sends an HTTP request to it, any kid using Wireshark will be able to see your URL. If the design of your system requires sensitive URLs, the safer way probably involves changes in the design of your HTTP server itself! You have to structure your system in a way that the URLs are not sensitive because you cannot control them. As soon as they are used by your code, they can be captured.
2
0
0
I'm planning to compile my application to an executable file using py2exe. However, I have sensitive URL links in my application that I would like to remain hidden, as in encrypted. Even if my application is decompiled, the links should remain encrypted. How would I get, say, urllib2 to open an encrypted link? Any help would be appreciated, and/or example code that could point me in the right direction. Thanks!
Encrypting URLs in Python
0
0
0
236
13,547,811
2012-11-25T02:41:00.000
0
0
0
0
python,python-2.7,encryption
13,547,837
2
false
0
0
I don't think urllib2 has an option like that, though what you can do is save the links somewhere else (say a simple database), hash them (like a password), and when urllib2 is about to open a link, check it against the stored hash, something like user authentication.
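If verification is really all that is needed (the program never has to recover the URL, only recognize it), a one-way hash does the job. Note this is hashing, not encryption, and the example URL below is made up; also, the moment the program actually opens a URL the plaintext is visible on the wire anyway:

```python
import hashlib

def digest(url: str) -> str:
    """One-way fingerprint of a URL; store this instead of the URL
    when you only ever need to verify a link, never to recover it."""
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

# Digest computed once and shipped with the application:
STORED = digest("https://example.com/secret-endpoint")

def is_known(url: str) -> bool:
    return digest(url) == STORED
```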
2
0
0
I'm planning to compile my application to an executable file using py2exe. However, I have sensitive URL links in my application that I would like to remain hidden, as in encrypted. Even if my application is decompiled, the links should remain encrypted. How would I get, say, urllib2 to open an encrypted link? Any help would be appreciated, and/or example code that could point me in the right direction. Thanks!
Encrypting URLs in Python
0
0
0
236
13,548,239
2012-11-25T04:20:00.000
-1
0
0
0
java,python,html
13,548,265
3
false
1
0
I don't know anything about Java or Python, but could you have it parse the html code and look for something like 'background-color: < color >'?
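In Python, that idea might look like the following naive regular-expression sketch (inline styles only; real pages need a CSS-aware parser, since colors can also come from stylesheets):

```python
import re

html = '<body style="background-color: #ffcc00">...</body>'

# Naive pattern: match "background" or "background-color" followed by a
# hex value or a color keyword. Fragile by design, as the answer suggests.
match = re.search(r'background(?:-color)?\s*:\s*(#[0-9a-fA-F]{3,8}|[a-zA-Z]+)', html)
color = match.group(1) if match else None
print(color)  # -> #ffcc00
```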
1
1
0
I am trying to detect color of different elements in a webpage(saved on machine). Currently I am trying to write a code in python. The initial approach which I followed is: find color word in html file in different tags using regular expressions. try to read the hex value. But this approach is very stupid. I am new to website design, can you please help me with this.
Detect background color of a website
-0.066568
0
0
1,034
13,548,590
2012-11-25T05:29:00.000
0
0
0
1
python,google-app-engine,web2py
13,551,914
1
false
1
0
App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (/_ah/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'll never see empty tables).
1
1
0
I have created an app using web2py and have declared certain new table in it using the syntax db.define_table() but the tables created are not visible when I run the app in Google App Engine even on my local server. The tables that web2py creates by itself like auth_user and others in auth are available. What am I missing here? I have declared the new table in db.py in my application. Thanks in advance
New tables created in web2py not seen when running in Google app Engine
0
1
0
100
13,553,174
2012-11-25T16:51:00.000
2
1
0
0
c++,python,boost,boost-python
13,558,179
1
false
0
1
Boost python is useful for exposing C++ objects to python. Since you're talking about interacting with an already running application from python, and the lifetime of the script is shorter than the lifetime of the game server, I don't think boost python is what you're looking for, but rather some form of interprocess communication. Whilst you could create your IPC mechanism in C++, and then expose it to python using boost python, I doubt this is what you want to do.
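For illustration, on the Python side the IPC could be as plain as a TCP socket to the running server. This is a self-contained sketch where a thread stands in for the game server; the command protocol and names are made up:

```python
import socket
import threading

ready = threading.Event()
info = {}

def tiny_server():
    """Stand-in for the always-running game server's command socket."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # pick any free port
    srv.listen(1)
    info["port"] = srv.getsockname()[1]
    ready.set()                     # tell the client where to connect
    conn, _ = srv.accept()
    cmd = conn.recv(1024)
    conn.sendall(b"ok:" + cmd)      # acknowledge the command
    conn.close()
    srv.close()

threading.Thread(target=tiny_server, daemon=True).start()
ready.wait()

# The "Python script" side: connect, send a command, read the reply.
cli = socket.create_connection(("127.0.0.1", info["port"]))
cli.sendall(b"spawn_npc")
reply = b""
while True:
    chunk = cli.recv(1024)
    if not chunk:
        break
    reply += chunk
cli.close()
print(reply)  # -> b'ok:spawn_npc'
```

In the real setup the server side of this socket would live in the C++ process, where Boost.Python is not needed at all.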
1
1
0
I read few Boost.Python tutorials and I know how to call C++ function from Python. But what I want to do is create C++ application which will be running in the background all the time and Python script that will be able to call C++ function from that instance of C++ application. The C++ application will be a game server and it has to run all the time. I know I could use sockets/shared memory etc. for this kind of communication but is it possible to make it with Boost.Python?
Boost.Python - communication with running C++ program
0.379949
0
0
426
13,554,241
2012-11-25T18:49:00.000
0
1
1
1
python,module,installation
13,555,251
2
false
0
0
Install the module with sudo pip-2.7 install guess_language, then validate the import and functionality: $ python2.7 >>> import guess_language >>> print guess_language.guessLanguage(u"שלום לכם") he
1
1
0
Is there any way to force python module to be installed in the following directory? /usr/lib/python2.7
Force python module to be installed in certain directory
0
0
0
70
13,555,278
2012-11-25T20:40:00.000
1
1
0
0
c++,python,c,plugins,shared-libraries
13,555,348
1
false
0
1
Write your application in Python, then you can have a folder for your plugins. Your application searches for them by checking the directory/traversing the plugin tree. Then import them via "import" or use ctypes for a .so/.dll, or even easier: you can use boost::python for creating a .so/.dll that you can 'import' like a normal python module. Don't use C++ and try to do scripting in Python - that really sucks, you will regret it. ;)
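A minimal sketch of the discovery-and-import part in pure Python (directory layout and plugin names are made up; a `.so`/`.dll` would go through `ctypes.CDLL` instead of `importlib`):

```python
import importlib.util
import os
import tempfile

def load_py_plugins(plugin_dir):
    """Import every .py file found in plugin_dir and return the modules."""
    plugins = []
    for name in sorted(os.listdir(plugin_dir)):
        if name.endswith(".py"):
            path = os.path.join(plugin_dir, name)
            spec = importlib.util.spec_from_file_location(name[:-3], path)
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            plugins.append(mod)
    return plugins

# Demo: drop one plugin into a temp folder and discover it at runtime.
d = tempfile.mkdtemp()
with open(os.path.join(d, "blur.py"), "w") as f:
    f.write("def process(images):\n    return images  # no-op filter\n")

mods = load_py_plugins(d)
print([m.process(["img"]) for m in mods])  # -> [['img']]
```

Each plugin here is expected to expose a `process(images)` function, matching the single-method interface described in the question.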
1
0
0
I want to create an application that is capable of loading plugins. The twist is that I want to be able to create plugins in both C/C++ and Python. So I've started thinking about this and have a few questions that I'm hoping people can help me with. My first question is whether I need to use C/C++ for the "core" of the application (the part that actually does the loading of the plugins)? This is my feeling at least, I would think that implementing the core in Python would result in a nasty performance hit, but it would probably simplify loading the plugins dynamically. Second question is how would I go about defining the plugin interface for C/C++ on one hand and Python on the other? The plugin interface would be rather simple, just a single method that accepts a list of image as a parameter and returns a list of images as a return value. I will probably use the OpenCV image type within the plugins which exists for both C/C++ and Python. Finally, I would like the application to dynamically discover plugins. So if you place either a .py file or a shared library file (.so/.dll) in this directory, the application would be able to produce a list of available plugins at runtime. I found something in the Boost library called Boost.Extension (http://boost-extension.redshoelace.com/docs/boost/extension/index.html) but unfortunately it doesn't seem to be a part of the official Boost library and it also seems to be a bit stale now. On top of that, I don't know how well it would work with Python, that is, how easy it would be to create Python plugins that fit into this mechanism. As a side note, I imagine the application split into two "parts". One part is a stripped down core (loading and invoking plugin instances from a "recipe"). The other part is the core plus a GUI that I plan on writing in Python (using PySide). This GUI will enable the user to define the aforementioned "recipes". This GUI part would require the core to be able to provide a list of available plugins. 
Sorry for the long winded "question". I guess I'm hoping for more of a discussion and of course if anybody knows of something similar that would help me I would very much appreciate a pointer. I would also appreciate any concise and to the point reading material about something similar (such as integrating C/C++ and Python etc).
Application that can load both C/C++ and Python plugins
0.197375
0
0
648
13,559,133
2012-11-26T05:28:00.000
0
1
0
1
python
13,568,384
5
false
0
0
Using catdoc/catppt with subprocess to open doc files and ppt files.
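A hedged sketch of that approach, assuming catppt from the catdoc package is on the PATH (it may not be installed, so the helper degrades to None):

```python
import shutil
import subprocess

def ppt_to_text(path):
    """Run catppt (from the catdoc package) on a .ppt file.
    Returns the extracted text, or None if the tool is missing or fails."""
    if shutil.which("catppt") is None:
        return None  # catdoc/catppt not installed
    try:
        return subprocess.check_output(["catppt", path]).decode("utf-8", "replace")
    except subprocess.CalledProcessError:
        return None

text = ppt_to_text("slides.ppt")  # hypothetical file name
```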
1
3
0
I want to open a ppt file using Python on linux, (like python open a .txt file). I know win32com, but I am working on linux. So, What do I need to do?
How to open ppt file using Python
0
0
0
14,562
13,560,920
2012-11-26T08:11:00.000
1
0
1
0
python,unicode
13,560,959
3
false
0
0
encode converts from a unicode string to a sequence of bytes. decode converts from a sequence of bytes to a unicode string. You want decode, because your data are already encoded. More generally, if you're reading a string from an external source, you always want to decode, because there's no such thing as a "unicode string" out there in the world. There are only representations of that unicode string in various encodings. Unicode strings are like a Platonic ideal that can only be transmitted through the corporeal medium of encodings.
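For the bytes in the question the encoding is actually Latin-1, not UTF-8 (0xf1 is ñ in Latin-1, and the sequence is not valid UTF-8), so decode with that codec first:

```python
raw = b'Compa\xf1\xeda Dominicana de Tel\xe9fonos'

# These bytes are not valid UTF-8; decoding as Latin-1 recovers the text.
text = raw.decode('latin-1')
print(text)  # -> Compañía Dominicana de Teléfonos

# Once decoded to a unicode string, re-encode to UTF-8 if that is
# the representation you need to store.
utf8_bytes = text.encode('utf-8')
```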
1
0
0
I have some external data I need to import. How do I encode the input string as unicode/utf8? Here is an example of a probematic line >>>'Compa\xf1\xeda Dominicana de Tel\xe9fonos, C. por A. - CODETEL'.encode("utf8") Traceback (most recent call last): File "", line 1, in UnicodeDecodeError: 'ascii' codec can't decode byte 0xf1 in position 5: ordinal not in range(128)
UTF8 encoding error
0.066568
0
0
2,722
13,571,993
2012-11-26T19:47:00.000
0
1
0
1
python,sublimetext2
20,480,227
1
true
0
0
Yes, you might want to give more detail. Have you made sure you have saved the file as .py? Try something simple like print "Hello" and then see if this works.
1
2
0
I am a new user of Sublime text. It has been working fine for a few days until it began to refuse to compile anything and I don't know where the problem is. I wrote python programs and pressed cmd+b and nothing happened. When I try to launch repl for this file - that also doesn't work. I haven't installed any plugins and before this issue all has been working well. Any suggestions on how to identify/fix the problem are greatly appreciated
Build command in sublime text has stopped functioning
1.2
0
0
182
13,572,454
2012-11-26T20:17:00.000
0
1
1
0
python,nlp,nltk
13,572,969
3
false
0
0
You certainly can't do it in a general way for all languages, because different languages render sounds to text differently. For example, the Hungarian word "vagy" looks like 2 syllables to an English speaker, but it's only one. And the English word "bike" would naturally be read as 2 syllables by speakers of many other languages. Furthermore, for English you probably can't do this very accurately without a dictionary anyway, because English has so much bizarre variation in its spelling. For example, we pronounce the "oe" in "poet" as two distinct syllables, but only one in "does". This is probably true of some other languages as well.
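You can still get a rough estimate by counting runs of vowel letters, with exactly the failure modes described above (the vowel set here is an ad-hoc choice for English-like spellings):

```python
import re

def rough_syllables(word):
    """Count runs of vowel letters -- a crude, language-naive estimate."""
    return len(re.findall(r'[aeiouy]+', word.lower()))

print(rough_syllables("banana"))  # -> 3 (correct)
print(rough_syllables("bike"))    # -> 2 (wrong: the silent 'e' is counted)
```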
2
3
0
CMUdict works for the english language, but what if I want to count the syllables of content in another language?
Is there any way in python to count syllables without the use of a dictionary?
0
0
0
1,881
13,572,454
2012-11-26T20:17:00.000
2
1
1
0
python,nlp,nltk
13,572,478
3
false
0
0
In general, no. For some languages there might be, but if you don't have a dictionary you'd need knowledge of those languages' linguistic structure. How words are divided into syllables varies from language to language.
2
3
0
CMUdict works for the english language, but what if I want to count the syllables of content in another language?
Is there any way in python to count syllables without the use of a dictionary?
0.132549
0
0
1,881
13,573,359
2012-11-26T21:20:00.000
2
0
0
0
python,linux,mysql-python,bluehost
13,573,647
2
true
1
0
I think you upgraded your OS installation, which in turn upgraded libmysqlclient and broke the native extension. What you can do is reinstall libmysqlclient16 (how to do it depends on your particular OS) and that should fix your issue. The other approach would be to uninstall the MySQLdb module and reinstall it, forcing Python to compile it against the newer library.
2
2
0
I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?
Python module issue
1.2
1
0
2,446
13,573,359
2012-11-26T21:20:00.000
0
0
0
0
python,linux,mysql-python,bluehost
13,591,200
2
false
1
0
You were right. Bluehost upgraded MySQL. Here is what I did: 1) remove the "build" directory in the "MySQL-python-1.2.3" directory 2) remove the egg 3) build the module again with "python setup.py build" 4) install the module again with "python setup.py install --prefix=$HOME/.local" Moral of the story for me: remove the old build artifacts when reinstalling a module.
2
2
0
I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python(dont know if this can even be done). I am concerned that if I use the default install I wont have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure changes as possible?
Python module issue
0
1
0
2,446
13,574,528
2012-11-26T22:51:00.000
5
0
1
0
python,string,copy
13,574,540
1
true
0
0
As long as you keep at least one reference to the string, it will not be garbage collected (and if you have no references left, then you have no way of accessing it anyway).
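To see this concretely, here is a small demonstration using the names from the question: the class-level set keeps the string reachable even after the instance is gone.

```python
import gc

class MyClass:
    my_set = set()

class Thing:
    def __init__(self, s):
        self.some_string_property = s
        MyClass.my_set.add(self.some_string_property)

t = Thing("hello")
del t          # the instance becomes garbage...
gc.collect()   # ...and is collected, but the set still holds a reference
print("hello" in MyClass.my_set)  # -> True
```

No copy is needed: the set's reference alone is enough to keep the string alive.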
1
0
0
I have a class variable my_set which is a set, and into which I am adding strings: MyClass.my_set.add(self.some_string_property) My problem is that the instance of the class that runs the above may get garbage collected at any time, and when that happens I think I lose its some_string_property in my_set class variable. In order to preserve some_string_property in my_set for each instance, I need to create a copy of it and store that copy into my_set. What is the right way to do so? I've tried the copy module but it doesn't work on strings.
Create a copy of string
1.2
0
0
259
13,575,952
2012-11-27T01:13:00.000
3
0
1
0
python,random
13,575,968
2
false
0
0
I did some more research and found that I can use random.choice([1, 4, 7, 9, 13, 42]), which will randomly pick an item from the list.
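For completeness, the pattern in full:

```python
import random

choices = [1, 4, 7, 9, 13, 42]
picked = random.choice(choices)  # uniform pick from exactly these values
print(picked in choices)  # -> True
```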
1
1
0
I need to randomly select from the numbers [1, 4, 7, 9, 13, 42]. I can't use random.randint(1,42) because it would give me some of the numbers in-between. How do I select from only those numbers in my list?
How do you get a random number from a non-consecutive set?
0.291313
0
0
1,024
13,576,887
2012-11-27T03:20:00.000
1
0
0
1
python,ruby,terminal
13,577,108
2
false
0
0
Seems like you ought to just modify your $PATH to include /usr/local/Cellar before /usr/local/bin. Your shell will use the first one it finds.
2
1
0
Hey :) I'm working on a Mac with Mountain Lion, and installed both Ruby 1.9.3 and Python 2.7.3 from homebrew. However, which python and which ruby return that they are in /usr/local/bin/__, respectively. I would like them to read from /usr/local/Cellar/python or /usr/local/Cellar/ruby. How do I change their paths?
Changing default directories for python and ruby
0.099668
0
0
60
13,576,887
2012-11-27T03:20:00.000
0
0
0
1
python,ruby,terminal
13,577,066
2
false
0
0
I don't know on Mac, but on Linux they're set up as links to /usr/local/bin/*. If you wanted to change the symbolic link you could run the command ln -s /usr/local/Cellar/python /usr/local/bin/python, which would make a new symbolic link. Whether this works on OS X I can't confirm though. Another method you might want to try is just calling the Homebrew versions directly rather than making everything on your system use them. Or just make a symbolic link under a different name, such as ln -s /usr/local/Cellar/python /usr/local/bin/pythonH
2
1
0
Hey :) I'm working on a Mac with Mountain Lion, and installed both Ruby 1.9.3 and Python 2.7.3 from homebrew. However, which python and which ruby return that they are in /usr/local/bin/__, respectively. I would like them to read from /usr/local/Cellar/python or /usr/local/Cellar/ruby. How do I change their paths?
Changing default directories for python and ruby
0
0
0
60
13,577,922
2012-11-27T05:28:00.000
3
0
0
0
python,parsing,beautifulsoup,lxml
13,578,055
1
true
1
0
I don't really think this question makes a whole lot of sense. You need to give more explanation of what exactly your goals are. BeautifulSoup and lxml are two tools that in large part do the same things, but have different features and API philosophies and structure. It's not a matter of "which gives you more control," but rather "which is the right tool for the job?" I use both. I prefer the BeautifulSoup syntax, as I find it more natural, but I find that lxml is better when I'm trying to parse unknown quantities on the fly based on variables--e.g., generating XPath strings that include variable values, which I will then use to extract specific elements from varying pages. So really, it depends on what you're trying to do. TL;DR I find BeautifulSoup easier and more natural to use but lxml ultimately to be more powerful and versatile. Also, lxml wins the speed contest, no question.
1
1
0
I am learning to make spiders and crawlers. This spidering is my passion and I am going to do that for a long time. For parsing I am thinking of using BeautifulSoup. But some people say that if I use lxml, I will have more control. Now I don't know much. But I am ready to work hard even if using lxml is harder. But if that gives me full control then I am ready for it. So what is your opinion?
Will I have more control over my spider if I use lxml over BeautifulSoup?
1.2
0
1
120
13,578,170
2012-11-27T05:55:00.000
2
0
0
0
python,scrapy
13,584,851
1
true
1
0
CrawlSpider is a sub-class of BaseSpider: this is the class you need to extend if you want your spider to follow links according to the "Rule" list. "Crawler" is the main crawler used by CrawlerProcess. You will have to sub-class CrawlSpider in your spider, but I don't think you will have to touch Crawler.
1
3
0
I am new to Scrapy and quite confused about crawler and spider. It seems that both of them can crawl the website and parse items. There are a Crawler class(/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py) and a CrawlerSpider class(/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py) in Scrapy. Does anyone could tell me the differences between them? And which one should I use in what conditions? Thanks a lot in advance!
differences between scrapy.crawler and scrapy.spider?
1.2
0
1
1,470
13,584,524
2012-11-27T12:43:00.000
0
0
0
0
python,django,heroku
13,678,321
2
false
1
0
Why not store all of the assets on S3? It sounds to me that they don't really need to be part of the application at all, but external resources that the application references.
2
2
0
In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked: I have a staging site on a server (Webfaction or other) Designer accesses that site and edits templates and assets to his satisfaction I SSH in regularly to checkin everything into source control, update files from upstream, resolve conflicts It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either. Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical.
Minimal effort setup of Django for webdesigner
0
0
0
215
13,584,524
2012-11-27T12:43:00.000
0
0
0
0
python,django,heroku
13,584,919
2
false
1
0
How about a static 'showcase' site where all possible UI elements, templates, etc are shown using dummy content. The designer can connect, edit stuff and you merge in the changes in the end. Another option would be a test server with the full application running (kind of like you did it before) but with the option to connect via FTP or whatever the designer prefers.
2
2
0
In the old world I had a pretty ideal development setup going to work together with a webdesigner. Keep in mind we mostly do small/fast projects, so this is how it worked: I have a staging site on a server (Webfaction or other) Designer accesses that site and edits templates and assets to his satisfaction I SSH in regularly to checkin everything into source control, update files from upstream, resolve conflicts It works brilliantly because the designer does not need to learn git, python, package tools, syncdb, migrations etc. And there's only the one so we don't have any conflicts on staging either. Now the problem is in the new world under Heroku, this is not possible. Or is it? In any way, I would like your advice on a development setup that caters to those who are not technical.
Minimal effort setup of Django for webdesigner
0
0
0
215
13,584,900
2012-11-27T13:04:00.000
1
0
0
0
python,opengl,ctypes,pyopengl
13,585,375
1
true
0
1
Are there any scenarios I'm missing? Buffer mappings obtained through glMapBuffer
1
1
1
PyOpenGL docs say: Because of the way OpenGL and ctypes handle, for instance, pointers, to array data, it is often necessary to ensure that a Python data-structure is retained (i.e. not garbage collected). This is done by storing the data in an array of data-values that are indexed by a context-specific key. The functions to provide this functionality are provided by the OpenGL.contextdata module. When exactly is it the case? One situation I've got in my mind is client-side vertex arrays back from OpenGL 1, but they have been replaced by buffer objects for years. A client side array isn't required any more after a buffer object is filled (= right after glBufferData returns, I pressume). Are there any scenarios I'm missing?
What is PyOpenGL's "context specific data"?
1.2
0
0
176
13,585,238
2012-11-27T13:25:00.000
1
0
0
1
python,multiprocessing
13,585,552
2
false
0
0
Use a pipe. Create two processes using the subprocess module: the first reads from the serial port and writes the set of hex codes to stdout. This is piped to the second process, which reads from stdin and updates the database.
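The wiring can be sketched like this, with tiny inline scripts standing in for the serial reader and the database writer (the real scripts would replace the `-c` snippets):

```python
import subprocess
import sys

# "Reader" stand-in: emits hex codes (the real one reads the serial port).
reader = subprocess.Popen(
    [sys.executable, "-c", "print('0xAB 0xCD')"],
    stdout=subprocess.PIPE)

# "Writer" stand-in: consumes stdin (the real one would INSERT into MySQL).
writer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('stored: ' + sys.stdin.read().strip())"],
    stdin=reader.stdout, stdout=subprocess.PIPE)

reader.stdout.close()  # let the writer see EOF when the reader exits
out = writer.communicate()[0].decode().strip()
print(out)  # -> stored: 0xAB 0xCD
```

Because the two processes run concurrently, the reader's sampling loop never blocks on database inserts; the pipe's buffer acts as the queue.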
1
2
0
Hi I am using Serial port communication to one of my device using python codes. It is sending a set of Hex code, Receives a set of data. process it. This data has to be stored in to a database. I have another script that has MYSQLdb library pushing it in to the database. If I do that in sequentially in one script i lose a lot in sampling rate. I can sample up to 32 data sets per second if I dont connect to a database and insert it in to the table. If I use Multiprocessing and try to run it my sampling rate goes to 0.75, because the parent process is waiting for the child to join. so how can i handle this situation. Is it possible to run them independently by using a queue to fill data?
Subprocess in Reading Serial Port Read
0.099668
0
0
2,531
13,587,602
2012-11-27T15:34:00.000
1
0
1
0
python,debugging
13,590,019
2
false
0
0
Besides decorators, for Python >= 3.0 (or new-style classes in 2.x) you could use the __getattribute__ method of a class, which is called every time you access any attribute (including methods) of the object. You could look through Lutz, "Learning Python", chapters 31 and 37 about it.
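A minimal sketch of the `__getattribute__` approach: a mixin that logs every method call without touching the methods themselves (class names here are made up):

```python
class Traced:
    def __getattribute__(self, name):
        # object.__getattribute__ avoids infinite recursion.
        attr = object.__getattribute__(self, name)
        if callable(attr):
            def wrapper(*args, **kwargs):
                print("call: %s args=%r" % (name, args))
                return attr(*args, **kwargs)
            return wrapper
        return attr

class Calc(Traced):
    def add(self, a, b):
        return a + b

result = Calc().add(2, 3)  # logs "call: add args=(2, 3)" before returning
print(result)  # -> 5
```

One caveat: this intercepts all attribute access on the instance, so it adds overhead to every lookup, not just the calls you care about.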
1
2
0
What is the easiest way to record function calls for debugging in Python? I'm usually interested in particular functions or all functions from a given class. Or sometimes even all functions called on a particular object attribute. Seeing the call arguments would be useful, too. I can imagine writing decorators for all that, but then I'd still have to modify the source code in different places. And writing a class decorator which modifies all methods isn't that straightforward. Is there a solution where I don't have to modify my source code? Ideally something which doesn't slow down Python too much.
Watching which function is called in Python
0.099668
0
0
241
13,588,125
2012-11-27T16:00:00.000
1
0
1
0
python,python-2.7,bytearray,stringio
13,588,563
1
false
0
0
Instead of using += you might have better luck creating a list and appending the data to the end of it. When all the data has been fetched you can then do a ''.join(list) to create a single string. This will perform much better, since string concatenations are inefficient: when you concatenate two strings, Python has to allocate new memory to store the new string. If you are doing a significant amount of concatenations this can be really slow. As the size of the string grows, the amount of time it takes to perform each concatenation increases, and if you are fetching a large amount of data this way it can overwhelm the processor and cause other operations to be delayed. I had a similar issue when I built a Python process that reassembled a TCP stream: every packet I captured I was adding to a string using concatenation. Once the string grew over a few MB, the packet-capturing library I was using started dropping frames because the CPU was spending so much time doing string concatenations. Once I switched to using a list and joining the result at the end, the problem went away. The reason that you do not have this problem with cStringIO.write is that it operates by creating a virtual file in memory and appends data to this file without having to reallocate space for a new string each time.
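The append-then-join pattern looks like this (shown with bytes, matching the byte-stream use case):

```python
chunks = []
for piece in (b"abc", b"def", b"ghi"):   # stand-in for the incoming stream
    chunks.append(piece)                 # amortized O(1), no string rebuild
data = b"".join(chunks)                  # one allocation at the very end
print(data)  # -> b'abcdefghi'
```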
1
0
0
I met a strange problem recently, hope someone here can help me out. I'm using Python2.7 in Ubuntu12.04, both python and OS are 64-bits. In my code, I need to keep appending incoming data stream to a byte array, I use self.data += incomingdata to implement this, where incomingdata is the data I received from hardware devices. Then I will unpack the byte array some time later to parse the received data. The appending and parsing operations are all protected with lock. The problem here is, when I use "+=" to append the byte stream, the data seems to be corrupted at some points (not happen consistently). There is no memory usage error, no overflow, etc. I monitored the memory usage of the program, it looks good. Then, when I change "+=" to cStringIO.write to implement the appending operation, no problem at all, though it seems to be slower than the "+=" operation. Can anyone tell me what is the exactly difference between cStringIo.write and "+=" when they are used to operate on byte streams? Will the "+=" operation cause any potential problems?
different between stringio.write and += on byte stream
0.197375
0
0
407
13,588,454
2012-11-27T16:17:00.000
43
0
1
1
python,cygwin
13,588,963
8
true
0
0
The problem is that due to the way that the Cygwin terminal (MinTTY) behaves, the native Windows build of Python doesn't realize that stdout is a terminal device -- it thinks it's a pipe, so it runs in non-interactive mode instead of interactive mode, and it fully buffers its output instead of line-buffering it. The reason that this is new is likely because in your previous Cygwin installation, you didn't have MinTTY, and the terminal used was just the standard Windows terminal. In order to fix this, you either need to run Python from a regular Windows terminal (Cmd.exe), or install the Cygwin version of Python instead of a native Windows build of Python. The Cygwin version (installable as a package via Cygwin's setup.exe) understands Cygwin terminals and acts appropriately when run through MinTTY. If the particular version of Python you want is not available as a Cygwin package, then you can also download the source code of Python and build it yourself under Cygwin. You'll need a Cygwin compiler toolchain if you don't already have one (GCC), but then I believe it should compile with a standard ./configure && make && make install command.
2
40
0
Installing a new Windows system, I've installed CygWin and 64 bit Python (2.7.3) in their default locations (c:\cygwin and c:\Python27\python), and added both the CygWin bin and the Python directory to my path (in the user variable PATH). From the normal command window, Python starts up perfectly, but when I invoke it from bash in the CygWin environment, it hangs, never giving my the input prompt. I've done this on other machines, previously, but always with older versions of Python (32 bits) and CygWin, and with Python in a decidely non-standard location. Has anyone else had this problem, or could someone tell me what it might be due to?
Invoking python under CygWin on Windows hangs
1.2
0
0
23,262
13,588,454
2012-11-27T16:17:00.000
-1
0
1
1
python,cygwin
48,180,292
8
false
0
0
Reinstall mintty with cygwin setup. Didn't have to use python -i after that.
2
40
0
Installing a new Windows system, I've installed CygWin and 64 bit Python (2.7.3) in their default locations (c:\cygwin and c:\Python27\python), and added both the CygWin bin and the Python directory to my path (in the user variable PATH). From the normal command window, Python starts up perfectly, but when I invoke it from bash in the CygWin environment, it hangs, never giving my the input prompt. I've done this on other machines, previously, but always with older versions of Python (32 bits) and CygWin, and with Python in a decidely non-standard location. Has anyone else had this problem, or could someone tell me what it might be due to?
Invoking python under CygWin on Windows hangs
-0.024995
0
0
23,262
13,588,908
2012-11-27T16:41:00.000
3
0
0
0
python,tkinter
13,590,474
3
false
0
1
The background colors will not automatically change. Tkinter has the ability to do such a thing with fonts but not with colors. You will have to write some code to iterate over all of the widgets and change their background colors.
1
9
0
I have a simple tkinter window. It consists of a small window, a timer, and a button to set timer. I don't want to go in details with the code. I want to change the background of all the widgets in my window(buttons, label, Etc.). My first thought is to use a global variable which I will set to "red" for example, and associate all the widgets background option with the global variable. Then, on button press I will change the global variable to "green" (so that the background of all widgets change) but nothing happens. My understanding was the .mainloop() sort of updated the window. How can I have the widgets to change background color when my variable change without restarting my application?
Dynamically change widget background color in Tkinter
0.197375
0
0
21,940
13,589,075
2012-11-27T16:50:00.000
0
1
1
1
python,linux,gcc,g++,python-2.6
13,590,180
1
true
0
0
I was able to solve my problem by manually editing the Makefile generated by the configure script so that the linker was using g++ instead of gcc and that solved my problems. Thanks for the possible suggestions!
1
0
0
I have a custom C++ Python Module that I want to build into Python that builds fine but fails when it goes to the linking stage. I have determined that the problem is that it is using the gcc to link and not g++ and this is what is causing all of the errors I am seeing when it tries to link in the std libraries. How would I get the Python build process to link with g++ instead of gcc? Do I have to manually edit the Makefile or is it something I need to set when I am configuring it. I am compiling Python 2.6 on CentOS 5.8. Thanks in advance for the help!
C++ Python Module not being linked into Python with g++
1.2
0
0
85
13,589,417
2012-11-27T17:12:00.000
0
0
0
0
python,django
13,590,715
4
false
1
0
Another way to solve this is by adding javascript that disables the "other" fields when you set one. You can then enforce this in the form/model validation. But I should say that I think the best way to deal with this, if it can be applied to your problem, is the way PT114 proposed.
1
0
0
We have three properties on my animal model: dog_name cat_name monkey_name One of them must be filled (no more! animal is a dog, a cat or a monkey) and if I set for example cat_name, I want dog_name and monkey_name to be deactivated (user shouldn't set more than one name). Is it possible to set this in django admin? This example is maybe stupid, but I tried to explain my intensions - deactivating needless properties.
Deactivating needless properties in Django Admin
0
0
0
57
13,591,170
2012-11-27T19:01:00.000
1
0
0
0
python,django,django-models
13,591,321
1
false
1
0
Well, you already know it is because of mutual dependencies. The way around it would be to split the utils file into two, so that you can avoid circular imports by separating out the parts that need to call the models. Also, as Mipadi suggested, instead of using a module-level import statement you could simply make the import inside the function scope. Moreover, it really depends on how you are trying to use the models. For instance, you could access the models as "app_name.class_name", but it really depends on the context in which you want to use them.
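The deferred-import trick looks like this in miniature. The sketch fabricates two tiny modules in a temp directory (models_demo and utils_demo are invented names): models imports utils at the top level, so utils has to wait until call time to import models back:

```python
# Sketch: break a circular import by deferring one import to call time.
# models_demo / utils_demo are invented names, written to a temp dir so
# the demo is self-contained.
import os
import sys
import tempfile
import textwrap

pkg = tempfile.mkdtemp()

with open(os.path.join(pkg, "models_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import utils_demo            # models imports utils at the top level...
        ANIMALS = ["dog", "cat"]
        """))

with open(os.path.join(pkg, "utils_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        # ...so utils must NOT import models at the top. Defer it instead:
        def first_animal():
            from models_demo import ANIMALS   # resolved at call time
            return ANIMALS[0]
        """))

sys.path.insert(0, pkg)
import models_demo

print(models_demo.utils_demo.first_animal())  # dog
```

By the time first_animal() runs, models_demo is fully initialized in sys.modules, so the inner import succeeds even though a top-level import would have hit a half-initialized module.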
1
0
0
In my Django app (let's call it app) I have a number of files: views.py, models.py, and my own utils.py. Unfortunately, while I can include my models in views.py simply by saying from models import *, if I try the same thing in utils.py and then work with a model, I get an exception: Global name: MyModel is not defined. models.py does indeed include utils.py, and I understand this may be a circular dependency, but it worked fine until I added a recent change. Is this the cause, and if so, is the only solution to refactor my utils file?
Django cannot import from app/models.py
0.197375
0
0
426
13,592,618
2012-11-27T20:38:00.000
18
0
1
0
python,thread-safety,pandas
13,593,942
2
true
0
0
The data in the underlying ndarrays can be accessed in a threadsafe manner, and modified at your own risk. Deleting data would be difficult as changing the size of a DataFrame usually requires creating a new object. I'd like to change this at some point in the future.
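If you do share a frame between threads and need to modify it, the usual workaround is to serialize every access through a lock. Since pandas may not be installed where this sketch runs, a plain dict stands in for the DataFrame below; the locking pattern is identical:

```python
# Serialize all access to the shared structure through one lock. pandas may
# not be installed here, so a dict stands in for the DataFrame; the locking
# pattern is identical.
import threading

table = {"rows": 0}               # stand-in for the shared DataFrame
table_lock = threading.Lock()

def add_rows(n):
    for _ in range(n):
        with table_lock:          # every read/modify/delete takes the lock
            table["rows"] += 1

threads = [threading.Thread(target=add_rows, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(table["rows"])  # 40000
```

Without the lock the read-modify-write step can interleave between threads and lose updates; with it, deletions and size changes are also safe because only one thread touches the object at a time.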
1
22
1
I am using multiple threads to access and delete data in my pandas DataFrame. Because of this, I am wondering: is the pandas DataFrame thread-safe?
python pandas dataframe thread safe?
1.2
0
0
17,987
13,593,594
2012-11-27T21:40:00.000
0
0
1
1
python,deployment,python-3.2
13,671,961
2
false
0
0
Have you looked at buildout (zc.buildout)? With a custom recipe you may be able to automate most of this.
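For a sense of scale, a buildout configuration can be quite small. A sketch of a minimal buildout.cfg using the stock zc.recipe.egg recipe (the myapp egg name is a placeholder for your own package):

```ini
; Minimal buildout.cfg sketch; "myapp" is a placeholder egg name.
[buildout]
parts = app

[app]
recipe = zc.recipe.egg
eggs =
    myapp
    pymongo
interpreter = py
```

Running bin/buildout then installs the listed eggs into an isolated environment and generates a bin/py interpreter wired to them, which covers the dependency-isolation requirement.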
1
5
0
I'm not happy with the way that I currently deploy Python code and I was wondering if there is a better way. First I'll explain what I'm doing, then the drawbacks:

- When I develop, I use virtualenv for dependency isolation and install all libraries using pip. Python itself comes from my OS (Ubuntu).
- Then I build my code into a ".deb" Debian package consisting of my source tree and a pip bundle of my dependencies.
- Then when I deploy, I rebuild the virtualenv environment, source foo/bin/activate, and run my program (under Ubuntu's upstart).

Here are the problems:

- The pip bundle is pretty big and increases the size of the Debian package significantly. This is not too big a deal, but it's annoying.
- I have to build all the C libraries (PyMongo, BCrypt, etc.) every time I deploy. This takes a little while (a few minutes) and it's a bit lame to do this CPU-bound job in production.

Here are my constraints:

- Must work on Python 3, preferably 3.2.
- Must have dependency isolation.
- Must work with libraries that use C (like PyMongo).

I've heard things about freezing, but I haven't been able to get this to work. cx_Freeze from PyPI doesn't seem to compile (on my Python, at least), and the other freeze utilities don't seem to work with Python 3. How can I do this better?
How to distribute and deploy Python 3 code with dependency isolation
0
0
0
1,205
13,593,922
2012-11-27T22:00:00.000
1
0
1
0
python,windows,command-prompt
13,594,149
4
false
0
0
Changing your PATH environment variable should do the trick. Some troubleshooting tips:

- Make sure you changed not just the local (user) variable but also the system variable to reflect the new location.
- Make sure you restarted your command-line window (i.e. close cmd and reopen it). This refreshes the environment variables you just updated.
- Make sure you removed all references to C:\Python32\ or whatever the old path was (again, check both the local and system PATH; they are both found on the same environment-variables window).
- Check that Python 3.2 is installed where you think it is: rename the directory to something like OLD_Python3.2, go to your command line, and enter python. Does it start up? If it does, is it 2.7 or 3.2? If not, you did something wrong with your PATH variable.
- If all else fails, reboot and try again. You might have some persistent environment variable (which I don't see how that can be, but hey, we are brainstorming here!) and a reboot gives you a fresh start.

If that doesn't work, then I'd think you are doing something else wrong (aka user error). cmd has to know where to look for Python before it can execute it, and it knows this from your PATH variable. Now granted, I work almost exclusively in 2.6/2.7, so if they did something to the registry (which I doubt) then I wouldn't know about that. Good luck!
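Once some python does start, a quick way to confirm which interpreter the PATH actually resolved to is to ask Python itself rather than guessing:

```python
# Ask the running interpreter who it is, instead of guessing from the PATH:
import sys

print(sys.executable)        # full path of the python binary that actually ran
print(sys.version_info[:2])  # e.g. (2, 7) or (3, 2)
```

If sys.executable points somewhere other than the install you expected, the wrong PATH entry is winning and you know exactly which one to remove.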
1
2
0
My command prompt is currently running Python 3.2 by default. How do I set it up to run Python 2.7 by default? I have changed the PATH variable to point towards Python 2.7, but that did not work. UPDATE: It still does not work. :( It still runs Python 3; to be specific, it runs Python 3 when I am trying to install Flask, which is what I want to do. More generally, when I simply type python into the command line, it does nothing: I get a 'python' is not recognized as an internal or external command, operable program, or batch file error. No idea what to do.
Command Prompt: Set up for Python 2.7 by default
0.049958
0
0
8,222
13,594,164
2012-11-27T22:17:00.000
1
0
0
0
python,django,dotcloud
13,594,735
2
false
1
0
If you are using a requirements.txt, no, there is no way to do that from PyPI, since Dotcloud simply downloads the packages you've specified from PyPI, and obviously your changes within your virtualenv are not going to be reflected in the canonical versions of the packages. In order to use the edited versions of your dependencies, you'll have to bundle them into your code like any other module you've written, and import them from there.
2
1
0
I'm deploying my Django app with Dotcloud. While developing locally, I had to make changes inside the code of some dependencies (that are in my virtualenv). So my question is: is there a way to make the same changes on the dependencies (for example django-registration or django_socketio) while deploying on dotcloud? Thank you for your help.
Change dependencies code on dotcloud. Django
0.099668
0
0
67
13,594,164
2012-11-27T22:17:00.000
1
0
0
0
python,django,dotcloud
13,594,763
2
true
1
0
There are many ways, but not all of them are clean/easy/possible. If those dependencies are on github, bitbucket, or a similar code repository, you can: fork the dependency, edit your fork, point to the fork in your requirements.txt file. This will allow you to track further changes to those dependencies, and easily merge your own modifications with future versions. Otherwise, you can include the (modified) dependencies with your code. It's not very clean and increases the size of your app, but that's fine too. Last but not least, you can write a very hackish postinstall script, to locate the .py file to be modified (e.g. import foo ; foopath = foo.__file__), then apply a patch on that file. This would probably cause most sysadmins to cringe in terror, but it's worth mentioning :-)
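The locating step of that last hack really is one line: every imported module records where its installed copy lives in its __file__ attribute. In the sketch below, the stdlib json module stands in for the third-party dependency you would actually patch:

```python
# A module's __file__ attribute points at the installed copy on disk; that
# is the file a postinstall patch would edit. json stands in for the real
# dependency here.
import json
import os

target = json.__file__   # e.g. .../lib/python3.x/json/__init__.py
print(os.path.basename(target))
print(os.path.exists(target))  # True
```

From there a postinstall script could run a patch or search-and-replace against target, with all the fragility the answer warns about.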
2
1
0
I'm deploying my Django app with Dotcloud. While developing locally, I had to make changes inside the code of some dependencies (that are in my virtualenv). So my question is: is there a way to make the same changes on the dependencies (for example django-registration or django_socketio) while deploying on dotcloud? Thank you for your help.
Change dependencies code on dotcloud. Django
1.2
0
0
67
13,594,953
2012-11-27T23:15:00.000
0
0
1
0
windows,numpy,python-2.7,64-bit
13,595,084
2
true
0
0
It should work if you're using 32-bit Python. If you're using 64-bit Python, you'll need 64-bit NumPy.
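You can check the bitness of the interpreter you're actually running (which is what the NumPy build has to match) from the size of a C pointer:

```python
# The bitness of the running interpreter (which the NumPy build must match)
# equals the size of a C pointer in bits:
import struct

bits = struct.calcsize("P") * 8
print(bits)  # 32 or 64
```

A 32-bit Python on 64-bit Windows reports 32 here, so this is more reliable than checking what the OS itself is.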
2
0
1
I have read several related posts about installing NumPy for Python 2.7 on a 64-bit Windows 7 OS. Before I try these, does anybody know if the 32-bit version will work on a 64-bit system?
numpy for 64 bit windows
1.2
0
0
380
13,594,953
2012-11-27T23:15:00.000
0
0
1
0
windows,numpy,python-2.7,64-bit
33,553,807
2
false
0
0
If you are getting it from pip and you want a 64-bit version of NumPy, you need MSVS 2008: pip needs to compile the NumPy module with the same compiler that the Python binary was compiled with. The last time I checked (this summer), Python's build.py on Windows only supported up to that version of MSVS, probably because build.py isn't updated for compilers that are not clearly available for free as compile-only versions. There is an "Express" version of MSVS 2010, 2012, and 2013 (which would satisfy that requirement), but I am not sure if there is a dedicated repository for them and whether they have a redistribution license. If there is, then the only problem is that no one has gotten around to upgrading build.py to support the newer versions of MSVS.
2
0
1
I have read several related posts about installing NumPy for Python 2.7 on a 64-bit Windows 7 OS. Before I try these, does anybody know if the 32-bit version will work on a 64-bit system?
numpy for 64 bit windows
0
0
0
380
13,596,505
2012-11-28T01:53:00.000
5
0
1
1
python,windows,windows-8,command
29,402,992
23
false
0
0
I am probably the most novice user here; I spent six hours just trying to run Python from the command line on Windows 8. I first installed the 64-bit version, then uninstalled it and replaced it with the 32-bit version. Then I tried most of the suggestions here, especially defining the path in the system variables, but it still didn't work. Then I realised that when I typed echo %path% in the command line, the path still was not directed to C:\Python27. So I simply restarted the computer, and now it works.
11
116
0
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do? Note: I have Python 2.7 and Python 3.2 installed on my computer.
Python command not working in command prompt
0.043451
0
0
713,166
13,596,505
2012-11-28T01:53:00.000
4
0
1
1
python,windows,windows-8,command
71,605,377
23
false
0
0
Python 3.10 uses py, not python. Try py --version if you are using this version.
11
116
0
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do? Note: I have Python 2.7 and Python 3.2 installed on my computer.
Python command not working in command prompt
0.034769
0
0
713,166
13,596,505
2012-11-28T01:53:00.000
1
0
1
1
python,windows,windows-8,command
61,078,712
23
false
0
0
I wanted to add a common problem that happens during installation: it is possible that the installation path is too long. To avoid this, change the default path so that it is shorter than 250 characters. I realized this when I installed the software and did a custom installation on a Windows 10 operating system. In the custom install, it should be possible to have Python added to the PATH variable by the installer.
11
116
0
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do? Note: I have Python 2.7 and Python 3.2 installed on my computer.
Python command not working in command prompt
0.008695
0
0
713,166
13,596,505
2012-11-28T01:53:00.000
4
0
1
1
python,windows,windows-8,command
60,397,519
23
false
0
0
Even after following the instructions from the valuable answers above, calling python from the command line would open the Microsoft Store and redirect me to a page to download the software. I discovered this was caused by a 0 KB python.exe stub in AppData\Local\Microsoft\WindowsApps which was taking precedence over the Python executable in my PATH. Removing this folder from my PATH solved it.
11
116
0
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do? Note: I have Python 2.7 and Python 3.2 installed on my computer.
Python command not working in command prompt
0.034769
0
0
713,166
13,596,505
2012-11-28T01:53:00.000
3
0
1
1
python,windows,windows-8,command
52,572,393
23
false
0
0
Here's one for office workers using a computer shared with others. I put my user path into the PATH variable and also created a PYTHONPATH variable in my computer's environment variables. They are listed under Environment Variables in Computer Properties -> Advanced Settings in Windows 7. Example: C:\Users\randuser\AppData\Local\Programs\Python\Python37. This made it so I could use the command prompt. Hope this helped.
11
116
0
When I type python into the command line, the command prompt says python is not recognized as an internal or external command, operable program, or batch file. What should I do? Note: I have Python 2.7 and Python 3.2 installed on my computer.
Python command not working in command prompt
0.026081
0
0
713,166