Columns (name: dtype, min to max):
Q_Id: int64, 337 to 49.3M
CreationDate: stringlengths, 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: stringlengths, 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: stringlengths, 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: stringlengths, 15 to 29k
Title: stringlengths, 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
13,365,521
2012-11-13T17:21:00.000
3
0
0
0
python,api,flask
13,365,631
1
true
1
0
The proper RESTful way of deleting a resource is to send a DELETE request and put the scoping information in the URI (not the body), e.g. /api/records/10 (or /api/records?id=10 as a query parameter). The method information belongs in the HTTP method, not the URI. I suggest you read "RESTful Web Services" to learn best practices for API design.
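A minimal sketch of the verb-based routing described above, with a hypothetical in-memory store standing in for the SQL database (no framework assumed):

```python
# Hypothetical in-memory store standing in for the SQL database.
records = {10: {"name": "example"}}

def handle(method, path):
    """Route on the HTTP verb; the URI carries only scoping information."""
    record_id = int(path.strip("/").split("/")[-1])   # e.g. /api/records/10
    if method == "GET":
        return (200, records[record_id]) if record_id in records else (404, None)
    if method == "DELETE":
        if record_id in records:
            del records[record_id]
            return 204, None
        return 404, None
    return 405, None   # method not allowed

status, _ = handle("DELETE", "/api/records/10")
```

In Flask the same shape falls out of one route with `methods=["GET", "POST", "PATCH", "DELETE"]` plus an `<int:record_id>` path parameter.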
1
1
0
I am just starting to learn how to design/write RESTful APIs. I have a general question: assume I have some sort of simple SQL database and I'm writing an API that allows you to create a new record, view records, delete a record or update a record. Assuming I want to delete a record, is it usually better to pass the ID of the record in the URL, for example /api/delete_record?id=10, or is it better to do something like /api/record and have it accept GET, POST, PATCH and DELETE, with the data handled through the JSON body of the request? I've written a small API using Flask in Python and what I have is just one URL, /record, which accepts all the above HTTP methods. It looks at the method in the request and expects the request body in JSON accordingly. Is that considered good or bad practice? Any suggestions would be greatly appreciated. Please note that I am still very new to all of this. I've worked with APIs before but I've never developed any. Thanks!
API Design - JSON or URL Parameters?
1.2
0
0
384
13,366,293
2012-11-13T18:12:00.000
8
0
1
0
python,function,arguments,range
13,366,356
4
true
0
0
range() takes 1, 2, or 3 arguments. This could be implemented with def range(*args) and explicit code to raise an exception when it gets 0 or more than 3 arguments. It couldn't be implemented with default arguments because you can't have a non-default parameter after a default one, e.g. def range(start=0, stop, step=1). This is essentially because Python has to figure out what each call means, so if you were to call with two arguments, Python would need some rule to figure out which default argument you were overriding. Instead of having such a rule, it's simply not allowed. If you did want to use default arguments you could do something like def range(start=0, stop=object(), step=1) and have an explicit check on the type of stop.
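A pure-Python sketch of that dispatch-on-argument-count approach (returning a list rather than a lazy sequence, for simplicity):

```python
def my_range(*args):
    """Sketch of how range() can dispatch on the number of arguments."""
    if not 1 <= len(args) <= 3:
        raise TypeError("my_range expected 1 to 3 arguments, got %d" % len(args))
    if len(args) == 1:
        start, stop, step = 0, args[0], 1      # my_range(stop)
    elif len(args) == 2:
        start, stop, step = args[0], args[1], 1  # my_range(start, stop)
    else:
        start, stop, step = args               # my_range(start, stop, step)
    if step == 0:
        raise ValueError("my_range() arg 3 must not be zero")
    result, i = [], start
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        result.append(i)
        i += step
    return result
```

The real built-in is implemented in C, but it follows this same "count the positional arguments, then assign roles" logic.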
1
18
0
How can the range function take either a single argument, range(stop), or range(start, stop), or range(start, stop, step)? Does it use a variadic parameter like *args to gather the arguments, and then use a series of if statements to assign the correct values depending on the number of arguments supplied? In essence, does range() specify that if there is one argument it is set as the stop argument, if there are two they are start and stop, and if there are three they are set as start, stop, and step respectively? I'd like to know how one would do this if one were to write range in pure Python.
How can the built-in range function take a single argument or three?
1.2
0
0
5,994
13,369,795
2012-11-13T22:13:00.000
1
0
0
0
python,macos,mongodb,amazon-ec2
13,369,827
5
false
0
0
Databases are, by default, stored in /data/db (some environments override this and use /var/lib/mongodb, however). You can see the total db size by looking at db.stats() (specifically fileSize) in the MongoDB shell.
2
7
0
I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
where is mongo db database stored on local hard drive?
0.039979
1
0
17,118
13,369,795
2012-11-13T22:13:00.000
4
0
0
0
python,macos,mongodb,amazon-ec2
13,369,857
5
true
0
0
I believe on OSX the default location would be /data/db. But you can check your config file for the dbpath value to verify.
2
7
0
I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
where is mongo db database stored on local hard drive?
1.2
1
0
17,118
13,370,877
2012-11-13T23:48:00.000
1
0
1
1
python,batch-file,python-3.x
13,370,997
1
false
0
0
Can't you just use os.system?
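os.system only returns the exit code, though; to capture the console text live, a subprocess.Popen sketch like this reads the output line by line as it is produced. On Windows the command would be something like ["cmd", "/c", "my.bat"]; echo is used here purely as a stand-in:

```python
import subprocess

def stream_output(cmd):
    """Run a command and yield its console output line by line, live."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:        # yields as soon as each line arrives
        yield line.rstrip("\n")
    proc.wait()

# Each yielded line can update the "constantly updated string" as it runs.
lines = list(stream_output(["echo", "hello"]))
```

Note the child may buffer its own output when writing to a pipe, so the granularity you see depends on the batch file as well.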
1
1
0
I'm trying to get the output from a batch file LIVE as it runs. I would prefer the console of the batch file to also show as it runs. I have tried using os.popen and subprocess.Popen, but the problem is that they do not run the program LIVE in the background and constantly show what is being printed to the console. Exactly what I want is to have a string that is constantly updated with the data from the console of the running batch file.
Live output from a batch file with Python
0.197375
0
0
148
13,371,444
2012-11-14T00:48:00.000
7
0
1
1
python,file-io
13,371,542
3
true
0
0
You can use os.access for checking your access permission. If access permissions are good, then it has to be the second case.
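A sketch of that check (note that os.access has limits on Windows, where it mostly reflects the read-only attribute, so treat this as a heuristic):

```python
import os
import tempfile

def diagnose_open_failure(path):
    """Heuristic: separate a permissions problem from a lock held elsewhere."""
    if not os.access(path, os.W_OK):
        return "no write permission"
    # Permissions look fine, so an EACCES from open(path, 'wb') most
    # likely means another process holds a lock (the asker's case 2).
    return "probably locked by another process"

# Demo on a freshly created, writable, unlocked file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    verdict = diagnose_open_failure(tmp.name)
os.unlink(tmp.name)
```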
1
14
0
My goal is to know if a file is locked by another process or not, even if I don't have access to that file! So to be more clear, let's say I'm opening the file using python's built-in open() with 'wb' switch (for writing). open() will throw IOError with errno 13 (EACCES) if: the user does not have permission to the file or the file is locked by another process How can I detect case (2) here? (My target platform is Windows)
Python : Check file is locked
1.2
0
0
23,396
13,373,014
2012-11-14T04:24:00.000
1
0
1
0
python
13,373,099
1
true
0
0
Yup, it's not at all secure, but eval is the way to go: In [1]: a = 10 In [2]: b = 20 In [3]: eval('a + 10*b') Out[3]: 210
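A slightly safer variation for formulas read from a text file: evaluate with an emptied builtins table and only the names the formula needs. (The name `d` here replaces the question's `dict`, which shadows a builtin; this is still not safe against hostile input.)

```python
def evaluate_formula(expr, variables):
    """eval() a formula string with only the supplied names visible."""
    namespace = {"abs": abs, "__builtins__": {}}   # no other builtins exposed
    namespace.update(variables)
    return eval(expr, namespace)

# One of the question's formula variations, as it might be read from a file:
result = evaluate_formula("(abs(x) - abs(y) * d['t']) * 18",
                          {"x": -3, "y": 2, "d": {"t": 5}})
```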
1
0
0
Is there any way to do this? Basically I have a file that I want a user to be able to edit via a GUI I built (all of which I can do easily). Part of this is a calculation in a function. That, or being able to edit a .py file from another file, would be fine as well, but it is hard to find anything on that because every search returns details about IDLE and such. I also have no problem with having the calculation only in the text file and being able to read it from the text file and then parse it to add the variables in, but I'm not even sure how to do that easily either, with the calculations varying like so: (abs(x) - abs(y) * dict['t']) * 18 ((abs(y) * dict['t']) - abs(x)) * 20 etc. for about 10 different variations
Importing a function from a .txt file
1.2
0
0
119
13,373,291
2012-11-14T04:59:00.000
-1
0
0
0
python,ctypes,complex-numbers
13,373,641
5
false
0
1
If c_complex is a C struct and you have the definition of it in documentation or a header file, you could use ctypes to compose a compatible type. It is also possible, although less likely, that c_complex is a typedef for a type that ctypes already supports. More information would be needed to provide a better answer...
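For instance, if the header turned out to define c_complex as a plain struct of two doubles (an assumption, not something the question confirms), the compatible ctypes type would look like:

```python
import ctypes

# Assumed C definition: typedef struct { double real; double imag; } c_complex;
class CComplex(ctypes.Structure):
    _fields_ = [("real", ctypes.c_double),
                ("imag", ctypes.c_double)]

z = CComplex(3.0, -1.5)
# z (or ctypes.POINTER(CComplex) / an array of CComplex for a vector)
# can then be passed to the C function via its argtypes.
```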
1
4
0
This might be a bit foolish, but I'm trying to use ctypes to call a function that receives a complex vector as a parameter. But in ctypes there isn't a class c_complex. Does anybody know how to solve this? Edit: I'm referring to Python's ctypes, in case there are others...
Complex number in ctypes
-0.039979
0
0
2,911
13,374,423
2012-11-14T07:09:00.000
0
0
0
0
python,json,heroku,urllib2
13,393,776
1
false
1
0
Never mind, I fixed it. All I did was reference the local JSON file on my computer rather than the URL.
1
0
0
When I run this Heroku app in debug mode, it works perfectly, but when I push the changes and visit the page, the page refuses to load and I get a 503 error. I can't figure out what is wrong (seeing as how debug says everything is fine :( ) [using Python 2.7] Fixed, see comment.
503 error with urllib and flask
0
0
0
1,095
13,376,448
2012-11-14T09:52:00.000
1
0
1
0
python,multithreading,qt,thread-safety,qthread
13,381,623
1
true
0
1
Both answers depend on what your code is doing and what you expect from the thread. If your logic which uses the thread needs to wait synchronously for the moment QThread finishes, then yes, you need to call wait(). However such requirement is a sign of sloppy threading model, except very specific situations like application startup and shutdown. Usage of QThread::wait() suggests creeping sequential operation, which means that you are effectively not using threads concurrently. quit() exits QThread-internal event loop, which is not mandatory to use. A long-running thread (as opposed to one-task worker) must have an event loop of some sort - this is a generic statement, not specific to QThread. You either do it yourself (in form of some while(keepRunning) { } cycle) or use Qt-provided event loop, which you fire off by calling exec() in your run() method. The former implementation is finishable by you, because you did provide the keepRunning condition. The Qt-provided implementation is hidden from you and here goes the quit() call - which internally does nothing more than setting some sort of similar flag inside Qt.
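The while(keepRunning) shape described above, sketched with plain threading (deliberately not actual QThread/PySide code) so the roles of quit() and wait() are visible:

```python
import threading
import time

class Worker:
    """Plain-threading analogue of a long-running QThread with a quit flag."""
    def __init__(self):
        self.keep_running = True
        self.iterations = 0
        self.thread = threading.Thread(target=self.run)

    def run(self):
        # The hand-rolled event loop: the while(keepRunning) {} cycle.
        while self.keep_running:
            self.iterations += 1
            time.sleep(0.01)

    def quit(self):
        # What QThread.quit() amounts to: flip the flag the loop checks.
        self.keep_running = False

    def wait(self):
        # What QThread.wait() does: block until run() has returned.
        self.thread.join()

w = Worker()
w.thread.start()
time.sleep(0.05)   # let it spin a few times
w.quit()
w.wait()
```

With the Qt-provided loop you would call exec() inside run() instead of writing the while cycle, and quit() would unwind Qt's internal flag for you.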
1
1
0
I'm writing a multi-threaded application that utilizes QThreads. I know that, in order to start a thread, I need to override the run() method and call that method using the thread.start() somewhere (in my case in my GUI thread). I was wondering, however, is it required to call the .wait() method anywhere and also am I supposed to call the .quit() once the thread finishes, or is this done automatically? I am using PySide. Thanks
Do I need to manually call .quit() method when QThread() stops running (Python)?
1.2
0
0
438
13,377,197
2012-11-14T10:38:00.000
1
1
0
0
python,wave
13,378,468
4
false
0
0
If your wave file consists of only one note, you can get the fundamental frequency (not the harmonics) simply by detecting the periodicity of the wave. Do this by looking for 0-crossings.
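A stdlib-only sketch of the zero-crossing count on a synthesized 440 Hz tone (with a real file you would first read the samples using the wave module):

```python
import math

def fundamental_freq(samples, framerate):
    """Estimate the fundamental by counting zero-crossings of the waveform."""
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev < 0 <= cur or prev >= 0 > cur:   # sign change between samples
            crossings += 1
    duration = len(samples) / framerate
    return crossings / (2 * duration)            # two crossings per period

rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
freq = fundamental_freq(tone, rate)              # close to 440
```

This only works for a clean single note; harmonics or noise add spurious crossings, which is why real pitch detectors use autocorrelation or an FFT instead.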
1
2
0
How to analyse frequency of wave file in a simple way? Without extra modules.
How to analyse frequency of wave file
0.049958
0
0
2,086
13,382,262
2012-11-14T15:51:00.000
0
0
0
0
python,node.js,web-applications
13,384,050
1
true
1
0
It would replace Python (flask/werkzeug) in both your view server and your API server.
1
0
0
I am interested in learning more about node.js and utilizing it in a new project. The problem I am having is envisioning where I could enhance my web stack with it and what role it would play. All I have really done with it is followed a tutorial or two where you make something like a todo app in all JS. That is all fine and dandy but where do I leverage this is in a more complex web architecture. so here is an example of how I plan on setting up my application web server for serving views: Python (flask/werkzeug) Jinja nginx html/css/js API sever: Python (flask/werkzeug) SQLAlchemy (ORM) nginx supervisor + gunicorn DB Server Postgres So is there any part of this stack that could be replaced or enhanced by introducing nodeJS I would assume it would be best used on the API server but not exactly sure how.
Where does node.js fit in a stack or enhance it
1.2
1
0
228
13,383,684
2012-11-14T17:09:00.000
0
0
0
0
python,matlab,libsvm
13,445,709
1
false
0
0
Normally you would just call a method in libsvm to save your model to a file. You then can just use it in Python using their svm.py. So yes, you can - it's all saved in libsvm format.
1
0
1
Is it possible to use in Python the svm_model, generated in matlab? (I use libsvm)
Is it possible to use in Python the svm_model, generated in matlab?
0
0
0
77
13,385,085
2012-11-14T18:34:00.000
2
0
1
0
python,algorithm
13,386,318
3
true
0
0
I think the linear probe suggested by @isbadawi is the best way to find the beginning of the subsequence. This is because the subsequence could be very short and could be anywhere within the larger sequence. However, once the beginning of the subsequence is found, we can use a binary search to find the end of it. That will require fewer tests than doing a second linear probe, so it's a better solution for you. As others have pointed out, there is not much practical reason for doing this. This is true for two reasons: your large sequence is quite short (only about 31 elements), and you still need to do at least one linear probe anyway, so the big-O runtime will be still be linear in the length of the large sequence, even though we have reduced part of the algorithm from linear to logarithmic.
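A sketch of that hybrid; is_active is a hypothetical per-day test, days are numbered 1..n_days, and the active run is assumed contiguous and unique as the question states:

```python
def find_activity_range(is_active, n_days):
    """Linear probe for the first active day, then binary search for the last."""
    start = None
    for day in range(1, n_days + 1):   # linear probe
        if is_active(day):
            start = day
            break
    if start is None:
        return None                    # no activity this month
    # Binary search for the last active day: for mid >= start,
    # is_active(mid) is True exactly while mid is inside the run.
    lo, hi = start, n_days
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_active(mid):
            lo = mid
        else:
            hi = mid - 1
    return start, lo

active_days = set(range(10, 18))       # activity on days 10..17
span = find_activity_range(lambda d: d in active_days, 31)
```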
1
3
0
I need to find all the days of the month where a certain activity occurs. The days when the activity occurs will be sequential. The sequence of days can range from one to the entire month, and the sequence will occur exactly one time per month. To test whether or not the activity occurs on any given day is not an expensive calculation, but I thought I would use this problem learn something new. Which algorithm minimizes the number of days I have to test?
In python, how can I efficiently find a consecutive sequence that is a subset of a larger consecutive sequence?
1.2
0
0
418
13,385,981
2012-11-14T19:33:00.000
2
1
0
0
python,email,response,imaplib
13,387,035
2
false
0
0
You are connecting to the wrong port. 587 is authenticated SMTP, not IMAP; the IMAP designated port number is 143 (or 993 for IMAPS).
2
2
0
I have the following line of code using imaplib M = imaplib.IMAP4('smtp.gmail.com', 587) I get the following error from imaplib: abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21' However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port. Why is imaplib giving this error?
python imaplib unexpected response 220
0.197375
0
1
2,034
13,385,981
2012-11-14T19:33:00.000
2
1
0
0
python,email,response,imaplib
13,399,991
2
true
0
0
I realized I needed to use IMAP4_SSL(): it has to be SSL for IMAP, and to use IMAP I need the IMAP server for Gmail, which is imap.googlemail.com. I ultimately got it to work without specifying a port. So, the final code is: M = imaplib.IMAP4_SSL('imap.googlemail.com')
2
2
0
I have the following line of code using imaplib M = imaplib.IMAP4('smtp.gmail.com', 587) I get the following error from imaplib: abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21' However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port. Why is imaplib giving this error?
python imaplib unexpected response 220
1.2
0
1
2,034
13,388,177
2012-11-14T22:04:00.000
0
0
0
0
python,sockets,networking
13,389,747
4
false
0
0
I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections. So you just have: An accept() loop that does nothing but accept() new connections and start a new thread to handle each one. A thread per connection that reads with a long timeout, whatever you want your session idle timeout to be. If the timeout expires, you close the socket and exit the thread. If the server runs out of FDs, which it will if there are enough simultaneous connections, accept() will start failing with a corresponding errno: in this case you just ignore it and keep looping. Maybe you decrease the idle timeout in this situation, and put it back when accepts start to work again.
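A minimal sketch of that accept-loop-plus-thread-per-connection shape, echoing data back as a stand-in for real chat logic (port 0 lets the OS pick a free port):

```python
import socket
import threading

connections = []                  # the asker's "list of connections"
connections_lock = threading.Lock()

def handle(conn, idle_timeout=60):
    """Per-connection thread: read with a timeout, drop idle or closed clients."""
    conn.settimeout(idle_timeout)
    try:
        while True:
            data = conn.recv(4096)
            if not data:          # peer closed the connection
                break
            conn.sendall(data)    # echo back, standing in for chat logic
    except socket.timeout:
        pass                      # idle too long: close and exit the thread
    finally:
        conn.close()

def accept_loop(server):
    """Do nothing but accept() and hand each connection its own thread."""
    while True:
        try:
            conn, _addr = server.accept()
        except OSError:           # server closed, or out of file descriptors
            break
        with connections_lock:
            connections.append(conn)
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

server = socket.socket()
server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
server.listen(16)
port = server.getsockname()[1]
threading.Thread(target=accept_loop, args=(server,), daemon=True).start()
```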
2
0
0
I'm trying to make a chat server thing. Basically, I want more than one client to be able to connect at the same time. I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections. I could have a listen(1) then a timeout, and keep appending them to a list then closing the socket, and making a new one, then listening with a timeout, etc. Although, this seems very slow, and I'm not even sure it would work Keep in mind, it does not HAVE to be socket. If there is any other kind of network interface, it would work just as well.
Having an infinite amount of sockets connected?
0
0
1
930
13,388,177
2012-11-14T22:04:00.000
2
0
0
0
python,sockets,networking
13,388,385
4
false
0
0
Consider whether you actually need to maintain separate sockets for every connection. Would something like connectionless UDP be appropriate? That way you could support an arbitrary number of users while only using one OS socket. Of course, with this approach you'd need to maintain connection semantics internally (if your application cares about such things); figure out which user each datagram has been sent either by looking at their IP/port or by examining some envelope data within your network protocol, send occasional pings to see whether the other side is still alive, etc. But this approach should nicely separate you from any OS concerns RE: number of sockets your process is allowed to keep open at once.
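A tiny sketch of that connectionless approach: one UDP socket serving everyone, with each "user" identified by the source address of their datagrams:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # one OS socket for all users
port = server.getsockname()[1]

sessions = {}                          # (ip, port) -> state we track ourselves

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

data, addr = server.recvfrom(4096)     # addr tells us which "user" this is
sessions[addr] = {"last_message": data}

client.close()
server.close()
```

Everything TCP gives you for free (ordering, retransmission, liveness) becomes your application's job, which is the trade-off the answer describes.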
2
0
0
I'm trying to make a chat server thing. Basically, I want more than one client to be able to connect at the same time. I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections. I could have a listen(1) then a timeout, and keep appending them to a list then closing the socket, and making a new one, then listening with a timeout, etc. Although, this seems very slow, and I'm not even sure it would work Keep in mind, it does not HAVE to be socket. If there is any other kind of network interface, it would work just as well.
Having an infinite amount of sockets connected?
0.099668
0
1
930
13,389,724
2012-11-15T00:22:00.000
0
0
1
1
python,compilation,exe,cx-freeze
13,967,222
3
false
0
0
Have you tried innosetup? It can create installer files from the output of cxfreeze. There might be an option somewhere to bundle everything into one file.
1
3
0
I'm using cx_Freeze to compile Python programs into executables and it works just fine, but the problem is that it doesn't compile the program into one EXE, it converts them into a .exe file AND a whole bunch of .dll files including python32.dll that are necessary for the program to run. Does anyone know how I can package all of these files into one .exe file? I would rather it be a plain EXE file and not just a file that copies the DLLs into a temporary directory in order to launch the program. EDIT: This is in reference to Python 3
Convert an EXE and its dependencies into one stand-alone EXE
0
0
0
2,963
13,390,082
2012-11-15T01:05:00.000
1
0
0
0
python,linux,sockets,connect,epoll
13,390,556
2
false
0
0
You can't know it failed because the server can't accept more connections, because there is no specific protocol for that condition. You can only know it failed for the usual reasons: ECONNREFUSED, a connection timeout, ENETUNREACH/EHOSTUNREACH, etc.
2
0
0
I am writing a simple test script (Python) to test a web server's performance (all this server does is HTTP redirect). The socket is set to non-blocking and registered to an epoll instance. How can I know the connect() failed because the server can't accept more connections? I am currently using EPOLLERR as the indicator. Is this correct? Edit: Assumptions: 1) IP-layer network unreachability will not be considered.
How is connect() failure notified in epoll?
0.099668
0
1
988
13,390,082
2012-11-15T01:05:00.000
1
0
0
0
python,linux,sockets,connect,epoll
13,390,310
2
false
0
0
That catches the case of Connection Refused and other socket errors. Since I assume you are registering for read/write availability (success) upon the pending socket as well, you should also manually time-out those connections which have failed to notify you of read, write, or error availability on the associated file descriptor within an acceptable time limit. ECONNREFUSED is generally only returned when the server's accept() queue exceeds its limit or when the server isn't even bound to a socket at the remote port. ENETDOWN, EHOSTDOWN, ENETUNREACH, and EHOSTUNREACH are only returned when a lower layer than TCP (e.g., IP) knows it cannot reach the host for some reason, and so they are not particularly helpful for stress testing a web server's performance. Thus, you need to also bound the time taken to establish a connection with a timeout to cover the full gamut of stress test failure scenarios. Choice of timeout value is up to you.
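A Linux-only sketch of that flow: non-blocking connect, wait via epoll, then read the real verdict out of SO_ERROR. Here we probe a loopback port we just freed, so the expected outcome is ECONNREFUSED (the case where nothing is bound or the accept queue rejects us):

```python
import errno
import select
import socket

# Find a loopback port that is almost certainly closed.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

probe = socket.socket()
probe.setblocking(False)
err = probe.connect_ex(("127.0.0.1", closed_port))

if err == errno.EINPROGRESS:
    ep = select.epoll()
    # EPOLLOUT fires on success; EPOLLERR/EPOLLHUP on failure.
    ep.register(probe.fileno(), select.EPOLLOUT | select.EPOLLERR)
    ep.poll(5)                       # bound the wait, as advised above
    # The definitive verdict lives in SO_ERROR, not the epoll mask alone.
    so_error = probe.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    ep.close()
else:
    so_error = err                   # connect resolved immediately

probe.close()
```

For a slow or unreachable remote host, the same SO_ERROR read after your own timeout expires is where ETIMEDOUT-style failures surface.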
2
0
0
I am writing a simple test script (Python) to test a web server's performance (all this server does is HTTP redirect). The socket is set to non-blocking and registered to an epoll instance. How can I know the connect() failed because the server can't accept more connections? I am currently using EPOLLERR as the indicator. Is this correct? Edit: Assumptions: 1) IP-layer network unreachability will not be considered.
How is connect() failure notified in epoll?
0.099668
0
1
988
13,390,716
2012-11-15T02:27:00.000
3
0
1
0
python,windows-7,installation
13,390,837
3
true
0
0
The installer typically has an option for "Set file associations" that handles that, and I think it's on by default, which means that the most-recently-installed one will handle double-clicking .py files. So install 64-bit Python 3.3 last and it should work. I just did this a few days ago with 32- and 64-bit version of 2.7 and it seems to work fine. I wouldn't rename your existing directory, though. You should install each version of Python in its own directory.
1
6
0
What would be the best way/order to install both 32 and 64 bit versions of Python 2.7 and Python 3.3 if I want 64 bit Python 3.3 to be the default when I open .py files? I already have the 32 bit version of each installed, with defaults set to Python 3.3. If it is fine to just rename the directory of my currently installed version, will python27.dll and programmes using it (or python33.dll) continue to work? This library gets installed to %WINDIR%\System32 and/or %WINDIR%\SysWoW64 by the python installer. Thanks for your answers, here is what I did: Uninstalled Python 3.3 (This left 2 files) C:\Python33\Lib\lib2to3\Grammar3.3.0.final.0.pickle C:\Python33\Lib\lib2to3\PatternGrammar3.3.0.final.0.pickle Uninstalled Python 2.7 Installed Python 2.7 x86 (without register extensions) C:\Python\27_32\ Installed Python 2.7 x86-64 (without register extensions) C:\Python\27\ Installed Python 3.3 x86 (without register extensions) C:\Python\33_32\ Installed Python 3.3 x86-64 (with default register extensions) C:\Python\33\ Deleted C:\Python33 CCleaner'd installer reference issues to this location
Installing multiple main and bit versions of Python
1.2
0
0
888
13,391,901
2012-11-15T05:15:00.000
0
0
0
1
python,unix,vi
13,391,941
1
false
0
0
Have your script myscript.py take an optional argument, e.g. other_term. Spawn xterm -e myscript.py other_term. When the second instance of myscript.py starts, check for the optional argument other_term; if present, perform the second set of commands. Use environment variables, files, command line arguments or pipes to transfer any required state between the first (initial) and second (other_term) instances. When the other_term instance finishes, the second xterm will automatically close and return control to the first (initial) instance, which can then proceed with its commands. If you do NOT want the second xterm to close, then spawn xterm -e asynchronously and do NOT let your other_term script exit; have it signal instead (e.g. via a semaphore file or, if you know how to do it, via a pipe; xterm does not close filehandles) to the first instance that it can resume, and e.g. wait for user confirmation before exiting in both scripts.
1
2
0
I want to run a script that contains some commands to execute, e.g.: pwd, xterm home, date, time. Here I want the script to execute pwd in the first terminal, then create an xterm home; in the xterm home terminal I want to run the date and time commands; then I want to run pwd once again in the main terminal. How do I switch between the terminals in a Python script? Thanks and regards, Vasantkumar.R.Nagoor
how to switch the action of execution of script between the terminals in the python script?
0
0
0
101
13,392,095
2012-11-15T05:37:00.000
0
0
0
0
python,django,caching
13,393,855
2
false
1
0
After changing your code, make sure you are restarting your server e.g. apache or fastcgi.
1
1
0
I am really new to Django, and I'm trying to have my site display a server status as text. This text, however, is dynamic. I do not understand why, if I go into my model and change the server status function to return 'cats', I don't see 'cats' appear in my browser for like 5 minutes. From what I have learned so far, I suspect this has to do with Django caching templates on the server side. I have tried removing .pyc files, using @never_cache, editing settings.py to use DummyCache, and clearing the browser cache, all to no avail. Does anyone know what's going on, or what a possible fix might be? Thanks!
How to stop Django from caching dynamic templates?
0
0
0
1,291
13,394,969
2012-11-15T09:51:00.000
0
0
0
0
python,machine-learning,cherrypy
13,399,425
2
false
1
0
An NLTK-based system tends to be slow to respond per request, but good throughput can be achieved given enough RAM.
1
4
1
Let me explain what I'm trying to achieve. In the past while working on the Java platform, I used to write Java code (say, to push or pull data from a MySQL database etc.), then create a war file which essentially bundles all the class files, supporting files etc. and put it under a servlet container like Tomcat, and this becomes a web service that can be invoked from any platform. In my current scenario, I've got the majority of the work being done in Java; however the Natural Language Processing (NLP) / Machine Learning (ML) part is being done in Python using the NLTK, Scipy, Numpy etc. libraries. I'm trying to use the services of this Python engine in existing Java code. Integrating the Python code into Java through something like Jython is not that straightforward (as Jython does not support calling any Python module which has C-based extensions, as far as I know), so I thought the next option would be to make it a web service, similar to what I had done with Java web services in the past. Now comes the actual crux of the question: how do I run the ML engine as a web service and call it from any platform, which in my current scenario happens to be Java? I tried looking on the web for various options to achieve this and found things like CherryPy, Werkzeug etc., but was not able to find the right approach or any sample code or anything that shows how to invoke an NLTK-Python script and serve the result through the web, eventually replicating the functionality a Java web service provides. In the Python-NLTK code, the ML engine does data training on a large corpus (this takes 3-4 minutes) and we don't want the Python code to go through this step every time a method is invoked. If I make it a web service, the data training will happen only once, when the service starts, and then the service is ready to be invoked using the already trained engine. Now coming back to the problem: I'm pretty new to these web service things in Python and would appreciate any pointers on how to achieve this. Also, any pointers on achieving the goal of calling NLTK-based Python scripts from Java without using the web services approach, which can be deployed on production servers to give good performance, would also be helpful and appreciated. Thanks in advance. Just for a note, I'm currently running all my code on a Linux machine with Python 2.6 and JDK 1.6 installed on it.
How to expose an NLTK based ML(machine learning) Python Script as a Web Service?
0
0
0
1,090
13,395,116
2012-11-15T10:01:00.000
5
1
1
0
python,python-import
13,395,225
4
false
0
0
You can just import it again other places that you need it -- it will be cached after the first time so this is relatively inexpensive. Alternatively you could modify the current global namespaces with something like globals()['name'] = local_imported_module_name. EDIT: For the record, although using the globals() function will certainly work, I think a "cleaner" solution would be to declare the module's name global and then import it, as several other answers have mentioned.
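A sketch of the global-declaration variant mentioned at the end, with json standing in for whatever module the asker wants to defer:

```python
class LazyLoader:
    def __init__(self):
        # Declare the module name global so the import below binds it at
        # module level, not just in __init__'s local scope.
        global json
        import json      # cheap after the first call: sys.modules caches it

    def dump(self, obj):
        return json.dumps(obj)   # visible here thanks to the global statement

loader = LazyLoader()
out = loader.dump({"a": 1})
```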
1
9
0
I need to have an import in the __init__() method (because I need to run that import only when I instantiate the class). But I cannot see that import outside __init__(); is the scope limited to __init__? How do I do this?
Python import in __init__()
0.244919
0
0
7,587
13,398,191
2012-11-15T13:12:00.000
0
0
0
1
python
13,399,024
2
false
0
0
You can launch the ps command, parse its output, and get info about running processes. I haven't found a better way yet. That's if you're on Unix, of course.
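A polling sketch of that idea. There is no push-style listener here, just diffed ps snapshots (Unix only); `sleep 5` stands in for any process starting on the system:

```python
import subprocess
import time

def running_pids():
    """Snapshot the PIDs of all processes via ps (Unix)."""
    out = subprocess.check_output(["ps", "-e", "-o", "pid="], text=True)
    return {int(pid) for pid in out.split()}

before = running_pids()
child = subprocess.Popen(["sleep", "5"])   # some process starting somewhere
time.sleep(0.2)
started = running_pids() - before          # new PIDs since the last snapshot
child.terminate()
child.wait()
```

Repeating the snapshot in a loop and diffing in both directions gives you start and stop "events", at the granularity of your polling interval.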
1
0
0
Is there any way to be notified when a system process starts, terminates or completes? I am looking for a kind of listener that listens for system events, such as process start, stop etc.
System process listener in python
0
0
0
524
13,403,741
2012-11-15T18:22:00.000
1
1
0
0
python
13,404,694
1
true
1
0
You should use an asynchronous data approach to transfer data from the PHP script, or directly from the Python script, to an already rendered HTML page on the user side. Check a JavaScript framework for whichever way is easiest for you to do that (for example, jQuery). Then return the HTML page, minus the results, to the user, with JavaScript code to show a "calculating" animation and fetch the results, in XML or JSON, from the proper URL when they are done.
1
0
0
I have a Python web application in which one function can take up to 30 seconds to complete. I have been kicking off the process with a cURL request (inc. parameters) from PHP, but I don't want the user staring at a blank screen the whole time the Python function is working. Is there a way to have it process the data 'in the background', e.g. close the HTTP socket and allow the user to do other things while it continues to process the data? Thank you.
Python long running process
1.2
0
0
338
13,404,538
2012-11-15T19:16:00.000
1
0
0
0
python,sockets,tcp,epoll
13,404,886
1
true
0
0
ECONNREFUSED signals that the connection was refused by the server, while ETIMEDOUT signals that the attempt to connect timed out, i.e. that no indication (positive or negative) about the connection attempt was received from the peer. socket.recv() returning an empty string without error is simply the EOF condition, corresponding to an empty read in C. This happens when the other side closes the connection, or shuts it down for writing. It is normal that EPOLLIN is notified when EOF occurs, because you want to know about the EOF (and because you can safely recv from the socket without it hanging).
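The EOF behavior in point 2 can be seen with a socketpair, no server needed: once the peer closes, recv() keeps returning empty bytes with no exception.

```python
import socket

a, b = socket.socketpair()
b.sendall(b"bye")
b.close()                 # peer closes the connection

first = a.recv(4096)      # the data that was still buffered
second = a.recv(4096)     # EOF: empty bytes, no exception
third = a.recv(4096)      # still EOF, still empty
a.close()
```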
1
1
0
I am doing some stress test to a simple HTTP redirect server in Python scripts. The script is set up with epoll (edge trigger mode) with non-blocking sockets. But I observed something that I don't quite understand, 1) epoll can get both ECONNREFUSED and ETIMEOUT errno while the connect() is in process. Don't they both indicates the remote server can't accept the connection? How are they different, how does a client tell the difference? 2) sometimes when EPOLLIN is notified by epoll, socket.recv() returns empty string without any exception thrown (or errno in C), I can keep reading the socket without getting any exception or error, it just always returns empty string. Why is that? Thanks,
TCP non-blocking socket.connect() and socket.recv() Error questions. (Python or C)
1.2
0
1
1,087
13,405,277
2012-11-15T20:05:00.000
1
0
0
1
python,usb,barcode,stdin
13,405,792
1
true
0
0
The normal way of intercepting standard input in Unix is pipes and multiple processes. If you have a multi-process application, then one process can receive the "raw" standard input, capture barcode input, and pass on the rest to its standard output. That output would then be the standard input of your UI process which would only receive non-barcode data. To set this up initially, have a single launch process that sets up the pipes, starts the other two processes, and exits. If you're new to these concepts, you have a long and interesting learning process ahead of you :-) All this assumes that you really are receiving "keyboard" data through standard input, and not through X11 events as you seem to imply. If you are developing within X11 (or GTK, etc.) then what I have described will almost certainly not work.
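A minimal sketch of the filtering logic the middle process in that pipe chain would run; the 4-character prefix "BARC" is a made-up placeholder for whatever prefix the scanner is actually configured with:

```python
PREFIX = "BARC"  # hypothetical 4-character prefix marking scanner input

def split_input(lines):
    """Separate barcode scans from ordinary keyboard input.

    Returns (barcodes, passthrough): the barcode payloads with the prefix
    stripped, and the lines that should be forwarded downstream untouched.
    """
    barcodes, passthrough = [], []
    for line in lines:
        if line.startswith(PREFIX):
            barcodes.append(line[len(PREFIX):])
        else:
            passthrough.append(line)
    return barcodes, passthrough

# As the middle process, you would call
# split_input(sys.stdin.read().splitlines()), keep the barcodes for
# yourself, and write "\n".join(passthrough) to stdout for the UI process.
codes, rest = split_input(["BARC123456789012", "hello world"])
```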
1
1
0
I'm trying to set up a barcode scanner object that will capture anything input from the scanner itself. The barcode scanner is recognized as a standard input (stdin) and therefore whenever I scan a barcode, I get standard input text. There will also be a keyboard attached to the system, which is another standard input. In order to differentiate between a barcode scan input and keyboard input, I will be using a prefix for any barcode information. In other words, if my barcodes will be 16 characters in length total, the first 4 would a predetermined character string/key to indicate that the following 12 characters are barcode inputs. This is pretty standard from what I've read. Now most examples I've seen will recognize the barcode input by capturing the character input event in a GUI application. This event callback method then builds a buffer to check for the 4 character prefix and redirects barcode input as necessary. The event callback method also will skip any character input events that are not barcode related and allow them to interact with the GUI as a standard input normally would (type into a text box or what have you). I want to do this same thing except without using a GUI application. I want my barcode scanner object to be independent of the GUI application. Ideally I would have a callback method, within the barcode scanner object, that stdin would call every time a character is input. From there I would grab any barcode input by checking for the 4 character prefix and would pass along any characters not apart of the barcode input. So in other words, I'd like stdin to pipe through my barcode scanner callback method, and then have my barcode scanner call back method be able to pipe non barcode characters back out as a standard input as though nothing had happened (still standard input that would go to a text box or something). Is this possible without a while loop constantly monitoring stdin? 
Even if I had a while loop monitoring stdin, how would I pump characters back out as stdin if they weren't barcode input? I looked into using pyusb to take over the barcode scanner's USB interface, but this requires root privileges to interact with the hardware (not an option for my project). Any help would be greatly appreciated. I have not been able to find an example of this yet. Edit: This project will be run in CentOS or some flavor of Linux.
Python redirect stdin
1.2
0
0
1,168
13,405,501
2012-11-15T20:19:00.000
1
0
1
0
python,berkeley-db,leveldb,kyotocabinet
13,406,125
2
false
0
0
You said you have lots of disk, huh? One option would be to store the strings as filenames in nested subdirectories. You could either use the strings directly: Store "drew sears" in d/r/e/w/ sears or by taking a hash of the string and following a similar process: MD5('drew sears') = 'f010fe6e20d12ed895c10b93b2f81c6e' Create an empty file named f0/10/fe/6e/20d12ed895c10b93b2f81c6e. Think of it as an OS-optimized, hash-table based indexed NoSQL database. Side benefits: You could change your mind at a later time and store data in the file. You can replicate your database to another system using rsync.
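A rough sketch of that idea in Python: hash each string, split the digest into nested directory components, and touch an empty marker file. The depth/width split of the digest here is an arbitrary choice, not anything mandated:

```python
import hashlib
import os

def key_path(root, s, depth=4, width=2):
    """Map a string to a nested path under `root` via its MD5 hex digest."""
    h = hashlib.md5(s.encode("utf-8")).hexdigest()
    parts = [h[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, h[depth * width:])

def add(root, s):
    """Record the string's existence as an empty marker file."""
    path = key_path(root, s)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "a").close()

def contains(root, s):
    """Existence check is just a stat() on the derived path."""
    return os.path.exists(key_path(root, s))
```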
1
6
0
I have a set of 100+ million strings, each up to 63 characters long. I have lots of disk space and very little memory (512 MB). I need to query for existence alone, and store no additional metadata. My de facto solution is a BDB btree. Are there any preferable alternatives? I'm aware of leveldb and Kyoto Cabinet, but not familiar enough to identify advantages.
Efficient way to check existence in a large set of strings
0.099668
0
0
966
13,405,956
2012-11-15T20:49:00.000
0
0
0
0
python,numpy,scipy,python-imaging-library,color-space
60,718,937
5
false
0
0
At the moment I haven't found a good package to do that. Bear in mind that RGB is a device-dependent colour space, so you can't convert accurately to XYZ or CIE Lab if you don't have a profile. So be aware that the many solutions that convert from RGB to CIE Lab without specifying the colour space or importing a colour profile must be evaluated carefully: take a look at the code under the hood, and most of the time you'll find they assume you are dealing with the sRGB colour space.
1
49
1
What is the preferred way of doing the conversion using PIL/Numpy/SciPy today?
Convert an image RGB->Lab with python
0
0
0
55,003
13,407,253
2012-11-15T22:19:00.000
0
0
1
0
python,list,dictionary,tuples
13,407,315
3
false
0
0
([1,2]:0, [2,3]:0) is not a dictionary, and lists can't be dictionary keys anyway, because they are mutable and therefore unhashable. I think you meant to use tuples: {(1, 2): 0, (2, 3): 0}
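For example, converting the inner lists to tuples makes them hashable and therefore usable as keys:

```python
# Lists are unhashable, so convert each inner list to a tuple first.
pairs = [[1, 2], [2, 3]]
d = {tuple(p): 0 for p in pairs}    # {(1, 2): 0, (2, 3): 0}

# Lookup works with a tuple literal, or by converting a list on the fly.
value = d[tuple([1, 2])]
```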
1
0
0
I need to make a dictionary where you can reference [[1,2],[3,4]] --> ([1,2]:0, [2,3]:0) I've tried different ways but I can't use a list in a dictionary. So i tried using tuples, but its still the same. Any help is appreciated!
How do I make a list a key value in a dictionary?
0
0
0
86
13,407,554
2012-11-15T22:44:00.000
5
0
0
0
python,django,amazon-s3,amazon-web-services,amazon-ec2
13,407,676
1
true
1
0
You just need to find out where your code is located on the server. SSH to one of the instances and then you can use the python interactive shell to run your django code for debugging, use the manage.py commands for database debugging, tests etc. Once you have connected to the instance, it's just an OS.
1
3
0
I am new to web development. This is probably a dumb question but I could not quite find exact answer or tutorial that could help me. The company I am working at has its site(which is built in python django )hosted on amazon EC2. I want to know where I can start about debugging this production site and check logs and databases that are stored there. I have the account information but is there anyway I can access all the stuff using command line(like an ubuntu shell) or tutorial for the same ?
How can I debug python web site on amazon EC2?
1.2
0
0
743
13,408,685
2012-11-16T00:32:00.000
0
0
0
0
python,django,macos
13,447,213
1
false
1
0
Rule of thumb: if an app's documentation doesn't explain how to install (use, etc.) the app, then it's better to forget about using that app. How can you rely on an app that hasn't been updated in five months, isn't tested and isn't well documented? There should be a better solution.
1
0
0
sorry I'm a total noob but I can't find anywhere that actually explains this. I want to make a web blog, and I figured instead of rolling my own I would use a pre-made one, and I picked the blog from the basic apps project (https://github.com/nathanborror/django-basic-apps). I installed everything fine, added the apps to my settings file, synced the DBs, etc. But now I don't know what to do. How do I actually use the blog? When I run the test server it says I have to do manage.py startapp but I already have the app folder. What should I do? Again, sorry for the noob question. Best, Jake
How do I use a pre-made app with my project?
0
0
0
109
13,409,324
2012-11-16T01:59:00.000
2
0
1
0
python,python-3.x
13,409,456
3
false
0
0
f = open(path, 'a'): the first parameter is the path of the file, the second is the mode: 'a' is append, 'w' is write, 'r' is read, and so on. In my opinion you can use f.write(','.join(my_list) + '\n') to write one line per iteration of the loop; alternatively you can use f.writelines(lines), which also works.
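A minimal sketch of both parts of the question: appending each cycle's list as one line to a file opened in append mode, and collecting the per-cycle lists into a list of lists (the temporary path is just for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "results.txt")

all_results = []                        # list-of-lists across cycles
for cycle in range(3):
    entry = ["n1", "n2", "n3"]          # whatever this cycle produced
    all_results.append(entry)
    with open(path, "a") as f:          # 'a' appends instead of overwriting
        f.write(",".join(entry) + "\n")

with open(path) as f:
    lines = f.read().splitlines()       # one line per cycle
```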
1
1
0
In Python: Let's say I have a loop, during each cycle of which I produce a list with the following format: ['n1','n2','n3'] After each cycle I would like to write to append the produced entry to a file (which contains all the outputs from the previous cycles). How can I do that? Also, is there a way to make a list whose entries are the outputs of this cycle? i.e. [[],[],[]] where each internal []=['n1','n2','n3] etc
How can I append to the new line of a file while using write()?
0.132549
0
0
2,858
13,415,660
2012-11-16T11:26:00.000
2
1
0
0
python,compiler-construction,code-generation,llvm,converter
13,416,469
1
true
1
0
LLVM up to 3.0 provided a C backend (see lib/Target/CBackend) which should be a good starting point for implementing a simple Python code generator.
1
6
0
Is there any tool to convert the LLVM IR code to Python code? I know it is possible to convert it to Javascript (https://github.com/kripken/emscripten/wiki), to Java (http://da.vidr.cc/projects/lljvm/) and I would love to convert it to Python also. Additionaly if such tool does not exist, could you provide any information, what is the best tool to base on (maybe I should extend the emscripten with other language - Javascript and Python are similar to each other in some terms ;) )
LLVM IR to Python Compiler
1.2
0
0
1,835
13,417,553
2012-11-16T13:26:00.000
1
0
1
0
python,user-interface,wxpython,auto-update
13,418,479
2
false
0
1
You would want to use threads. wxPython provides several thread-safe methods such as wx.CallAfter, wx.CallLater and wx.PostEvent. You can combine those with pubsub to send messages too! In your case, I think passing a wx method to wx.CallAfter with whatever text you wish to display would work just fine.
1
1
0
I have got a python function that takes a few parameters and executes a few tasks (let's call it theFunc). While theFunc runs, internal variable resultingMessage is getting more and more lines, and I show it on GUI at the end of the execution. Ideally, I would like resultingMessage to be shown and updated on GUI as the execution of theFunc happens, not after it has finished, and to do it without polluting theFunc with GUI stuff (it is complicated enough already). What would be the best way to do it? Two possible ways I could think of (both are quite far-fetch): Have two threads: one executed theFunc and writes resultingMessage in a variable/file, the other checks this variable/file and updates GUI; Instead of appending lines to resultingMessage, attach some "stream" to GUI element; adding lines to the "stream" would update GUI. I guess there should be a conventional way to do it. Environment: CPython 2.7, Win XP, WXPython
Updating GUI while execution a python function while keeping this function GUI-independent
0.099668
0
0
487
13,419,659
2012-11-16T15:33:00.000
1
0
1
0
python,pycharm
61,092,990
2
false
0
0
To have the Watches pane displayed separately and view the configured watches in it, release the Show watches in the Variables tab toggle button (eyeglasses button) on the toolbar of the Variables pane. By default, the button is pressed.
1
7
0
I'm somewhat new to Pycharm, and I suppose this should be an easy question, but I'm not finding the answer anywhere... The Pycharm documentation has instructions for adding/editing items in the Watches pane, but the documentation assumes the Watches pane is already open, so it skips the step on how to open/access it. Does anyone know where I can find / how I can open the Watches pane?
Pycharm - How do I access the "Watches" pane?
0.099668
0
0
3,492
13,419,734
2012-11-16T15:38:00.000
0
0
0
0
python,web-scraping,scrapy,scrapyd
15,516,604
2
false
1
0
What about using the same sqlite database? The dbs_dir is set in scrapyd.script._get_config().
1
2
0
I have been searching for documentation on the Scrapyd Service but it is very slim. I was wondering if anyone has any idea how to set up multiple Scrapyd servers that point to the same schedule queue?
How to run multiple scrapyd servers?
0
0
0
1,381
13,419,822
2012-11-16T15:43:00.000
41
0
0
0
python,pandas
13,420,016
1
true
0
0
Python passes object references ("call by sharing"): a function receives a reference to the very same object, so in-place changes to a mutable argument such as a DataFrame are visible to the caller; there is no pass-by-value of objects. If you want an explicit, independent copy of a pandas object, try new_frame = frame.copy().
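A plain-list illustration of call by sharing, which is exactly why frame.copy() is needed to get an independent DataFrame:

```python
import copy

def zero_out(seq):
    # `seq` is a reference to the very same object the caller holds,
    # so in-place mutation is visible outside the function.
    for i in range(len(seq)):
        seq[i] = 0

data = [1, 2, 3]
zero_out(data)                # mutates the caller's list in place
mutated = list(data)          # [0, 0, 0]

data = [1, 2, 3]
zero_out(copy.copy(data))     # only the copy is mutated
untouched = list(data)        # [1, 2, 3]
```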
1
19
1
I noticed a bug in my program and the reason it is happening is because it seems that pandas is copying by reference a pandas dataframe instead of by value. I know immutable objects will always be passed by reference but pandas dataframe is not immutable so I do not see why it is passing by reference. Can anyone provide some information? Thanks! Andrew
pandas dataframe, copy by value
1.2
0
0
17,836
13,420,405
2012-11-16T16:20:00.000
3
0
0
0
python,file,networking
13,420,569
1
true
0
0
The fileno() function needs to return a kernel file descriptor, so that it can be passed to the select system call (or poll/epoll/whatever). The multiplexing done by select-like operations is fundamentally an OS operation which must work on OS objects. If you want to implement this for an object not based on an actual file descriptor you can do the following: Create a pipe Return the read end of the pipe from fileno() Write a byte to the other end when you want to mark your object as "ready". This will wake any select calls. Remember to read that byte from your real "read" implementation. This pipe trick should be fairly portable.
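A minimal POSIX sketch of the pipe trick: select() sees the object as readable only after a wake-up byte has been written to the other end of the pipe:

```python
import os
import select

class ReadyFlag:
    """File-like object whose select() readiness can be toggled in code."""

    def __init__(self):
        self._r, self._w = os.pipe()

    def fileno(self):
        # select() needs a real kernel descriptor: hand it the read end.
        return self._r

    def set_ready(self):
        os.write(self._w, b"\x00")   # wakes any select() waiting on us

    def clear(self):
        os.read(self._r, 1)          # consume the wake-up byte again

flag = ReadyFlag()
r, _, _ = select.select([flag], [], [], 0)
before = bool(r)                      # not ready yet

flag.set_ready()
r, _, _ = select.select([flag], [], [], 0)
after = bool(r)                       # ready now
```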
1
1
0
I'm trying to write a class with the same interface of Python 2 Standard Library's socket.socket. I've problems trying to reproduce the behavior the object should have when a program tries to call select.select() on it. The documentation in the entry for select.select says: You may also define a wrapper class yourself, as long as it has an appropriate fileno() method (that really returns a file descriptor, not just a random integer). I would like to try something like this: creating a file-like object that can be controlled by a thread of my program with select, while another thread of my program can set it when my object is ready for reading and writing. How can I do it?
Imitating the behavior of fileno() and select
1.2
0
1
486
13,420,413
2012-11-16T16:20:00.000
4
0
1
0
python,pip
13,420,616
4
true
0
0
For me, on an OS X 10.5, the default installation of Python is soft-linked to /Library/Frameworks/Python.framework/Versions/Current/, and ipython is at /Library/Frameworks/Python.framework/Versions/Current/bin/ipython. Make sure the bin directory it's sitting in is in your shell PATH. Type: $ export PATH=/Library/Frameworks/Python.framework/Versions/Current/bin:$PATH , then try running ipython. If that works, edit your ~/.profile to update your PATH permanently.
4
4
0
I'm using Mac OS X 10.6.8 and python 2.7.3. I'm trying to use pip to install ipython, and running sudo pip install ipython installs successfully, but when I try to run ipython I get "command not found". I cannot find where pip installs packages, or why it's not linking correctly. I'm very new with this, please help!
Cannot find packages installed with pip
1.2
0
0
7,228
13,420,413
2012-11-16T16:20:00.000
5
0
1
0
python,pip
28,154,764
4
false
0
0
I had this problem and fixed it by restarting my terminal.
4
4
0
I'm using Mac OS X 10.6.8 and python 2.7.3. I'm trying to use pip to install ipython, and running sudo pip install ipython installs successfully, but when I try to run ipython I get "command not found". I cannot find where pip installs packages, or why it's not linking correctly. I'm very new with this, please help!
Cannot find packages installed with pip
0.244919
0
0
7,228
13,420,413
2012-11-16T16:20:00.000
4
0
1
0
python,pip
30,338,172
4
false
0
0
I fixed this problem by using sudo, which installs ipython into the system folders: pip uninstall ipython and then sudo pip install ipython
4
4
0
I'm using Mac OS X 10.6.8 and python 2.7.3. I'm trying to use pip to install ipython, and running sudo pip install ipython installs successfully, but when I try to run ipython I get "command not found". I cannot find where pip installs packages, or why it's not linking correctly. I'm very new with this, please help!
Cannot find packages installed with pip
0.197375
0
0
7,228
13,420,413
2012-11-16T16:20:00.000
0
0
1
0
python,pip
43,588,624
4
false
0
0
Similar to acjay, for me worked with: /Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/ adding to path (10.12 Sierra).
4
4
0
I'm using Mac OS X 10.6.8 and python 2.7.3. I'm trying to use pip to install ipython, and running sudo pip install ipython installs successfully, but when I try to run ipython I get "command not found". I cannot find where pip installs packages, or why it's not linking correctly. I'm very new with this, please help!
Cannot find packages installed with pip
0
0
0
7,228
13,424,753
2012-11-16T21:35:00.000
1
0
0
0
python
13,424,798
4
false
0
0
Open a pool of threads, open a URL in each, wait for a 200 or a 404. Rinse and repeat.
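One possible sketch using a thread pool (Python 3's concurrent.futures; the fetch parameter is injectable purely so the logic can be exercised without hitting the network):

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def default_fetch(url, timeout=5):
    """Return the HTTP status code for `url`, or None if there was no reply."""
    try:
        return urlopen(url, timeout=timeout).getcode()
    except HTTPError as e:
        return e.code                 # 404 etc. still carries a status code
    except URLError:
        return None                   # DNS failure, connection refused, ...

def check_urls(urls, fetch=default_fetch, workers=20):
    """Map each URL to True (status < 400), False (error status) or None."""
    def probe(url):
        code = fetch(url)
        return url, None if code is None else code < 400
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(probe, urls))
```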
1
1
0
Suppose I was giving this list of urls: website.com/thispage website.com/thatpage website.com/thispageagain website.com/thatpageagain website.com/morepages ... could possibly be over say 1k urls. What is the best/easiest way to kinda loop through this list and check whether or not the page is up?
Given a big list of urls, what is a way to check which are active/inactive?
0.049958
0
1
2,657
13,424,875
2012-11-16T21:45:00.000
0
0
0
0
python,django
13,425,827
4
false
1
0
You can use python manage.py shell, import the views you want, and then use dir(), for example dir(TemplateView). You can also read the source code, or use the help() function for a quick overview, for example help(TemplateView).
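For example, a quick way to list a class's public interface from the shell (using the built-in dict here as a stand-in for a class-based view, so the snippet runs anywhere):

```python
def public_methods(cls):
    """Return the sorted names of a class's public, callable attributes."""
    return sorted(
        name for name in dir(cls)
        if not name.startswith("_") and callable(getattr(cls, name))
    )

# In a Django shell you would pass TemplateView instead of dict.
methods = public_methods(dict)
```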
2
0
0
I read the documentation and its kind of vague when it comes to outlining the methods/parameters/properties available to Class based views, is there a list of some website that provides such as list anywhere?
List of methods and parameters available to Django's class based views?
0
0
0
90
13,424,875
2012-11-16T21:45:00.000
1
0
0
0
python,django
13,424,901
4
false
1
0
You should use your python manage.py shell and simply import your views and use dir(my_view) and help(my_view)
2
0
0
I read the documentation and its kind of vague when it comes to outlining the methods/parameters/properties available to Class based views, is there a list of some website that provides such as list anywhere?
List of methods and parameters available to Django's class based views?
0.049958
0
0
90
13,426,374
2012-11-17T00:25:00.000
0
0
0
1
python,c,zeromq,pyzmq
13,487,196
1
false
0
0
It's not a race condition in zmq, and not a problem with zmq_connect. That extra 0x01 byte is presumably at fault. If you are passing that to zmq_connect, what result do you expect except EINVAL? So where does that extra byte come from? Do you get it on all messages sent between two peers? What are you doing different in this program? Since you haven't provided source code it's hard to offer any more detailed advice than this.
1
0
0
I got a C-ZMQ client that receiving two random ports (from pyzmq server) and then connecting to them. Usually, everything is working, but sometimes the 2nd connect fail with errno set to EINVAL. (Even when I switched between the connect calls the 2nd still failed). The port number is fine and it looks like some kind of race condition in ZeroMQ. Anyone know how can I solve this problem? [EDIT]: The server sends the ports in this structure "port1:port2" for example "1234:1235" the hexdump of the packet on the server is 31 32 33 34 3a 31 32 33 35 and on the client is 31 32 33 34 3a 31 32 33 35 01 and because the extra byte the 2nd connect fails... Maybe this is some kind of compatibility bug between pyzmq and zmq I'm using zmq ver 2.2.0
ZeroMQ 2nd connection fail with einval
0
0
0
411
13,427,477
2012-11-17T03:45:00.000
1
0
0
1
python,google-app-engine,http-post,http-get
13,427,499
1
true
1
0
Links inherently generate GET requests. If you want to generate a POST request, you'd need to either: Use a form with method="POST" and submit it, or Use AJAX to load the new page.
1
1
0
I'm trying to pass a variable from one page to another using google app engine, I know how to pass it using GET put putting it in the URL. But I would like to keep the URL clean, and I might need to pass a larger amount of data, so how can pass info using post. To illustrate, I have a page with a series of links, each goes to /viewTaskGroup.html, and I want to pass the name of the group I want to view based on which link they click (so I can search and get it back and display it), but I'd rather not use GET if possible. I didn't think any code is required, but if you need any I'm happy to provide any needed.
Pass data in google app engine using POST
1.2
0
0
297
13,428,313
2012-11-17T06:31:00.000
3
0
1
0
python
13,428,441
2
false
0
0
Just examine the closed attribute of the file object.
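For example, a small guard that flushes only while the file is still open (flushing after close() raises ValueError):

```python
import tempfile

def safe_flush(f):
    """Flush `f` only if it is still open; return whether a flush happened."""
    if not f.closed:
        f.flush()
        return True
    return False

f = tempfile.TemporaryFile()
flushed_open = safe_flush(f)     # file still open, flush goes through
f.close()
flushed_closed = safe_flush(f)   # close() already happened, flush skipped
```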
1
2
0
How do I add a check to avoid flushing a file f with f.flush() when some function has already done f.close()? I can't seem to figure out how to do so :/
Python Flushing and Already Closed File
0.291313
0
0
421
13,432,635
2012-11-17T16:54:00.000
1
0
1
0
python,list,dictionary
13,432,674
3
false
0
0
On Linux: A nice method using grep to filter out any words containing apostrophes in the words file and save to mywords.txt in your home directory. grep "^[^']*$" /usr/share/dict/words > ~/mywords.txt No need to install, download or write any code! On OS X: Even easier as /usr/share/dict/words contains no words with apostrophes in already.
1
0
0
I am looking for a dictionary file containing only words without apostrophes. I cant seem to find one! Does anyone know how where I can find one, if not how could I eliminate those words from the file using Python?
List of Dictionary words without apostrophes
0.066568
0
0
1,178
13,432,800
2012-11-17T17:14:00.000
3
1
0
0
c++,python,performance,opencv
66,955,473
5
false
0
0
Why choose? If you know both Python and C++, use Python for research using Jupyter Notebooks and then use C++ for implementation. The Python stack of Jupyter, OpenCV (cv2) and Numpy provide for fast prototyping. Porting the code to C++ is usually quite straight-forward.
2
92
1
I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. Is there any know can comment about performance and coding differences between these two perspectives?
Does performance differ between Python or C++ coding of OpenCV?
0.119427
0
0
76,398
13,432,800
2012-11-17T17:14:00.000
6
1
0
0
c++,python,performance,opencv
13,432,830
5
false
0
0
You're right, Python is almost always significantly slower than C++, as it requires an interpreter, which C++ does not. On the other hand, C++ is statically typed and compiled, so many mistakes are caught at compile time, which leaves a much smaller margin for error. Some people prefer being made to code strictly, whereas others enjoy Python's inherent leniency. If you want a full discourse on Python coding styles vs. C++ coding styles, this is not the best place; try finding an article. EDIT: Because Python is an interpreted language, while C++ is compiled down to machine code, generally speaking you can obtain performance advantages using C++. However, with regard to using OpenCV, the core OpenCV libraries are already compiled down to machine code, so the Python wrapper around the OpenCV library is executing compiled code. In other words, when it comes to executing computationally expensive OpenCV algorithms from Python, you're not going to see much of a performance hit, since they've already been compiled for the specific architecture you're working with.
2
92
1
I aim to start opencv little by little but first I need to decide which API of OpenCV is more useful. I predict that Python implementation is shorter but running time will be more dense and slow compared to the native C++ implementations. Is there any know can comment about performance and coding differences between these two perspectives?
Does performance differ between Python or C++ coding of OpenCV?
1
0
0
76,398
13,433,597
2012-11-17T18:42:00.000
5
0
0
1
python,linux,streaming,video-streaming,audio-streaming
13,435,380
2
true
1
0
A good start for trying different options is to use vlc (http://www.videolan.org) Its file->transmit menu command opens a wizard with which you can play. Another good one is gstreamer, (http://www.gstreamer.net), the gst-launch program in particular, which allows you to build pipelines from the command line.
1
2
0
I've been trying for a while but struggling. I have two projects: Stream audio to server for distribution over the web Stream audio and video from a webcam to a server for distribution over the web. I have thus far tried ffmpeg and ffserver, PulseAudio, mjpegstreamer (I got this working but no audio) and IceCast all with little luck. While I'm sure this is likely my fault, I was wondering if there are any more option? I've spent a while experimenting with Linux options and was also wondering if there were options with Python having recently played with OpenCV. If anyone can suggest more options to look into Python or Linux based it would be much appreciated or point me at some good tutorials or explainations of what I've already used it would be much appreciated.
Streaming audio and video
1.2
0
0
1,745
13,434,664
2012-11-17T20:52:00.000
0
0
0
0
python,web-scraping,casperjs
13,437,094
5
false
1
0
Because you mentioned CasperJS, I assume the web site generates some data using JavaScript. My suggestion would be to check out WebKit. It is a browser "engine" that will let you do whatever you want with the web site. You can use the PyQt4 framework, which is very good and has good documentation.
1
2
0
So I am trying to scrape something that is behind a login system. I tried using CasperJS, but am having issues with the form, so maybe that is not the way to go; I checked the source code of the site and the form name is "theform" but I can never login must be doing something wrong. Does any have any tutorials on how to do this correctly using CasperJS, I've looked at the API and google and nothing really works. Or does someone have any recommendations on how to do web scraping easily. I have to be able to check a simple conditional state and click a few buttons, that is all.
Web scraping - web login issue
0
0
1
684
13,436,032
2012-11-17T23:55:00.000
0
0
0
0
python,colors,cluster-computing,k-means
13,436,279
1
true
0
0
You can use vector quantisation: make a list of each pixel and its adjacent pixels in the x+1 and y+1 directions, take the differences and plot them along a diagonal. Then you can calculate a Voronoi diagram, get the mean colour, and compute a feature vector. It's a bit more effective than using a simple grid-based mean colour.
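For the k-means++ seeding step the question actually asks about (not covered above), the usual recipe is D^2-weighted sampling: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen, so there is no fixed "minimal distance" to define. A minimal 1-D sketch:

```python
import random

def kmeans_pp_seeds(points, k, rng=random):
    """Pick k initial centers by D^2-weighted sampling (k-means++), 1-D case."""
    centers = [rng.choice(points)]        # first center: uniform at random
    while len(centers) < k:
        # Squared distance from each point to its nearest chosen center.
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        # Draw the next center with probability proportional to d2.
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
        else:
            centers.append(points[-1])    # floating-point edge case
    return centers
```

For real images, points would be RGB triples and the squared distance the usual Euclidean one; the sampling logic stays the same.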
1
0
1
My goal was to get the most frequent color in an image, so I implemented a k-means algorithm. The algorithm works well, but the result is not the one I was expecting. So now I'm trying to make some improvements; the first I thought of was to implement k-means++, to get a better position for the initial cluster centers. First I select a random point, but how can I select the others? I mean, how do I define the minimal distance between them? Any help with this? Thanks
K-Means plus plus implementation
1.2
0
0
1,239
13,438,920
2012-11-18T09:23:00.000
0
0
0
0
django,python-2.7,django-views
13,439,337
1
true
1
0
A Django process is loaded once and remains active to handle incoming requests. So if you define the list as a global variable, it stays in RAM and all is fine. It is discouraged to manipulate the list though.
1
1
0
I have a very big python list ( ~ 1M strings) defined in a .py file. I import it in my views.py to access the list in my views. My question is does the list gets loaded in RAM for every user coming to the web app, or does it loads just one single time and is used for all users ?
Understanding imports in views.py - Django
1.2
0
0
95
13,440,079
2012-11-18T12:28:00.000
0
0
0
0
python,operating-system
13,440,101
2
true
1
0
The access time for an individual file is not affected by the quantity of files in the same directory. Running ls -l on a directory with more files in it will take longer, of course, as will viewing that directory in the file browser. It might be easier to work with these images if you store them in a subdirectory named after the user, but that just depends on what you are going to do with them; there is no technical reason to do so. Think about it like this: the full path to the image file (/srv/site/images/my_pony.jpg) is the actual address of the file. Your web server process looks there, and returns any data it finds, or a 404 if there is nothing. What it doesn't do is list all the files in /srv/site/images and look through that list to see if it contains an item called my_pony.jpg.
1
0
0
My website users can upload image files, which then need to be found whenever they are to be displayed on a page (using src = ""). Currently, I put all images into one directory. What if there are many files - is it slow to find the right file? Are they indexed? Should I create subdirectories instead? I use Python/Django. Everything is on webfaction.
Python/Django: how to get files fastest (based on path and name)
1.2
0
0
57
13,442,981
2012-11-18T18:08:00.000
0
0
1
0
python,data-structures
13,443,464
3
false
0
0
Don't subclass list, proxy it: override __getattribute__ to pass all calls through to the proxied list and then check your constraints.
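A sketch of such a proxy: it forwards the named mutator methods through a generic check and rolls back any change that breaks the constraint. (Slicing assignment, +=, and similar operator forms would additionally need the corresponding dunder methods.)

```python
class ConstrainedList:
    """Proxy around a plain list that re-checks a constraint after mutations."""

    MUTATORS = {"append", "extend", "insert", "remove", "pop", "sort", "reverse"}

    def __init__(self, constraint, items=()):
        self._items = list(items)
        self._constraint = constraint          # callable: list -> bool
        if not constraint(self._items):
            raise ValueError("initial contents violate constraint")

    def __getattr__(self, name):
        attr = getattr(self._items, name)      # delegate to the real list
        if name not in self.MUTATORS:
            return attr                        # read-only methods pass through

        def checked(*args, **kwargs):
            snapshot = list(self._items)
            result = attr(*args, **kwargs)
            if not self._constraint(self._items):
                self._items[:] = snapshot      # roll back the offending change
                raise ValueError("operation violates constraint")
            return result

        return checked

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        return iter(self._items)
```

For example, the "if C is included, it must come last" rule is just a constraint callable: lambda xs: "C" not in xs or xs[-1] == "C".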
1
10
0
I need a python list object which, upon insert, automatically checks for certain constraints of the form: "A must always come before B" or "If C is included, it must always come last". What's the easiest/fastest way to go about implementing this. The obvious approach is to override all the methods of the list data type which alter its contents (append, extend, insert, etc), and verify that constraints still hold after the operation. It's just that this is pretty tedious as there are a lot of these methods. Is there an easier way?
Implement a python list with constraints
0
0
0
1,786
13,444,341
2012-11-18T20:41:00.000
0
1
0
0
python,svn
13,445,099
4
false
0
0
You should use direct repository access via file://; this is comparable to a slow HD (if you have a fast CPU and HD). Use the svn bindings for your respective scripting language; do not rely on XML parsing, as it is much slower. Do not read the whole tree out; instead maintain a navigational hierarchy and read directories on demand. Usually, if you read the whole hierarchy, you end up with hundreds or thousands of directories in the deeper levels which are usually not interesting for your application, so you can omit them and display them on demand (if the user browses that deep). If you are doing svn access modifications, use the entries in your access file to get to know your important directories beforehand.
2
2
0
I am working on a repository management system for my university that will provide a gui for modifying permissions to individual folders in a subversion repository, and make it easy for professors to add directories for students and TA's, with the appropriate permissions. In order to make this work, I need to be able to retrieve the directory structure of an existing svn repository, and present it on the web. I have looked at several methods, and was wondering if anyone had other ideas, or suggestions. Some things I have looked at: Every hour, run a script that runs 'svn ls -R --xml' on all of the repositories and populates a mysql database Positive: Fast page loads afterwards Doesn't take a lot of disk space Easy to manage permission, i.e. the website doesn't need to touch svn directly at all Negative: Really slow on some of our more complicated repositories No 'live' updates Has to run whether there are changes or not On page load, run 'svn ls -R --xml' and retrieve only the directory I need to render the current page Positive: updates live no cron job to tie up the server Negative: website is slow as molasses webserver uses a lot more resources Directly read svn database Positive: Fast page loads live updates Negative: Difficult? I am very curious what alternatives there are that I have not seen or thought of, because I feel like any of these would be quite awful and inelegant in one way or another. Also I don't want to reinvent the wheel if it can be avoided. Thanks!
Quickly retrieving svn directory trees
0
0
0
292
13,444,341
2012-11-18T20:41:00.000
0
1
0
0
python,svn
13,444,407
4
false
0
0
I would opt for a database solution. The problem with reading a repository directly is simultaneous access. A database that stores the updates to the repository gives you a reliable source of information from which to retrieve its layout. I have done a fair amount of research during an internship, and using a database is almost always the fastest way to read the repo. To put it in pseudocode: read repository, load site, print layout, perform tasks, and repeat. Another option to consider is a database that tracks the layout of the repository independently. This way, you can be sure users won't clobber each other's updates, and it keeps the repository database safe from corruption.
2
2
0
I am working on a repository management system for my university that will provide a gui for modifying permissions to individual folders in a subversion repository, and make it easy for professors to add directories for students and TA's, with the appropriate permissions. In order to make this work, I need to be able to retrieve the directory structure of an existing svn repository, and present it on the web. I have looked at several methods, and was wondering if anyone had other ideas, or suggestions. Some things I have looked at: Every hour, run a script that runs 'svn ls -R --xml' on all of the repositories and populates a mysql database Positive: Fast page loads afterwards Doesn't take a lot of disk space Easy to manage permission, i.e. the website doesn't need to touch svn directly at all Negative: Really slow on some of our more complicated repositories No 'live' updates Has to run whether there are changes or not On page load, run 'svn ls -R --xml' and retrieve only the directory I need to render the current page Positive: updates live no cron job to tie up the server Negative: website is slow as molasses webserver uses a lot more resources Directly read svn database Positive: Fast page loads live updates Negative: Difficult? I am very curious what alternatives there are that I have not seen or thought of, because I feel like any of these would be quite awful and inelegant in one way or another. Also I don't want to reinvent the wheel if it can be avoided. Thanks!
Quickly retrieving svn directory trees
0
0
0
292
13,444,534
2012-11-18T21:01:00.000
1
0
0
0
android,python,django
21,954,459
10
false
1
0
Well, if your end goal is to develop web applications and host them on your Android device, and since you mentioned Flask, why not give bottle.py a shot? It's just one file that you copy into your SL4A scripts folder, and voilà. Bottle is minimalist and very similar to Flask. No rooting or Unix environment required.
1
14
0
I am a Django developer and wanted to know if anyone has any idea of the possibilities of installing and developing on Django using an Android tablet such as the nexus 7. This seems like a reasonably powerful device, can be hooked up with a bluetooth keyboard, and has linux at the core of the OS. So - is it possible to install Python and Django (or even Flask) on Android?
Python + Django on Android
0.019997
0
0
34,933
13,445,585
2012-11-18T23:08:00.000
0
0
0
0
python,search,scrapy,web-crawler
13,451,254
1
true
1
0
A different PROJECT for each site is the WORST idea. A different SPIDER for each site is a GOOD idea. If you can fit multiple sites into one SPIDER (based on their nature), that is the BEST idea. But again, it all depends on your requirements.
1
1
0
I have a lot of different sites I want to scrape using scrapy. I was wondering what is the best way of doing this? Do you use a different "project" for each site you want to scrape, or do you use a different "spider", or neither? Any input would be appreciated
Scrapy Python Crawler - Different Spider for each?
1.2
0
1
252
13,448,232
2012-11-19T05:35:00.000
2
1
0
0
java,python
13,448,289
5
false
0
0
If you are just learning an object-oriented programming language, then I suggest you start with Java, because if you don't understand the ideas behind object-oriented programming well, you will certainly lag behind. But if you have good experience with the underlying ideologies (i.e. structured or object-oriented programming), then it doesn't matter much whether you go with Java or Python. The basic concepts are the main thing you need to learn.
5
0
0
I have opportunity to study either JAVA or PYTHON. But I can't decide which to choose. I am already well versed with C++. Can you plz tell which one is better with our experience.
Which to choose Python or java
0.07983
0
0
194
13,448,232
2012-11-19T05:35:00.000
2
1
0
0
java,python
13,448,256
5
false
0
0
This is a really relative question and there is no "right" answer. I personally would go with Python, but I have already taken multiple Java classes. Python is fun and interesting, but Java has been around for a while and isn't going anywhere any time soon.
5
0
0
I have opportunity to study either JAVA or PYTHON. But I can't decide which to choose. I am already well versed with C++. Can you plz tell which one is better with our experience.
Which to choose Python or java
0.07983
0
0
194
13,448,232
2012-11-19T05:35:00.000
1
1
0
0
java,python
13,448,303
5
false
0
0
Start out with Python; use Python for your own hackish projects — no, rather: use it for web apps and rapid prototyping, where it's great. Learn Java later on and you'll enjoy it; learn it before Python and you won't appreciate the kind of OOP Java has to offer as much. This is from personal experience; again, as twodayslate mentioned, there is no "right" answer. I learnt both Python and Java on my own and use mainly Python for personal projects.
5
0
0
I have opportunity to study either JAVA or PYTHON. But I can't decide which to choose. I am already well versed with C++. Can you plz tell which one is better with our experience.
Which to choose Python or java
0.039979
0
0
194
13,448,232
2012-11-19T05:35:00.000
1
1
0
0
java,python
13,448,294
5
false
0
0
I feel there is not much to it as far as the language goes; it's just implementing the logic, and you can use anything to express that. But you have to keep in mind the drivers and libraries available for the language that you are selecting.
5
0
0
I have opportunity to study either JAVA or PYTHON. But I can't decide which to choose. I am already well versed with C++. Can you plz tell which one is better with our experience.
Which to choose Python or java
0.039979
0
0
194
13,448,232
2012-11-19T05:35:00.000
3
1
0
0
java,python
13,448,259
5
false
0
0
I'd say go for python. Its very easy to code.
5
0
0
I have opportunity to study either JAVA or PYTHON. But I can't decide which to choose. I am already well versed with C++. Can you plz tell which one is better with our experience.
Which to choose Python or java
0.119427
0
0
194
13,448,366
2012-11-19T05:49:00.000
1
0
0
0
python,google-app-engine,app-engine-ndb
21,716,718
3
true
1
0
If you have a small app then your data probably live on the same part of the same disk and you have one instance. You probably won't notice eventual consistency. As your app grows, you notice it more. Usually it takes milliseconds to reach consistency, but I've seen cases where it takes an hour or more. Generally, queries is where you notice it most. One way to reduce the impact is to query by keys only and then use ndb.get_multi() to load the entities. Fetching entities by keys ensures that you get the latest version of that entity. It doesn't guarantee that the keys list is strongly consistent, though. So you might get entities that don't match the query conditions, so loop through the entities and skip the ones that don't match. From what I've noticed, the pain of eventual consistency grows gradually as your app grows. At some point you do need to take it seriously and update the critical areas of your code to handle it.
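The keys-only-query-then-get_multi pattern described above can be sketched as follows. Since the real ndb API needs an App Engine runtime, this sketch simulates the datastore with plain dicts; the function names (query_active_keys, active_entities) are illustrative, but get_multi mirrors the shape of ndb.get_multi(), and the re-check-and-skip loop is the part the answer recommends.

```python
# Simulated "datastore": an authoritative table of entities by key, plus a
# possibly stale index that an eventually-consistent query would read from.
entities = {1: {"name": "a", "active": True},
            2: {"name": "b", "active": False},
            3: {"name": "c", "active": True}}
stale_index = [1, 2, 3]  # key 2 no longer matches, but the index lags behind

def query_active_keys():
    """Stand-in for a keys-only query: may return stale keys."""
    return list(stale_index)

def get_multi(keys):
    """Stand-in for ndb.get_multi(): lookups by key are strongly consistent."""
    return [entities.get(k) for k in keys]

def active_entities():
    keys = query_active_keys()
    # Re-check the query condition on the fresh entities and skip the
    # ones that no longer match (or were deleted entirely).
    return [e for e in get_multi(keys) if e is not None and e["active"]]

print([e["name"] for e in active_entities()])  # -> ['a', 'c']
```

The point of the pattern: the query may hand back stale keys, but because each get-by-key is strongly consistent, filtering the fetched entities removes the staleness from the final result.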
3
8
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
1.2
0
0
560
13,448,366
2012-11-19T05:49:00.000
0
0
0
0
python,google-app-engine,app-engine-ndb
13,457,830
3
false
1
0
The replication speed is going to be primarily server-workload-dependent. Typically on an unloaded system the replication delay is going to be milliseconds. But the idea of "eventually consistent" is that you need to write your app so that you don't rely on that; any replication delay needs to be allowable within the constraints of your application.
3
8
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
0
0
0
560
13,448,366
2012-11-19T05:49:00.000
0
0
0
0
python,google-app-engine,app-engine-ndb
13,457,661
3
false
1
0
What's the worst case if you get inconsistent results? Does a user see some unimportant info that's out of date? That's probably OK. Will you miscalculate something important, like the price of something, or the number of items in stock in a store? In that case, you would want to avoid that chance occurrence. From observation only, it seems like eventually consistent results show up more as your dataset gets larger, I suspect as your data is split across more tablets. Also, if you're reading your entities back with get() requests by key/id, the result will always be consistent; it is queries that are subject to eventual consistency.
3
8
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
0
0
0
560
13,450,878
2012-11-19T09:28:00.000
1
0
0
0
python,input,pygame
13,451,666
1
true
0
1
You can't do that, unless you run the input command in a different thread, but then you have to deal with synchronization (which might or might not be what you want to do). The way I'd implement this is a kind of in-game console. When a special key (e.g. '\') is pressed, you make the console appear, and while your application is in that state you interpret key presses not as in-game commands but as text, which you can draw in the console (using fonts). When a key (e.g. Return) is pressed, you make the console disappear and the keys take back their primary function. I did this for my pet project and it works like a charm. Plus, since you are developing in Python, you can accept Python instructions and use exec to execute them, editing your game on the fly.
1
2
0
In Pygame, how can I get graphical input(e.g. clicking exit button) and also get input from the a terminal window simultaneously? To give you context, my game has a GUI but gets its game commands from a "input()" command. How can I look for input from the command line while also handling graphics? I'm not sure if this is possible, but if not, what other options do I have for getting text input from the user? Thanks in advance.
Pygame: Graphical Input + Text Input
1.2
0
0
906
13,451,235
2012-11-19T09:48:00.000
1
1
0
1
python,zip,backup,unzip
13,451,351
4
false
0
0
Rsync will automatically detect and only copy modified files, but seeing as you want to bzip the results, you still need to detect whether anything has changed. How about outputting the directory listing (including timestamps) to a text file alongside your archive? The next time, diff the current directory structure against the stored text; you can grep the differences out and pipe the changed-file list to rsync to include those files.
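The snapshot idea above can be sketched in Python: record each file's relative path, size, and mtime, and only re-archive when the current snapshot differs from the one stored alongside the last backup. The helper names here (snapshot_dir, needs_backup) are illustrative, not from any library.

```python
import os

def snapshot_dir(root):
    """Map each file's path (relative to root) to its (size, mtime) pair."""
    snap = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            st = os.stat(full)
            snap[rel] = (st.st_size, int(st.st_mtime))
    return snap

def needs_backup(root, previous_snapshot):
    """True if any file was added, removed, resized, or touched."""
    return snapshot_dir(root) != previous_snapshot
```

Persist the snapshot (e.g. as JSON) next to the archive after each successful backup; comparing two dicts this way avoids ever unzipping the old archive just to check for changes.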
1
7
0
This is the scenario. I want to be able to backup the contents of a folder using a python script. However, I want my backups to be stored in a zipped format, possibly bz2. The problem comes from the fact that I don’t want to bother backing up the folder if the contents in the “current” folder are exactly the same as what is in my most recent backup. My process will be like this: Initiate backup Check contents of “current” folder against what is stored in the most recent zipped backup If same – then “complete” If different, then run backup, then “complete” Can anyone recomment the most reliable and simple way of completing step2? Do I have to unzip the contents of the backup and store in a temp directory to do a comparison or is there a more elegant way of doing this? Possibly to do with modified date?
How to elegantly compare zip folder contents to unzipped folder contents
0.049958
0
0
5,546
13,455,988
2012-11-19T14:37:00.000
2
0
0
0
python,html,beautifulsoup
13,456,177
1
true
1
0
No, there is no way to force the .prettify() method to not output XHTML-compliant HTML.
1
4
0
I'm trying to parse and prettify a bunch of files made with Microsoft FrontPage. Beautifulsoup parses them with no problem, but when I try to print the output with prettify(), tags like <meta> or <br> are rewritten as <meta ... /> and <br/>. Is there a way to force HTML output?
Beautifulsoup 4 prettify outputs XHTML, not HTML
1.2
0
0
1,423
13,458,191
2012-11-19T16:36:00.000
2
0
0
0
python,django,plugins,multilingual,django-hvad
13,459,152
1
true
1
0
You should do it like this: extract each translation into its own variable, put them in a list, and pass that list from the view to the template. Then you can iterate through each translation in the template.
1
2
0
I have an article object which has several languages. What's the best way to work with this object? I need to display all attributes in every language. Is is possible to get just my article object and iterate through the languages in the template? Thanks for you help! Ron
Django hvad - Best practice to work with multi-lingual object in template
1.2
0
0
347
13,458,249
2012-11-19T16:40:00.000
1
1
0
0
python,cron
13,458,421
3
false
0
0
You can't really set standard crons directly from Python. Instead, I'd set the cron to fire every hour and have the code determine whether it needs to run again (i.e. whether the last successful execution was more than 7 days ago).
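A minimal sketch of the "fire hourly, decide in code" approach: the cron entry runs the script every hour, and the script itself consults a small state file for the timestamp of the last successful run. The state-file path and helper names are my own illustrative choices.

```python
import json
import os
import time

STATE_FILE = os.path.expanduser("~/.weekly_job_state")  # illustrative path
WEEK = 7 * 24 * 3600

def due_for_run(now=None, state_file=STATE_FILE):
    """True if the last successful run was a week or more ago (or never)."""
    now = time.time() if now is None else now
    try:
        with open(state_file) as f:
            last_success = json.load(f)["last_success"]
    except (IOError, OSError, ValueError, KeyError):
        return True  # no record yet, or unreadable: run now
    return now - last_success >= WEEK

def record_success(now=None, state_file=STATE_FILE):
    """Stamp the state file after the job's main work succeeds."""
    now = time.time() if now is None else now
    with open(state_file, "w") as f:
        json.dump({"last_success": now}, f)
```

The cron line stays dumb (`0 * * * * python /path/to/job.py`); the retry/back-off logic from the question (wait an hour, then two, then four) falls out of the hourly firing plus whatever extra conditions the script checks before calling record_success().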
1
0
0
So I have a little script I wish to run once a week. It will check on some variable and if its set, it continues running the script, if not, I want it to wait an hour and try again. If it's still not set, it'll wait 2 hours, then 4, and then give up for the week. My question is, can I do this in python? Seems like I'd have to create and delete cron jobs in python to get this to work.
Creating and deleting cron jobs in python
0.066568
0
0
1,865
13,459,647
2012-11-19T18:13:00.000
1
1
1
1
python,eclipse,pydev
14,926,237
1
false
0
0
If you are using setuptools, you can try running sudo python setup.py develop on the egg, as well as adding project dependencies between the two in Eclipse.
1
3
0
I'm using pydev in eclipse. I was hoping that pydev would first use the python classes I develop in my source dir. but since I also install the built egg into system dir, pydev also picks up the classes from system dir. the problem is that pydev uses system dir first in its python path. so after I installed a buggy version, and debug through pydev, and made the necessary changes in local sourcecode, it does not take effect, since the installed egg is not changed. or in the reverse case, as I was debugging, pydev takes me to the egg files, and I modify those egg files, so the real source code is not changed. so How could I let pydev rearrange pythonpath order? (just like eclipse does for java build classpath) ? thanks yang
re-arrange pythonpath in pydev-eclipse?
0.197375
0
0
464
13,460,288
2012-11-19T18:57:00.000
0
0
0
0
python,selenium,webdriver
13,460,962
2
false
0
0
Very hacky, but you could modify the Webdriver Firefox plugin to point to your binary.
1
3
0
I'm trying to use Firefox portable for my tests in python. In plain webdriver it works, but i was wondering how to do it in remote webdriver. All i could find is how to pass firefox profile, but how to specify to webdriver which binary to use?
How to specify browser binary in selenium remote webdriver?
0
0
1
620
13,460,428
2012-11-19T19:08:00.000
0
0
0
0
python,numpy,scipy
13,463,491
1
false
0
0
For computing Riemann sums you could look into numpy.cumsum(). I am not sure if you can do a surface or only an array with this method; however, you could always loop through all the rows of your terrain and store each row in a two-dimensional array as you go, leaving you with an array of all the terrain heights.
1
0
1
I am working on a visualization that models the trajectory of an object over a planar surface. Currently, the algorithm I have been provided with uses a simple trajectory function (where velocity and gravity are provided) and Runge-Kutta integration to check n points along the curve for a point where velocity becomes 0. We are discounting any atmospheric interaction. What I would like to do it introduce a non-planar surface, say from a digital terrain model (raster). My thought is to calculate a Reimann sum at each pixel and determine if the offset from the planar surface is equal to or less than the offset of the underlying topography from the planar surface. Is it possible, using numpy or scipy, to calculate the height of a Reimann rectangle? Conversely, the area of the rectangle (midpoint is fine) would work, as I know the width nd can calculate the height.
Scipy / Numpy Reimann Sum Height
0
0
0
329
13,462,448
2012-11-19T21:20:00.000
2
0
0
0
database,python-2.7,couchdb,couchdb-python
13,473,004
3
true
0
0
In order to avoid NAT problems, I would use an external service like Cloudant or Iris Couch. You can replicate your local database against the common DB in the cloud and your colleagues can connect to it.
1
1
0
I created a couchDB on my computer, that is I used the Python line server = couchdb.Server('http://localhost:5984') I want to share this database with two other colleagues. How can I make it available to them? For the moment, I am comfortable giving them full admin privileges until I get a better handle on this. I tried to read the relevant parts of CouchDB: The Definitive Guide, but I still don't get it. How would they access it? They can't just type in my computer's IP address?
How to access CouchDB server from another computer?
1.2
0
1
2,338
13,464,456
2012-11-19T23:46:00.000
2
1
0
1
python,installation,ubuntu-12.04
13,464,518
1
true
0
0
As an absolute beginner, don't worry right now about where to install libraries. Simple example scripts that you're trying out for learning purposes don't belong in any lib directory such as /usr/lib/python2.7. On Linux you want to do most work in your home directory, so just cd ~ to make sure you're there, and create files there with an editor of your choice. You might want to organize your files hierarchically too. For example, create a directory called src/ using the mkdir command in your home directory, and then mkdir src/lpthw, say, as a place to store all your samples from "Learn Python the Hard Way". Then simply run python <path/to/py/file> to execute a script, or cd ~/src/lpthw and run your scripts by filename only.
1
0
0
I am learning python from learnpythonthehardway. in the windows I had no issues with going through a lots of exercises because the setup was easier but I want to learn linux as well and ubuntu seemed to me the nicest choice. now I am having trouble with setting up. I can get access to the terminal and then usr/lib/python.2.7 but I don't know if to save the script in this directory? if I try to make a directory inside this through mkdir I can't as permission is denied. I also tried to do chmod but didn't know how or if to do it. any help regarding how to save my script in what libary? how to do that? and how to run it in terminal as: user@user$ python sampleexcercise.py using ubuntu 12.04 lts skill = newbie thanks in advance.
python library access in ubuntu 12.04
1.2
0
0
409
13,466,072
2012-11-20T03:15:00.000
0
0
0
0
python,pyqt4
13,470,421
1
false
0
1
I've worked on some software where every different pane is done as a separate .ui file, so that they can be changed independently without requiring merges. It worked fine. Can you turn the map and dock parts into widgets, then make a new "main window" UI, give that a layout, and add the other two as child widgets to it?
1
1
0
I created a main app with a mdiArea for loading map graphics with Qt Designer *.ui and coded with pyQt4 using uic.loadUi() in python. I also created a separate *.ui file and tested the dockWidget successfully in a separate python script file. I wish to combine these 2 UI so that the main_app window will have the mdiArea widget on the left, while the dockWidget as the info_panel on the right. I tried to load the *.ui file in the main app python, but ended up the dockWidget as a separate window when show(). Any advice to resolve this? I hope I need not have to use Qt Designer to combine the mdiArea main_app UI with the dockWidget info_panel and load them as a single UI. ;P Thanks in advance.
Combining 2 UI as one main app window in python
0
0
0
719
13,466,939
2012-11-20T05:08:00.000
3
0
1
0
python,numpy
13,467,084
1
true
0
0
a compiler is unavailable, and no pre-built binaries can be installed This... makes numpy impossible. If you cannot install numpy binaries, and you cannot compile numpy source code, then you are left with no options.
1
2
1
Assuming performance is not an issue, is there a way to deploy numpy in a environment where a compiler is unavailable, and no pre-built binaries can be installed? Alternatively, is there a pure-python numpy implementation?
Installing (and using) numpy without access to a compiler or binaries
1.2
0
0
435
13,468,755
2012-11-20T07:53:00.000
8
0
0
0
python,selenium,scrapy
16,050,387
2
false
1
0
Updated: PhantomJS has since been abandoned, and you can now use headless browsers directly, such as Firefox and Chrome. Original answer: use PhantomJS instead. You can do browser = webdriver.PhantomJS() in Selenium v2.32.0.
1
2
0
I want to do some web crawling with scrapy and python. I have found few code examples from internet where they use selenium with scrapy. I don't know much about selenium but only knows that it automates some web tasks. and browser actually opens and do stuff. but i don't want the actual browser to open but i want everything to happen from command line. Can i do that in selenium and scrapy
Can i use selenium with Scrapy without actual browser opening with python
1
0
1
2,960
13,469,321
2012-11-20T08:38:00.000
7
0
0
0
python,scrapy
13,469,554
2
false
1
0
If you insert directly inside a spider, then your spider will block until the data is inserted. If you create an Item and pass it to the Pipeline, the spider can continue to crawl while the data is inserted. Also, there might be race conditions if multiple spiders try to insert data at the same time.
1
1
0
I will be using scrapy to crawl a domain. I plan to store all that information into my db with sqlalchemy. It's pretty simple xpath selectors per page, and I plan to use HttpCacheMiddleware. In theory, I can just insert data into my db as soon as I have data from the spiders (this requires hxs to be instantiated at least). This will allow me to bypass instantiating any Item subclasses so there won't be any items to go through my pipelines. I see the advantages of doing so as: Less CPU intensive since there won't be any CPU processing for the pipelines Prevents memory leaks. Disk I/O is a lot faster than Network I/O so I don't think this will impact the spiders a lot. Is there a reason why I would want to use Scrapy's Item class?
Scrapy why bother with Items when you can just directly insert?
1
0
1
1,529
13,470,611
2012-11-20T09:55:00.000
0
0
1
0
windows,python-3.x,cx-freeze
13,814,960
3
false
0
1
The only reason I know of that this can happen is if the _imagingtk.pyd is not for your Python version. Oh, and could you post the link to the unofficial version? I've been searching for it for a while.
1
1
0
I use Python 3 and unofficial PIL module. My code works fine. But after using cx_freeze I get exception "_imaging c module is not installed". What can I do with this problem? All solutions that I found were about Python 2.X and Linux OS. I need the solution for Windows and Python 3.
After using cx_freeze I get exception _imaging c module is not installed
0
0
0
348
13,473,489
2012-11-20T12:44:00.000
1
1
0
0
python,web-applications,haskell,clojure,lisp
13,476,327
4
false
1
0
When the server side creates the form, encode a hidden field with the timestamp of the request, so that when the user POSTs the form you can compute the time difference. How to implement that is up to you; it depends on which server you have available and several other factors.
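One minimal way to sketch the hidden-timestamp idea (the names and the signing scheme are my own assumptions, not a prescription): sign the timestamp with a server-side secret so the client cannot tamper with it to shave seconds off, then compute the difference on POST.

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # illustrative; load from config in practice

def issue_token(now=None):
    """Value to embed in the hidden form field: timestamp plus signature."""
    ts = "%d" % (time.time() if now is None else now)
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (ts, sig)

def elapsed_seconds(token, now=None):
    """Verify the signature and return how long the form was open."""
    ts, sig = token.split(":")
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")
    now = time.time() if now is None else now
    return now - int(ts)
```

Any framework the asker picks (Flask, Django, a Clojure or Haskell stack) can render issue_token() into the form and call elapsed_seconds() in the POST handler; only the template/handler plumbing differs.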
1
1
0
I'd like to make a webapp that asks people multiple choice questions, and times how long they take to answer. I'd like those who want to, to be able to make accounts, and to store the data for how well they've done and how their performance is increasing. I've never written any sort of web app before, although I'm a good programmer and understand how http works. I'm assuming (without evidence) that it's better to use a 'framework' than to hack something together from scratch, and I'd appreciate advice on which framework people think would be most appropriate. I hope that it will prove popular, but would rather get something working than spend time at the start worrying about scaling. Is this sane? And I'd like to be able to develop and test this on my own machine, and then deploy it to a virtual server or some other hosting solution. I'd prefer to use a language like Clojure or Lisp or Haskell, but if the advantages of using, say, Python or Ruby would outweigh the fact that I'd enjoy it more in a more maths-y language, then I like both of those too. I probably draw the line at perl, but if perl or even something like Java or C have compelling advantages then I'm quite happy with them too. They just don't seem appropriate for this sort of thing.
How do I make a web server to make timed multiple choice tests?
0.049958
0
0
947
13,476,383
2012-11-20T15:23:00.000
0
0
1
0
python,python-imaging-library,python-unicode
13,476,705
2
false
0
1
Maybe use Unicode strings, like u'cadeau check 50 €'? Also, does your font have the corresponding glyphs?
1
2
0
I have a title ("cadeau check 50 €") in a form value that I want to write to a background image with arial.ttf. My text is correct but for the euro sign. I have 2 [] in place. I don't know where the problem is coming from. Is this an encoding problem in PIL, or have I a problem with the font?
PIL: how to draw € on an image with draw.text?
0
0
0
785
13,478,965
2012-11-20T17:44:00.000
2
0
1
0
python
13,479,006
3
false
0
0
You need to have an __init__.py file in the backend folder for Python to consider it a package. Then you can do import backend.handlers or from backend.handlers import foo
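A quick self-contained demonstration of the point, building the question's layout in a scratch directory. (On Python 3.3+, namespace packages can make __init__.py optional, but adding it remains the explicit, portable way, and it is required on Python 2.)

```python
import os
import sys
import tempfile

# Recreate the layout from the question in a temporary directory:
# <root>/  (where main.py would live)  and  <root>/backend/handlers.py
root = tempfile.mkdtemp()
backend = os.path.join(root, "backend")
os.mkdir(backend)

# An empty __init__.py marks the folder as a package.
open(os.path.join(backend, "__init__.py"), "w").close()
with open(os.path.join(backend, "handlers.py"), "w") as f:
    f.write("def hello():\n    return 'from handlers'\n")

# When main.py runs, its own directory is on sys.path; simulate that here.
sys.path.insert(0, root)

from backend.handlers import hello
print(hello())  # -> from handlers
```

Without the __init__.py (on Python 2, or older tooling), the `from backend.handlers import hello` line fails with ImportError because `backend` is just a plain folder, not a package.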
1
0
0
From main.py, I want to import a file from the backend folder WebAppName/main.py WebAppName/backend/handlers.py How do I specify this as an import statement I am aware that importing from the same folder is just import handlers But this is a child directory, so how do I do this?
How do I import from a child directory / subfolder?
0.132549
0
0
134
13,481,352
2012-11-20T20:15:00.000
0
0
0
0
python,django,django-registration
13,481,428
1
true
1
0
LOGIN_REDIRECT_URL = '/' This redirects to home url.
1
0
0
I am using django-registration and django-registration_defaults (for the templates) in my app. How do I change the page the user sees after he/she logs in? I looked through the documentation but was unable to find anything.
How to change the post-login page using django-registration?
1.2
0
0
40
13,481,893
2012-11-20T20:50:00.000
0
0
1
1
python,windows-8,cmd,environment-variables
13,482,036
3
false
0
0
Unless the project's folder is in the PATH, you cannot call the file unless you are inside the project's folder. Don't create PATH entries for projects unless they are needed; it's unnecessary. Just traverse to the file's directory and run the command from inside it; that will work. If the project will be used by other projects/files, you can use PYTHONPATH to set the directory so the other projects can successfully access it. Hope that helps.
1
2
0
I am a Python beginner and I am having trouble running Python from CMD. I have added the Python installation directory as a PATH variable (;C:\Python27). I am able to run the Python Interpreter from CMD, however when I issue a command like "python file.py command" from CMD, it returns "Error2, Python can't open, no such file/directory". So what I do is go to "cd C:\Folder\Folder2\My_Python_Files", then type the "file.py command" each and every time. Is there faster or more efficient way of doing this? I am currently running Python2.7 on Windows 8.
Running Python in CMD not working
0
0
0
7,684
13,482,777
2012-11-20T21:47:00.000
1
0
0
0
python,httplib,python-requests
13,483,006
4
false
0
0
It depends on how they are doing the redirection. The "right" way is to return a redirect HTTP status code (301/302/303); the "wrong" way is to place a refresh meta tag in the HTML. If they do the former, requests will handle it transparently. Note that any sane error-page redirect will still carry an error status code (e.g. 404), which you can check as response.status_code.
2
18
0
What I mean is: if I go to "www.yahoo.com/thispage", and Yahoo has set up a filter to redirect /thispage to /thatpage, then whenever someone goes to /thispage, s/he will land on /thatpage. If I use httplib/requests/urllib, will it know that there was a redirection? What about error pages? Some sites redirect the user to /errorpage whenever the page cannot be found.
When I use python requests to check a site, if the site redirects me to another page, will I know?
0.049958
0
1
17,050
13,482,777
2012-11-20T21:47:00.000
16
0
0
0
python,httplib,python-requests
13,483,018
4
false
0
0
To prevent requests from following redirects use: r = requests.get('http://www.yahoo.com/thispage', allow_redirects=False) If it is in indeed a redirect, you can check the redirect target location in r.headers['location'].
2
18
0
What I mean is: if I go to "www.yahoo.com/thispage", and Yahoo has set up a filter to redirect /thispage to /thatpage, then whenever someone goes to /thispage, s/he will land on /thatpage. If I use httplib/requests/urllib, will it know that there was a redirection? What about error pages? Some sites redirect the user to /errorpage whenever the page cannot be found.
When I use python requests to check a site, if the site redirects me to another page, will I know?
1
0
1
17,050
13,483,004
2012-11-20T22:05:00.000
1
0
0
0
python,django
66,409,423
4
false
1
0
You can try the following command: py -m django startproject add_your_project_name_here
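The -m flag asks the interpreter to run an installed module as a script, so if django is installed but django-admin isn't on PATH, py -m django sidesteps the PATH problem entirely. You can confirm the mechanism itself with a stdlib module, no django install assumed (python3 shown; on Windows use py instead):

```shell
# Runs the stdlib json.tool module via -m, just like py -m django
# runs the django module.
echo '{"a": 1}' | python3 -m json.tool
```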
2
1
0
I tried starting a new Django project yesterday but when I did "django-admin.py startproject projectname" I got an error stating: "django-admin.py is not recognized as an internal or external command." The strange thing is, when I first installed Django, I made a few projects and everything worked fine. But now after going back a few months later it has suddenly stopped working. I've tried looking around for an answer and all I could find is that this typically has to do with the system path settings, however, I know that I have the proper paths set up so I don't understand what's happening. Does anybody have any idea what's going on?
Django-admin.py not being recognized suddenly
0.049958
0
0
4,143
13,483,004
2012-11-20T22:05:00.000
1
0
0
0
python,django
59,301,923
4
false
1
0
I am totally new to coding, so pardon my amateur answer. I had a similar problem: my Django was installed on the C drive, but my files were saved on the D drive, and trying to run django-admin from the D drive in the command prompt gave the above error. What worked for me was the following. I located the django-admin.exe and django-admin.py files, which were at C:\Users\[Username]\AppData\Local\Programs\Python\Python38-32\Scripts, and copied both of them into the D-drive folder where I was trying to create new projects. Then, with the command prompt set to that D-drive projects folder, I ran django-admin startproject [filename]. It created a new project [filename] in that folder and the error was resolved.
2
1
0
I tried starting a new Django project yesterday but when I did "django-admin.py startproject projectname" I got an error stating: "django-admin.py is not recognized as an internal or external command." The strange thing is, when I first installed Django, I made a few projects and everything worked fine. But now after going back a few months later it has suddenly stopped working. I've tried looking around for an answer and all I could find is that this typically has to do with the system path settings, however, I know that I have the proper paths set up so I don't understand what's happening. Does anybody have any idea what's going on?
Django-admin.py not being recognized suddenly
0.049958
0
0
4,143
13,483,928
2012-11-20T23:21:00.000
2
0
1
0
python,event-loop,tidesdk
13,484,579
1
true
0
0
There's nothing really specific to TideSDK here; this is a general issue with any program built around an event loop, which means nearly all GUI apps and network servers, among other things. There are three standard solutions: Break the long task up into a bunch of small tasks, each of which schedules the next to get run. Make the task call back to the event loop every so often. Run the task in parallel. For the first solution, most event-based frameworks have a method like doLater(func) or setTimeout(func, 0). If not, they have to at least have a way of posting a message to the event loop's queue, and you can pretty easily build a doLater around that. This kind of API can be horrible to use in C-like languages, and a bit obnoxious in JS just because of the bizarre this/scoping rules, but in Python and most other dynamic languages it's nearly painless. Since TideSDK is built around a browser JS engine, it's almost certainly going to provide this first solution. The second solution really only makes sense for frameworks built around either cooperative threadlets or explicit coroutines. However, some traditional single-threaded frameworks like classic Mac (and, therefore, modern Win32 and a few cross-platform frameworks like wxWindows) use this for running background jobs. The first problem is that you have to deal with re-entrancy carefully (at least wx has a SafeYield to help a little), or you can end up with many of the same kinds of problems as threads, or, worse, everything seems to work except that under heavy use you occasionally get a stack crash from infinite recursion. The other problem is that it only really works well when there's only one heavy background task at a time; with several, they tend to starve each other. If your framework has a way of doing this, it'll have a function like yieldToOtherTasks or processNextEvent, and all you have to do is make sure to call that every once in a while. (However, if there's also a doLater, you should consider that first.)
If there is no such method, this solution is not appropriate to your framework. The third solution is to spin off a task via threading.Thread or multiprocessing.Process. The problem with this parallelism is that you have to come up with some way to signal safely, and to share data safely. Some event-loop frameworks have a thread-safe "doLater" or "postEvent" method, and if the only signal you need is "task finished" and the only data you need to share are the task startup params and return values, everything is easy. But once that's not sufficient, things can get very complicated. Also, if you have hundreds of long-running tasks to run, you probably don't want a thread or process for each one. In fact, you probably want a fixed-size pool of threads or processes, and then you'll have to break your tasks into small-enough subtasks so they don't starve each other out, so in effect you're doing all the work of solution #1 anyway. However, there are cases where threads or processes are the simplest solution.
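The first solution can be sketched with a toy loop. All names here (do_later, ui_tick) are hypothetical stand-ins for what TideSDK/JS would actually provide (e.g. setTimeout(func, 0)); the point is how a long sum gets chopped into chunks so other events still run between them:

```python
from collections import deque

tasks = deque()

def do_later(func):
    # Schedule func to run on a later pass of the event loop.
    tasks.append(func)

def run_loop():
    # Toy event loop: run queued tasks until none remain.
    while tasks:
        tasks.popleft()()

events_seen = []

def ui_tick():
    # Stand-in for an unrelated UI event that must not be starved.
    events_seen.append('tick')

def sum_chunk(numbers, start, acc, result, chunk=1000):
    # Do one chunk of work, then yield back to the loop.
    end = min(start + chunk, len(numbers))
    acc += sum(numbers[start:end])
    if end < len(numbers):
        # Reschedule the remainder instead of hogging the loop.
        do_later(lambda: sum_chunk(numbers, end, acc, result))
        do_later(ui_tick)  # the UI stays responsive between chunks
    else:
        result.append(acc)

result = []
do_later(lambda: sum_chunk(list(range(10000)), 0, 0, result))
run_loop()
print(result[0], len(events_seen))  # -> 49995000 9
```

Each chunk finishes quickly and re-queues itself, so the ui_tick events interleave with the computation instead of waiting for the whole sum, which is exactly why the GUI stops freezing.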
1
0
0
Is there a way to do long processing loops in Python without freezing the GUI with TideSDK? or I'll just have to use threads... Thanks.
TideSDK and long processing loops in Python
1.2
0
0
433
13,484,482
2012-11-21T00:23:00.000
0
0
0
1
python,applescript
53,218,057
2
false
0
0
My issue was an app with LSBackgroundOnly = YES set, attempting to run an AppleScript that displays UI, such as display dialog. It fails with: Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory" AppleScript.scpt: execution error: No user interaction allowed. (-1713) Using tell application "Finder" etc. works, as shown in the other answer. Or, remove the LSBackgroundOnly key to enable UI AppleScripts without telling a different application. LSUIElement presents a similar mode - no dock icon, no menu bar, etc. - but DOES allow UI AppleScripts to be launched.
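A sketch of the Info.plist fragment for the LSUIElement mode described above (agent-style app: no Dock icon or menu bar, but UI AppleScripts still allowed); this replaces the LSBackgroundOnly key:

```xml
<key>LSUIElement</key>
<true/>
```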
1
5
0
I have an AppleScript which displays a menu list and allows the user to select menu items, etc. It runs fine by itself, but when I try to run it from Python I get the No user interaction allowed. (-1713) error. I looked online and tried the following: adding an on run handler in the same AppleScript, so what I did is just call main from the run handler: on run tell application "AppleScript Runner" main() end tell end run I also tried to run the above from Python: import os def main(): os.system ('osascript -e "tell application "ApplesScript Runner" do script /Users/eee/applescript/iTune.scpt end tell"') if __name__ == '__main__': main() Neither way works. Can anyone tell me how to do this correctly?
"No user interaction allowed" When running AppleScript in python
0
0
0
5,247
13,487,181
2012-11-21T05:51:00.000
0
0
1
0
python,sockets,shared-memory
13,500,968
2
false
0
0
First, note that what you're trying to build will require more than just shared memory: it's all well if a.py writes to shared memory, but how will b.py know when the memory is ready and can be read from? All in all, it is often simpler to solve this problem by connecting the multiple processes not via shared memory, but through some other mechanism. (The reason for why mmap usually needs a file name is that it needs a name to connect the several processes. Indeed, if a.py and b.py both call mmap(), how would the system know that these two processes are asking for memory to be shared between them, and not some unrelated z.py? Because they both mmaped the same file. There are also Linux-specific extensions to give a name that doesn't correspond to a file name, but it's more a hack IMHO.) Maybe the most basic alternative mechanism is pipes: they are usually connected with the help of the shell when the programs are started. That's how the following works (on Linux/Unix): python a.py | python b.py. Any output that a.py sends goes to the pipe, whose other end is the input for b.py. You'd write a.py so that it listens to the UDP socket and writes the data to stdout, and b.py so that it reads from stdin to process the data received. If the data needs to go to several processes, you can use e.g. named pipes, which have a nice (but Bash-specific) syntax: python a.py >(python b.py) >(python c.py) will start a.py with two arguments, which are names of pseudo-files that can be opened and written to. Whatever is written to the first pseudo-file goes as input for b.py, and similarly what is written to the second pseudo-file goes as input for c.py.
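The pipe idea can be sketched without a shell at all; here subprocess wires a producer into a consumer, with small inline scripts standing in for the hypothetical a.py and b.py:

```python
import subprocess
import sys

# Stand-in for a.py: writes one number per line to stdout.
a_src = "for i in range(5):\n    print(i * 2)\n"
# Stand-in for b.py: reads stdin line by line and processes it.
b_src = "import sys\nprint(sum(int(line) for line in sys.stdin))\n"

# Equivalent of `python a.py | python b.py`:
a = subprocess.Popen([sys.executable, '-c', a_src],
                     stdout=subprocess.PIPE)
b = subprocess.Popen([sys.executable, '-c', b_src],
                     stdin=a.stdout, stdout=subprocess.PIPE)
a.stdout.close()  # drop our reference so b sees EOF when a exits
out, _ = b.communicate()
print(out.decode().strip())  # -> 20
```

Because the operating system handles the buffering and the end-of-stream signal, neither side needs the explicit "data is ready" synchronization that raw shared memory would require, which is the point made above.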
2
2
0
I have processes on several servers that send data to my local port 2222 via UDP every second. I want to read this data and write it to shared memory, so there can be other processes that read the data from shared memory and do things to it. I've been reading about mmap and it seems I have to use a file, which I can't understand why. I have an a.py that reads the data from the socket, but how can I write it to a shared memory segment? And once it's written, I need to write b.py, c.py, d.py, etc., to read the very same shared memory and do things to it. Any help or snippet of code would greatly help.
how to write to shared memory in python from stream?
0
0
0
2,659