Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,270,885 | 2014-08-12T17:48:00.000 | 0 | 0 | 1 | 1 | python,batch-file,python-3.x,cx-freeze | 33,244,651 | 3 | false | 0 | 0 | Make sure the version of Python is correct. If you have more than one version on your computer, simply type "python" in the console to check which version you are running. I just had this problem earlier. | 2 | 26 | 0 | I am using Python 3.4 on Windows 8. I want to obtain an .exe program from Python code. I learned that this can be done with cx_Freeze.
In the MS-DOS command line, I ran pip install cx_Freeze to set it up. It is installed, but it is not working.
(When I type cxfreeze at the command line, I get this error: C:\Users\USER>cxfreeze
'cxfreeze' is not recognized as an internal or external command, operable program or batch file.)
(I also added the location of cxfreeze to PATH via the environment variables.)
Any help would be appreciated, thanks. | installing cx_Freeze to python at windows | 0 | 0 | 0 | 27,553
25,273,987 | 2014-08-12T21:03:00.000 | 1 | 0 | 0 | 0 | python,shapefile,qgis | 29,585,421 | 3 | false | 0 | 1 | QgsGeometry has the method wkbType that returns what you want. | 1 | 7 | 0 | I'm writing a script that is dependent on knowing the geometry type of the loaded shapefile.
But I've looked in the PyQGIS cookbook and API and can't figure out how to call it.
In fact, I have trouble interpreting the API, so any light shed on that subject would be appreciated.
Thank you | How to get shapefile geometry type in PyQGIS? | 0.066568 | 0 | 0 | 8,852 |
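A minimal PyQGIS sketch of the wkbType approach mentioned in the answer above. The class and function names (QgsVectorLayer, QgsWkbTypes) are from the QGIS 3 Python API, and the file path is a placeholder; older QGIS 2.x installs spell some of these differently, so treat this as an assumption to verify against your version:

```python
# Assumes the QGIS Python environment (qgis.core) is importable.
from qgis.core import QgsVectorLayer, QgsWkbTypes

layer = QgsVectorLayer("/path/to/data.shp", "my_layer", "ogr")  # placeholder path
if layer.isValid():
    wkb = layer.wkbType()                         # numeric WKB type of the layer
    print(QgsWkbTypes.displayString(wkb))         # human-readable name
    for feature in layer.getFeatures():
        print(feature.geometry().wkbType())       # per-geometry type, as the answer suggests
        break
```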
25,273,989 | 2014-08-12T21:03:00.000 | 9 | 0 | 0 | 0 | python,flask,global-variables | 25,274,042 | 1 | true | 1 | 0 | Generally speaking, global variables are shared between requests.
Some WSGI servers can use a new separate process for each request, but that is not an efficient way to scale your requests. Most will use threading or several child processes to spread the load, but even in the case of separate child processes, each subprocess will have to handle multiple requests during its lifetime.
In other words: no, Flask will not protect your global variables from being shared between different users. | 1 | 2 | 0 | If I have global variables in Flask and have multiple users accessing the site at once, can one person's session overwrite the global variables of another person's session, or does Flask make a unique instance of my site and program code each time it's requested from a user's browser? | Flask global variables and sessions | 1.2 | 0 | 0 | 2,915
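To illustrate the distinction the answer draws, here is a minimal sketch (names and the secret key are placeholders): a module-level global is shared by every user hitting the same worker process, while flask.session is a per-browser, cookie-backed store.

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-me"   # required for session cookies

visits_global = 0               # shared by every user of this process

@app.route("/")
def index():
    global visits_global
    visits_global += 1                                # all users see this counter grow
    session["visits"] = session.get("visits", 0) + 1  # per-user counter
    return "global=%d, yours=%d" % (visits_global, session["visits"])
```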
25,274,746 | 2014-08-12T21:57:00.000 | 1 | 0 | 1 | 0 | python,algorithm | 25,275,905 | 7 | false | 0 | 0 | I'm pretty sure python is not a good language to do this in, but if the length of distinct substrings you want to find is not small like 5 but larger like 1000 where your main string is very long, then a linear time solution to your problem is to build a suffix tree, you can read about them online. A suffix tree for a string of length n can be built in O(n) time, and traversing the tree also takes O(n) time and by traversing the higher levels of the tree you can count all distinct substrings of a particular length, also in O(n) time regardless of the length of substrings you want. | 1 | 6 | 0 | Given a string i want to count how many substrings with len = 5 i have on it.
For example: Input: "ABCDEFG" Output: 3
I'm not sure what the easiest and fastest way to do this in Python would be. Any ideas?
Update:
I want only to count different substrings.
Input: "AAAAAA"
Substrings: 2 times "AAAAA"
Output: 1 | Counting the number of different 5 characters substrings inside a string | 0.028564 | 0 | 0 | 216 |
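For the question itself, a set comprehension counts the distinct length-5 substrings directly; a short sketch:

```python
def count_distinct_substrings(s, n=5):
    # Collect every window of length n; the set keeps only distinct ones.
    return len({s[i:i + n] for i in range(len(s) - n + 1)})

print(count_distinct_substrings("ABCDEFG"))  # 3
print(count_distinct_substrings("AAAAAA"))   # 1
```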
25,275,654 | 2014-08-12T23:19:00.000 | 1 | 0 | 0 | 0 | python,url,web-crawler,urllib2 | 25,275,720 | 2 | false | 1 | 0 | You can store the hash of the content of pages previously seen and check if the page has already been seen before continuing. | 1 | 1 | 0 | I am writing a web crawler, but I have a problem with the function which recursively follows links.
Let's suppose I have a page: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind.
I am looking for all links, and then opening each link recursively, downloading all the links again, etc.
The problem is that some links, although they have different URLs, lead to the same page, for example:
http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind#mw-navigation
gives the same page as the previous link.
So I end up in an infinite loop.
Is there any way to check whether two links lead to the same page without comparing the entire content of the pages? | Predict if sites returns the same content | 0.099668 | 0 | 1 | 52
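A small sketch of the content-hash idea from the answer above (the fetching details are placeholders, and urllib2 is used only because it is one of the question's tags); keeping a set of digests lets the crawler skip pages whose bodies it has already seen under a different URL:

```python
import hashlib
import urllib2  # Python 2, per the question's tags

seen_digests = set()

def is_new_page(url):
    body = urllib2.urlopen(url).read()
    digest = hashlib.sha1(body).hexdigest()
    if digest in seen_digests:
        return False          # same content reached via a different URL
    seen_digests.add(digest)
    return True
```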
25,279,746 | 2014-08-13T06:48:00.000 | 2 | 0 | 1 | 1 | python,windows | 25,279,812 | 4 | false | 0 | 0 | Create a service that runs permanently.
Arrange for the service to have an IPC communications channel.
From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use CreateProcessAsUser or similar APIs. | 2 | 0 | 0 | I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that to create a Windows service. But this option is not good for me, because in the future, if I change anything in my Python script, I will have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu | How to launch a python process in Windows SYSTEM account | 0.099668 | 0 | 0 | 1,368 |
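A rough, purely illustrative sketch of the client half of the IPC idea described in the answer above. It assumes, for illustration only, that the SYSTEM service listens on a local TCP port and executes whatever command it receives; the port number and message format are invented:

```python
import socket
import json

def send_to_service(action, payload, port=50555):   # port is an assumption
    msg = json.dumps({"action": action, "payload": payload})
    s = socket.create_connection(("127.0.0.1", port))
    try:
        s.sendall(msg.encode("utf-8"))
        return s.recv(4096)   # whatever reply the service sends back
    finally:
        s.close()
```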
25,279,746 | 2014-08-13T06:48:00.000 | 0 | 0 | 1 | 1 | python,windows | 25,281,143 | 4 | false | 0 | 0 | You could also use Windows Task Scheduler, it can run a script under SYSTEM account and its interface is easy (if you do not test too often :-) ) | 2 | 0 | 0 | I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that to create a Windows service. But this option is not good for me, because in the future, if I change anything in my Python script, I will have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu | How to launch a python process in Windows SYSTEM account | 0 | 0 | 0 | 1,368 |
25,284,879 | 2014-08-13T11:25:00.000 | 1 | 0 | 1 | 1 | python,linux,pip | 25,284,972 | 2 | true | 0 | 0 | This sounds to me like a good approach but perhaps instead of placing the .desktop file in the system wide /usr/share/applications/ folder, you could place the file in the users applications folder at ~/.local/share/applications.
This would also not require elevated permissions to access the root-owned /usr directory and its sub-directories. | 1 | 8 | 0 | I have a Python application that is supposed to be launchable via a GUI, so it has to have a .desktop file in /usr/share/applications/. The application only supports Linux. Normally, pip installs all files in one directory, but it is possible to specify other locations (e.g. the .desktop file) in setup.py using data_files=[].
Is this considered to be a good solution in this case, or is this something that should only happen in a distribution-specific package (like .rpm/.deb/.ebuild)? | Install .desktop file with setup.py | 1.2 | 0 | 0 | 1,893
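As a concrete illustration of the data_files=[] approach the question mentions, a hedged setup.py fragment (project and file names are placeholders). Note that absolute targets like /usr/share/applications generally need root at install time, which is why the answer suggests the per-user ~/.local/share/applications path instead:

```python
from setuptools import setup

setup(
    name="myapp",                      # placeholder project name
    version="0.1",
    packages=["myapp"],
    data_files=[
        # (target directory, [files to copy there])
        ("share/applications", ["data/myapp.desktop"]),
        ("share/icons/hicolor/48x48/apps", ["data/myapp.png"]),
    ],
)
```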
25,288,032 | 2014-08-13T13:52:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,nlp,nltk | 25,298,846 | 1 | true | 0 | 0 | The way how topic modelers usually pre-process text with n-grams is they connect them by underscore (say, topic_modeling or white_house). You can do that when identifying big rams themselves. And don't forget to make sure that your tokenizer does not split by underscore (Mallet does if not setting token-regex explicitly).
P.S. NLTK native bigrams collocation finder is super slow - if you want something more efficient look around if you haven't yet or create your own based on, say, Dunning (1993). | 1 | 0 | 1 | Background: I got a lot of text that has some technical expressions, which are not always standard.
I know how to find the bigrams and filter them.
Now, I want to use them when tokenizing the sentences. So words that should stay together (according to the calculated bigrams) are kept together.
I would like to know if there is a correct way to do this within NLTK. If not, I can think of various inefficient ways of rejoining all the broken words by checking dictionaries. | Python NLTK tokenizing text using already found bigrams | 1.2 | 0 | 0 | 317
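One concrete way to do this inside NLTK is the multi-word-expression tokenizer, which joins known bigrams with an underscore exactly as the answer describes. A small sketch; the bigram list is a placeholder for whatever your collocation-finding step produced:

```python
from nltk.tokenize import MWETokenizer

bigrams = [("topic", "modeling"), ("white", "house")]   # output of your bigram step
tokenizer = MWETokenizer(bigrams, separator="_")

tokens = "the white house discussed topic modeling".split()
print(tokenizer.tokenize(tokens))
# ['the', 'white_house', 'discussed', 'topic_modeling']
```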
25,288,653 | 2014-08-13T14:20:00.000 | 6 | 0 | 1 | 0 | python,python-2.7,comparison,operators | 25,288,736 | 2 | false | 0 | 0 | That's just division. And, at least for integers a >= 0 and b > 0, a/b is truthy if a>=b. Because, in that scenario, a/b is a strictly positive integer and bool() applied to a non-zero integer is True.
For zero and negative integer arguments, I am sure that you can work out the truthiness of a/b for yourself. | 1 | 6 | 0 | I recently got into code golfing and need to save as many characters as possible.
I remember seeing someone say to use if a/b: instead of if a<=b:. However, I looked through Python documentation and saw nothing of the sort.
I could be remembering this all wrong, but I'm pretty sure I've seen this operator used and recommended in multiple instances.
Does this operator exist? If so, how does it work? | Using '/' as greater than less than in Python? | 1 | 0 | 0 | 482 |
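A quick Python 2 demonstration of what the answer describes (the question is tagged python-2.7, where / between integers is floor division, so the quotient is non-zero, i.e. truthy, exactly when a >= b for positive integers):

```python
a, b = 7, 3
print(a / b, bool(a / b))   # (2, True)  -> behaves like a >= b

a, b = 2, 3
print(a / b, bool(a / b))   # (0, False) -> behaves like a < b
```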
25,294,601 | 2014-08-13T19:32:00.000 | 2 | 0 | 0 | 0 | python,biopython,blast,ncbi | 25,295,081 | 1 | false | 0 | 0 | If you want to use BLAT online, there's not such tool as Bio.Blast.NCBIWWW.
If you want to use BLAT locally, there's no such tool as Bio.Blast.NCBIStandalone either.
The good news is that you can install BLAT locally and use the subprocess library to call it, and Biopython provides Bio.SearchIO.BlatIO to parse the output. Or you can try to submit your queries to the BLAT website and fetch the output to parse locally.
But if you're new to Python, I think the first option is the easier path. | 1 | 0 | 0 | I'd like to run several BLAT queries with different sequences and then perform a multiple sequence alignment on the results.
How can I use Python to run these BLAT queries?
I know that there is a way to use BLAST, but I am not sure about BLAT. | Using Biopython to run a BLAT search through NCBI | 0.379949 | 0 | 0 | 982 |
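A hedged sketch of the "run BLAT locally via subprocess, then parse with Biopython" route from the answer. It assumes the blat binary is on PATH, uses the usual database/query/output argument order of the BLAT command line, and the 'blat-psl' SearchIO format name; file names are placeholders, so double-check the details against your install:

```python
import subprocess
from Bio import SearchIO

# blat <database> <query> <output.psl>  (standard BLAT CLI argument order)
subprocess.check_call(["blat", "reference.fa", "queries.fa", "hits.psl"])

for qresult in SearchIO.parse("hits.psl", "blat-psl"):
    print(qresult.id, len(qresult))
```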
25,297,446 | 2014-08-13T22:47:00.000 | 1 | 0 | 1 | 0 | python,recursion | 25,297,640 | 7 | false | 0 | 0 | Note: This answer is limited to your topmost question, i.e. "Is it advisable to write recursive functions in Python?".
The short answer is no, it's not exactly "advisable". Without tail-call optimization, recursion can get painfully slow in Python given how intensive function calls are on both memory and processor time. Whenever possible, it's best to rewrite your code iteratively. | 3 | 10 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling, though). | Is it advisable to write recursive functions in Python | 0.028564 | 0 | 0 | 1,765
25,297,446 | 2014-08-13T22:47:00.000 | 0 | 0 | 1 | 0 | python,recursion | 25,298,141 | 7 | false | 0 | 0 | I use sys.setrecursionlimit to set the recursion limit to its maximum possible value because I have had issues with large classes/functions hitting the default maximum recursion depth. Setting a large value for the recursion limit should not affect the performance of your script, i.e. it will take the same amount of time to complete if it completes under both a high and a low recursion limit. The only difference is that if you have a low recursion limit, it prevents you from doing stupid things (like running an infinitely recursive loop). With a high limit, rather than hit the limit, a horribly inefficient script that uses recursion too much will just run forever (or until it runs out of memory depending on the task).
As the other answers explain in more detail, most of the time there is a faster way to do whatever it is that you are doing other than a long series of recursive calls. | 3 | 10 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling, though). | Is it advisable to write recursive functions in Python | 0 | 0 | 0 | 1,765
25,297,446 | 2014-08-13T22:47:00.000 | 3 | 0 | 1 | 0 | python,recursion | 25,297,631 | 7 | false | 0 | 0 | A lot depends on the specific nature of the recursive solution you're trying to implement. Let me give a concrete example. Suppose you want the sum of all values in a list. You can set the recursion up by adding the first value to the sum of the remainder of the list - the recursion should be obvious. However, the recursive subproblem is only 1 smaller than the original problem, so the recursive stack will grow to be as big as the number of items in the list. For large lists this will be a problem. An alternate recursion is to note that the sum of all values is the sum of the first half of the list plus the sum of the second half of the list. Again, the recursion should be obvious and the terminating condition is when you get down to sublists of length 1. However, for this version the stack will only grow as log2 of the size of the list, and you can handle immense lists without stack problems. Not all problems can be factored into subproblems which are half the size, but when you can this is a good way to avoid stack overflow situations.
If your recursive solution is a tail recursion, it can easily be converted into a loop rather than a recursive call.
Another possibility if you don't have tail recursion is to implement things with a loop and explicitly store your intermediate state on an explicit stack. | 3 | 10 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling, though). | Is it advisable to write recursive functions in Python | 0.085505 | 0 | 0 | 1,765
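A small sketch of the two recursions compared in the answer above; the halving version only needs a stack about log2(n) deep, so it handles lists that would blow up the naive version:

```python
def sum_naive(values):
    # Depth grows linearly with len(values) -> hits the recursion limit on big lists.
    if not values:
        return 0
    return values[0] + sum_naive(values[1:])

def sum_halving(values):
    # Depth grows like log2(len(values)).
    if not values:
        return 0
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return sum_halving(values[:mid]) + sum_halving(values[mid:])

print(sum_halving(list(range(100000))))  # fine; sum_naive would exceed the default limit
```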
25,298,281 | 2014-08-14T00:26:00.000 | 2 | 0 | 0 | 0 | python,excel,com,win32com | 25,308,893 | 1 | true | 0 | 1 | When an application registers itself, only the first instance gets registered, until it dies and then the very next instance to register gets registered.
There's no registration queue, so when your first instance dies, the second stays unregistered, so any call to Excel.Application will launch a third instance, and they'll keep using it until it dies too.
In summary, the instances launched in between registered instances never get registered.
If you need to reuse an instance, you must keep a pointer to it.
That said, if you get an instance of an open Excel file, you might obtain a link to an unregistered Excel instance. For instance, if Excel 1 (registered) has workbook 1 open, and Excel 2 (unregistered) has workbook 2 open, if you ask for workbook 2, you'll get Excel 2's instance (e.g. through Workbook.Application). | 1 | 3 | 0 | I am using Python to parse an Excel file and am accessing the application COM using excel = Dispatch('Excel.Application'). At the beginning, after a restart, the code will find the application object just fine and I will be able to access the active workbook.
The problem comes when I have had two instances of Excel open and I close the first. From then on every call to excel = Dispatch('Excel.Application') provides an application object that is different from the open instance of Excel. If I try excel.Visible=1 it opens a new Excel instance rather than showing the already open instance of excel. How do I get the COM object of the already open instance of Excel rather than creating a new instance? | win32com dispatch Won't Find Already Open Application Instance | 1.2 | 1 | 0 | 1,075 |
25,299,681 | 2014-08-14T03:40:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine,http-status-code-413 | 25,311,367 | 1 | true | 1 | 0 | Looks like it was because I was making a GET request. Changing it to POST fixed it. | 1 | 3 | 0 | I've implemented an app engine server in Python for processing html documents sent to it. It's all well and good when I run it locally, but when running off the App engine, I get the following error:
"413. That’s an error. Your client issued a request that was too large. That’s all we know."
The request is only 155KB, and I thought the app engine request limit was 10MB. I've verified that I haven't exceeded any of the daily quotas, so anyone know what might be going on?
Thanks in advance!
-Saswat | Google App Engine 413 error (Request Entity Too Large) | 1.2 | 0 | 0 | 5,558 |
25,302,979 | 2014-08-14T08:04:00.000 | 0 | 0 | 0 | 1 | python,ftp,ftplib | 25,303,266 | 1 | true | 0 | 0 | It seems that it uses, per default, two connections (one for sending commands, one for datatransfer?).
That's how FTP works. You have a control connection (usually port 21) for commands, and a data connection on a dynamic port for data transfer, file listings, etc.
However my ftpserver only accepts one connection at any given time.
ftpserver might have a limit for multiple control connections, but it must still accept data connections. Could you please show from tcpdump, wireshark, logfiles etc why you think multiple connections are the problem?
In FileZilla I'm able to "limit the maximum number of simultaneous connections"
This is for the number of control connections only. Does it work with filezilla? Because I doubt that ftplib opens multiple control connections. | 1 | 0 | 0 | I have a bit of a problem with the ftplib from python. It seems that it uses, per default, two connections (one for sending commands, one for datatransfer?). However my ftpserver only accepts one connection at any given time. Since the only file that needs to be transfered is only about 1 MB large, the reasoning of being able to abort inflight commands does not apply here.
Previously the same job was done by the Windows command-line ftp client. So I could just call this client from Python, but I would really prefer a pure Python solution.
Is there a way to tell ftplib that it should limit itself to a single connection? In FileZilla I'm able to "limit the maximum number of simultaneous connections"; ideally I would like to reproduce this functionality.
Thanks for your help. | python ftpclient limit connections | 1.2 | 0 | 1 | 691 |
25,312,626 | 2014-08-14T16:04:00.000 | 3 | 0 | 0 | 0 | python,django,model-view-controller | 25,312,789 | 6 | false | 1 | 0 | Just check that the object retrieved by the primary key belongs to the requesting user. In the view this would be
if some_object.user == request.user:
...
This requires that the model representing the object has a reference to the User model. | 1 | 5 | 0 | I'm new to the web development world, to Django, and to applications that require securing the URL from users that change the foo/bar/pk to access other user data.
Is there a way to prevent this? Or is there a built-in way to prevent this from happening in Django?
E.g.:
foo/bar/22 can be changed to foo/bar/14 and exposes another user's data.
I have read the answers to several questions about this topic and I have had little luck in an answer that can clearly and coherently explain this and the approach to prevent this. I don't know a ton about this so I don't know how to word this question to investigate it properly. Please explain this to me like I'm 5. | How to prevent user changing URL to see other submission data Django | 0.099668 | 0 | 0 | 3,945 |
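A minimal sketch of the ownership check from the answer, written as a Django view; the model and field names are placeholders. Filtering the lookup by user means a guessed primary key returns a 404 instead of leaking someone else's data:

```python
from django.contrib.auth.decorators import login_required
from django.shortcuts import get_object_or_404, render

from .models import Submission   # placeholder model with a `user` ForeignKey

@login_required
def submission_detail(request, pk):
    # /foo/bar/14 owned by someone else becomes a 404 for this user.
    submission = get_object_or_404(Submission, pk=pk, user=request.user)
    return render(request, "submission_detail.html", {"submission": submission})
```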
25,315,082 | 2014-08-14T18:26:00.000 | 0 | 0 | 1 | 0 | python,macos,io,ipython | 25,315,127 | 2 | false | 0 | 0 | In the Terminal find out the process id using the "top" command. It is the PID column, and under the COMMAND column you will see IPython, or something similar.
Then run kill -9 PID | 1 | 3 | 0 | I'm using iPython to control some other equipment connected to my Mac. Most of the time it runs fine, but sometimes I give a wrong command to the equipment and iPython just hangs forever, since it's waiting for a response that it won't get, so I need a way to kill the iPython process. Ctrl+C doesn't do anything; it just prints a ^C string on the screen. Ctrl+Z can get me out of iPython, but it doesn't seem to kill it, because when I restart iPython I can't re-establish the communication with the equipment. Eventually I had to restart my computer to get the two talking again. | How to kill a running iPython process | 0 | 0 | 0 | 17,768
25,315,217 | 2014-08-14T18:34:00.000 | -2 | 0 | 1 | 0 | python,python-2.7 | 25,315,304 | 2 | false | 0 | 0 | __name__ belongs to the local scope (attribute) of the module that you call with python, ie: in this case manage.py. | 1 | 0 | 0 | My current understanding is that, when one writes from foo import bar, foo which is a package and has __init__.py, will have its __init__.py automatically processed after which its resource bar will be imported. If from the command prompt, I write python manage.py, and in that module call from foo import bar, in the __init__.py which belongs to foo package, is the variable __name__ then equal to the package name? foo in this case? | What is the value of __name__ when accessed in a __init__.py | -0.197375 | 0 | 0 | 908 |
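For reference, a quick way to check this empirically: with a layout like the sketch below (file names are placeholders), running the script prints the package's own name from inside its __init__.py.

```python
# foo/__init__.py
print("__name__ inside foo/__init__.py is:", __name__)

# manage.py (placeholder script)
import foo   # running `python manage.py` prints: __name__ inside foo/__init__.py is: foo
```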
25,318,344 | 2014-08-14T22:09:00.000 | 0 | 1 | 1 | 0 | php,python,struct | 35,089,240 | 3 | false | 0 | 0 | If you are trying to pass a null value from PHP to a Python dictionary, you need to use an empty object rather than an empty array.
You can define a new and empty object like $x = new stdClass(); | 1 | 11 | 0 | I cannot find how to write empty Python struct/dictionary in PHP. When I wrote "{}" in PHP, it gives me an error. What is the equivalent php programming structure to Python's dictionary? | What is the equivalent php structure to python's dictionary? | 0 | 0 | 0 | 6,548 |
25,319,035 | 2014-08-14T23:15:00.000 | 1 | 0 | 0 | 0 | javascript,python,ajax,websocket,flask | 25,319,136 | 1 | true | 1 | 0 | You have to store the current state on your server and when a page is requested, you have to build a page from your server that will show the current state.
When anything changes the current state (I don't know what actions can change the state as you haven't stated how that works), then you must update the state on the server so it stays current.
If you want other open clients to update anytime anyone changes the state, then each open page will have to either maintain some sort of open connection to the server, like a websocket (so it can be notified of updates to the state and update its visuals), or you will have to poll the server from each open page to find out if anything has been updated. | 1 | 1 | 0 | I have a web app with several AJAX calls, and from them it draws realtime graphs. The problem is that every time we connect to the page, it starts over, drawing and making calls from there. I want everybody to share the same state of the page, not each person reloading and getting different values.
How do I limit the calls and share the same state for everyone? | AJAX and Javascript to display the same for everyone? | 1.2 | 0 | 0 | 29 |
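A hedged sketch of the server-side shared state the answer describes, using Flask since that is one of the question's tags; route and variable names are invented. Every client that polls /state sees the same values because they live on the server, not in each browser. With several worker processes this in-memory dict would need to move to a real store (a database, Redis, etc.), which is the "store the state on your server" point the answer makes.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
shared_state = {"points": []}        # one copy, shared by every visitor of this process

@app.route("/state", methods=["GET", "POST"])
def state():
    if request.method == "POST":
        shared_state["points"].append(request.get_json())
    return jsonify(shared_state)     # every AJAX poll returns the same data
```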
25,320,342 | 2014-08-15T02:14:00.000 | 1 | 0 | 1 | 0 | python,cx-freeze | 25,331,281 | 1 | true | 0 | 0 | AFAIK, cx_Freeze doesn't use the description option, but it's a standard part of setup.py files, which use the same mechanism (distutils) that Python has for distributing packages.
I think the version field can be embedded in the executable, though. | 1 | 0 | 0 | I installed cx_Freeze a while ago, and recently froze my first program. In all the example setup scripts I've seen, the call to setup() contains several options, including things such as version and description. Why does cx_Freeze want a description of my program? What does it do with that information? Most importantly, what am I missing out on if I don't set that argument? | What does the description option in a cx_Freeze setup script do? | 1.2 | 0 | 0 | 97 |
25,321,391 | 2014-08-15T04:48:00.000 | 2 | 1 | 1 | 0 | python,lua | 25,321,438 | 2 | false | 0 | 0 | Leave your conditional empty by doing this
if <condition> then end | 1 | 1 | 0 | I've recently started learning Lua. The only other programming language I have some experience in is Python. In Python there is the "pass" function that does nothing. I was wondering what the equivalent (if any) of this would be in Lua. | Function that does nothing in Lua | 0.197375 | 0 | 0 | 3,469 |
25,325,004 | 2014-08-15T10:35:00.000 | 2 | 1 | 1 | 0 | python | 25,325,351 | 1 | true | 0 | 0 | *.pyd are compiled python extensions (on windows). *.lib are library modules used for building and linking with python itself. The *.h are C include files needed when you are creating your own extensions.
Generally these are all quite small and do not consume material disk space. I recommend you leave them alone. Even if you don't need them now, you may want them in the future (when they might be difficult to locate). | 1 | 1 | 0 | What are the purposes of *.pyd files in the DLLs directory, header files (*.h) in the include directory, and *.lib files in the libs directory? When I delete them, it seems that at least some basic Python code still works properly. | Purposes of different files in python installation | 1.2 | 0 | 0 | 32
25,327,192 | 2014-08-15T13:24:00.000 | 1 | 0 | 0 | 0 | python,django,cookies,csrf,django-1.6 | 25,334,477 | 1 | true | 1 | 0 | I think we finally figured it out. The separate "CSRF_COOKIE_DOMAIN" for each environment (".beta.site.com", ".demo.site.com", etc.) stopped the cross-environment issues. We also ended up setting "CSRF_COOKIE_NAME" to "csrf_token" instead of the default "csrftoken" so that users with old csrftoken cookies weren't negatively affected. | 1 | 4 | 0 | We've been experiencing issues with duplicate CSRF token cookies in Django in our most recent release. We just upgraded from Django 1.4 to 1.6 and we never had any issues back in 1.4. Basically, everything starts fine for each user, but at some point they end up having more than one CSRF token cookie and the browser gets confused and doesn't know which one to use. It typically chooses wrong and causes CSRF failure issues. Our site uses multiple sub-domains, so there's typically a cookie for .site.com, .sub.site.com, site.com, and other variants.
We tried setting "CSRF_COOKIE_DOMAIN" to .site.com, and that seemed to make the issue happen less frequently, but it still happened occasionally when sub-domains were being used and users were logging out and logging back in as other users.
We also discovered that the favicon shortcut wasn't being defined in our base template, causing an extra request to go through the middleware, but that was fixed. We then confirmed that only the real request was going through the middleware and not any of the static or media files.
We still can't reproduce the issue on command, and typically whenever it does happen then clearing cookies works as a temporary fix, but it still keeps happening periodically. Does anyone know why this might be happening? Is there something that we're missing in the docs?
Thanks.
EDIT:
One thing I forgot to mention is that we have multiple server environments (site.com, demo.site.com, and beta.site.com). After a little more digging, it looked like users who were testing on beta and then used production had cross-environment cookie collisions. Just now we tried setting the csrf cookie domains for each environment to ".beta.site.com" and ".demo.site.com" instead of just ".site.com" and that seemed to help, especially when you clear your cookies between working in each environment. However, there's still potential for collisions between .site.com cookies on production colliding in beta and demo, but that's less of an issue at least.
So is there anything more we can do about this? Also, is there anything we can do once we push this to production when users have old "site.com" cookies that run into collisions with the new specified ".site.com" cookies?
EDIT 2:
I posted the solution, but it won't let me accept it for a few days. | Issue with CSRF token cookies in Django 1.6 | 1.2 | 0 | 0 | 596 |
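For reference, the settings the poster describes map to a snippet like this in each environment's settings.py (the domain values are the example ones from the question):

```python
# settings.py for the beta environment (values are illustrative)
CSRF_COOKIE_DOMAIN = ".beta.site.com"   # keeps beta cookies out of demo/production
CSRF_COOKIE_NAME = "csrf_token"         # avoids clashing with stale "csrftoken" cookies
```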
25,328,259 | 2014-08-15T14:28:00.000 | 0 | 0 | 1 | 0 | python | 25,328,446 | 5 | false | 0 | 0 | Something like gist.github.com or jsfiddle.net whould work, though they're both "cloudish". You could always create a local directory and use git or hg or some other distributed version control system to manage your code. That would be a good learning experience too. | 2 | 0 | 0 | I am learning with an online interpreter and would like to save bits of code in my Google drive. Is there a good way to do this so I can easily copy and paste my work back into a webpage. If there is a better way to save it that isn't in the cloud I would be interested in that as well. | How best to save python code? | 0 | 0 | 0 | 86 |
25,328,259 | 2014-08-15T14:28:00.000 | 0 | 0 | 1 | 0 | python | 25,328,332 | 5 | false | 0 | 0 | If you download or have a text editor ( something like Sublime text ) you can paste it into that, put in the syntax you are using so your code is highlighted. Save it via that. However if you want something online, use github or for simplicity dropbox. | 2 | 0 | 0 | I am learning with an online interpreter and would like to save bits of code in my Google drive. Is there a good way to do this so I can easily copy and paste my work back into a webpage. If there is a better way to save it that isn't in the cloud I would be interested in that as well. | How best to save python code? | 0 | 0 | 0 | 86 |
25,328,558 | 2014-08-15T14:44:00.000 | 4 | 0 | 1 | 0 | python,regex | 25,328,630 | 2 | true | 0 | 0 | I always use r"[\s\S]" all whitespace and non-whitespace, so everything. | 1 | 5 | 0 | My regexp needs both the default non-newline-matching dot and the re.DOTALL dot (. matches newline). I need several of the former and just one of the latter within a single regexp. Nevertheless, because I need one dot to match newlines, I have to use DOTALL, and use [^\n] several times to get the default "anything except newlines" behavior.
I'd like to get rid of the DOTALL, replace those [^\n] with . and have a more complicated way of matching "anything including newlines" in the one place that I need.
So the question is: what is the regexp syntax to match "anything including newline" without DOTALL? | How to match anything (DOTALL) without DOTALL? | 1.2 | 0 | 0 | 347 |
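A short demonstration of the character-class trick from the answer: [\s\S] matches any character, newline included, without turning on re.DOTALL for the rest of the pattern.

```python
import re

text = "first line\nsecond line"
print(re.findall(r"first.*second", text))        # [] - the plain dot stops at \n
print(re.findall(r"first[\s\S]*second", text))   # ['first line\nsecond'] - crosses the newline
```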
25,334,643 | 2014-08-15T21:43:00.000 | 0 | 0 | 0 | 0 | python,sockets,networking,udp,frame-rate | 26,517,593 | 1 | false | 0 | 0 | It looks to me that the network is getting congested, transmitting old packets that got into the routers' queues and dropping the new packets.
Programming UDP is something of a black art — it requires reacting to network congestion when needed, and slowing down your sending rate. A simple solution would be to have the receiver send a periodic summary of received packets (say, once per RTT), and reduce your sending rate when you're seeing too many losses. Ideally, you'd combine that with a precise RTT estimator and reduce your sending rate preemptively when the RTT suddenly increases. | 1 | 3 | 0 | I have written an application in Python 2.7 and I'm using UDP sockets to implement networking capabilities. Though my application is not a game, I would consider it a game for networking purposes because the screen is redrawn 60 times per second.
I do not need extreme precision, so I don't need to send a ton of packets per second, but the way I have implemented networking causes fellow users to look "choppy" if there aren't a fair amount of packets sent per second.
After some research and fiddling, I've decided to send one packet every 50 milliseconds. This makes the other users look fairly "smooth" for a while, but after about a minute they get more and more choppy, eventually to the point of no updates happening.
How am I supposed to implement networking like the networking done in video games? It seems like I am fundamentally missing something. | How often should UDP packets be sent? | 0 | 0 | 1 | 769 |
25,341,332 | 2014-08-16T14:53:00.000 | 0 | 0 | 0 | 0 | python,django,google-chrome,gunicorn | 25,341,679 | 1 | false | 1 | 0 | The session information, i.e. which user is logged in, is saved in a cookie, which is sent from the browser to the server with each request. The cookie is set by the server with your login request.
For some reason, Chrome does not send or save the correct cookie. If you have a current version of each browser, they should behave similarly. Older browser versions may not be as strict as newer versions with respect to cookie security:
Same origin: are all pages located at the same sub-domain, or is the login page at some other domain?
path: do you set the cookie for a specific path, but use URLs with other paths?
http-only: Do you try to set or get a cookie with javascript, which is set http-only?
secure-only: Do you use https for the login-page but http for other pages?
Look at the developer tools in Chrome (Resources -> Cookies) to see which cookies are set and whether they change with each login. Delete all cookies, and try again. | 1 | 0 | 0 | I have a strange error with my website, which was created with Django.
For the server I use gunicorn and nginx. It works well at first, when I use Firefox to test my website.
I create an account and log the user in; once I submit the data, the user gets logged in.
One day I switched to Chrome to test my website. I go to the login page, fill in the user name and password, and click the submit button, and the user gets logged in. But when I refresh the page, the strange thing is that the website asks me to log in again; it means the user is not logged in at that point. This happens only in Chrome; I tested in IE and Firefox, and everything works well.
My English is not good, so I will describe the error again.
When I use Chrome and log in to an account, the page shows the account as already logged in. However, when I refresh the page or click through to another page, the website shows the user as not logged in.
This error occurs only in Chrome.
And if I stop gunicorn and start the website using the Django command manage.py runserver,
then even when I use Chrome, the error does not appear.
I do not know what exactly causes the problem.
Can anyone help me? | django website with gunicorn errror using chrome when login user | 0 | 0 | 0 | 211
25,344,239 | 2014-08-16T21:38:00.000 | 10 | 1 | 0 | 1 | python,rabbitmq,amqp,pika | 25,345,174 | 1 | true | 1 | 0 | Your code is fine logically, and runs without issue on my machine. The behavior you're seeing suggests that you may have accidentally started two consumers, with each one grabbing a message off the queue, round-robin style. Try either killing the extra consumer (if you can find it), or rebooting. | 1 | 3 | 0 | I'm testing out a producer consumer example of RabbitMQ using Pika 0.98. My producer runs on my local PC, and the consumer runs on an EC2 instance at Amazon.
My producer sits in a loop and sends up some system properties every second. The problem is that I am only seeing the consumer read every 2nd message, it's as though every 2nd message is not being read. For example, my producer prints out this (timestamp, cpu pct used, RAM used):
2014-08-16 14:36:17.576000 -0700,16.0,8050806784
2014-08-16 14:36:18.578000 -0700,15.5,8064458752
2014-08-16 14:36:19.579000 -0700,15.0,8075313152
2014-08-16 14:36:20.580000 -0700,12.1,8074121216
2014-08-16 14:36:21.581000 -0700,16.0,8077778944
2014-08-16 14:36:22.582000 -0700,14.2,8075038720
but my consumer is printing out this:
Received '2014-08-16 14:36:17.576000 -0700,16.0,8050806784'
Received '2014-08-16 14:36:19.579000 -0700,15.0,8075313152'
Received '2014-08-16 14:36:21.581000 -0700,16.0,8077778944'
The code for the producer is:
import pika
import psutil
import time
import datetime
from dateutil.tz import tzlocal
import logging
logging.getLogger('pika').setLevel(logging.DEBUG)
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='54.191.161.213'))
channel = connection.channel()
channel.queue_declare(queue='ems.data')
while True:
    now = datetime.datetime.now(tzlocal())
    timestamp = now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
    msg = "%s,%.1f,%d" % (timestamp, psutil.cpu_percent(), psutil.virtual_memory().used)
    channel.basic_publish(exchange='',
                          routing_key='ems.data',
                          body=msg)
    print msg
    time.sleep(1)
connection.close()
And the code for the consumer is:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='0.0.0.0'))
channel = connection.channel()
channel.queue_declare(queue='hello')
print ' [*] Waiting for messages. To exit press CTRL+C'
def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
channel.start_consuming() | Python RabbitMQ - consumer only seeing every second message | 1.2 | 0 | 0 | 1,694 |
25,344,841 | 2014-08-16T23:09:00.000 | 1 | 1 | 1 | 1 | python,bash,filesystems,sys | 64,768,294 | 3 | false | 0 | 0 | sys.path and PATH are two entirely different variables. The PATH environment variable specifies to your shell (or more precisely, the operating system's exec() family of system calls) where to look for binaries, whereas sys.path is a Python-internal variable which specifies where Python looks for installable modules.
The environment variable PYTHONPATH can be used to influence the value of sys.path if you set it before you start Python.
Conversely, os.environ['PATH']can be used to examine the value of PATH from within Python (or any environment variable, really; just put its name inside the quotes instead of PATH). | 1 | 6 | 0 | I would like to access the $PATH variable from inside a python program. My understanding so far is that sys.path gives the Python module search path, but what I want is $PATH the environment variable. Is there a way to access that from within Python?
To give a little more background, what I ultimately want to do is find out where a user has Package_X/ installed, so that I can find the absolute path of an html file in Package_X/. If this is a bad practice or if there is a better way to accomplish this, I would appreciate any suggestions. Thanks! | sys.path vs. $PATH | 0.066568 | 0 | 0 | 1,684 |
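Two small illustrations tied to the answer above: reading the PATH environment variable from Python, and locating an installed package's directory (Package_X is the placeholder name from the question) via its __file__ attribute rather than via PATH:

```python
import os

print(os.environ.get("PATH"))          # the shell-style executable search path

import Package_X                       # hypothetical package name from the question
package_dir = os.path.dirname(os.path.abspath(Package_X.__file__))
html_path = os.path.join(package_dir, "page.html")   # placeholder file name
print(html_path)
```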
25,345,130 | 2014-08-17T00:02:00.000 | 4 | 1 | 1 | 0 | python,setup.py | 25,345,187 | 1 | true | 0 | 0 | Using a setup.py script is only useful if:
Your code is a C extension, and then depends on platform-specific features that you really don't want to define manually.
Your code is pure Python but depends on other modules, in which case dependencies may be resolved automatically.
For a single file or a few set of files that don't rely on anything else, writing one is not worth the hassle. As a side note, your code is likely to be more attractive to people if trying it up doesn't require a complex setup. "Installing" it is then just about copying a directory or a single file in one's project directory. | 1 | 3 | 0 | Disclaimer: I'm still not sure I understand fully what setup.py does.
From what I understand, using a setup.py file is convenient for packages that need to be compiled, or to notify Distutils that the package has been installed and can be used in another program. setup.py is thus great for libraries or modules.
But what about super simple packages that only have a single foo.py file to be run? Does a setup.py file make packaging for a Linux repository easier? | Should every python program be distributed with a setup.py? | 1.2 | 0 | 0 | 157
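For scale, the setup.py for a single-module project is only a few lines; a hedged sketch (the names are placeholders):

```python
from setuptools import setup

setup(
    name="foo",            # placeholder
    version="0.1",
    py_modules=["foo"],    # ships just foo.py
    install_requires=[],   # add dependencies here if foo.py grows any
)
```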
25,347,001 | 2014-08-17T07:01:00.000 | 1 | 0 | 1 | 0 | python,linux,windows,string | 25,347,015 | 1 | true | 0 | 0 | In Windows, newline is "\r\n", while on Linux it is "\n". This is why there is a character count discrepancy. | 1 | 0 | 0 | This is the first question I'm posting so please pardon my ignorance. I am using python to write into files and then read them. Using the usual suspects (file.read(), file.write())
The code is being run on both Windows and Linux.
A particular string I'm reading, say str, is giving a length of 6 on Windows, while it is giving a length of 7 on Linux.
I tried exploring what this magic character is, but it turns out I can't print it!
If I try printing str, it gives the same results on Windows and Linux.
If I try printing str[6] on Linux, it prints blank!
I have verified that it is not a whitespace or newline (\n) character. I am even unable to print the ASCII value of this character. Are there characters out there without ASCII values?
I have found that the strip() function eliminates this magic character, but I am still curious as to what it is. | Python string has a character that is present in Linux but missing in Windows | 1.2 | 0 | 0 | 117
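A quick way to make the invisible character visible, in the spirit of the answer: print the repr of the string, and the extra '\r' from a Windows-style line ending shows up explicitly.

```python
s = "hello\r"            # what a Windows-written line can look like when read elsewhere
print(len(s))            # 6
print(repr(s))           # 'hello\r'  - the carriage return is no longer hidden
print(repr(s.strip()))   # 'hello'
```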
25,347,991 | 2014-08-17T09:35:00.000 | 6 | 0 | 1 | 0 | ipython,ipython-notebook | 25,347,992 | 2 | true | 0 | 0 | It turns out that I forgot to install dependencies to ipython notebook (I just did: pip install ipython). After you install ipython[notebook] or ipython[all] (or just install notebook depencies by hand) ipython profile create will also create notebook config files. | 1 | 5 | 0 | I have installed brand new ipython in a virtual enviorment, after that I tried to create configuration files via: ipython profile create, however ipython_notebook_config.py was not created, while ipython_config.py and ipython_nbconvert_config.py were created.
What can I do to create this file? | After calling ipython profile create ipython_notebook_config.py was not created | 1.2 | 0 | 0 | 2,048 |
25,348,557 | 2014-08-17T10:55:00.000 | 2 | 0 | 1 | 0 | python,virtualenv | 25,348,608 | 1 | true | 0 | 0 | Readable, but not writeable. :-)
A virtualenv is simply a place where a Python interpreter with a private library can be found. You can put your virtualenvs into a directory that the user can read (and change to), but has no write permissions. They will be able to use the Python interpreter, but not change anything within the virtualenv. | 1 | 0 | 0 | I am setting up a PC with different virtual environments for Python development. The environments are set up and should not be messed with. A user with no root access is expected to write and test some Python code on this PC.
How should the file permissions be set up so that this no-root-access user can switch between the environments to activate the different sets of modules to test their code, but without the ability to mess those environments up (i.e., by adding new modules or removing existing ones)? | Minimum permissions to work with different virtualenvs in Python | 1.2 | 0 | 0 | 43
25,349,639 | 2014-08-17T13:32:00.000 | 0 | 0 | 0 | 0 | python,pygame | 25,518,175 | 1 | false | 0 | 1 | If you change your files to bmp, it should help. If you have really that little ram, then you should lower the resolution of your files using an image editor such as Preview or Paintbrush. Also, space might be saved through more efficient programming, such as putting objects in a list and just calling a list update. | 1 | 1 | 0 | I'm using PyGame on a Raspberry Pi, so I only have 512mb of RAM to work with. I have to load and display a lot of images in succession, though. I can't naively load all of these images into RAM as PyGame surfaces - I don't have enough RAM. The images themselves are fairly small, so I assume that PyGame surfaces are fairly big, and this is why I run out of RAM. I've tried loading from the disk every time I want to display an image, but that's obviously slow (noticeably so).
Is there a reasonable way to display lots of images in succession in PyGame with limited RAM - either by keeping the size in memory of the PyGame surface as low as possible, or some other way? | Loading many images in PyGame with limited RAM | 0 | 0 | 0 | 595 |
25,350,882 | 2014-08-17T15:52:00.000 | -1 | 0 | 0 | 1 | python,linux | 25,350,907 | 1 | false | 0 | 0 | history is not an executable file, but a built-in bash command. You can't run it with os.system. | 1 | 0 | 0 | I am trying to using os.system function to call command 'history'
but stdout just shows 'sh: 1: history: not found'.
Other examples, e.g. os.system('ls'), work. Can anyone tell me why 'history' does not work, and how to call the 'history' command from a Python script? | Call system command 'history' in Linux | -0.197375 | 0 | 0 | 227
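Since history is a shell built-in, one workaround (assuming an interactive bash setup whose history is kept in the default ~/.bash_history file) is to read that file directly instead of shelling out:

```python
import os

history_file = os.path.expanduser("~/.bash_history")   # default location for bash
with open(history_file) as f:
    for line in f:
        print(line.rstrip())
```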
25,351,113 | 2014-08-17T16:18:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,pygame | 25,352,369 | 1 | true | 0 | 1 | Alex Reynolds idea to use tar archives seems to be a perfect match. | 1 | 0 | 0 | I'm currently tasked with the difficult problem of figuring out how to efficiently pack an image and some text within a single file. In doing so, I need to make the file relatively small (it shouldn't be much bigger than the size of the image file alone), and the process of accessing and saving the information should be relatively fast.
Now, I have already found one way that works - converting the image to a string using pygame, storing it (and the text I need) within an python object, and then pickling the object. This works fine, but the file ends up being much MUCH larger than the image, since it's not being compressed. So to help with this, I then take the pickled object and compress it using gzip. Now I have another problem - the whole process is just a tad bit too slow, since I'll need to do hundreds of these files at a time, which can take several minutes (it shouldn't take longer than a 1/2 second to load a single file, and this method takes up to 2 seconds per file).
I had an idea to somehow put the two separate files, as they are, into one file, like how someone would with a .zip, but without the need to further compress the data. As long as the image remains in its original, compressed format (in this case, .png), simply storing its data with some text should theoretically be both fast and wouldn't use much more memory. The problem is, I don't know how I would go about doing this.
Any ideas? | How would I go about serializing multiple file objects into one file? | 1.2 | 0 | 0 | 49 |
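A minimal sketch of the tar-archive approach endorsed in the answer, using the standard-library tarfile module (file names are placeholders); the PNG stays in its already-compressed form, so the bundle is barely larger than the inputs:

```python
import tarfile

# Pack image + text into one file without recompressing them.
with tarfile.open("bundle.tar", "w") as tf:
    tf.add("picture.png")
    tf.add("caption.txt")

# Read them back later.
with tarfile.open("bundle.tar", "r") as tf:
    image_bytes = tf.extractfile("picture.png").read()
    text = tf.extractfile("caption.txt").read()
```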
25,351,968 | 2014-08-17T17:52:00.000 | 0 | 0 | 0 | 0 | python,html,pandas | 63,317,500 | 9 | false | 1 | 0 | For those who like to reduce typing (i.e., everyone!): pd.set_option('max_colwidth', None) does the same thing | 1 | 374 | 1 | I converted a Pandas dataframe to an HTML output using the DataFrame.to_html function. When I save this to a separate HTML file, the file shows truncated output.
For example, in my TEXT column,
df.head(1) will show
The film was an excellent effort...
instead of
The film was an excellent effort in deconstructing the complex social sentiments that prevailed during this period.
This rendition is fine in the case of a screen-friendly format of a massive Pandas dataframe, but I need an HTML file that will show complete tabular data contained in the dataframe, that is, something that will show the latter text element rather than the former text snippet.
How would I be able to show the complete, non-truncated text data for each element in my TEXT column in the HTML version of the information? I would imagine that the HTML table would have to display long cells to show the complete data, but as far as I understand, only column-width parameters can be passed into the DataFrame.to_html function. | How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML? | 0 | 0 | 0 | 438,196 |
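Spelling out the option from the answer next to the to_html call; depending on the pandas version the option is addressed as 'max_colwidth' or the fully qualified 'display.max_colwidth', and None (or -1 on very old versions) means "never truncate":

```python
import pandas as pd

pd.set_option("display.max_colwidth", None)   # no cell-level truncation
html = df.to_html()                           # df is your existing DataFrame
with open("report.html", "w") as f:
    f.write(html)
```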
25,352,523 | 2014-08-17T18:50:00.000 | 1 | 0 | 1 | 0 | regex,python-2.7 | 25,352,554 | 1 | false | 0 | 0 | The $ denotes the end of the string. You want to match *x that's at the end of the string, so you need to write \*x$.
Also, since strings are iterables, l.extend('asd') will essentially do l.append('a'); l.append('s'); l.append('d'). You probably want to use append, not extend. | 1 | 0 | 0 | I wanted to see if I could strip the *x from these elements without using rstrip. I tried the following:
import re
import time
list = ["3*x", "2", "4*x", "1", "3*x", "0"]
new_list = []
for terms in list:
    new_list.extend(re.sub(r'$(\*x)','', terms))
print new_list
time.sleep(4)
This logically makes sense, because every element that has the *x will end with it, so I used $. Yet I get output like:
['3', '*', 'x', '2', '4', '*', 'x', '1', '3', '*', 'x', '0']
But if I just take away the $ from the above code, then I get the correct output:
['3', '2', '4', '1', '3', '0']
So why does the initial code give such erroneous output?
I am fairly new to regex entirely, so try to be fairly basic. | Why does this simple regex error occur? | 0.197375 | 0 | 0 | 50 |
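Putting the answer's two corrections together (anchor the pattern after the literal, and append instead of extend), a fixed version of the snippet might look like:

```python
import re

terms = ["3*x", "2", "4*x", "1", "3*x", "0"]
new_list = []
for term in terms:
    new_list.append(re.sub(r"\*x$", "", term))   # $ goes after what must end the string
print(new_list)   # ['3', '2', '4', '1', '3', '0']
```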
25,352,831 | 2014-08-17T19:24:00.000 | 1 | 0 | 1 | 1 | python-2.7,module,pip,python-requests | 25,352,852 | 1 | true | 0 | 0 | Since the "python -m pip install -U pip" actually displayed something, on a hunch I tried:
"python -m pip install requests"
This worked! I don't know why any of the installation guides do not say to do this. | 1 | 0 | 0 | I am new to Python (2.7) but I am trying to run a program that requires the "requests" module. I have installed pip using the get-pip.py script and registered the Python27 and Python27/Scripts paths as environment variables.
When I run "python -m pip install -U pip" it says the package is already up-to-date.
Following installation guides, when I run "pip install requests" I get a new command prompt line. I tried "easy_install requests" and get the same thing. I tried "pip install --verbose requests" and have the same behavior (so much for being verbose!).
I am running on Windows Vista Ultimate, using the command prompt as administrator. | Pip/Easy_install do not install desired package | 1.2 | 0 | 0 | 97 |
25,353,008 | 2014-08-17T19:47:00.000 | 4 | 0 | 1 | 1 | python,console,icons,pyinstaller | 46,946,166 | 2 | false | 0 | 0 | You must have group.icns file for app in Mac OS | 2 | 4 | 0 | This is a really short question. I have created a package for Mac using Pyinstaller and I am mainly trying to add an icon to it. I am also trying to get the program to run without launching the terminal as the user has no interaction with the terminal. Currently I am keeing the following into cmd when running pyinstaller:
python pyinstaller.py --icon=group.ico --onefile --noconsole GESL_timetabler.py
I get the regular package (Unix Executable) and an App. However, only the Unix Executable works, and no processes run when I double-click the App.
Also, neither the App, nor the Unix Executable, has the icon image displayed. I am sure this is a trivial problem with my command to pyinstaller, but I am having difficulty figuring out the mistake. Could someone help me fix the instructions above? Thank you! | Pyinstaller add icon, launch without console for Mac | 0.379949 | 0 | 0 | 3,926 |
25,353,008 | 2014-08-17T19:47:00.000 | 1 | 0 | 1 | 1 | python,console,icons,pyinstaller | 33,063,961 | 2 | false | 0 | 0 | Try using --windowed instead. As far as I can tell they're the same thing, but it might do the trick.
As for icons, I've only gotten that to work on console windows. It just doesn't carry over to my main GUI window. | 2 | 4 | 0 | This is a really short question. I have created a package for Mac using Pyinstaller and I am mainly trying to add an icon to it. I am also trying to get the program to run without launching the terminal as the user has no interaction with the terminal. Currently I am keeing the following into cmd when running pyinstaller:
python pyinstaller.py --icon=group.ico --onefile --noconsole GESL_timetabler.py
I get the regular package (Unix Executable) and an App. However, only the Unix Executable works, and no processes run when I double-click the App.
Also, neither the App, nor the Unix Executable, has the icon image displayed. I am sure this is a trivial problem with my command to pyinstaller, but I am having difficulty figuring out the mistake. Could someone help me fix the instructions above? Thank you! | Pyinstaller add icon, launch without console for Mac | 0.099668 | 0 | 0 | 3,926 |
25,355,287 | 2014-08-18T01:36:00.000 | 1 | 0 | 0 | 0 | python,selenium,selenium-webdriver | 25,355,323 | 2 | false | 1 | 0 | I'd use mydriver.find_element_by_name("submit.button2-click.x").click(), or mydriver.find_element_by_css_selector("<your selector>").click(). | 1 | 0 | 0 | I'm using the following code to click a button on a page, but the XPath keeps changing, so the code keeps breaking:
mydriver.find_element_by_xpath("html/body/div[2]/div[3]/div[1]/div/div[2]/div[2]/div[4]/div/form[2]/span/span/input").click()
Is there a better way I should be doing this? Here is the code for the button I am trying to click:
<input class="a-button-input" type="submit" title="Button 2" name="submit.button2-click.x" value="Button 2 Click"/> | Python - Issues with selenium button click using XPath | 0.099668 | 0 | 1 | 531 |
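A minimal sketch of those more robust locators with the Python selenium bindings of that era (the page URL is hypothetical; the name, class and value come from the button's HTML above):

from selenium import webdriver

mydriver = webdriver.Firefox()
mydriver.get("http://example.com/page-with-button")  # hypothetical URL

# Locate the button by its stable name attribute instead of a brittle absolute XPath
mydriver.find_element_by_name("submit.button2-click.x").click()

# Alternative: a CSS selector built from the button's class and value
# mydriver.find_element_by_css_selector("input.a-button-input[value='Button 2 Click']").click()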
25,359,288 | 2014-08-18T08:41:00.000 | 6 | 0 | 1 | 0 | python,opencv,video,frame,frame-rate | 35,372,469 | 5 | false | 0 | 0 | Another solution that doesn't depend on the sometimes buggy CV_CAP_PROP getters is to traverse your whole video file in a loop
Increase a frame counter variable every time a valid frame is encountered and stop when an invalid one comes (end of the video file).
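A minimal sketch of that counting loop (the file name is just an example):

import cv2

cap = cv2.VideoCapture("video.avi")
frame_count = 0
while True:
    ret, frame = cap.read()
    if not ret:          # an invalid frame means we reached the end of the file
        break
    frame_count += 1
cap.release()
print(frame_count)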
Gathering information about the resolution is trickier because some codecs support variable resolution (similar to VBR in audio files where the bitrate is not a constant but instead covers some predefined range).
constant resolution - you need only the first frame to determine the resolution of the whole video file in this case so traversing the full video is not required
variable resolution - you need to get the resolution of every single frame (width and height) and calculate an average to get the average resolution of the video
FPS can be calculated, however here you have the same problem as with the resolution - constant (CFR) vs variable (VFR) frame rate. This is more of a multi-threading problem imho. Personally I would use a frame counter, which is increased after each valid frame, while at an interval of 1 second a timer (running in a background thread) would trigger saving the current counter's value and then resetting it. You can store the values in a list in order to calculate the average/constant frame rate at the end, when you will also know the total number of frames the video has.
The disadvantage of this rather simplistic way of doing things is that you have to traverse the whole file, which - in case it's several hours long - will definitely be noticeable to the user. In this case you can be smart about it and do that in a background process, letting the user do something else while your application is gathering this information about the loaded video file.
The advantage is that no matter what video file you have, as long as OpenCV can read from it you will get quite accurate results, unlike CV_CAP_PROP, which may or may not work as you expect it to. | 1 | 99 | 0 | How can I know the total number of frames in a file (.avi) through Python using the OpenCV module?
If possible, what other information (resolution, fps, duration, etc.) can we get about a video file through this? | How to know total number of Frame in a file with cv2 in python | 1 | 0 | 0 | 111,201
25,362,508 | 2014-08-18T11:42:00.000 | 0 | 0 | 0 | 0 | python,django,neo4j,neo4django | 25,363,972 | 1 | false | 1 | 0 | This sounds like a network setup problem. Can you check what URL the library is trying to connect to, and whether that URL really goes to your local Neo4j server? | 1 | 0 | 0 | I have neo4j-2.1.3 installed and the server running on my Linux system. I created a model "publisher" in my app. Then in the manage.py shell, whenever I save a node with
from BooksGraph.models import Publisher
p=Publisher.objects.create(name='Sunny',address='b-1/196')
a long error pops up with:
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/manager.py", line 42, in create
    return self.get_query_set().create(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/query.py", line 1052, in create
    return super(NodeQuerySet, self).create(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 377, in create
    obj.save(force_insert=True, using=self.db)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 325, in save
    return super(NodeModel, self).save(using=using, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 463, in save
    self.save_base(using=using, force_insert=force_insert, force_update=force_update)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 341, in save_base
    self._save_neo4j_node(using)
  File "", line 2, in _save_neo4j_node
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 111, in trans_method
    len(connections[args[0].using]._transactions) < 1:
  File "/usr/local/lib/python2.7/dist-packages/neo4django/utils.py", line 313, in __getitem__
    **db['OPTIONS'])
  File "/usr/local/lib/python2.7/dist-packages/neo4django/neo4jclient.py", line 29, in __init__
    super(EnhancedGraphDatabase, self).__init__(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/client.py", line 74, in __init__
    response = Request(**self._auth).get(self.url)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/request.py", line 63, in get
    return self._request('GET', url, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/__init__.py", line 60, in _request
    headers)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/request.py", line 198, in _request
    auth=auth, verify=verify)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ProxyError(e)
ProxyError: ('Cannot connect to proxy.', error(113, 'No route to host')) | Neo4Django create node not working in manage.py shell | 0 | 0 | 0 | 113
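Since the traceback above ends in a requests ProxyError, one quick diagnostic (purely a sketch, not part of the original answer) is to check whether a proxy environment variable is forcing all HTTP traffic, including neo4django's REST calls, through an unreachable proxy:

import os

# requests honours these variables; if one points at a dead proxy,
# every HTTP call will fail with "Cannot connect to proxy".
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY", "no_proxy"):
    print("%s=%r" % (var, os.environ.get(var)))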
25,365,036 | 2014-08-18T13:53:00.000 | 2 | 0 | 1 | 0 | python,oop | 25,365,197 | 4 | false | 0 | 0 | Each object has its own copy of data members, whereas the member functions are shared. The compiler creates one copy of the member functions separate from all objects of the class. All the objects of the class share this one copy.
The whole point of OOP is to combine data and functions together. Without OOP, the data cannot be reused, only the functions can be reused. | 2 | 2 | 0 | I usually use classes similarly to how one might use namedtuple (except of course that the attributes are mutable). Moreover, I try to put lengthy functions in classes that won't be instantiated as frequently, to help conserve memory.
From a memory point of view, is it inefficient to put functions in classes, if it is expected that the class will be instantiated often? Keeping aside that it's good design to compartmentalize functionality, should this be something to be worried about? | Python OOP: inefficient to put methods in classes? | 0.099668 | 0 | 0 | 167 |
25,365,036 | 2014-08-18T13:53:00.000 | 6 | 0 | 1 | 0 | python,oop | 25,365,082 | 4 | true | 0 | 0 | Methods don't add any weight to an instance of your class. The method itself only exists once and is parameterized in terms of the object on which it operates. That's why you have a self parameter. | 2 | 2 | 0 | I usually use classes similarly to how one might use namedtuple (except of course that the attributes are mutable). Moreover, I try to put lengthy functions in classes that won't be instantiated as frequently, to help conserve memory.
From a memory point of view, is it inefficient to put functions in classes, if it is expected that the class will be instantiated often? Keeping aside that it's good design to compartmentalize functionality, should this be something to be worried about? | Python OOP: inefficient to put methods in classes? | 1.2 | 0 | 0 | 167 |
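A quick way to see what the accepted answer describes - the function object is stored once on the class while each instance only holds its own data (hypothetical class):

import sys

class Point(object):
    def length(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point()
p.x, p.y = 3, 4

print('length' in Point.__dict__)   # True: the method lives once, on the class
print('length' in p.__dict__)       # False: the instance only stores its data (x, y)
print(sys.getsizeof(p))             # adding more methods to Point would not change this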
25,367,508 | 2014-08-18T16:05:00.000 | 1 | 0 | 0 | 0 | python-2.7,stored-procedures,pyramid,pymssql | 25,646,833 | 1 | true | 1 | 0 | The solution was rather trivial. Within one object instance, I was calling two different stored procedures without closing the connection after the first call. That caused a pending request or so in the MSSQL-DB, locking it for further requests. | 1 | 0 | 0 | From a pyramid middleware application I'm calling a stored procedure with pymssql. The procedure responds nicely upon the first request I pass through the middleware from the frontend (angularJS). Upon subsequent requests however, I do not get any response at all, not even a timeout.
If I then restart the pyramid application, the same behaviour described above happens again.
I'm observing this behavior with a couple of procedures that were implemented just yesterday. Some other procedures implemented months ago are working just fine, regardless of how often I call them.
I'm not writing the procedures myself; they are provided to me.
From what I'm describing here, can anybody tell where the bug is most probably hiding? | pyramid middleware call to mssql stored procedure - no response | 1.2 | 1 | 0 | 122
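A hedged sketch of what that fix amounts to with pymssql - one short-lived connection per stored-procedure call instead of reusing an exhausted one (server, credentials and procedure names are made up):

import pymssql

def call_proc(name, args):
    conn = pymssql.connect(server="dbhost", user="user",
                           password="secret", database="mydb")
    try:
        cursor = conn.cursor()
        cursor.callproc(name, args)
        rows = cursor.fetchall()   # assuming the procedure returns rows
        conn.commit()
        return rows
    finally:
        conn.close()               # not closing here is what left the DB hanging

first = call_proc("usp_first", ("param1",))
second = call_proc("usp_second", ("param2",))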
25,368,199 | 2014-08-18T16:48:00.000 | 1 | 0 | 0 | 0 | python,django,post,browser | 25,370,027 | 1 | true | 1 | 0 | Solution #1)
skip all this and see Rjzheng's link below -- it's much simpler.
Solution #2)
Since webbrowser.open() doesn't take POST args:
1) write a javascript page which accepts args via GET, then does an Ajax POST
2) have webbrowser.open() open the URL from step #1
Not glamorous, but it'll work :)
Be careful with security: you don't want to expose someone's password in the GET URL! | 1 | 0 | 0 | I made a local GUI which requires the users to enter their usernames and passwords. Once they click submit, I want to have a pop-out window which directs them to a website with their personal information through POST, which requires a request. I know that there is webbrowser.open() to open a website, but it doesn't take any requests; how would I be able to do what I want it to do? I am using django 1.6 and python 2.7 | How to use webbrowser.open() with request in python | 1.2 | 0 | 1 | 2,534
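A small Python 2.7 sketch of step 2 - building the GET URL for the intermediate page and opening it (the local URL is hypothetical, and per the warning above, avoid putting a real password in it):

import urllib
import webbrowser

params = urllib.urlencode({"username": "alice", "token": "not-a-real-password"})
webbrowser.open("http://localhost:8000/ajax_post_page?" + params)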
25,368,320 | 2014-08-18T16:56:00.000 | 1 | 0 | 1 | 0 | python,tkinter,py2exe | 25,370,162 | 2 | false | 0 | 1 | Well, I had installed both versions of Python 32 bit and 64 bit in my machine. When I was making it a stand alone probably some dlls were copied from the wrong library. So I completely uninstalled both versions and then installed 32 bit and it worked fine. | 1 | 0 | 0 | I have a python script that works fine on my computer (Python 2.7 32 bit installed). It has the following imports :
import mechanize
from bs4 import BeautifulSoup
from Tkinter import *
import json
import webbrowser
I wanted to distribute this to others so I found that we can create exe files using py2exe. I wrote a script like this:
from distutils.core import setup
import py2exe
setup(console=['notification.py'],
options = {'py2exe' : {
'packages' : ['bs4', 'mechanize','Tkinter', 'json', 'webbrowser']
}})
This works fine on my computer but when I run it on Windows XP, I get this error -
Traceback (most recent call last):
File "notification.py", line 3, in
File "Tkinter.pyc", line 38, in
File "FixTk.pyc", line 65, in
File "_tkinter.pyc", line 12, in
File "_tkinter.pyc", line 10, in __load
ImportError: DLL load failed: %1 is not a valid Win32 application.
I tried searching other threads but found none that has the same problem. So please help me fix this issue. | exe generated from a python script with Py2exe does not work on xp | 0.099668 | 0 | 0 | 579 |
25,370,287 | 2014-08-18T19:02:00.000 | 4 | 0 | 1 | 0 | python,django,lifecycle | 25,370,876 | 1 | true | 1 | 0 | This is not a function of Django at all, but of whatever system is being used to serve Django. Usually that'll be wsgi via something like mod_wsgi or a standalone server like gunicorn, but it might be something completely different like FastCGI or even plain CGI.
The point is that all these different systems have their own models that determines process lifetime. In anything other than basic CGI, any individual process will certainly serve several requests before being recycled, but there is absolutely no general guarantee of how many - the process might last several days or weeks, or just a few minutes.
One thing to note though is that you will almost always have several processes running concurrently, and you absolutely cannot count on any particular request being served by the same one as the previous one. That means if you have any user-specific data you want to persist between requests, you need to store it somewhere like the session. | 1 | 3 | 0 | When using Django, how long does the Python process used to service requests stay alive? Obviously, a given Python process services an entire request, but is it guaranteed to survive across requests?
The reason I ask is that I perform some expensive computations when I import certain modules and would like to know how often the modules will be imported. | Django Process Lifetime | 1.2 | 0 | 0 | 266
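A tiny sketch of the session-based storage the answer recommends (hypothetical view):

from django.http import HttpResponse

def remember_choice(request):
    # request.session is backed by shared storage (database or cache), so the value
    # survives no matter which worker process serves the next request.
    request.session['last_choice'] = request.GET.get('choice', '')
    return HttpResponse(request.session['last_choice'])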
25,373,895 | 2014-08-19T00:12:00.000 | 1 | 0 | 0 | 0 | python-2.7,opengl,pygame | 25,383,333 | 1 | true | 0 | 1 | I am no expert in Python but you could try:
Multiply the color of the image by a value like 0.1, 0.2, 0.3 (anything less than 1), which will give you a very dark texture. This would be the easiest method, as it involves just reducing the color values of that texture.
Or you could try a more complex method such as drawing a transparent black quad over the original image to give it the illusion of being in a shadow. | 1 | 0 | 0 | I'm trying to make an OpenGL game in python/pygame but I don't know how to add shadows. I don't want to make a lot of darker images for my game. Can someone help me? | How can you edit the brightness of images in PyGame? | 1.2 | 0 | 0 | 933
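A minimal sketch of both suggestions in pygame (the file name is an example):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
image = pygame.image.load("sprite.png").convert_alpha()

# Second suggestion: blit a semi-transparent black overlay ("quad") onto the image
shadow = pygame.Surface(image.get_size(), pygame.SRCALPHA)
shadow.fill((0, 0, 0, 128))          # the alpha value controls how dark the shadow is
image.blit(shadow, (0, 0))

# First suggestion (multiply the colours by a value below 1) would be:
# image.fill((80, 80, 80), special_flags=pygame.BLEND_MULT)

screen.blit(image, (100, 100))
pygame.display.flip()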
25,374,338 | 2014-08-19T01:18:00.000 | 0 | 0 | 0 | 0 | javascript,python,ajax,highcharts,graphite | 25,381,895 | 1 | true | 0 | 0 | Maybe better is call one ajax which gets all data and then prepare parser which will return data for each chart. | 1 | 0 | 0 | In general I want to know the possible benefits of Graphite. For now I have a web app that receives data directly from JavaScript Ajax call and plots the data using high chart.
It first runs 20 different queries, one for each graph, using Python against my SQL database.
It then sends each result to the Highcharts library using a GET Ajax call.
Highcharts then adds the points to each graph in real time.
There is no need to save data because I only need realtime plotting within a certain time range. Data outside the time range is just flushed.
But when I see the 20 Ajax calls in one page I feel like I am doing this in an inefficient way although it gets the job done.
So I looked at Graphite, but it is hard for me to decide which is better. Since I will pull all the data from the existing SQL table, I don't need another storage layer. Everybody says Graphite performs fast, but I would still need to instantiate 20 different Graphite graphs. Please give me some guidance.
What would you do if you had to visualize 20 different realtime graphs on one page concurrently, each of which receives its own query data? | Graphite or multiple query with AJAX call? | 1.2 | 1 | 0 | 199
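A sketch of the single-call idea from the answer - one Django view that runs all the queries and returns every series in one JSON payload for the client to split per chart (names are made up):

import json
from django.http import HttpResponse

CHART_QUERIES = {"chart_1": "SELECT ...", "chart_2": "SELECT ..."}   # hypothetical 20 queries

def run_query(sql):
    return []        # placeholder for the existing query code

def all_chart_data(request):
    payload = {name: run_query(sql) for name, sql in CHART_QUERIES.items()}
    return HttpResponse(json.dumps(payload), content_type="application/json")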
25,375,469 | 2014-08-19T03:57:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,testing | 25,793,512 | 1 | false | 1 | 0 | The celery workers are still feeding off of the dev database even if the test server brings up other databases because they were told to in the settings file.
One fix would be to make a separate settings_test.py file that specifies the test database name, and bring up celery workers from the setUp code using subprocess.check_output so that they consume from a special queue for testing. Then these celery workers would feed from the test database rather than the dev database.
I'm testing the task, and need it to be able to contact the view and get data back from it during the test, so I'm using a LiveServerTestCase. In theory I set up the database in the setUp function of my test case (I add a list of product instances) and then call the task, it does some stuff, and then calls the Django view through urllib (hitting the dev server set up by the LiveServerTestCase), getting a JSON list of product instances back.
In practice, though, it looks like the products I add in setUp aren't visible to the view when it's called. It looks like the test case code is using one database (test_<my_database_name>) and the view running on the dev server is accessing another (the urllib call successfully contacts the view but can't find the product I've asked for).
Any ideas why this may be the case?
Might be relevant - we're testing on a MySQL db instead of the sqlite.
Heading off two questions (but interested in comments if you think we're doing this wrong):
I know it seems weird that the task accesses the view using urllib. We do this because the task usually calls one of a series of third party APIs to get info about a product, and if it cannot access these, it accesses our own Django database of products. The code that makes the urllib call is generic code that is agnostic of which case we're dealing with.
These are integration tests so we'd prefer actually make the urllib call rather than mock it out | LiveServerTestCase server sees different database to tests | 0 | 0 | 0 | 190 |
25,375,903 | 2014-08-19T04:58:00.000 | 0 | 0 | 0 | 0 | python,django,windows-7-x64,aptana3 | 25,376,287 | 1 | false | 1 | 0 | After some searching, finally figured out that the default program to run the django-admin.py was aptana studio 3, even though the program had supposedly been uninstalled completely from my system. I changed the default program to be the python console launcher and now it works fine. There goes 2 hours down the drain.. | 1 | 0 | 0 | I am having an issue with starting a new project from the command prompt. After I have created a virtual env and activated the enviroment, when I enter in .\Scripts\django-admin.py startproject new_project, a popup window shows up which says "AptanaStudio3 executable launcher was unable to locate its companion shared library"
I have tried uninstalling Aptana studio, but even when it is uninstalled, the error still occurs. Not sure what I need to do to fix this. I have not uninstalled/reinstalled python; I'm not even sure if that has anything to do with it. Many thanks in advance | Aptana Studio 3 newproject error with Django | 0 | 0 | 0 | 116
25,380,448 | 2014-08-19T09:52:00.000 | 1 | 0 | 1 | 0 | python,excel,csv,export-to-csv | 25,380,579 | 2 | false | 0 | 0 | You shall inspect the real content of CSV file you have created and you will see, that there are ways to enclose text in quotes. This allows distinction between delimiter and a character inside text value.
Check csv module documentation, it explains these details too. | 1 | 1 | 0 | I am using python's CSV module to output a series of parsed text documents with meta data. I am using the csv.writer module without specifying a special delimiter, so I am assuming it is delimited using commas. There are many commas in the text as well as in the meta data, so I was expecting there to be way more columns in the document rows, when compared to the header row.
What surprises me is that when I load the outputted file in Excel, everything looks exactly right. How does Excel know how to delimit this correctly??? How is it able to figure out which commas are text commas and which ones are delimiters?
Related question: Do people usually use CSV for saving text documents? Is this a standard practice? It seems inferior to JSON or creating a SQLite database in every sense, from long-term sustainability to ease of interpreting without errors. | Python CSV module - how does it avoid delimiter issues? | 0.099668 | 1 | 0 | 518 |
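A small demonstration of the quoting behaviour described in the answer above (Python 2 style file modes):

import csv

with open("out.csv", "wb") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "text"])
    writer.writerow([1, "Hello, world, with commas"])

# The file contains: 1,"Hello, world, with commas"
# so Excel (and csv.reader) can tell the field-separating comma from the text commas.
with open("out.csv", "rb") as f:
    for row in csv.reader(f):
        print(row)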
25,382,412 | 2014-08-19T11:36:00.000 | 2 | 0 | 1 | 1 | python,python-3.x | 25,382,814 | 1 | false | 0 | 0 | The code works, the problem is probably your terminal settings. Go there and find the settings for "bell" and make sure it's set to "audible" or whatever your system calls it (as opposed to "visual" or "disabled" etc.).
To prove that it isn't Python's fault, try pressing backspace at the terminal prompt when nothing has been typed on the line. This should make the bell ding on most systems where it is enabled. | 1 | 1 | 0 | I have tried print('\a') just to produce a sound, but it didn't work. Why, and how can I make it work? I'm on a Linux system. | print('\a') doesn't work on linux (no sound) | 0.379949 | 0 | 0 | 788
25,383,624 | 2014-08-19T12:37:00.000 | 2 | 1 | 0 | 0 | python,c++,cross-platform,buffer,communication | 34,848,899 | 2 | false | 0 | 1 | I think we can use Named Pipes for communication between a python and C++ program if the processes are on the same machine.
However, for different machines sockets are the best option. | 1 | 13 | 0 | I am looking for an efficient and smart way to send data between a C++ program and a Python script. I have a C++ program which calculates some coordinates in real time at 30 Hz, and I want to access these coordinates from a Python script. My first idea was to simply create a .txt file, write the coordinates to it, and then have Python open the file and read it. But I figured that there must be a smarter and more efficient way using RAM and not the hard drive.
Does anyone have any good solutions for this? The C++ program should write 3 coordinates (x, y, z) to some sort of buffer or file, and the Python program can open it and read them. Ideally the C++ program overwrites the coordinates every time, and there's no problem with reading and writing to the file/buffer at the same time.
Thank you for your help | Communication between C++ and Python | 0.197375 | 0 | 0 | 20,489 |
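A minimal sketch of the named-pipe option on the Python side; the C++ program would open the same FIFO for writing (the path and line format are just examples):

import os

FIFO_PATH = "/tmp/coords_fifo"
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

# open() blocks until the C++ side opens the pipe for writing
with open(FIFO_PATH, "r") as fifo:
    for line in fifo:                       # e.g. "1.0 2.0 3.0\n" per update
        x, y, z = map(float, line.split())
        print("%f %f %f" % (x, y, z))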
25,383,954 | 2014-08-19T12:53:00.000 | 0 | 0 | 1 | 0 | python,mpi,mpi4py | 28,458,585 | 3 | false | 0 | 0 | I had a similar problem. For me, the easiest way to work around this was to have each process write out to its own file, and include a time stamp. This file can then be processed afterwards to put everything in order.
For example, include (python3-style) prints like:
print("Process %d just received point %r at %s" % (rank, point,str(datetime.datetime.now())))
Just include datetime at the top. mpi4py seems to buffer some of it's I/O in an interesting fashion, so each process maintaining its own output is the most robust solution. | 2 | 1 | 0 | I am using mpi4py to model a distributed application and I want all the processes to write to a common file. Is there any function which allows this without the race condition ? | file writing in mpi using python without race condition | 0 | 0 | 0 | 1,017 |
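A compact sketch of that per-process approach (mpi4py's standard rank lookup; the file name pattern is arbitrary):

import datetime
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

# Each rank owns its own file, so there is no race; merge and sort by timestamp afterwards.
with open("worker_%03d.log" % rank, "a") as log:
    log.write("%s rank %d started\n" % (datetime.datetime.now(), rank))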
25,383,954 | 2014-08-19T12:53:00.000 | 0 | 0 | 1 | 0 | python,mpi,mpi4py | 25,389,738 | 3 | false | 0 | 0 | You should check out one of the many tutorials out there for using MPI I/O. I'm sure there's some way to use it in mpi4py. | 2 | 1 | 0 | I am using mpi4py to model a distributed application and I want all the processes to write to a common file. Is there any function which allows this without the race condition ? | file writing in mpi using python without race condition | 0 | 0 | 0 | 1,017 |
25,385,706 | 2014-08-19T14:13:00.000 | 0 | 0 | 0 | 0 | python,django,configuration,distributed,etcd | 26,611,821 | 2 | false | 1 | 0 | I haven't used CoreOS or Docker but read a lot and think it's very sexy stuff. I guess the solution depends on how you set up your app. If you have the same sort of "touch-reload" support you see in many appservers (uWSGI f.ex.), you can set key_file in /etc/etcd/etcd.conf and make your appserver watch that. This feels a ton heavier than it should be thou. I'm quite sure someone with experience with the platform can come up with something much better. | 1 | 6 | 0 | Let's say that I have a Django app, and I've offloaded environment variable storage to etcd. When I deploy a new server, the app can read from etcd, write the vars into (for example) a Python file that can be conditionally loaded on the app boot. This much is acceptable.
When the configuration changes, however, I have no way of knowing. Afaik, etcd doesn't broadcast changes. Do I need to set up a daemon that polls and then reloads my app on value changes? Should I query etcd whenever I need to use one of these parameters? How do people handle this? | Using etcd to manage Django settings | 0 | 0 | 0 | 1,116 |
25,386,119 | 2014-08-19T14:32:00.000 | 0 | 0 | 0 | 0 | python,django,many-to-many,foreign-key-relationship,one-to-one | 69,433,734 | 2 | false | 1 | 0 | In my point of View the diff b/w One-To-One & One-To-Many is
One-To-One: it means one person can have only one passport.
One-To-Many: it means one person can have many addresses (permanent address, office address, secondary address).
If you query the parent model, you can automatically reach the many related child objects.
Could someone explain what the difference is between a OneToOne, ManyToMany and ForeignKey? | Whats the difference between a OneToOne, ManyToMany, and a ForeignKey Field in Django? | 0 | 0 | 0 | 31,671 |
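A compact illustration of the three field types the question asks about (model names are only examples):

from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)

class Passport(models.Model):
    # OneToOne: each Person has at most one Passport, and vice versa
    owner = models.OneToOneField(Person, on_delete=models.CASCADE)

class Address(models.Model):
    # ForeignKey (many-to-one): many Addresses can point to the same Person
    resident = models.ForeignKey(Person, on_delete=models.CASCADE)

class Club(models.Model):
    # ManyToMany: a Person can join many Clubs, and a Club has many members
    members = models.ManyToManyField(Person)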
25,388,124 | 2014-08-19T16:07:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-bigquery | 25,393,093 | 1 | true | 1 | 0 | This is a known issue that has lingered for far far too long. It is fixed in this week's release, which should go live this afternoon or tomorrow. | 1 | 1 | 0 | We have a query which returns 0 records sometimes when called. When you call the getQueryResults on the jobId it returns with a valid pageToken with 0 rows. This is a bit unexpected since technically there is no data. Whats worst is if you keep supplying the pageToken for subsequent data-pulls it keeps giving zero rows with valid tokens at each page.
If the query does return data initially with a pageToken and you keep using the pageToken for subsequent data pulls it returns pageToken as None after the last page giving a termination condition.
The behavior here seems inconsistent. Is this a bug?
Here is a sample job response I see:
{u'kind': u'bigquery#getQueryResultsResponse', u'jobReference': {u'projectId': u'xxx', u'jobId': u'job_aUAK1qlMkOhqPYxwj6p_HbIVhqY'}, u'cacheHit': True, u'jobComplete': True, u'totalRows': u'0', u'pageToken': u'CIDBB777777QOGQFBAABBAAE', u'etag': u'"vUqnlBof5LNyOIdb3TAcUeUweLc/6JrAdpn-kvulQHoSb7ImNUZ-NFM"', u'schema': {......}}
I am using python and running queries on GAE using the BQ api | BigQuery Api getQueryResults returning pageToken for 0 records | 1.2 | 1 | 0 | 471 |
25,388,571 | 2014-08-19T16:32:00.000 | 0 | 0 | 0 | 0 | python,gtk,gtk3,pygobject | 25,463,540 | 1 | false | 0 | 1 | Assuming the new parent is at the same level of the tree, you could use treestore.swap, otherwise you may have to just remove all the rows in the subtree and reinsert them at the new position | 1 | 0 | 0 | I have a treeiter created with the treeiter = self.devices_treestore.append(parent_treeiter, column_values_list) call.
How can I move it to another parent (with the whole subtree it holds)? | GTK+: Move tree element to another parent | 0 | 0 | 0 | 102 |
25,389,095 | 2014-08-19T17:02:00.000 | 1 | 1 | 1 | 0 | python | 61,017,490 | 22 | false | 0 | 0 | I used the ../ method to fetch the current project path.
Example:
Project1 -- D:\projects
src
ConfigurationFiles
Configuration.cfg
Path="../src/ConfigurationFiles/Configuration.cfg" | 1 | 221 | 0 | I've got a python project with a configuration file in the project root.
The configuration file needs to be accessed in a few different files throughout the project.
So it looks something like: <ROOT>/configuration.conf
<ROOT>/A/a.py, <ROOT>/A/B/b.py (where both b.py and a.py access the configuration file).
What's the best / easiest way to get the path to the project root and the configuration file without depending on which file inside the project I'm in, i.e. without using ../../? It's okay to assume that we know the project root's name. | Python - Get path of root project structure | 0.009091 | 0 | 0 | 355,036
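A common alternative sketch, different from the ../ approach in the answer above: derive the root from __file__, following the layout described in the question:

import os

# from <ROOT>/A/B/b.py, climb three directory levels to reach <ROOT>
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
CONFIG_PATH = os.path.join(PROJECT_ROOT, "configuration.conf")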
25,392,779 | 2014-08-19T20:53:00.000 | 0 | 0 | 1 | 0 | python,json,synchronization | 25,403,785 | 3 | false | 0 | 0 | If concurrency is not required, maybe consider writing 2 functions to read and write the data to a shelf file? Our is the idea to have the dictionary" aware" of changes to update the file without this kind of thing? | 2 | 5 | 0 | In perl there was this idea of the tie operator, where writing to or modifying a variable can run arbitrary code (such as updating some underlying Berkeley database file). I'm quite sure there is this concept of overloading in python too.
I'm interested to know what the most idiomatic way is to basically consider a local JSON file as the canonical source of needed hierarchical information throughout the running of a python script, so that changes in a local dictionary are automatically reflected in the JSON file. I'll leave it to the OS to optimise writes and cache (I don't mind if the file is basically updated dozens of times throughout the running of the script), but ultimately this is just about a kilobyte of metadata that I'd like to keep around. It's not necessary to address concurrent access to this. I'd just like to be able to access a hierarchical structure (like nested dictionary) within the python process and have reads (and writes to) that structure automatically result in reads from (and changes to) a local JSON file. | In Python, how do I tie an on-disk JSON file to an in-process dictionary? | 0 | 0 | 0 | 500 |
25,392,779 | 2014-08-19T20:53:00.000 | 1 | 0 | 1 | 0 | python,json,synchronization | 25,406,625 | 3 | false | 0 | 0 | This is a developpement from aspect_mkn8rd' answer taking into account Gerrat's comments, but it is too long for a true comment.
You will need 2 special container classes emulating a list and a dictionary. In both, you add a pointer to a top-level object and override the following methods:
__setitem__(self, key, value)
__delitem__(self, key)
__reversed__(self)
All those methods are called on modification and should cause the top-level object to be written to disk.
In addition, __setitem__(self, key, value) should check whether value is a list and wrap it in a special list object, or whether it is a dictionary and wrap it in a special dictionary object. In both cases, the method should set the top-level object on the new container. If it is neither of these and the object defines __setitem__, it should raise an exception saying the object is not supported. Of course you should then modify the method to take this new class into account.
Of course, there is a good deal of code to write and test, but it should work - left to the reader as an exercise :-) | 2 | 5 | 0 | In perl there was this idea of the tie operator, where writing to or modifying a variable can run arbitrary code (such as updating some underlying Berkeley database file). I'm quite sure there is this concept of overloading in python too.
I'm interested to know what the most idiomatic way is to basically consider a local JSON file as the canonical source of needed hierarchical information throughout the running of a python script, so that changes in a local dictionary are automatically reflected in the JSON file. I'll leave it to the OS to optimise writes and cache (I don't mind if the file is basically updated dozens of times throughout the running of the script), but ultimately this is just about a kilobyte of metadata that I'd like to keep around. It's not necessary to address concurrent access to this. I'd just like to be able to access a hierarchical structure (like nested dictionary) within the python process and have reads (and writes to) that structure automatically result in reads from (and changes to) a local JSON file. | In Python, how do I tie an on-disk JSON file to an in-process dictionary? | 0.066568 | 0 | 0 | 500 |
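A bare-bones sketch of the wrapping idea from the second answer - a dict subclass whose mutating methods rewrite the JSON file (flat, one level deep; nested values would need the extra wrapping described above):

import json

class JSONDict(dict):
    def __init__(self, path, *args, **kwargs):
        self._path = path
        super(JSONDict, self).__init__(*args, **kwargs)
        self._flush()

    def _flush(self):
        with open(self._path, "w") as f:
            json.dump(self, f)

    def __setitem__(self, key, value):
        super(JSONDict, self).__setitem__(key, value)
        self._flush()

    def __delitem__(self, key):
        super(JSONDict, self).__delitem__(key)
        self._flush()

meta = JSONDict("metadata.json", {"title": "draft"})
meta["pages"] = 3      # metadata.json is rewritten on every change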
25,393,067 | 2014-08-19T21:13:00.000 | 1 | 0 | 1 | 0 | python,multithreading,resources | 25,393,196 | 1 | true | 0 | 0 | Python threads (as opposed to multiprocessing processes) use the same block of memory. If a thread adds something to a data structure that is directly or indirectly referenced from the master thread or other workers (for instance, a shared dictionary or list), that data won't be deleted when the thread dies. So basically, as long as the only data your threads write to memory is referenced by variables local to the thread target function scope or below, the resources should be cleaned up the next time the gc runs after the thread exits. | 1 | 0 | 0 | I am writing super awesome software where i will create a new thread every new minute. This thread will store some data on a remote database server and end. When a new thread is created resources(memory...) are assigned to that thread. If i don't correctly free those resources at some time i will have a problem.
The thread that stores the data can sometimes end unexpectedly, for example with an error because the remote server is unreachable. This is not a problem: the thread will end and the data will be stored the next minute together with that minute's data.
So my question is: Do python threads free all the resources they use when they end as expected? Do they free all resources when they end because of an error? | Python MultiThreading. Releasing resources | 1.2 | 0 | 0 | 362
25,393,753 | 2014-08-19T22:05:00.000 | 3 | 0 | 0 | 0 | python,django,django-south | 25,393,815 | 1 | false | 1 | 0 | It sounds like you want your program to add and delete fields from the model? That sounds like a bad idea. That would imply that your database schema will change dynamically under program control, which would be very unusual indeed. Think harder about what data you need to represent, and come up with a database schema that works for all of your data.
Or, change to a non-SQL database, which means avoiding South altogether. | 1 | 0 | 0 | I have need to dynamically (not manually edit models.py) alter/add/remove from a Django Model. Is this possible? Once the model is altered, will it persist? I then want to use South for running the database migration from the altered model. | Dynamically add to Django Model | 0.53705 | 0 | 0 | 106 |
25,395,229 | 2014-08-20T01:00:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,code-organization,namedtuple | 25,395,776 | 1 | true | 0 | 0 | The thought process used for deciding where to place the namedtuples is no different than the one you would use for any other line of code:
Modules define logical units of functionality. Certain pieces of code may never need to know about or interact with another piece of code. The identification of these boundary lines are a strong hint for where to break the code into modules.
Modules encapsulate an interface. They give you the opportunity to define an API through which all other pieces of code interact, while isolating the details of its implementation in the module. Isolating code in modules makes it easier to know where to focus your attention when you want to change the implementation while preserving the API.
Once you've identified the logical units (i.e. modules) and the API through which the logical units will interact, it should be clearer where to place the namedtuples.
If one module, X needs to import another module, Y, for no other reason than for the definition of the namedtuples, then it may make sense to place the namedtuples in a separate module, Z, because you've found a boundary line.
If, however, X would need to import the Y anyway, then it really would not make much difference if the namedtuples were placed in a separate module, since everywhere you import Y you also import Z.
Now, it is frequently the case that X does not need all the functionality provided by Y, and so you might be tempted to separate that smaller bit that X needs into a separate module. But after a certain point breaking up every little bit into it's own module is craziness -- it becomes more burdensome to have lots of little modules rather than one medium-sized module. Where that line is -- exactly what is medium-sized -- is a matter of taste and what you envision to be the logical units of functionality. | 1 | 3 | 0 | I'm using quite a few namedtuples in my Python codebase and they're littered all over the .py files. Is it a good practice to extract all these declarations into a separate file or should they stay put where they're used?
In a few cases other modules need to use reference the namedtuples in separate modules since that's how the interfaces are defined - they expect namedtuples. What is the recommended Pythonic way of organizing the various namedtuples especially for cross module references? | Should all namedtuples be in a separate file? | 1.2 | 0 | 0 | 948 |
25,395,814 | 2014-08-20T02:22:00.000 | 0 | 1 | 0 | 1 | python,terminal,raspberry-pi,tesseract,raspbian | 25,409,597 | 1 | false | 0 | 0 | There are ways to do what you asked, but I think you lack some research of your own, as some of these answers are very "googlable".
You can print commands to LX terminal with python using "sys.stdout.write()"
For the boot question:
1 - sudo raspi-config
2 - change the Enable Boot to Desktop to Console
3 - there is more than one way to make your script auto-executable:
-you have the Crontab (which I think will be the easiest, but probably not the best of the 3 ways)
-you can also make your own init.d script (best, not easiest)
-or you can use the rc.local
Also be careful when placing an infinite loop script in auto-boot.
Make a quick google search and you will find everything you need.
Hope it helps.
D.Az | 1 | 0 | 0 | Okay, so for a school project I'm using a Raspberry Pi to make a device that basically holds both the functions of an OCR and a TTS. I heard that I need to use Google's tesseract through a terminal, but I am not willing to retype the commands each time I want to use it. So I was wondering if I could either:
A: Use python to print commands into the LX Terminal
B: use a type of loop command on the LX terminal and save as a script?
It would also be extremely helpful if I could find out how to make my RPi go straight to my script rather than the Raspbian desktop when it first boots up.
Thanks in advance. | can I use python to paste commands into LX terminal? | 0 | 0 | 0 | 428 |
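A small sketch of option A in practice - driving tesseract from Python with subprocess so the terminal commands never have to be retyped (file names are examples):

import subprocess

def ocr(image_path, output_base="result"):
    # equivalent to typing: tesseract page.png result
    subprocess.check_call(["tesseract", image_path, output_base])
    with open(output_base + ".txt") as f:
        return f.read()

print(ocr("page.png"))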
25,396,421 | 2014-08-20T03:48:00.000 | 0 | 0 | 0 | 0 | python,http,python-requests | 25,396,973 | 1 | false | 0 | 0 | Spawn a thread (import threading). Run an HTTP server in there. You can generate a unique port on demand by socket.socket().bind(('', 0)). In the HTTP server, just write the incoming data to a file (perhaps named by timestamp and incoming port number). Then send your requests there. | 1 | 0 | 0 | I am looking for a recipe for writing and reading the raw data generated by a requests transaction from files rather than a socket. By "raw data" I mean the bytes just before they are written to or read from the underlying socket. I've tried:
Using "hooks". This seems to be mostly deprecated as the only remaining hook is "response".
mount()ing a custom Adapter. Some aggressive duck-typing here provides access to the underlying httplib.HTTPConnection objects, but the call stack down there is complicated and quite brittle.
The final solution does not need to be general-purpose as I am only interested in vanilla HTTP functionality. I won't be streaming or using the edgier parts of the protocol.
Thanks! | Perofrming a python-requests Request/Reponse transaction using files rather than a socket | 0 | 0 | 1 | 107 |
25,396,898 | 2014-08-20T04:50:00.000 | 0 | 1 | 0 | 0 | python-2.7,ftp | 25,397,066 | 2 | false | 0 | 0 | This is impossible
"The original FTP specification and recent extensions to it do not include any way to preserve the time and data for files uploaded to a FTP server." | 1 | 0 | 0 | How to change a modification time of file via ftp?
Any suggestions? Thanks! | How to set file modification time via ftp with python | 0 | 0 | 0 | 318 |
25,397,840 | 2014-08-20T06:12:00.000 | 2 | 0 | 0 | 0 | python,plot,widget,pyqt,pyqtgraph | 25,578,618 | 1 | false | 0 | 1 | For (1), you will have to break your data into two separate lines, and assign the colors individually. PyQtGraph does not yet support multiple colors per line.
For (2), consider using pg.InfiniteLine or pg.VTickGroup. | 1 | 2 | 0 | I'm making a time series monitoring program.
I'd like to change the color of a plot starting at half the range of the x-axis.
For a 100 x 20 plot widget I would like to change the last 50 data points to another color.
How can I draw a custom vertical grid line every xx items of data? | How can I obtain a pyqtgraph plotwidget with variable colors and grids depending on the data? | 0.379949 | 0 | 0 | 1,047
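A rough sketch combining both suggestions from the answer (pen colours and marker spacing are arbitrary):

import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtGui

x = np.arange(100)
y = np.random.normal(size=100)

plot = pg.plot()                      # standalone plot window
plot.plot(x[:50], y[:50], pen='b')    # first half in one colour
plot.plot(x[49:], y[49:], pen='r')    # second half in another (shared point keeps the line joined)

for pos in range(0, 101, 20):         # a vertical marker every 20 samples
    plot.addItem(pg.InfiniteLine(pos=pos, angle=90))

QtGui.QApplication.instance().exec_()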
25,400,493 | 2014-08-20T08:50:00.000 | 0 | 1 | 1 | 0 | python,c++,multithreading,cpython,gil | 25,400,635 | 2 | false | 0 | 1 | Only if you spawn separate interpreters.
The GIL is a one-per-interpreter policy to protect interpreter internals. One interpreter will run one line at a time.
The only other way is to program at least one of your threads in pure C++ and offer a communication queue API to your Python script, or really any other way to communicate asynchronously.
I am currently dealing with the dread which is the GIL. My project requires concurrency of at least 2 threads and the easy typing in Python would really help with code simplicity.
Would embedding my Python code in a C++ script which deals with the threading circumvent the problems the GIL causes? | Python GIL: concurrent C++ embed | 0 | 0 | 0 | 1,020 |
25,403,160 | 2014-08-20T11:07:00.000 | 1 | 0 | 0 | 1 | python,tornado | 25,410,159 | 1 | true | 0 | 0 | These methods are used internally; you shouldn't call them yourself. | 1 | 1 | 0 | I am learning the web framework Tornado. During the study of this framework, I found the class tornado.httpserver.HTTPserver. I know how to create a constructor of this class and create instance tornado.httpserver.HTTPserver in main() function. But this class tornado.httpserver.HTTPserver has 4 methods. I have not found how to use these methods.
1) def close_all_connections(self):
2) def handle_stream(self, stream, address):
3) def start_request(self, server_conn, request_conn):
4) def on_close(self, server_conn):
I know that 2-4 methods are inherited from the class tornado.tcpserver.TCPServer
Can someone illustrate how to use these methods of a class tornado.httpserver.HTTPserver? | How to call methods of class tornado.httpserver.HTTPserver? | 1.2 | 0 | 0 | 120 |
25,407,197 | 2014-08-20T14:20:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,jinja2 | 25,407,418 | 1 | true | 1 | 0 | These are very artificial distinctions, and it's a mistake to assume that all apps have each of these layers, or that any particular function will fit only into one of them.
Jinja2 is a template language. It's firmly in the presentation layer.
There isn't really any such thing as the data access layer. If you really need to put something here, one possibility would be whichever library you are using to access the data: ndb or the older db. | 1 | 0 | 0 | I'm new to Python/GAE and jinja2, and I want to present a schema of this architecture by displaying it in layers, like this:
Presentation Layer: HTML+CSS+JQUERY
Business Layer: webapp2
DAO Layer: (I don't know what to put here for Python; I found some examples for Java that put "JDO or low-level API" here)
Data Layer: appengine DataStore
My questions:
Regarding jinja2, where can I put it?
What can I put in DAO layer for Python/GAE
Thanks | In which design layer i can put jinja2 | 1.2 | 0 | 0 | 54 |
25,408,726 | 2014-08-20T15:28:00.000 | 1 | 0 | 0 | 0 | python,qt,qgraphicsview | 25,409,718 | 1 | true | 0 | 1 | You need to set QGraphicsItem::ItemIgnoresTransformations flag for text items. See the documentation:
This flag is useful for keeping text label items horizontal and unscaled, so they will still be readable if the view is transformed. | 1 | 1 | 0 | Is there a way of somehow putting a QGraphicsTextItem so that the text is always displayed undistorted and in the same size with respect to the user?
Imagine a scene that is zoomed in and out, but has points that are marked with a dot and some text. If the text is part of the scene it is zoomed with the scene and will be unreadable most of the time. | Qt display text in QGraphicsView undistorted, even in IgnoreAspectRatio views | 1.2 | 0 | 0 | 74
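A short sketch of setting that flag from Python (PyQt4 names assumed):

from PyQt4 import QtGui

app = QtGui.QApplication([])
label = QtGui.QGraphicsTextItem("point label")
label.setFlag(QtGui.QGraphicsItem.ItemIgnoresTransformations)
# scene.addItem(label) -- the label now keeps its size while the view zooms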
25,410,153 | 2014-08-20T16:43:00.000 | 3 | 0 | 1 | 0 | python-2.7,argparse | 25,410,267 | 1 | true | 0 | 0 | The documentation is talking about having flags (what it refers to as optional arguments) as being required (presumably positional arguments should be used instead). But if you insist on having them be required, that is the way to do it. | 1 | 2 | 0 | I have been using optparse module till python 2.6
But as 2.7 documentation says that optparse is deprecated, I am trying to explore argparse
Looks like I am stuck at a point wherein I need to write a script which accepts multiple 'mandatory' arguments where their position is not fixed. In addition it may have optional parameters and flags too
So I need something like:
xyz_script.py --foo --bar --flag1 --flag2 --opt1
One way I could think of is using 'required=True' with optional arguments in argparse, but the documentation says that it is not recommended.
Is there any other way of achieving this? | Non-positional but required argument with argparse | 1.2 | 0 | 0 | 927
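A minimal sketch of the required=True approach discussed above:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--foo', required=True)        # mandatory, but position-independent
parser.add_argument('--bar', required=True)
parser.add_argument('--opt1')                      # truly optional
parser.add_argument('--flag1', action='store_true')
parser.add_argument('--flag2', action='store_true')
args = parser.parse_args()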
25,412,094 | 2014-08-20T18:39:00.000 | 1 | 0 | 0 | 0 | javascript,python,google-app-engine,google-plus,google-signin | 25,419,890 | 2 | true | 1 | 0 | You cannot perform reliable access control using only client-side javascript.
This is because the JavaScript is executed in the user's browser, so the user will be able to bypass any access control rule you've set there.
You must perform your access control on the server side, in your case in Python code.
Generally, people also perform some kind of access control check on the client side, not to prevent access, but for example to hide/disable buttons that the user cannot use. | 1 | 0 | 0 | I decided to use social media benefits on my page and currently I'm implementing Google+ Sign-In.
One of the pages on my website should be accessible to logged-in users only (adding stuff to the page). I am logging the user in to the website via JavaScript.
I'm aware that JavaScript is executed on the client side, but I am curious whether it is possible to restrict access to a certain page using only JavaScript. | Google+ Sign-In - Page accessible for logged in users only | 1.2 | 0 | 1 | 73
25,412,906 | 2014-08-20T19:27:00.000 | 0 | 0 | 1 | 0 | python,macos,ipython,packages | 25,412,973 | 1 | true | 0 | 0 | Use the pip/easy_install binary located in /sw/bin to install packages. If you want this to be the default when the command is called, just put /sw/bin before /usr/bin in your .profile. | 1 | 0 | 0 | I have Mac OS X 10.7. I have a "default" ipython, but also a different ipython installed at a different location /sw/bin/ipython. When I install packages with pip install, though, I can't access them with the ipython at /sw/bin/ipython. How would I install packages for this other ipython? | How can python packages be installed for ipython located in a special location on a Mac? | 1.2 | 0 | 0 | 32 |
25,413,343 | 2014-08-20T19:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,solr,django-haystack | 25,414,143 | 1 | true | 1 | 0 | I'd go with a modified version of the first one - it'll keep user specific data that's not going to be used for search out of the index (although if you foresee a case where you want to search for favourite'd articles, it would probably be an interesting field to have in the index) for now. For just display purposes like in this case, I'd take all the id's returned from Solr, fetch them in one SQL statement from the database and then set the UI values depending on that. It's a fast and easy solution.
If you foresee that "search only in my fav'd articles" as a use case, I would try to get that information into the index as well (or other filter applications against whether a specific user has added the field as a favourite). I'd try to avoid indexing anything more than the user id that fav'd the article in that case.
Both solutions would however work, although the latter would require more code - and the required response from Solr could grow large if a large number of users fav's an article, so I'd try to avoid having to return a set of userid's if that's the case (many fav's for a single article). | 1 | 0 | 0 | Let's assume I am developing a service that provides a user with articles. Users can favourite articles and I am using Solr to store these articles for search purposes.
However, when the user adds an article to their favourites list, I would like to be able to figure out out which articles the user has added to favourites so that I can highlight the favourite button.
I am thinking of two approaches:
Fetch articles from Solr and then loop through each article to fetch the "favourite-status" of this article for this specific user from MySQL.
Whenever a user favourites an article, add this user's ID to a multi-valued column in Solr and check whether the ID of the current user is in this column or not.
I don't know the capacity of the multivalued column... and I also don't think the second approach would be a "good practice" (saving user-related data in index).
What other options do I have, if any? Is approach 2 a correct approach? | Solr & User data | 1.2 | 1 | 0 | 102 |
25,414,394 | 2014-08-20T21:04:00.000 | 1 | 0 | 0 | 1 | python,django,eventlet,green-threads | 25,425,696 | 1 | true | 1 | 0 | There is no such context manager, though you are welcome to contribute one.
You have monkey patched everything, but you do not want to monkey patch socket in memcache client. Your options:
monkey patch everything but socket, then patcher.import_patched particular modules. This is going to be very hard with Django/Tastypie.
modify your memcache client to use eventlet.patcher.original('socket') | 1 | 1 | 0 | I have a Django/Tastypie app where I've monkey patched everything with eventlet.
I analysed performance during load tests while using both sync and eventlet worker classes for gunicorn. I tested against sync workers to eliminate the effects of waiting for other greenthreads to switch back, and I found that the memcached calls in my throttling code only take about 1ms on their own. Rather than switch to another greenthread while waiting for this 1ms response, I'd rather just block at this one point. Is there some way to tell eventlet to not switch to another greenthread? Maybe a context manager or something? | Prevent greenthread switch in eventlet | 1.2 | 0 | 0 | 349
25,415,104 | 2014-08-20T21:57:00.000 | 24 | 0 | 1 | 1 | python,linux,multiprocessing | 25,415,676 | 4 | true | 0 | 0 | SIGQUIT (Ctrl + \) will kill all processes even under Python 2.x.
You can also update to Python 3.x, where this behavior (only child gets the signal) seems to have been fixed. | 2 | 22 | 0 | I am running a Python program which uses the multiprocessing module to spawn some worker threads. Using Pool.map these digest a list of files.
At some point, I would like to stop everything and have the script die.
Normally Ctrl+C from the command line accomplishes this. But, in this instance, I think that just interrupts one of the workers and that a new worker is spawned.
So, I end up running ps aux | grep -i python and using kill -9 on the process ids in question.
Is there a better way to have the interrupt signal bring everything to a grinding halt? | Kill Python Multiprocessing Pool | 1.2 | 0 | 0 | 10,314 |
25,415,104 | 2014-08-20T21:57:00.000 | 0 | 0 | 1 | 1 | python,linux,multiprocessing | 25,415,725 | 4 | false | 0 | 0 | I found that using the python signal library works pretty well in this case. When you initialize the pool, you can pass a signal handler to each thread to set a default behavior when the main thread gets a keyboard interrupt.
If you really just want everything to die, catch the keyboard interrupt exception in the main thread, and call pool.terminate(). | 2 | 22 | 0 | I am running a Python program which uses the multiprocessing module to spawn some worker threads. Using Pool.map these digest a list of files.
At some point, I would like to stop everything and have the script die.
Normally Ctrl+C from the command line accomplishes this. But, in this instance, I think that just interrupts one of the workers and that a new worker is spawned.
So, I end up running ps aux | grep -i python and using kill -9 on the process ids in question.
Is there a better way to have the interrupt signal bring everything to a grinding halt? | Kill Python Multiprocessing Pool | 0 | 0 | 0 | 10,314 |
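A condensed sketch of the approach in the answer above - catch KeyboardInterrupt in the main process and terminate the pool (the worker function is a placeholder):

import multiprocessing

def digest(path):
    return len(path)            # placeholder for the real per-file work

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)
    try:
        results = pool.map(digest, ['a.txt', 'b.txt', 'c.txt'])
    except KeyboardInterrupt:
        pool.terminate()        # kill all workers immediately on Ctrl+C
    else:
        pool.close()
    pool.join()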
25,416,553 | 2014-08-21T00:32:00.000 | 0 | 0 | 1 | 0 | python,oop,rotation | 25,417,646 | 1 | false | 0 | 0 | Geometric objects that have a fixed boundary/end-points can be translated and rotated in place. But for a line, unless you talk about a line from point A to point B with a fixed length, you are looking at both end-points either being at infinity or -infinity (y = mx + c). Division using infinity or -infinity is not simple math and hence I believe complicates the rotation and translation algorithms | 1 | 1 | 0 | I run into an OOP problem when coding something in python that I don't know how to address in an elegant solution. I have a class that represents the equation of a line (y = mx + b) based on the m and b parameters, called Line. Vertical lines have infinite slope, and have equation x = c, so there is another class VerticalLine which only requires a c parameter. Note that I am unable to have a Line class that is represented by two points in the xy-plane, if this were a solution I would indeed use it.
I want to be able to rotate the lines. Rotating a horizontal line by pi/2 + k*pi (k an integer) results in a vertical line, and vice versa. So a normal Line would have to somehow be converted to a VerticalLine in-place, which is impossible in python (well, not impossible but incredibly wonky). How can I better structure my program to account for this problem?
Note that other geometric objects in the program have a rotation method that is in-place, and they are already used frequently, so if I could I would like the line rotation methods to also be in place. Indeed, this would be a trivial problem if the line rotation methods could return a new rotated Line or VerticalLine object as seen fit. | Elegant Solution to an OOP Issue involving type-changing in python | 0 | 0 | 0 | 45 |
25,419,510 | 2014-08-21T06:23:00.000 | 0 | 0 | 0 | 1 | android,python,ssh,kivy | 41,788,451 | 3 | false | 0 | 1 | Don't know you found the answer or not. But what i have understood is that you are trying to connect android device from Ubuntu. If I am right then (go on reading) you are following wrong steps.
First: your Ubuntu machine does not have an SSH server by default, so you get this error message.
Second: you are using the 127.0.0.1 address, i.e. your Ubuntu machine itself.
The way to do this would be:
Give your Android machine a static address, or if it gets a dynamic one that's OK too.
Find out the IP address of the Android device, then from Ubuntu type ssh -p8000 admin@IP_of_android_device and this should solve the issue.
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0 | 0 | 0 | 1,540 |
25,419,510 | 2014-08-21T06:23:00.000 | 1 | 0 | 0 | 1 | android,python,ssh,kivy | 25,423,631 | 3 | false | 0 | 1 | 127.0.0.1
This indicates something has gone wrong - 127.0.0.1 is a standard loopback address that simply refers to localhost, i.e. it's trying to ssh into your current computer.
If this is the ip address suggested by kivy-remote-shell then there must be some other problem, though I don't know what - does it work on another device? | 3 | 1 | 0 | This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the Ubuntu machine I pushed the app from, I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0.066568 | 0 | 0 | 1,540 |
25,419,510 | 2014-08-21T06:23:00.000 | 2 | 0 | 0 | 1 | android,python,ssh,kivy | 25,426,085 | 3 | false | 0 | 1 | When the app is running, the GUI will tell you what IP address and port to connect to. | 3 | 1 | 0 | This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the Ubuntu machine I pushed the app from, I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0.132549 | 0 | 0 | 1,540 |
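For completeness, a hedged sketch of making that ssh connection from Python with paramiko (my own illustration; the host, port, username and password are placeholders that should come from whatever the kivy-remote-shell app displays on screen):

```python
import paramiko

# All connection details below are assumptions -- read them off the app's GUI.
HOST = "192.168.1.50"
PORT = 8000
USER = "kivy"
PASSWORD = "kivy"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=PORT, username=USER, password=PASSWORD)

stdin, stdout, stderr = client.exec_command("ls /sdcard")
print(stdout.read().decode())
client.close()
```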
25,422,847 | 2014-08-21T09:29:00.000 | -1 | 1 | 0 | 0 | python,robotframework | 25,830,228 | 5 | false | 1 | 0 | Actually, you can SET TAG to run whatever keyword you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags.
And whenever you want to run, just go to the Run tab and select the check-box Only run tests with these tags / Skip tests with these tags.
Then click the Start button :) Robot Framework will select any keyword that matches and run it.
Sorry, I don't have enough reputation to post images :( | 1 | 1 | 0 | In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this. I need an expert's advice to move ahead. | Customized Execution status in Robot Framework | -0.039979 | 0 | 0 | 3,561
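To make the tag-based selection concrete, a small sketch using Robot Framework's programmatic runner (my own illustration; the suite path and tag names are assumptions, and it presumes the test cases carry matching [Tags] entries):

```python
from robot import run

# Equivalent to the CLI: robot --include sanity --exclude wip tests/
run("tests/", include=["sanity"], exclude=["wip"])
```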
25,424,056 | 2014-08-21T10:30:00.000 | 0 | 1 | 0 | 0 | python,serial-port,pyserial | 26,436,860 | 3 | true | 0 | 0 | It turns out that, to drain the serial output physically, we have the serial.drainOutput() function. It works, but if you're looking for a real-time operation it might not help, as Python is not a very good language if you're expecting real-time performance.
Hope it helps. | 1 | 2 | 0 | I'm using the PySerial library for a project and it works fine. However, my requirements have now changed and I need to implement a non-blocking serial write option. I went through the official PySerial documentation as well as several examples; however, I couldn't find a suitable option. Now my question is: is a non-blocking write possible with PySerial? If yes, how?
Thanks in advance. | PySerial non blocking write? | 1.2 | 0 | 0 | 5,879 |
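The drainOutput() workaround above still blocks the caller; a common alternative (my own sketch, not from the thread, with placeholder port name and baud rate) is to hand writes to a background thread so write() returns immediately:

```python
import queue
import threading

import serial  # pyserial


class AsyncSerialWriter:
    """Queue writes and push them to the port from a background thread."""

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        # Port and baud rate are placeholders; adjust for your device.
        self._ser = serial.Serial(port, baudrate)
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def write(self, data: bytes):
        """Return immediately; the worker thread performs the blocking write."""
        self._queue.put(data)

    def _drain(self):
        while True:
            data = self._queue.get()
            self._ser.write(data)   # blocking, but only inside this thread
            self._ser.flush()       # wait until the bytes leave the buffer
```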
25,429,410 | 2014-08-21T14:47:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,raspberry-pi,compatibility,python-2.x | 25,429,625 | 2 | false | 0 | 0 | Search for backports or try to split it up into different processes. | 2 | 3 | 0 | I'm currently building an application on my Raspberry Pi. This application has to use the I2C bus along with the serial port. Previously I developed both applications independently of each other; for the I2C app, I used a Python 3 module for handling the bus, but for handling the serial port I use a Python 2 module.
Now I want to build an app that handles the two interfaces. Is this possible? How should I do it?
Thanks. | Is it possible to have an application using python 2 and python 3 modules? | 0 | 0 | 0 | 71 |
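To illustrate the "split it into different processes" suggestion above (my own sketch; the python2 interpreter name and the serial_worker.py script are hypothetical), the Python 3 app can run the Python 2 serial code as a child process and exchange data over stdout:

```python
import json
import subprocess

# Run the legacy Python 2 serial handler as a separate process and
# exchange data over stdout (JSON keeps the boundary simple).
result = subprocess.check_output(["python2", "serial_worker.py", "--read"])
reading = json.loads(result.decode("utf-8"))
print("serial worker returned:", reading)
```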
25,429,410 | 2014-08-21T14:47:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,raspberry-pi,compatibility,python-2.x | 25,550,994 | 2 | true | 0 | 0 | Finally, I used 2to3 to convert the Python 2 modules. But because the module interacts with the serial port (pyserial), the byte handling was not converted fully correctly, so I had to edit the code after conversion using encode/decode functions. | 2 | 3 | 0 | I'm currently building an application on my Raspberry Pi. This application has to use the I2C bus along with the serial port. Previously I developed both applications independently of each other; for the I2C app, I used a Python 3 module for handling the bus, but for handling the serial port I use a Python 2 module.
Now I want to build an app that handles the two interfaces. Is this possible? How should I do it?
Thanks. | Is it possible to have an application using python 2 and python 3 modules? | 1.2 | 0 | 0 | 71 |
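A minimal illustration of the byte-handling issue mentioned above (my own sketch; the port name and baud rate are placeholders): under Python 3, pyserial works with bytes, so strings have to be encoded on write and decoded on read.

```python
import serial  # pyserial

ser = serial.Serial("/dev/ttyAMA0", 9600, timeout=1)  # placeholder port/baud

# Python 2 code could pass str directly; Python 3 needs explicit bytes.
ser.write("AT\r\n".encode("ascii"))

reply = ser.readline()                           # bytes
text = reply.decode("ascii", errors="replace")   # back to str for the rest of the app
print(text)
```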
25,431,187 | 2014-08-21T16:19:00.000 | 0 | 0 | 1 | 0 | python,crash,osx-snow-leopard,python-idle | 25,505,313 | 2 | true | 0 | 0 | SOLUTION:
Solved by installing Python version 3.1.0, in which the Python runtime and the IDLE editor both seem to work with no problems (so far).
I would add that:
I am only writing really simple scripts based on a tutorial and
The editor only works when started by clicking on File > New Window | 1 | 0 | 0 | I am new to Python and I have installed the 64-bit version 3.4.1 using the .dmg installer from the Python website, and when I start IDLE and try to create a new file, IDLE crashes and quits. Same thing happens when I try to load a Python file using the "File > Open" option.
I am running Mac OSX 10.6.8 Snow Leopard on an iMac Intel Core i3 21.5in, and the Python version is 3.4.1.
IDLE itself seems to work, it's only when I try to create a new file or load a file that it quits.
Just to add, creating a new file opens a small blank window with no header, which then causes both windows to become unresponsive. I would include a screenshot but I don't have enough rep :(
EDIT: I just installed an older version of Python (3.3.5), and have encountered the same issue, which makes me think that maybe it's something to do with my setup. | Python 3.4 IDLE not working properly in MacOSX 10.6.8 Snow Leopard, crashes when the editor is started | 1.2 | 0 | 0 | 664 |
25,433,128 | 2014-08-21T18:07:00.000 | 0 | 0 | 0 | 0 | python,web2py | 25,433,301 | 2 | false | 1 | 0 | An easy way is to change the last element in the path name to something that isn't a valid Python identifier. Web2py internally represents views, models, apps and other constructs using Python objects, and if you give something a name that isn't a valid identifier, web2py will pass over it.
For example, change beautify to beautify.IGNORE and see what happens.
I can't recall which objects have this effect immediately and which require the web2py server process to restart. I think (not sure) app name changes require a restart while views, controllers, etc. do not.
How do I make a Web2Py installed application inactive without deleting it? | Can't disable example app on Web2Py | 0 | 0 | 0 | 196 |
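To turn the answer's rename trick into something concrete, a hedged sketch (my own illustration; the applications path is an assumption about where your web2py tree lives, and per the answer above you may need to restart the server afterwards):

```python
import os

# Rename the app folder so its name is no longer a valid Python identifier;
# per the answer above, web2py should then skip it.
# The base path below is an assumption -- adjust to your installation.
apps_dir = "/home/www-data/web2py/applications"
os.rename(os.path.join(apps_dir, "examples"),
          os.path.join(apps_dir, "examples.IGNORE"))
```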
25,439,007 | 2014-08-22T03:16:00.000 | 1 | 0 | 0 | 0 | python,excel,openpyxl | 25,486,431 | 1 | false | 0 | 1 | The simple answer is that a cell is the smallest item to which you can apply styles. You can work around this restriction by embedding formatting within the text, but this is even messier than it sounds. | 1 | 3 | 0 | I am trying to port my code from pywin32 to openpyxl, but I can't find a way to change the color style on partial characters in a cell. In pywin32, I can use:
Range(Cell).GetCharacters(Start, Length).Font.ColorIndex to do this. But it seems there is no such method in openpyxl? | openpyxl - style on characters | 0.197375 | 0 | 0 | 1,007
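For contrast with the pywin32 per-character API, here is what the whole-cell granularity looks like in openpyxl (my own sketch; the color string is an aRGB value, and the exact styles API has varied across openpyxl versions):

```python
from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
ws = wb.active
ws["A1"] = "partly red text"             # the whole cell gets one style
ws["A1"].font = Font(color="FFFF0000")   # aRGB red applied to every character
wb.save("styled.xlsx")
```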
25,440,006 | 2014-08-22T05:17:00.000 | 0 | 0 | 1 | 0 | python,matplotlib | 25,440,301 | 3 | false | 0 | 0 | You want to get the regular (zoomable) plot window, right? I think you cannot do it in the same kernel, as unfortunately you can't switch from inline to qt and such because the backend has already been chosen: your calls to matplotlib.use() must always come before pylab. | 2 | 3 | 0 | I am running IPython on a remote server. I access it using serveraddress:8888/ etc. to write code for my notebooks.
When I use matplotlib, of course, the plots are inline. Is there any way to remotely send data so that a plot window opens up? I want the whole interactive matplotlib environment on my local machine and all the number crunching on the server machine. This is something very basic... but somehow, after rummaging through Google for quite a while, I can't figure it out. | how to display matplotlib plots on local machine? | 0 | 0 | 0 | 2,520
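To make the backend-selection point concrete, a minimal sketch for a local machine (my own illustration; TkAgg is just one interactive backend, and matplotlib.use() has to be called before pyplot/pylab is imported):

```python
import matplotlib
matplotlib.use("TkAgg")           # must happen before pyplot/pylab is imported
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [4, 1, 9])
plt.show()                        # opens the regular zoomable window locally
```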