Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,182,812 | 2014-08-07T12:39:00.000 | 27 | 1 | 0 | 1 | 0 | python,pytest,pudb | 0 | 25,183,130 | 0 | 2 | 0 | true | 0 | 0 | Simply by adding the -s flag, pytest will not replace stdin and stdout, and debugging will be accessible; i.e. pytest -s my_file_test.py will do the trick.
In the documentation provided by ambi it is also said that previously passing -s explicitly was required for regular pdb too; now the -s flag is implicitly used with the --pdb flag.
However, pytest does not implicitly support Pudb, so setting -s is needed. | 1 | 24 | 0 | 0 | Previously my testing library of choice was unittest. It worked with my favourite debugger - Pudb. Not Pdb!!!
To use Pudb with unittest, I paste import pudb;pudb.set_trace() between the lines of code.
I then executed python -m unittest my_file_test, where my_file_test is module representation of my_file_test.py file.
Simply using nosetests my_file_test.py won't work - AttributeError: StringIO instance has no attribute 'fileno' will be thrown.
With py.test neither works:
py.test my_file_test.py
nor
python -m pytest my_file_test.py
both throw ValueError: redirected Stdin is pseudofile, has no fileno()
Any ideas about how to use Pudb with py.test? | Using Python pudb debugger with pytest | 0 | 1.2 | 1 | 0 | 0 | 4,263
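If typing -s every time gets old, the flag can be baked into the project's pytest configuration. A minimal sketch, assuming a pytest.ini at the project root (note that -s disables output capturing for every run, not just debugging sessions):

```ini
; pytest.ini (hypothetical project config)
; -s is shorthand for --capture=no, so pudb gets a real stdin/stdout
[pytest]
addopts = -s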
25,205,157 | 2014-08-08T13:54:00.000 | 0 | 0 | 0 | 0 | 0 | oracle,python-2.7,blob | 0 | 25,205,260 | 0 | 1 | 0 | false | 0 | 0 | If you have a pure BLOB in the database, as opposed to, say, an ORDImage that happens to be stored in a BLOB under the covers, the BLOB itself has no idea what sort of binary data it contains. Normally, when the table was designed, a column would be added that would store the data type and/or the file name. | 1 | 0 | 0 | 0 | This is not a question about code; I need to extract some BLOB data from an Oracle database using a Python script. My question is: what are the steps in dealing with BLOB data, and how do I read it as images, videos and text? Since I have no access to the database itself, is it possible to know the type of the stored BLOBs, i.e. whether they are pictures, videos or text? Do I need encoding or decoding in order to transfer these BLOBs into .jpg, .avi or .txt files? These are very basic questions, but I am new to programming and need some help finding a starting point :) | Reading BLOB data from Oracle database using python | 1 | 0 | 1 | 1 | 0 | 1,342
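If the table really has no type column, one fallback once the BLOB bytes are fetched (the fetch itself, e.g. via cx_Oracle, is omitted here) is to sniff well-known magic numbers at the start of the data. A sketch; the signature list is illustrative, not exhaustive:

```python
def sniff_blob_type(data):
    """Guess a file extension from leading magic bytes; None if unknown."""
    if data.startswith(b"\xff\xd8\xff"):                    # JPEG starts FF D8 FF
        return "jpg"
    if data.startswith(b"RIFF") and data[8:12] == b"AVI ":  # AVI is a RIFF container
        return "avi"
    try:
        data.decode("utf-8")                                # decodes cleanly: likely plain text
        return "txt"
    except UnicodeDecodeError:
        return None

print(sniff_blob_type(b"\xff\xd8\xff\xe0rest-of-jpeg"))  # jpg
```

A guessed extension then decides whether the raw bytes get written out as .jpg, .avi or .txt; no extra encoding or decoding is needed, since a BLOB already holds the raw bytes.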
25,219,911 | 2014-08-09T15:08:00.000 | 0 | 0 | 0 | 0 | 1 | python,django | 1 | 46,187,663 | 0 | 3 | 0 | false | 1 | 0 | zypper install python-pip
pip install virtualenv
virtualenv name-env
source name-env/bin/activate
(name-env) pip install django==version
pip install django | 1 | 0 | 0 | 0 | I keep receiving the following error message when trying to install Python-Django on my OpenSuse Linux VM:
The installation has failed. For more information, see the log file at
/var/log/YaST2/y2log. Failure stage was: Adding Repositories
Not sure how to add additional Repositories when I am using the opensuse download center. Does anyone know how to resolve this error?
Thank you. | OpenSuse Python-Django Install Issue | 0 | 0 | 1 | 0 | 0 | 1,428 |
25,235,040 | 2014-08-11T02:17:00.000 | -1 | 1 | 1 | 0 | 0 | python,c,assembly,bit | 0 | 25,236,304 | 0 | 2 | 0 | false | 0 | 0 | You can store the bits from the 9th onwards in another variable, then make those bits 0 in EAX, then do EAX << 7 and add those bits back to it. | 1 | 0 | 0 | 0 | For example: EAX = 10101010 00001110 11001010 00100000
I want to move EAX's high 8 bits to the right 7 times; what can I do in C or in Python?
In asm : SHR ah,7
The result of EAX is:10101010 00001110 00000001 00100000
And how about SHR ax,7?
I have tried ((EAX & 0xff00) >> 8) >> 7, but I don't know how to add it back to EAX. | how to move a number's high 8 bits 7 times in c or python? | 0 | -0.099668 | 1 | 0 | 0 | 384
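The question's own attempt already isolates the byte; it only needs the shifted value spliced back. A sketch of SHR ah,7 in Python (the helper name is mine; the same expression works in C with unsigned types):

```python
def shr_ah(eax, n=7):
    """Emulate `SHR ah, n`: shift only bits 8..15 of a 32-bit value right by n."""
    ah = (eax >> 8) & 0xFF                 # extract AH, the high byte of the low word
    ah >>= n                               # shift just that byte
    return (eax & 0xFFFF00FF) | (ah << 8)  # clear bits 8..15, splice the byte back

eax = 0b10101010_00001110_11001010_00100000
print(format(shr_ah(eax), "032b"))  # 10101010000011100000000100100000
```

SHR ax,7 would instead shift the entire low 16-bit word: (eax & 0xFFFF0000) | ((eax & 0xFFFF) >> 7).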
25,239,361 | 2014-08-11T08:57:00.000 | 2 | 0 | 0 | 0 | 0 | python,mysql | 0 | 25,239,591 | 0 | 2 | 0 | true | 0 | 0 | You can just save the base64 string in a TEXT column type. After retrieval just decode this string with base64.decodestring(data) ! | 2 | 2 | 0 | 1 | I've done some research, and I don't fully understand what I found.
My aim is to use a UDP listener I wrote in Python to store data it receives from an MT4000 telemetry device. This data is received and read in hex, and I want to put that data into a table, storing it as a base64 string. Just as an integer would be stored with 'columnname' INT, how do I declare a column for base64 data, i.e. 'data' TEXT(base64) or something similar?
Could I do this by simply using the TEXT datatype, and encode the data in the python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | How to store base64 information in a MySQL table? | 0 | 1.2 | 1 | 1 | 0 | 5,730 |
25,239,361 | 2014-08-11T08:57:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql | 0 | 62,777,767 | 0 | 2 | 0 | false | 0 | 0 | You can store a base64 string in a TEXT column type, but in my experience I recommend using the LONGTEXT type to avoid truncation errors with big base64 texts. | 2 | 2 | 0 | 1 | I've done some research, and I don't fully understand what I found.
My aim is to use a UDP listener I wrote in Python to store data it receives from an MT4000 telemetry device. This data is received and read in hex, and I want to put that data into a table, storing it as a base64 string. Just as an integer would be stored with 'columnname' INT, how do I declare a column for base64 data, i.e. 'data' TEXT(base64) or something similar?
Could I do this by simply using the TEXT datatype, and encode the data in the python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | How to store base64 information in a MySQL table? | 0 | 0 | 1 | 1 | 0 | 5,730 |
25,247,910 | 2014-08-11T16:22:00.000 | 3 | 0 | 1 | 0 | 0 | python,list,python-2.7 | 0 | 25,247,976 | 0 | 2 | 0 | true | 0 | 0 | Provided there are no other references to l, then memory will be released yes.
What happens is:
l[25:100] creates a new list object with references to the values from l indices 25 through to 100 (exclusive).
l is rebound to now refer to the new list object.
The reference count for the old list object formerly bound by l is decremented by 1.
If the reference count dropped to 0, the old list object formerly bound by l is deleted and memory is freed. | 2 | 0 | 0 | 0 | My python list named l contains 100 items.
If I do,
l = l[25:100]
Will this release the memory for the first 25 items?
If not, how can this be achieved? | Will slicing a list release memory? | 0 | 1.2 | 1 | 0 | 0 | 77 |
25,247,910 | 2014-08-11T16:22:00.000 | 0 | 0 | 1 | 0 | 0 | python,list,python-2.7 | 0 | 25,249,574 | 0 | 2 | 0 | false | 0 | 0 | There is also an option that doesn't allocate a new list: del l[:25] will remove the first 25 entries from the existing list, shifting the following ones down. Any other references to l will still point to it, but it is altered into a shorter list.
Note that reference semantics also apply to each of the items; they may have references from elsewhere than the list. It is also not guaranteed that they will be freed right away. | 2 | 0 | 0 | 0 | My python list named l contains 100 items.
If I do,
l = l[25:100]
Will this release the memory for the first 25 items?
If not, how can this be achieved? | Will slicing a list release memory? | 0 | 0 | 1 | 0 | 0 | 77 |
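The two answers side by side: slicing rebinds the name to a new list (the old object is freed only once nothing else references it), while del shortens the existing object in place, so every alias sees the change:

```python
l = list(range(100))
same = l                  # a second reference pins the original object
l = l[25:100]             # rebinds l to a brand-new 75-item list
print(len(l), len(same))  # 75 100

m = list(range(100))
alias = m
del m[:25]                # in-place: no new list is allocated
print(len(m), len(alias), m[0])  # 75 75 25
```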
25,261,552 | 2014-08-12T10:04:00.000 | 1 | 0 | 0 | 0 | 0 | python,sockets,tcp,udp,multiplayer | 0 | 25,261,988 | 0 | 1 | 0 | true | 0 | 0 | I was thinking I could create a new TCP/UDP socket on a new port and let the matched players (matched in the search room) connect to that socket, and then have a perfectly isolated room for the two to interact with one another.
Yes, you can do that. And there shouldn't be anything hard about it at all. You just bind a socket on the first available port, pass that port to the two players, and wait for them to connect. If you're worried about hackers swooping in by, e.g., portscanning for new ports opening up, there are ways to deal with that, but given that you're not attempting any cheap protection I doubt it's an issue.
Or do I need a new server (machine/hardware), and then create a new socket on that one and let the peered players connect to it.
Why would you need that? What could it do for you? Sure, it could take some load off the first server, but there are plenty of ways you could load-balance if that's the issue; doing it asymmetrically like this tends to lead to one server at 100% while the other one's at 5%…
Or maybe there is another way of doing this.
One obvious way is to not do anything. Just let them keep talking to the same port they're already talking to, just attach a different handler (or a different state in the client state machine, or whatever; you haven't given us any clue how you're implementing your server). I don't know what you think you're getting out of "perfect isolation". But even if you want it to be a different process, you can just migrate the two client sockets over to the other process; there's no reason to make them connect to a new port.
Another way to do it is to get the server out of the way entirely—STUN or hole-punch them together and let them P2P the game protocol.
Anything in between those two extremes doesn't seem worth doing, unless you have some constraints you haven't explained.
OBS. I am not going to have the game running on the server to deal with cheaters for now, because this will be too much load for the server CPU on my setup.
I'm guessing that if putting even minimal game logic on the server for cheat protection is too expensive, spinning off a separate process for every pair of clients may also be too expensive. Another reason not to do it. | 1 | 1 | 0 | 0 | I am trying to create a multiplayer game for iPhone (cocos2d), which is almost finished, but the multiplayer part is left. I have searched the web for two days now and can't find anything that answers my question.
I have created a search room (a TCP socket on port 2000) that matches players searching for a quick match to play. After two players have been matched, the server disconnects them from the search room to leave space for incoming searchers (clients/players).
Now I'm wondering how to create the play room (where two players interact and play)?
I was thinking I could create a new TCP/UDP socket on a new port and let the matched players (matched in the search room) connect to that socket, and then have a perfectly isolated room for the two to interact with one another.
Or do I need a new server (machine/hardware), and then create a new socket on that one and let the peered players connect to it.
Or maybe there is another way of doing this.
OBS. I am not going to have the game running on the server to deal with cheaters for now, because this will be too much load for the server CPU on my setup. | Multi multiplayer server sockets in python | 0 | 1.2 | 1 | 0 | 0 | 902
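The "bind a socket on the first available port" step from the accepted answer is a one-liner: binding to port 0 asks the OS for a free ephemeral port, which the matchmaking server can then report to the two matched players. A sketch:

```python
import socket

room = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
room.bind(("127.0.0.1", 0))   # port 0: let the OS pick any free port
room.listen(2)                # only the two matched players are expected
port = room.getsockname()[1]  # the actual port the OS assigned
print("play room listening on port", port)
room.close()
```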
25,265,148 | 2014-08-12T13:08:00.000 | 0 | 1 | 0 | 0 | 0 | python,ssh | 0 | 25,265,180 | 0 | 3 | 0 | false | 0 | 0 | Have you considered Dropbox or SVN ? | 2 | 0 | 0 | 0 | I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer?
Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course) | update file on ssh and link to local computer | 0 | 0 | 1 | 0 | 1 | 2,518 |
25,265,148 | 2014-08-12T13:08:00.000 | 0 | 1 | 0 | 0 | 0 | python,ssh | 0 | 25,265,274 | 0 | 3 | 0 | false | 0 | 0 | I don't know your local computer OS, but if it is Linux or OSX, you can consider LFTP. This is an FTP client which supports SFTP://. This client has the "mirror" functionality. With a single command you can mirror your local files against a server.
Note: what you need is a reverse mirror. in LFTP this is mirror -r | 2 | 0 | 0 | 0 | I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer?
Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course) | update file on ssh and link to local computer | 0 | 0 | 1 | 0 | 1 | 2,518 |
25,269,845 | 2014-08-12T16:52:00.000 | 4 | 0 | 1 | 0 | 0 | python,kivy,setup-deployment | 0 | 61,918,928 | 0 | 3 | 0 | true | 0 | 1 | You need a web server and a database to get this working.
Create a licenses table in your database.
Each time a new client pays for your software or asks for a trial, you generate a new long random license, insert it in the licenses table, associate it to the client's email address and send it to the client via email.
Each time a client tries to install the software on their computers, you ask for a license and contact the webserver to ensure that the license exists and is still valid.
Using that, people can still just create multiple emails and thus potentially get an infinite number of trial versions.
You can then try to add a file somewhere in the person's computer, in a place where nobody would ever look, and just paste the old license there so that when the app starts again (even from a new installation), it can read the license from there and contact the webserver without asking for a license. With this method, when your app contacts the server with an expired trial license, your server can reply with a "license expired" signal to let your app know that it has to ask for a non-trial license now, and the server should only accept non-trial licenses coming from that app from now on. This whole method breaks if your clients realize that your app is taking this information from a local file, because they can just delete it when found.
Another idea that comes to mind is to associate the MAC address of a laptop (or any other unique identifier you can think of) to one license instead of an email address, either at license-creation time (the client would need to send you his MAC address when asking for a trial) or at installation time (your app can check for the MAC address of the laptop it's running on). | 2 | 7 | 0 | 0 | I have made a desktop application in kivy and able to make single executable(.app) with pyinstaller. Now I wanted to give it to customers with the trial period of 10 days or so.
The problem is how to make a trial version which stops working 10 days after installation; even if the user uninstalls and reinstalls it after the trial period is over, it should not work.
Giving partial features in the trial version is not an option.
Environment
Mac OS and Python 2.7 with Kivy | How to make trial period for my python application? | 0 | 1.2 | 1 | 0 | 0 | 2,478 |
25,269,845 | 2014-08-12T16:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,kivy,setup-deployment | 0 | 69,546,902 | 0 | 3 | 0 | false | 0 | 1 | My idea is
Make a table in the database
Use the datetime module and put the system date in that table as the beginning date
Use the timedelta module, timedelta(15) (to calculate the date the program expires; a 15-day trial is used in this code), and store it in the database table as the expiry date
Now, each time your app starts, check whether the current date matches or passes the expiry date; if it does, show an error that it has expired, or apply your own explicit logic
Note: make sure the beginning and expiry dates are written only once, otherwise they will be changed again and again. | 2 | 7 | 0 | 0 | I have made a desktop application in Kivy and am able to make a single executable (.app) with PyInstaller. Now I want to give it to customers with a trial period of 10 days or so.
The problem is how to make a trial version which stops working 10 days after installation; even if the user uninstalls and reinstalls it after the trial period is over, it should not work.
Giving partial features in the trial version is not an option.
Environment
Mac OS and Python 2.7 with Kivy | How to make trial period for my python application? | 0 | 0 | 1 | 0 | 0 | 2,478 |
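The date logic from the second answer, sketched with a JSON file standing in for the database table. The file location and function names are mine, it targets modern Python 3 rather than the question's 2.7, and, as the first answer stresses, anything stored client-side can simply be deleted, so this only deters casual users:

```python
import datetime
import json
import os

LICENSE_FILE = os.path.expanduser("~/.myapp_trial")  # hypothetical hidden location
TRIAL_DAYS = 10

def trial_expired(today=None):
    """On first run, record the start date; afterwards, compare against it."""
    today = today or datetime.date.today()
    if not os.path.exists(LICENSE_FILE):
        with open(LICENSE_FILE, "w") as f:  # written exactly once, so re-running does not reset it
            json.dump({"started": today.isoformat()}, f)
        return False
    with open(LICENSE_FILE) as f:
        started = datetime.date.fromisoformat(json.load(f)["started"])
    return today > started + datetime.timedelta(days=TRIAL_DAYS)
```

A reinstall only resets the trial if the user also finds and deletes the hidden file; tying the license to a machine identifier on the server side, as the first answer suggests, closes that hole more robustly.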
25,273,987 | 2014-08-12T21:03:00.000 | 1 | 0 | 0 | 0 | 0 | python,shapefile,qgis | 0 | 29,585,421 | 0 | 3 | 0 | false | 0 | 1 | QgsGeometry has the method wkbType that returns what you want. | 1 | 7 | 0 | 0 | I'm writing a script that is dependent on knowing the geometry type of the loaded shapefile.
But I've looked in the PyQGIS cookbook and API and can't figure out how to call it.
In fact, I have trouble interpreting the API, so any light shed on that subject would be appreciated.
Thank you | How to get shapefile geometry type in PyQGIS? | 0 | 0.066568 | 1 | 0 | 0 | 8,852 |
25,279,746 | 2014-08-13T06:48:00.000 | 2 | 0 | 1 | 1 | 0 | python,windows | 0 | 25,279,812 | 0 | 4 | 0 | false | 0 | 0 | Create a service that runs permanently.
Arrange for the service to have an IPC communications channel.
From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use CreateProcessAsUser or similar APIs. | 2 | 0 | 0 | 0 | I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that while creating a Windows service. But this option is not good for me, because in the future, if I change anything in my Python script, I will have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu | How to launch a python process in Windows SYSTEM account | 0 | 0.099668 | 1 | 0 | 0 | 1,368 |
25,279,746 | 2014-08-13T06:48:00.000 | 0 | 0 | 1 | 1 | 0 | python,windows | 0 | 25,281,143 | 0 | 4 | 0 | false | 0 | 0 | You could also use the Windows Task Scheduler; it can run a script under the SYSTEM account, and its interface is easy (if you do not test too often :-) ) | 2 | 0 | 0 | 0 | I am writing a test application in Python, and to test a particular scenario I need to launch my Python child process under the Windows SYSTEM account.
I can do this by creating an exe from my Python script and then using that while creating a Windows service. But this option is not good for me, because in the future, if I change anything in my Python script, I will have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu | How to launch a python process in Windows SYSTEM account | 0 | 0 | 1 | 0 | 0 | 1,368 |
25,288,032 | 2014-08-13T13:52:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,nlp,nltk | 0 | 25,298,846 | 0 | 1 | 0 | true | 0 | 0 | The way how topic modelers usually pre-process text with n-grams is they connect them by underscore (say, topic_modeling or white_house). You can do that when identifying big rams themselves. And don't forget to make sure that your tokenizer does not split by underscore (Mallet does if not setting token-regex explicitly).
P.S. NLTK native bigrams collocation finder is super slow - if you want something more efficient look around if you haven't yet or create your own based on, say, Dunning (1993). | 1 | 0 | 1 | 0 | Background: I got a lot of text that has some technical expressions, which are not always standard.
I know how to find the bigrams and filter them.
Now, I want to use them when tokenizing the sentences. So words that should stay together (according to the calculated bigrams) are kept together.
I would like to know if there is a correct way of doing this within NLTK. If not, I can think of various inefficient ways of rejoining all the broken words by checking dictionaries. | Python NLTK tokenizing text using already found bigrams | 1 | 1.2 | 1 | 0 | 0 | 317
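The rejoining need not be inefficient: tokenize normally, then walk the tokens once, merging any adjacent pair that appears in the precomputed bigram set (NLTK's MWETokenizer implements this idea; below is a dependency-free sketch with made-up bigrams):

```python
def merge_bigrams(tokens, bigrams, sep="_"):
    """Join adjacent token pairs found in `bigrams` (a set of 2-tuples)."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in bigrams:
            out.append(tokens[i] + sep + tokens[i + 1])
            i += 2                    # consume both halves of the bigram
        else:
            out.append(tokens[i])
            i += 1
    return out

print(merge_bigrams("the white house press room".split(),
                    {("white", "house"), ("press", "room")}))
# ['the', 'white_house', 'press_room']
```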
25,288,653 | 2014-08-13T14:20:00.000 | 6 | 0 | 1 | 0 | 0 | python,python-2.7,comparison,operators | 0 | 25,288,736 | 0 | 2 | 0 | false | 0 | 0 | That's just division. And, at least for integers a >= 0 and b > 0, a/b is truthy if a>=b. Because, in that scenario, a/b is a strictly positive integer and bool() applied to a non-zero integer is True.
For zero and negative integer arguments, I am sure that you can work out the truthiness of a/b for yourself. | 1 | 6 | 0 | 1 | I recently got into code golfing and need to save as many characters as possible.
I remember seeing someone say to use if a/b: instead of if a<=b:. However, I looked through Python documentation and saw nothing of the sort.
I could be remembering this all wrong, but I'm pretty sure I've seen this operator used and recommended in multiple instances.
Does this operator exist? If so, how does it work? | Using '/' as greater than less than in Python? | 0 | 1 | 1 | 0 | 0 | 482 |
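The claim checks out mechanically; under Python 2, / on integers floors, and // reproduces that in Python 3. For integers a >= 0 and b > 0, a // b is non-zero, hence truthy, exactly when a >= b:

```python
# Python 3: `//` is floor division, matching Python 2's integer `/`
for a in range(0, 20):
    for b in range(1, 20):
        assert bool(a // b) == (a >= b)

print(7 // 3, bool(7 // 3))  # 2 True
print(2 // 3, bool(2 // 3))  # 0 False
```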
25,297,446 | 2014-08-13T22:47:00.000 | 1 | 0 | 1 | 0 | 0 | python,recursion | 0 | 25,297,640 | 0 | 7 | 0 | false | 0 | 0 | Note: This answer is limited to your topmost question, i.e. "Is it advisable to write recursive functions in Python?".
The short answer is no, it's not exactly "advisable". Without tail-call optimization, recursion can get painfully slow in Python given how intensive function calls are on both memory and processor time. Whenever possible, it's best to rewrite your code iteratively. | 3 | 10 | 0 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling though). | Is it advisable to write recursive functions in Python | 1 | 0.028564 | 1 | 0 | 0 | 1,765
25,297,446 | 2014-08-13T22:47:00.000 | 0 | 0 | 1 | 0 | 0 | python,recursion | 0 | 25,298,141 | 0 | 7 | 0 | false | 0 | 0 | I use sys.setrecursionlimit to set the recursion limit to its maximum possible value because I have had issues with large classes/functions hitting the default maximum recursion depth. Setting a large value for the recursion limit should not affect the performance of your script, i.e. it will take the same amount of time to complete if it completes under both a high and a low recursion limit. The only difference is that if you have a low recursion limit, it prevents you from doing stupid things (like running an infinitely recursive loop). With a high limit, rather than hit the limit, a horribly inefficient script that uses recursion too much will just run forever (or until it runs out of memory depending on the task).
As the other answers explain in more detail, most of the time there is a faster way to do whatever it is that you are doing other than a long series of recursive calls. | 3 | 10 | 0 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling though). | Is it advisable to write recursive functions in Python | 1 | 0 | 1 | 0 | 0 | 1,765
25,297,446 | 2014-08-13T22:47:00.000 | 3 | 0 | 1 | 0 | 0 | python,recursion | 0 | 25,297,631 | 0 | 7 | 0 | false | 0 | 0 | A lot depends on the specific nature of the recursive solution you're trying to implement. Let me give a concrete example. Suppose you want the sum of all values in a list. You can set the recursion up by adding the first value to the sum of the remainder of the list - the recursion should be obvious. However, the recursive subproblem is only 1 smaller than the original problem, so the recursive stack will grow to be as big as the number of items in the list. For large lists this will be a problem. An alternate recursion is to note that the sum of all values is the sum of the first half of the list plus the sum of the second half of the list. Again, the recursion should be obvious and the terminating condition is when you get down to sublists of length 1. However, for this version the stack will only grow as log2 of the size of the list, and you can handle immense lists without stack problems. Not all problems can be factored into subproblems which are half the size, but when you can this is a good way to avoid stack overflow situations.
If your recursive solution is a tail recursion, it can easily be converted into a loop rather than a recursive call.
Another possibility if you don't have tail recursion is to implement things with a loop and explicitly store your intermediate state on an explicit stack. | 3 | 10 | 0 | 0 | I have written a verilog (logic gates and their connectivity description basically) simulator in python as a part of an experiment.
I faced an issue with the stack limit so I did some reading and found that Python does not have a "tail call optimization" feature (i.e. removing stack entries dynamically as recursion proceeds)
I mainly have two questions in this regard:
1) If I bump up the stack limit to sys.setrecursionlimit(15000) does it impact performance in terms of time (memory -- I do not care)?
2) Is there any way I can circumvent this limitation assuming that I can live without a stack-trace.
I ask this because Verilog mainly deals with state-machines which can be implemented in an elegant way using recursive functions.
Also, if I may add, in case of recursive function calls, if there is a bug, I rely more on the input which is causing this bug rather than the stack trace.
I am new to Python, so maybe experts might argue that the Python stack trace is quite useful to debug recursive function calls...if that is the case, I would be more than happy to learn how to do that.
Lastly, is it advisable to write recursive functions in Python or should I be moving to other languages?
If there is any work-around such that I can continue using Python for recursive functions, I would like to know if there is any performance impact (I can do profiling though). | Is it advisable to write recursive functions in Python | 1 | 0.085505 | 1 | 0 | 0 | 1,765
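The halving trick from the last answer as runnable code; the stack depth is about log2(n), so even a 100,001-element list needs only about 17 frames, far below the default recursion limit:

```python
def rsum(xs, lo=0, hi=None):
    """Recursive sum whose call depth grows as log2(len(xs)), not len(xs)."""
    if hi is None:
        hi = len(xs)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return xs[lo]
    mid = (lo + hi) // 2                          # split the range in half
    return rsum(xs, lo, mid) + rsum(xs, mid, hi)

print(rsum(list(range(100001))))  # 5000050000, no RecursionError
```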
25,312,626 | 2014-08-14T16:04:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,model-view-controller | 0 | 25,312,789 | 0 | 6 | 0 | false | 1 | 0 | Just check that the object retrieved by the primary key belongs to the requesting user. In the view this would be
if some_object.user == request.user:
...
This requires that the model representing the object has a reference to the User model. | 1 | 5 | 0 | 0 | I'm new to the web development world, to Django, and to applications that require securing the URL from users that change the foo/bar/pk to access other user data.
Is there a way to prevent this? Or is there a built-in way to prevent this from happening in Django?
E.g.:
foo/bar/22 can be changed to foo/bar/14 and exposes past users data.
I have read the answers to several questions about this topic and have had little luck finding an answer that clearly and coherently explains this and the approach to prevent it. I don't know a ton about this, so I don't know how to word this question to investigate it properly. Please explain this to me like I'm 5. | How to prevent user changing URL to see other submission data Django | 0 | 0.099668 | 1 | 0 | 0 | 3,945
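The check from the answer, sketched without Django at all; the dict-based store and names below are illustrative stand-ins for a model lookup, and a real view would return HTTP 403 (or 404, to avoid revealing that the object exists) instead of raising:

```python
class Forbidden(Exception):
    pass

def get_owned_object(store, pk, request_user):
    """Fetch by primary key, but refuse objects owned by someone else."""
    obj = store[pk]
    if obj["user"] != request_user:
        raise Forbidden("not yours")  # changing /foo/bar/22 to /foo/bar/14 lands here
    return obj

store = {22: {"user": "alice", "data": "a"}, 14: {"user": "bob", "data": "b"}}
print(get_owned_object(store, 22, "alice")["data"])  # a
```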
25,318,344 | 2014-08-14T22:09:00.000 | 0 | 1 | 1 | 0 | 0 | php,python,struct | 1 | 35,089,240 | 0 | 3 | 0 | false | 0 | 0 | If you are trying to pass a null value from PHP to a Python dictionary, you need to use an empty object rather than an empty array.
You can define a new, empty object like $x = new stdClass(); | 1 | 11 | 0 | 0 | I cannot find how to write an empty Python struct/dictionary in PHP. When I wrote "{}" in PHP, it gave me an error. What is the equivalent PHP programming structure to Python's dictionary? | What is the equivalent php structure to python's dictionary? | 0 | 0 | 1 | 0 | 0 | 6,548
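The distinction matters mostly at the JSON boundary between the two languages: PHP's empty array serializes as [], while an empty stdClass (or json_encode($arr, JSON_FORCE_OBJECT)) serializes as {}, which is what decodes into a Python dictionary. Seen from the Python side:

```python
import json

assert json.dumps({}) == "{}"  # what an empty PHP stdClass encodes to
assert json.dumps([]) == "[]"  # what an empty PHP array encodes to

assert json.loads("{}") == {}  # round-trips to a Python dict
assert json.loads("[]") == []  # round-trips to a Python list
print("ok")
```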
25,341,332 | 2014-08-16T14:53:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,google-chrome,gunicorn | 1 | 25,341,679 | 0 | 1 | 0 | false | 1 | 0 | The session information, i.e. which user is logged in, is saved in a cookie, which is send from browser to server with each request. The cookie is set through the server with your login request.
For some reason, Chrome does not send or save the correct cookie. If you have a current version of each browser, they should behave similarly. Older browser versions may not be as strict as newer versions with respect to cookie security:
Same origin: are all pages located at the same sub-domain, or is the login page at some other domain?
path: do you set the cookie for a specific path, but use URLs with other paths?
http-only: Do you try to set or get a cookie with javascript, which is set http-only?
secure-only: Do you use https for the login-page but http for other pages?
Look at the developer tools in chrome Resources -> Cookies which cookies are set and if they change with each login. Delete all cookies, and try again. | 1 | 0 | 0 | 0 | i have a strange error with my website which created by django .
for the server i use gunicorn and nginx .yes it works well at first,when i use firefox to test my website.
i create an account ,login the user ,once i submit the data,the user get login .
one day i change to chrome to test my website ,i go to the login page,fill in the user name and password,click the submit button ,this user get login ,when i refresh the page,the strange thing is ,the website ask me to login again ,it means the user do not login at that time.this happens only in chrome .i test in IE and firefox ,all works well.
my english is not good,i description the error again.
when i use chrome ,i login one account,the page show the account get login already,however i refresh the page or i click to other page,the website show the user is not in login status.
this error only in chrome.
and if i stop guncorn ,i start the website use django command .manage.py runserver.
even i user chrome ,the error do not appear.
i do not know what exact cause the problem.
any one can help me. | django website with gunicorn errror using chrome when login user | 0 | 0 | 1 | 0 | 0 | 211 |
25,350,882 | 2014-08-17T15:52:00.000 | -1 | 0 | 0 | 1 | 1 | python,linux | 0 | 25,350,907 | 0 | 1 | 0 | false | 0 | 0 | history is not an executable file, but a built-in bash command. You can't run it with os.system. | 1 | 0 | 0 | 0 | I am trying to use the os.system function to call the command 'history',
but stdout just shows 'sh: 1: history: not found'.
Other examples, e.g. os.system('ls'), work. Can anyone tell me why 'history' does not work, and how to call the 'history' command in a Python script? | Call system command 'history' in Linux | 1 | -0.197375 | 1 | 0 | 0 | 227
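A quick way to see why the answer holds: `os.system` runs commands through `/bin/sh`, which looks up executables on `PATH`, and `history` is a shell built-in with no executable file behind it. `shutil.which` makes the difference visible (this assumes a Unix-like system where `ls` exists and no `history` binary is installed):

```python
import shutil

# 'ls' is a real executable on PATH; 'history' is a bash built-in,
# so there is no file for sh (and therefore os.system) to find.
print(shutil.which("ls"))       # e.g. /usr/bin/ls
print(shutil.which("history"))  # None

# If you really need the shell history, one workaround (assumption: bash,
# run in interactive mode so ~/.bash_history is loaded) would be:
#   subprocess.run(["bash", "-ic", "history"])
```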
25,351,113 | 2014-08-17T16:18:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,pygame | 0 | 25,352,369 | 0 | 1 | 0 | true | 0 | 1 | Alex Reynolds idea to use tar archives seems to be a perfect match. | 1 | 0 | 0 | 0 | I'm currently tasked with the difficult problem of figuring out how to efficiently pack an image and some text within a single file. In doing so, I need to make the file relatively small (it shouldn't be much bigger than the size of the image file alone), and the process of accessing and saving the information should be relatively fast.
Now, I have already found one way that works - converting the image to a string using pygame, storing it (and the text I need) within an python object, and then pickling the object. This works fine, but the file ends up being much MUCH larger than the image, since it's not being compressed. So to help with this, I then take the pickled object and compress it using gzip. Now I have another problem - the whole process is just a tad bit too slow, since I'll need to do hundreds of these files at a time, which can take several minutes (it shouldn't take longer than a 1/2 second to load a single file, and this method takes up to 2 seconds per file).
I had an idea to somehow put the two separate files, as they are, into one file like how someone would with a .zip, but without the need to further compress the data. As long as the image remains in it's original, compressed format (in this case, .png), simply storing it's data with some text should theoretically be both fast and wouldn't use much more memory. The problem is, I don't know how I would go about doing this.
Any ideas? | How would I go about serializing multiple file objects into one file? | 0 | 1.2 | 1 | 0 | 0 | 49 |
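The accepted tar-archive idea can be sketched with the standard `tarfile` module: store the already-compressed `.png` and the text side by side in one *uncompressed* archive (mode `"w"`, not `"w:gz"`), so nothing gets re-compressed and reads stay fast. The file names and contents here are hypothetical placeholders.

```python
import io
import tarfile

image_bytes = b"\x89PNG...fake image data"   # pretend this is a compressed .png
text_bytes = "some metadata text".encode("utf-8")

# Write both files into a single uncompressed tar archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in [("image.png", image_bytes), ("meta.txt", text_bytes)]:
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Read one member back individually, without touching the other.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    restored_text = tar.extractfile("meta.txt").read().decode("utf-8")

print(restored_text)  # some metadata text
```

Because the PNG bytes pass through untouched, the archive is barely larger than the image plus the text.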
25,365,036 | 2014-08-18T13:53:00.000 | 2 | 0 | 1 | 0 | 0 | python,oop | 0 | 25,365,197 | 0 | 4 | 0 | false | 0 | 0 | Each object has its own copy of data members whereas the member functions are shared. The compiler creates one copy of the member functions separate from all objects of the class. All the objects of the class share this one copy.
The whole point of OOP is to combine data and functions together. Without OOP, the data cannot be reused, only the functions can be reused. | 2 | 2 | 0 | 0 | I usually use classes similarly to how one might use namedtuple (except of course that the attributes are mutable). Moreover, I try to put lengthy functions in classes that won't be instantiated as frequently, to help conserve memory.
From a memory point of view, is it inefficient to put functions in classes, if it is expected that the class will be instantiated often? Keeping aside that it's good design to compartmentalize functionality, should this be something to be worried about? | Python OOP: inefficient to put methods in classes? | 0 | 0.099668 | 1 | 0 | 0 | 167 |
25,365,036 | 2014-08-18T13:53:00.000 | 6 | 0 | 1 | 0 | 0 | python,oop | 0 | 25,365,082 | 0 | 4 | 0 | true | 0 | 0 | Methods don't add any weight to an instance of your class. The method itself only exists once and is parameterized in terms of the object on which it operates. That's why you have a self parameter. | 2 | 2 | 0 | 0 | I usually use classes similarly to how one might use namedtuple (except of course that the attributes are mutable). Moreover, I try to put lengthy functions in classes that won't be instantiated as frequently, to help conserve memory.
From a memory point of view, is it inefficient to put functions in classes, if it is expected that the class will be instantiated often? Keeping aside that it's good design to compartmentalize functionality, should this be something to be worried about? | Python OOP: inefficient to put methods in classes? | 0 | 1.2 | 1 | 0 | 0 | 167 |
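Both answers can be checked directly: the function object behind a method exists once, on the class, and every bound method produced by attribute access points back to that same function:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def norm_sq(self):
        return self.x * self.x + self.y * self.y

a = Point(1, 2)
b = Point(3, 4)

# Instances carry only their data; the method is stored once, on the class.
assert "norm_sq" not in a.__dict__          # not copied per instance
assert a.norm_sq.__func__ is Point.norm_sq  # bound method wraps the class function
assert a.norm_sq.__func__ is b.norm_sq.__func__

print(a.norm_sq(), b.norm_sq())  # 5 25
```

So adding methods costs nothing per instance; only the attributes in each instance's `__dict__` do.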
25,367,508 | 2014-08-18T16:05:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,stored-procedures,pyramid,pymssql | 0 | 25,646,833 | 0 | 1 | 0 | true | 1 | 0 | The solution was rather trivial. Within one object instance, I was calling two different stored procedures without closing the connection after the first call. That caused a pending request or so in the MSSQL-DB, locking it for further requests. | 1 | 0 | 0 | 0 | From a pyramid middleware application I'm calling a stored procedure with pymssql. The procedure responds nicely upon the first request I pass through the middleware from the frontend (angularJS). Upon subsequent requests however, I do not get any response at all, not even a timeout.
If I then restart the pyramid application, the same above described happens again.
I'm observing this behavior with a couple of procedures that were implemented just yesterday. Some other procedures implemented months ago are working just fine, regardless of how often I call them.
I'm not writing the procedures myself; they are provided to me.
From what I'm describing here, can anybody tell where the bug should be hiding most probably? | pyramid middleware call to mssql stored procedure - no response | 0 | 1.2 | 1 | 1 | 0 | 122 |
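The fix described above generalizes to a pattern: scope the connection to one call and always close it, so a pending request cannot lock the database. The sketch below uses a stand-in `FakeConnection` class, since `pymssql` needs a real server; with pymssql the same shape would roughly be `with contextlib.closing(pymssql.connect(...)) as conn:` (treat that mapping as an assumption).

```python
import contextlib

class FakeConnection:
    """Stand-in for a DB-API connection (pymssql.connect would be the real one)."""
    def __init__(self):
        self.closed = False
    def callproc(self, name):
        if self.closed:
            raise RuntimeError("connection already closed")
        return f"result of {name}"
    def close(self):
        self.closed = True

def run_procedure(name):
    # One connection per call, always closed afterwards (even on errors),
    # so no half-finished request keeps the database locked.
    with contextlib.closing(FakeConnection()) as conn:
        return conn.callproc(name)

print(run_procedure("sp_first"))   # result of sp_first
print(run_procedure("sp_second"))  # result of sp_second
```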
25,370,287 | 2014-08-18T19:02:00.000 | 4 | 0 | 1 | 0 | 0 | python,django,lifecycle | 0 | 25,370,876 | 0 | 1 | 0 | true | 1 | 0 | This is not a function of Django at all, but of whatever system is being used to serve Django. Usually that'll be wsgi via something like mod_wsgi or a standalone server like gunicorn, but it might be something completely different like FastCGI or even plain CGI.
The point is that all these different systems have their own models that determines process lifetime. In anything other than basic CGI, any individual process will certainly serve several requests before being recycled, but there is absolutely no general guarantee of how many - the process might last several days or weeks, or just a few minutes.
One thing to note though is that you will almost always have several processes running concurrently, and you absolutely cannot count on any particular request being served by the same one as the previous one. That means if you have any user-specific data you want to persist between requests, you need to store it somewhere like the session. | 1 | 3 | 0 | 0 | When using Django, how long does the Python process used to service requests stay alive? Obviously, a given Python process services an entire request, but is it guaranteed to survive across across requests?
The reason I ask is that I perform some expensive computations at when I import certain modules and would like to know how often the modules will be imported. | Django Process Lifetime | 0 | 1.2 | 1 | 0 | 0 | 266 |
25,373,895 | 2014-08-19T00:12:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,opengl,pygame | 0 | 25,383,333 | 0 | 1 | 0 | true | 0 | 1 | I am no expert in Python but you could try:
Multiply the color of the image by a value like 0.1, 0.2, 0.3 (anything less than 1), which will give you a very dark texture. This would be the easiest method, as it involves just reducing the color values of that texture.
Or you could try a more complex method such as drawing a transparent black quad over the original image to give it the illusion of being in a shadow. | 1 | 0 | 0 | 0 | I'm trying to make an OpenGL game in python/pygame but I don't know how to add shadows. I don't want to make a lot of darker images for my game. Can someone help me? | How can you edit the brightness of images in PyGame? | 0 | 1.2 | 1 | 0 | 0 | 933
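The first suggestion is plain per-channel arithmetic: scale each RGB channel by a factor below 1. Here is a minimal, pygame-free sketch of that math; in pygame the same idea could likely be applied with `Surface.fill(..., special_flags=pygame.BLEND_RGB_MULT)`, but treat that mapping as an assumption.

```python
def darken(color, factor):
    """Scale each RGB channel by `factor` (0..1); lower factor = darker image."""
    r, g, b = color
    return (int(r * factor), int(g * factor), int(b * factor))

print(darken((200, 150, 100), 0.3))  # (60, 45, 30)
print(darken((255, 255, 255), 0.0))  # (0, 0, 0)
```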
25,375,903 | 2014-08-19T04:58:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,windows-7-x64,aptana3 | 1 | 25,376,287 | 0 | 1 | 0 | false | 1 | 0 | After some searching, finally figured out that the default program to run the django-admin.py was aptana studio 3, even though the program had supposedly been uninstalled completely from my system. I changed the default program to be the python console launcher and now it works fine. There goes 2 hours down the drain.. | 1 | 0 | 0 | 0 | I am having an issue with starting a new project from the command prompt. After I have created a virtual env and activated the enviroment, when I enter in .\Scripts\django-admin.py startproject new_project, a popup window shows up which says "AptanaStudio3 executable launcher was unable to locate its companion shared library"
I have tried uninstalling Aptana Studio, but even when it is uninstalled, the error still occurs. Not sure what I need to do to fix this. I have not uninstalled/reinstalled Python; I'm not even sure if that has anything to do with it. Many thanks in advance. | Aptana Studio 3 newproject error with Django | 0 | 0 | 1 | 0 | 0 | 116
25,380,448 | 2014-08-19T09:52:00.000 | 1 | 0 | 1 | 0 | 0 | python,excel,csv,export-to-csv | 0 | 25,380,579 | 0 | 2 | 0 | false | 0 | 0 | Inspect the real content of the CSV file you have created and you will see that text containing the delimiter is enclosed in quotes. This allows a distinction between the delimiter and the same character inside a text value.
Check the csv module documentation; it explains these details too. | 1 | 1 | 0 | 0 | I am using Python's csv module to output a series of parsed text documents with metadata. I am using csv.writer without specifying a special delimiter, so I am assuming it is delimited using commas. There are many commas in the text as well as in the metadata, so I was expecting there to be way more columns in the document rows compared to the header row.
What surprises me is that when I load the outputted file in Excel, everything looks exactly right. How does Excel know how to delimit this correctly??? How is it able to figure out which commas are text commas and which ones are delimiters?
Related question: Do people usually use CSV for saving text documents? Is this a standard practice? It seems inferior to JSON or creating a SQLite database in every sense, from long-term sustainability to ease of interpreting without errors. | Python CSV module - how does it avoid delimiter issues? | 0 | 0.099668 | 1 | 1 | 0 | 518 |
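The quoting behavior both answers refer to can be seen by writing a field that itself contains commas: the `csv` module wraps it in quotes on output and strips them again on input, which is exactly how Excel tells delimiter commas from text commas:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["doc1", "Hello, world, with commas", 3])

raw = buf.getvalue()
print(raw.strip())  # doc1,"Hello, world, with commas",3

# Reading it back recovers exactly three fields, commas intact.
row = next(csv.reader(io.StringIO(raw)))
print(row)  # ['doc1', 'Hello, world, with commas', '3']
```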
25,392,779 | 2014-08-19T20:53:00.000 | 0 | 0 | 1 | 0 | 0 | python,json,synchronization | 0 | 25,403,785 | 0 | 3 | 0 | false | 0 | 0 | If concurrency is not required, maybe consider writing 2 functions to read and write the data to a shelf file? Or is the idea to have the dictionary "aware" of changes, so it updates the file without that kind of thing?
I'm interested to know what the most idiomatic way is to basically consider a local JSON file as the canonical source of needed hierarchical information throughout the running of a python script, so that changes in a local dictionary are automatically reflected in the JSON file. I'll leave it to the OS to optimise writes and cache (I don't mind if the file is basically updated dozens of times throughout the running of the script), but ultimately this is just about a kilobyte of metadata that I'd like to keep around. It's not necessary to address concurrent access to this. I'd just like to be able to access a hierarchical structure (like nested dictionary) within the python process and have reads (and writes to) that structure automatically result in reads from (and changes to) a local JSON file. | In Python, how do I tie an on-disk JSON file to an in-process dictionary? | 0 | 0 | 1 | 0 | 0 | 500 |
25,392,779 | 2014-08-19T20:53:00.000 | 1 | 0 | 1 | 0 | 0 | python,json,synchronization | 0 | 25,406,625 | 0 | 3 | 0 | false | 0 | 0 | This is a developpement from aspect_mkn8rd' answer taking into account Gerrat's comments, but it is too long for a true comment.
You will need 2 special container classes emulating a list and a dictionnary. In both, you add a pointer to a top-level object and override the following methods :
__setitem__(self, key, value)
__delitem__(self, key)
__reversed__(self)
All those methods are called in modification and should have the top-level object to be written to disk.
In addition, __setitem__(self, key, value) should look if value is a list and wrap it into a special list object or if it is a dictionary, wrap it into a special dictionnary object. In both case, the method should set the top-level object into the new container. If neither of them and the object defines __setitem__, it should raise an Exception saying the object is not supported. Of course you should then modify the method to take in account this new class.
Of course, there is a good deal of code to write and test, but it should work - left to the reader as an exercise :-) | 2 | 5 | 0 | 0 | In perl there was this idea of the tie operator, where writing to or modifying a variable can run arbitrary code (such as updating some underlying Berkeley database file). I'm quite sure there is this concept of overloading in python too.
I'm interested to know what the most idiomatic way is to basically consider a local JSON file as the canonical source of needed hierarchical information throughout the running of a python script, so that changes in a local dictionary are automatically reflected in the JSON file. I'll leave it to the OS to optimise writes and cache (I don't mind if the file is basically updated dozens of times throughout the running of the script), but ultimately this is just about a kilobyte of metadata that I'd like to keep around. It's not necessary to address concurrent access to this. I'd just like to be able to access a hierarchical structure (like nested dictionary) within the python process and have reads (and writes to) that structure automatically result in reads from (and changes to) a local JSON file. | In Python, how do I tie an on-disk JSON file to an in-process dictionary? | 0 | 0.066568 | 1 | 0 | 0 | 500 |
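A minimal, flat version of what the answers describe (no nested wrapping, single-process, not concurrency-safe) is a `dict` subclass that rewrites its JSON file on every mutation:

```python
import json
import os
import tempfile

class JSONDict(dict):
    """A flat dict that rewrites `path` as JSON after every mutation.

    Simplified sketch: nested lists/dicts are not wrapped, so in-place
    mutation of nested values will NOT trigger a save.
    """
    def __init__(self, path, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.path = path
        self._save()

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self, f)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._save()

    def __delitem__(self, key):
        super().__delitem__(key)
        self._save()

path = os.path.join(tempfile.mkdtemp(), "meta.json")
d = JSONDict(path)
d["title"] = "notes"
d["count"] = 2

with open(path) as f:
    print(json.load(f))  # {'title': 'notes', 'count': 2}
```

For a kilobyte of metadata, rewriting the whole file on each change is cheap and the OS will batch the writes.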
25,395,814 | 2014-08-20T02:22:00.000 | 0 | 1 | 0 | 1 | 0 | python,terminal,raspberry-pi,tesseract,raspbian | 0 | 25,409,597 | 0 | 1 | 0 | false | 0 | 0 | There are ways to do what you asked, but I think you lack some research of your own, as some of these answers are very "googlable".
You can print commands to LX terminal with python using "sys.stdout.write()"
For the boot question:
1 - sudo raspi-config
2 - change the Enable Boot to Desktop to Console
3 - there is more than one way to make your script auto-executable:
-you have the Crontab (which I think it will be the easiest, but probably not the best of the 3 ways)
-you can also make your own init.d script (best, not easiest)
-or you can use the rc.local
Also be carefull when placing an infinite loop script in auto-boot.
Make a quick google search and you will find everything you need.
Hope it helps.
D.Az | 1 | 0 | 0 | 0 | Okay, so for a school project I'm using a Raspberry Pi to make a device that basically holds both the functions of an OCR and a TTS. I heard that I need to use Google's Tesseract through a terminal, but I am not willing to rewrite the commands each time I want to use it, so I was wondering if I could either:
A: Use python to print commands into the LX Terminal
B: use a type of loop command on the LX terminal and save as a script?
It would also be extremely helpful if I could find out how to make my RPi go straight to my script rather than the Raspbian desktop when it first boots up.
Thanks in advance. | can I use python to paste commands into LX terminal? | 0 | 0 | 1 | 0 | 0 | 428 |
25,403,160 | 2014-08-20T11:07:00.000 | 1 | 0 | 0 | 1 | 0 | python,tornado | 0 | 25,410,159 | 0 | 1 | 0 | true | 0 | 0 | These methods are used internally; you shouldn't call them yourself. | 1 | 1 | 0 | 0 | I am learning the web framework Tornado. While studying this framework, I found the class tornado.httpserver.HTTPServer. I know how to call this class's constructor and create an instance of tornado.httpserver.HTTPServer in the main() function, but this class has 4 methods, and I have not found how to use them.
1) def close_all_connections(self):
2) def handle_stream(self, stream, address):
3) def start_request(self, server_conn, request_conn):
4) def on_close(self, server_conn):
I know that methods 2-4 are inherited from the class tornado.tcpserver.TCPServer.
Can someone illustrate how to use these methods of the class tornado.httpserver.HTTPServer? | How to call methods of class tornado.httpserver.HTTPServer? | 0 | 1.2 | 1 | 0 | 0 | 120
25,416,553 | 2014-08-21T00:32:00.000 | 0 | 0 | 1 | 0 | 0 | python,oop,rotation | 0 | 25,417,646 | 0 | 1 | 0 | false | 0 | 0 | Geometric objects that have a fixed boundary/end-points can be translated and rotated in place. But for a line, unless you talk about a line from point A to point B with a fixed length, you are looking at both end-points either being at infinity or -infinity (y = mx + c). Division using infinity or -infinity is not simple math and hence I believe complicates the rotation and translation algorithms | 1 | 1 | 0 | 0 | I run into an OOP problem when coding something in python that I don't know how to address in an elegant solution. I have a class that represents the equation of a line (y = mx + b) based on the m and b parameters, called Line. Vertical lines have infinite slope, and have equation x = c, so there is another class VerticalLine which only requires a c parameter. Note that I am unable to have a Line class that is represented by two points in the xy-plane, if this were a solution I would indeed use it.
I want to be able to rotate the lines. Rotating a horizontal line by pi/2 + k*pi (k an integer) results in a vertical line, and vice versa. So a normal Line would have to somehow be converted to a VerticalLine in-place, which is impossible in python (well, not impossible but incredibly wonky). How can I better structure my program to account for this problem?
Note that other geometric objects in the program have a rotation method that is in-place, and they are already used frequently, so if I could I would like the line rotation methods to also be in place. Indeed, this would be a trivial problem if the line rotation methods could return a new rotated Line or VerticalLine object as seen fit. | Elegant Solution to an OOP Issue involving type-changing in python | 0 | 0 | 1 | 0 | 0 | 45 |
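One standard way out of the two-class design (an alternative representation, not taken from the answers in this record) is to store every line in the general form ax + by + c = 0. Vertical lines are simply the case b = 0, and an in-place rotation about the origin only rotates the normal vector (a, b), so no class switching is ever needed:

```python
import math

class Line:
    """Line ax + by + c = 0; vertical lines are simply the case b == 0."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def rotate(self, theta):
        """In-place rotation about the origin: the normal (a, b) rotates by theta."""
        a, b = self.a, self.b
        self.a = a * math.cos(theta) - b * math.sin(theta)
        self.b = a * math.sin(theta) + b * math.cos(theta)

    def contains(self, x, y, eps=1e-9):
        return abs(self.a * x + self.b * y + self.c) < eps

line = Line(0, 1, 0)        # y = 0, the x-axis
line.rotate(math.pi / 2)    # now the y-axis: x = 0, same object, mutated in place
print(line.contains(0, 5))  # True
print(line.contains(5, 0))  # False
```

The rotation formula follows from requiring that a point lies on the rotated line exactly when its inverse rotation lies on the original line; c is unchanged because the rotation is about the origin.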
25,419,510 | 2014-08-21T06:23:00.000 | 0 | 0 | 0 | 1 | 1 | android,python,ssh,kivy | 0 | 41,788,451 | 0 | 3 | 0 | false | 0 | 1 | Don't know you found the answer or not. But what i have understood is that you are trying to connect android device from Ubuntu. If I am right then (go on reading) you are following wrong steps.
First :- Your Ubuntu does not have ssh server by default so you get this error message.
Second :- You are using 127.0.0.1 address i.e your Ubuntu machine itself.
Method to do this shall be
Give your android machine a static address or if it gets dynamic its OK.
know the IP address of android and then from Ubuntu typessh -p8000 admin@IP_Of_andrid_device and this should solve the issue. | 3 | 1 | 0 | 0 | This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0 | 0 | 1 | 0 | 0 | 1,540 |
25,419,510 | 2014-08-21T06:23:00.000 | 1 | 0 | 0 | 1 | 1 | android,python,ssh,kivy | 0 | 25,423,631 | 0 | 3 | 0 | false | 0 | 1 | 127.0.0.1
This indicates something has gone wrong - 127.0.0.1 is a standard loopback address that simply refers to localhost, i.e. it's trying to ssh into your current computer.
If this is the ip address suggested by kivy-remote-shell then there must be some other problem, though I don't know what - does it work on another device? | 3 | 1 | 0 | 0 | This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0 | 0.066568 | 1 | 0 | 0 | 1,540 |
25,419,510 | 2014-08-21T06:23:00.000 | 2 | 0 | 0 | 1 | 1 | android,python,ssh,kivy | 0 | 25,426,085 | 0 | 3 | 0 | false | 0 | 1 | When the app is running, the GUI will tell you what IP address and port to connect to. | 3 | 1 | 0 | 0 | This seems to be a dumb question, but how do I ssh into the kivy-remote-shell?
I'm trying to use buildozer and seem to be able to get the application built and deployed with the command, buildozer -v android debug deploy run, which ends with the application being pushed, and displayed on my android phone, connected via USB.
However, when I try ssh -p8000 [email protected] from a terminal on the ubuntu machine I pushed the app from I get Connection Refused.
It seems to me that there should be a process on the host (ubuntu) machine in order to proxy the connection, or maybe I just don't see how this works?
Am I missing something simple, or do I need to dig in a debug a bit more? | How to connect to kivy-remote-shell? | 0 | 0.132549 | 1 | 0 | 0 | 1,540 |
25,422,847 | 2014-08-21T09:29:00.000 | -1 | 1 | 0 | 0 | 0 | python,robotframework | 0 | 25,830,228 | 0 | 5 | 0 | false | 1 | 0 | Actually, you can set tags to run whatever tests you like (for sanity testing, regression testing, ...).
Just go to your test script configuration and set tags.
Whenever you want to run, just go to the Run tab and select the check-box Only run tests with these tags / Skip tests with these tags,
and click the Start button :) Robot Framework will select any test that matches and run it.
Sorry, I don't have enough reputation to post images :( | 1 | 1 | 0 | 0 | In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies.
I'm not sure how to achieve this. I need an expert's advice to move ahead. | Customized Execution status in Robot Framework | 0 | -0.039979 | 1 | 0 | 0 | 3,561
25,440,006 | 2014-08-22T05:17:00.000 | 0 | 0 | 1 | 0 | 0 | python,matplotlib | 0 | 25,440,301 | 0 | 3 | 0 | false | 0 | 0 | You want to get the regular (zoomable) plot window, right? I think you cannot do it in the same kernel because, unfortunately, you can't switch from inline to qt and such once the backend has already been chosen: your calls to matplotlib.use() must always come before pylab. | 2 | 3 | 0 | 0 | I am running IPython on a remote server. I access it using serveraddress:8888/ etc. to write code for my notebooks.
When I use matplotlib, of course the plots are inline. Is there any way to remotely send data so that a plot window opens up? I want the whole interactive matplotlib environment on my local machine and all the number crunching on the server machine. This is something very basic... but somehow, after rummaging through Google for quite a while, I can't figure it out.
25,440,006 | 2014-08-22T05:17:00.000 | 3 | 0 | 1 | 0 | 0 | python,matplotlib | 0 | 25,442,617 | 0 | 3 | 0 | true | 0 | 0 | There are a few possibilities
If your remote machine is somehow unixish, you may use X Windows (then your session is on the remote machine and the display on the local machine)
mpld3
bokeh and iPython notebook
nbagg backend of matplotlib.
Alternative #1 requires you to have an X server on your machine and a connection between the two machines (possibly tunneled through ssh, etc.) So, this is OS dependent, and the performance depends on the connection between the two machines.
Alternatives #2 and #3 are very new but promising. They have quite different approaches; mpld3 enables the use of standard matplotlib plotting commands, but with large datasets bokeh may be more useful.
Alternative #4 is probably the ultimate solution (see tcaswell's comments), but not yet available without using a development version of matplotlib (i.e. there may be some installation challenges). On the other hand, if you can hold your breath for a week, 1.4.0 will be out. | 2 | 3 | 0 | 0 | I am running IPython on a remote server. I access it using serveraddress:8888/ etc. to write code for my notebooks.
When I use matplotlib, of course the plots are inline. Is there any way to remotely send data so that a plot window opens up? I want the whole interactive matplotlib environment on my local machine and all the number crunching on the server machine. This is something very basic... but somehow, after rummaging through Google for quite a while, I can't figure it out. | how to display matplotlib plots on local machine? | 0 | 1.2 | 1 | 0 | 0 | 2,520
25,440,747 | 2014-08-22T06:22:00.000 | 1 | 0 | 1 | 1 | 0 | python,vb.net,arguments | 0 | 25,442,205 | 0 | 2 | 0 | true | 0 | 0 | If you don't have the source to the python exe convertor and if the arguments don't need to change on each execution, you could probably open the exe in a debugger like ollydbg and search for shellexecute or createprocess and then create a string in a code cave and use that for the arguments. I think that's your only option.
Another idea: Maybe make your own extractor that includes the python script, vbscript, and python interpreter. You could just use a 7zip SFX or something. | 1 | 1 | 0 | 0 | I have converted a python script to .exe file. I just want to run the exe file from a VB script. Now the problem is that the python script accepts arguments during run-time (e.g.: serial port number, baud rate, etc.) and I cannot do the same with the .exe file. Can someone help me how to proceed? | Running an .exe script from a VB script by passing arguments during runtime | 0 | 1.2 | 1 | 0 | 0 | 185 |
25,459,285 | 2014-08-23T06:55:00.000 | 0 | 0 | 0 | 0 | 0 | python,kivy | 0 | 25,461,440 | 0 | 1 | 0 | true | 0 | 1 | There isn't a property that lets you simply do this - the transition is a property of the screenmanager, not of the screen.
You could add your own screen change method for the screenmanager that knows about the screen names and internally sets the transition. | 1 | 0 | 0 | 0 | I have two screens and want to change the SlideTransition from my second to first screen to direction: 'right' while keeping the first to second transition the default. The docs only show how to change the transition for every transition. How would I make a transition unique to one screen, done in the kv file?
Note: I have declared my screen manager screens in the kv file also. | Making unique screen transitions - kivy | 0 | 1.2 | 1 | 0 | 0 | 249 |
25,468,191 | 2014-08-24T03:09:00.000 | 1 | 1 | 0 | 0 | 0 | python,unicode,utf-8,decode,encode | 1 | 25,468,417 | 0 | 1 | 0 | true | 0 | 0 | They return the same thing because b'\x53' == b'S'. It's the same for other characters in the ASCII table, as they're represented by the same bytes.
You're getting a UnicodeDecodeError because you seem to be using a wrong encoding. If I run b'\xf9'.decode('iso-8859-1') I get ù, so it's possible that the encoding is ISO-8859-1.
However, I'm not familiar with the MIDI protocol, so you have to review it to see which bytes need to be interpreted with which encoding. If I decode all the given bytes as ISO-8859-1 it doesn't give me a meaningful string, so it may mean that these bytes stand for something else, not text.
b'\xf9', b'\x02', b'\x03', b'\xf0', b'y', b'\x02', b'\x03', b'S', b'\x00', b't', b'\x00', b'a'
This is a very confusing bunch of bytes for me because it is coming from a microcontroller which is emitting information according to the MIDI protocol.
My first question is about the letters near the end. Most all of the other bytes are true hexadecimal values (i.e. I know the b'\x00' is supposed to be a null character). However, the capital S, which is supposed to be a capital S, appears as such (a b'S'). According to the ASCII / HEX charts I have looked at, Uppercase S should be x53 (which is what b'\x53'.decode('utf-8') returns.
However, in Python when I do b'S'.decode('utf-8') it also returns a capital S. (How can it be both?)
Also, some of the bytes (such as b'\xf9') are truly meant to be escaped (which is why they have the \x); however, I am running into issues when trying to decode them. When running [byteString].decode('utf-8') on a longer version of the above string I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf9 in position 0: invalid start byte
Shouldn't those bytes be skipped over or printed as-is? Thanks | Issues decoding a Python 3 Bytes object based on MIDI | 0 | 1.2 | 1 | 0 | 0 | 312
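Both points in the answer are directly checkable: `b'\x53'` and `b'S'` are two spellings of the same bytes value, and 0xF9 decodes under Latin-1 (ISO-8859-1, where every byte is valid) but not under UTF-8. MIDI status bytes like 0xF9 are protocol values rather than text, so decoding only the text portion is the usual approach.

```python
# b'\x53' and b'S' are two spellings of the same single byte (ASCII 0x53).
assert b'\x53' == b'S'
print(b'\x53'.decode('utf-8'))  # S

# 0xF9 is not a valid UTF-8 start byte, hence the UnicodeDecodeError...
try:
    b'\xf9'.decode('utf-8')
except UnicodeDecodeError as e:
    print("utf-8 failed:", e.reason)

# ...but every byte 0x00-0xFF is valid in Latin-1 (ISO-8859-1):
print(b'\xf9'.decode('latin-1'))  # ù
```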
25,489,450 | 2014-08-25T15:35:00.000 | 2 | 0 | 1 | 1 | 1 | python,cmd | 1 | 25,490,166 | 0 | 1 | 0 | true | 0 | 0 | Each call to os.system is a separate instance of the shell. The cd you issued only had effect in the first instance of the shell. The second call to os.system was a new shell instance that started in the Python program's current working directory, which was not affected by the first cd invocation.
Some ways to do what you want:
1 -- put all the relevant commands in a single bash file and execute that via os.system
2 -- skip the cd call; just invoke your tesseract command using a full path to the file
3 -- change the directory for the Python program as a whole using os.chdir but this is probably not the right way -- your Python program as a whole (especially if running in a web app framework like Django or web2py) may have strong feelings about the current working directory.
The main takeaway is, os.system calls don't change the execution environment of the current Python program. It's equivalent to what would happen if you created a sub-shell at the command line, issued one command then exited. Some commands (like creating files or directories) have permanent effect. Others (like changing directories or setting environment variables) don't. | 1 | 0 | 0 | 0 | I am very new to Python and I have been trying to find a way to write in cmd with python.
I tried os.system and subprocess too. But I am not sure how to use subprocess.
While using os.system(), I got an error saying that the file specified cannot be found.
This is what I am trying to write in cmd: os.system('cd '+path+'tesseract '+'a.png out')
I have tried searching Google but still I don't understand how to use subprocess.
EDIT:
It's not a problem with python anymore, I have figured out. Here is my code now.
os.system("cd C:\\Users\\User\\Desktop\\Folder\\data\\")
os.system("tesseract a.png out")
Now it says the file cannot be open. But if I open the cmd separately and write the above code, it successfully creates a file in the folder\data. | Writing a line to CMD in python | 0 | 1.2 | 1 | 0 | 0 | 727 |
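Illustrating the point from the answer above — a directory change only sticks if it applies to the process that actually runs the command. The sketch below uses a child Python process as a portable stand-in; for the real case you would pass your tesseract arguments instead (assuming tesseract is on PATH, and the Windows path shown is just the one from the question):

```python
import subprocess
import sys
import tempfile

# A `cd` issued via one os.system() call cannot affect the next call,
# because each call spawns a separate shell. Passing cwd= runs the
# child process in the desired directory instead.
target = tempfile.mkdtemp()
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd=target,
)
print(out.decode().strip())  # prints `target`, not our own working directory

# For the question's case the equivalent would be:
# subprocess.call(["tesseract", "a.png", "out"],
#                 cwd=r"C:\Users\User\Desktop\Folder\data")
```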
25,495,410 | 2014-08-25T22:40:00.000 | 0 | 0 | 0 | 1 | 0 | python,shell,flask | 0 | 25,496,797 | 0 | 1 | 0 | false | 1 | 0 | The one way I can think of doing this is to refresh the page. So, you could set the page to refresh itself every X seconds.
You would hope that the file you are reading is not large, though, or it will impact performance. Better to have the output in memory. | 1 | 0 | 0 | 0 | In my project workflow, I am invoking a sh script from a Python script file. I am planning to introduce a web user interface for this, and hence opted for the Flask framework. I am yet to figure out how to display the terminal output of the shell script invoked by my Python script in a component like a text area or label. This file is a log file which is constantly updated till the script run is completed.
The solution I thought of was to redirect the terminal output to a text file, read the text file every X seconds, and display the content. I can also do it the Ajax way from my web application. Is there any other prescribed way to achieve this?
Thanks | Display a constantly updated text file in a web user interface using Python flask framework | 0 | 0 | 1 | 0 | 0 | 268
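Building on the polling suggestion in the answer above, the server side only needs a small helper that returns the latest lines of the log file; a Flask view (the route name here is just an example) can then serve it to an Ajax poller or an auto-refreshing page:

```python
def tail(path, n=20):
    """Return the last n lines of a text file -- the part worth showing in the UI."""
    with open(path) as fh:
        return fh.readlines()[-n:]

# Hypothetical wiring into a Flask app:
# @app.route("/log")
# def show_log():
#     return "".join(tail("/tmp/myscript.log"))
#
# The page can then poll /log via Ajax every few seconds, or simply use
# <meta http-equiv="refresh" content="5"> to reload itself.
```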
25,550,116 | 2014-08-28T13:33:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,webserver,localhost | 0 | 70,518,273 | 0 | 7 | 0 | false | 1 | 0 | Very simple:
1. first you need to add the IP to the allowed hosts:
ALLOWED_HOSTS = ['*']
2. then execute python manage.py runserver 0.0.0.0:8000
Now you can access the local project from a different system on the same network. | 3 | 23 | 0 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 0 | 0 | 1 | 0 | 0 | 29,704 |
25,550,116 | 2014-08-28T13:33:00.000 | 11 | 0 | 0 | 0 | 0 | python,django,webserver,localhost | 0 | 57,634,195 | 0 | 7 | 0 | false | 1 | 0 | Just add your own IP Address to ALLOWED_HOSTS
ALLOWED_HOSTS = ['192.168.1.50', '127.0.0.1', 'localhost']
and run your server python manage.py runserver 192.168.1.50:8000
and access your server from other computers on your network | 3 | 23 | 0 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 0 | 1 | 1 | 0 | 0 | 29,704 |
25,550,116 | 2014-08-28T13:33:00.000 | 6 | 0 | 0 | 0 | 0 | python,django,webserver,localhost | 0 | 43,633,252 | 0 | 7 | 0 | false | 1 | 0 | Run the application with the IP address, then access it from other machines.
python manage.py runserver 192.168.56.22:1234
Both machines should be on the same network; only then will this work. | 3 | 23 | 0 | 0 | I am developing a web application on my local computer in Django.
Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this? | Access Django app from other computers | 0 | 1 | 1 | 0 | 0 | 29,704 |
25,552,075 | 2014-08-28T15:00:00.000 | 1 | 0 | 0 | 0 | 0 | python,openerp | 0 | 25,993,349 | 0 | 1 | 0 | true | 1 | 0 | You can do it easily with a Python library called XlsxWriter. Just download it and add it to the OpenERP server; look at the XlsxWriter documentation. There are also other Python libraries for generating xlsx reports. | 1 | 0 | 0 | 0 | I need to know, what are the steps to generate an Excel sheet in OpenERP?
Or put it this way: I want to generate an Excel sheet for data that I have retrieved from different tables through queries, with a function that I call from a button on a wizard. When I click on the button, an Excel sheet should be generated.
I have installed OpenOffice, the problem is I don't know how to create that sheet and put data on it. Please will you tell me the steps? | What are the steps to create or generate an Excel sheet in OpenERP? | 0 | 1.2 | 1 | 1 | 0 | 596 |
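As a minimal sketch of the XlsxWriter suggestion above (the file name and cell contents are placeholders; the library would first need to be installed, e.g. with pip install XlsxWriter):

```python
try:
    import xlsxwriter
except ImportError:
    xlsxwriter = None  # library not installed; nothing to demonstrate

if xlsxwriter is not None:
    # Create a workbook, write a small header plus one data row, and save it.
    workbook = xlsxwriter.Workbook("report.xlsx")
    sheet = workbook.add_worksheet("Data")
    sheet.write(0, 0, "Name")
    sheet.write(0, 1, "Amount")
    sheet.write(1, 0, "Widget")
    sheet.write(1, 1, 42)
    workbook.close()
```

In an OpenERP wizard, the button's Python method would build the workbook the same way and then return the generated file to the user.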
25,557,693 | 2014-08-28T20:43:00.000 | 0 | 0 | 0 | 0 | 1 | python,windows-7,scrapy,pyinstaller,scrapy-spider | 1 | 52,980,333 | 0 | 2 | 0 | false | 1 | 0 | You need to create a scrapy folder under the same directory as runspider.exe (the exe file generated by pyinstaller).
Then copy the "VERSION" and "mime.types" files (default path: %USERPROFILE%\AppData\Local\Programs\Python\Python37\Lib\site-packages\scrapy) into the scrapy folder you just created. (If you only copy "VERSION", you will be prompted to find the "mime.types" file.) | 1 | 1 | 0 | 0 | After installing all dependencies for scrapy on Windows 32-bit, I tried to build an executable from my scrapy spider. The spider script "runspider.py" works OK when run as "python runspider.py".
Building executable "pyinstaller --onefile runspider.py":
C:\Users\username\Documents\scrapyexe>pyinstaller --onefile
runspider.py 19 INFO: wrote
C:\Users\username\Documents\scrapyexe\runspider.spec 49 INFO: Testing
for ability to set icons, version resources... 59 INFO: ... resource
update available 59 INFO: UPX is not available. 89 INFO: Processing
hook hook-os 279 INFO: Processing hook hook-time 279 INFO: Processing
hook hook-cPickle 380 INFO: Processing hook hook-_sre 561 INFO:
Processing hook hook-cStringIO 700 INFO: Processing hook
hook-encodings 720 INFO: Processing hook hook-codecs 1351 INFO:
Extending PYTHONPATH with C:\Users\username\Documents\scrapyexe 1351
INFO: checking Analysis 1351 INFO: building Analysis because
out00-Analysis.toc non existent 1351 INFO: running Analysis
out00-Analysis.toc 1351 INFO: Adding Microsoft.VC90.CRT to dependent
assemblies of final executable
1421 INFO: Searching for assembly
x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21
022.8_none ... 1421 INFO: Found manifest C:\Windows\WinSxS\Manifests\x86_microsoft.vc90.crt_1fc
8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91.manifest 1421 INFO:
Searching for file msvcr90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcr90.dll 1421 INFO: Searching for file msvcp90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcp90.dll 1421 INFO: Searching for file msvcm90.dll 1421 INFO: Found file
C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_
9.0.21022.8_none_bcb86ed6ac711f91\msvcm90.dll 1592 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader_pyi_boots trap.py
1621 INFO: Processing hook hook-os 1661 INFO: Processing hook
hook-site 1681 INFO: Processing hook hook-encodings 1872 INFO:
Processing hook hook-time 1872 INFO: Processing hook hook-cPickle 1983
INFO: Processing hook hook-_sre 2173 INFO: Processing hook
hook-cStringIO 2332 INFO: Processing hook hook-codecs 2963 INFO:
Processing hook hook-pydoc 3154 INFO: Processing hook hook-email 3255
INFO: Processing hook hook-httplib 3305 INFO: Processing hook
hook-email.message 3444 INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_import ers.py
3535 INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_archiv e.py 3615
INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_carchi ve.py 3684
INFO: Analyzing
C:\python27\lib\site-packages\PyInstaller\loader\pyi_os_pat h.py 3694
INFO: Analyzing runspider.py 3755 WARNING: No django root directory
could be found! 3755 INFO: Processing hook hook-django 3785 INFO:
Processing hook hook-lxml.etree 4135 INFO: Processing hook hook-xml
4196 INFO: Processing hook hook-xml.dom 4246 INFO: Processing hook
hook-xml.sax 4296 INFO: Processing hook hook-pyexpat 4305 INFO:
Processing hook hook-xml.dom.domreg 4736 INFO: Processing hook
hook-pywintypes 5046 INFO: Processing hook hook-distutils 7750 INFO:
Hidden import 'codecs' has been found otherwise 7750 INFO: Hidden
import 'encodings' has been found otherwise 7750 INFO: Looking for
run-time hooks 7750 INFO: Analyzing rthook
C:\python27\lib\site-packages\PyInstaller\loader\rth
ooks\pyi_rth_twisted.py 8111 INFO: Analyzing rthook
C:\python27\lib\site-packages\PyInstaller\loader\rth
ooks\pyi_rth_django.py 8121 INFO: Processing hook hook-django.core
8131 INFO: Processing hook hook-django.core.management 8401 INFO:
Processing hook hook-django.core.mail 8862 INFO: Processing hook
hook-django.db 9112 INFO: Processing hook hook-django.db.backends 9153
INFO: Processing hook hook-django.db.backends.mysql 9163 INFO:
Processing hook hook-django.db.backends.mysql.base 9163 INFO:
Processing hook hook-django.db.backends.oracle 9183 INFO: Processing
hook hook-django.db.backends.oracle.base 9253 INFO: Processing hook
hook-django.core.cache 9874 INFO: Processing hook hook-sqlite3 10023
INFO: Processing hook hook-django.contrib 10023 INFO: Processing hook
hook-django.contrib.sessions 11887 INFO: Using Python library
C:\Windows\system32\python27.dll 12226 INFO: Warnings written to
C:\Users\username\Documents\scrapyexe\build\runspid
er\warnrunspider.txt 12256 INFO: checking PYZ 12256 INFO: rebuilding
out00-PYZ.toc because out00-PYZ.pyz is missing 12256 INFO: building
PYZ (ZlibArchive) out00-PYZ.toc 16983 INFO: checking PKG 16993 INFO:
rebuilding out00-PKG.toc because out00-PKG.pkg is missing 16993 INFO:
building PKG (CArchive) out00-PKG.pkg 19237 INFO: checking EXE 19237
INFO: rebuilding out00-EXE.toc because runspider.exe missing 19237
INFO: building EXE from out00-EXE.toc 19237 INFO: Appending archive to
EXE C:\Users\username\Documents\scrapyexe\dist\run spider.exe
running built exe "runspider.exe":
C:\Users\username\Documents\scrapyexe\dist>runspider.exe
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\scrapy", line 10, in <module>
  File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\pkgutil", line 591, in get_data
  File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 342, in get_data
    fp = open(path, 'rb')
IOError: [Errno 2] No such file or directory: 'C:\Users\username\AppData\Local\Temp\_MEI15522\scrapy\VERSION'
I'd be extremely grateful for any kind of help. I need to know how to build a standalone exe from a scrapy spider for Windows.
Thank you very much for any help. | Pyinstaller scrapy error: | 0 | 0 | 1 | 0 | 0 | 2,324 |
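As an alternative to the copy-next-to-the-exe workaround in the answer above, the missing data files can be bundled into the one-file build itself by editing the generated .spec file (a sketch — the site-packages path is a placeholder for wherever scrapy is actually installed):

```python
# runspider.spec fragment: add scrapy's data files to the bundle so that
# pkgutil.get_data('scrapy', 'VERSION') can find them at run time.
a.datas += [
    ('scrapy/VERSION',    r'C:\python27\Lib\site-packages\scrapy\VERSION',    'DATA'),
    ('scrapy/mime.types', r'C:\python27\Lib\site-packages\scrapy\mime.types', 'DATA'),
]
```

Then rebuild with pyinstaller runspider.spec instead of passing the .py file again, so the edited spec is used.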
25,604,247 | 2014-09-01T10:53:00.000 | 0 | 0 | 0 | 0 | 1 | python,emr,mrjob | 0 | 26,083,485 | 0 | 1 | 0 | true | 0 | 0 | Well, after many searches, it seems there is no such option | 1 | 2 | 0 | 0 | I'm trying to set an IAM role to my EMR cluster with mrjob 0.4.2.
I saw that there is a new option in 0.4.3 to do this, but it is still in development and I prefer to use the stable version instead.
Any idea on how to do this? I have tried to create the cluster using Amazon's console and then run the bootstrap+step actions using mrjob (by connecting to that cluster), but it didn't work.
Another option is being able to change the default permissions for EMR instances so mrjob will be able to take advantage of it. | How to set IAM role with MrJob 0.4.2 on EMR | 0 | 1.2 | 1 | 0 | 0 | 142 |
25,625,097 | 2014-09-02T13:51:00.000 | 3 | 0 | 1 | 0 | 1 | python-3.x,pygame | 1 | 25,628,838 | 0 | 2 | 0 | false | 0 | 1 | Are you using a 64-bit operating system? Try using the 32-bit installer. | 2 | 2 | 0 | 0 | I'm trying to install Pygame and it returns me the following error "Python version 3.4 required which was not found in the registry". However I already have the Python 3.4.1 installed on my system. Does anyone know how to solve that problem?
I've been using Windows 8.1
Thanks in advance. | Error installing Pygame / Python 3.4.1 | 0 | 0.291313 | 1 | 0 | 0 | 1,096 |
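Following up on the 32-bit vs 64-bit point in the answer above, a quick way to check which installer matches your interpreter (the installer's bitness must match Python's, not the OS's):

```python
import struct
import sys

# The size of a pointer reveals the interpreter's bitness regardless of the OS.
bits = struct.calcsize("P") * 8
print("Python %s is running as a %d-bit build" % (sys.version.split()[0], bits))
```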
25,625,097 | 2014-09-02T13:51:00.000 | 0 | 0 | 1 | 0 | 1 | python-3.x,pygame | 1 | 25,628,949 | 0 | 2 | 0 | false | 0 | 1 | Tips I can provide:
Add Python to your Path file in the Advanced settings of your Environmental Variables (just search for it in the control panel)
Something may have gone wrong with the download of Python, so re-install it. Also don't download the 64-bit version, just download the 32-bit version from the main pygame website
Once that's sorted out, transfer the entire Pygame folder to site-packages in the Python directory, open the pygame folder, and go to this directory in the command prompt. Finally, run the Pygame setup from the command prompt, which should be something like:
python setup.py install
But this will only work if the pygame setup file is called setup.py (it's been a while since I downloaded it), you added Python to the Path file and you're currently in the correct directory in command prompt.
To test if it worked try importing pygame and see if you got an error or not. | 2 | 2 | 0 | 0 | I'm trying to install Pygame and it returns me the following error "Python version 3.4 required which was not found in the registry". However I already have the Python 3.4.1 installed on my system. Does anyone know how to solve that problem?
I've been using Windows 8.1
Thanks in advance. | Error installing Pygame / Python 3.4.1 | 0 | 0 | 1 | 0 | 0 | 1,096 |
25,648,912 | 2014-09-03T16:04:00.000 | 0 | 0 | 0 | 1 | 0 | python-2.7,tkinter | 0 | 25,689,654 | 0 | 1 | 0 | true | 0 | 1 | I am using Linux Mint. In order to make a program not show up in the foreground (i.e. be hidden behind all of the other windows), one should use root.lower() as aforementioned in the comments. However, please note (and this seems to happen on multiple platforms) that root.lower() will not change the focus of the window. Therefore, even if you use .lower() and run the script, and if you press [alt] + [F4], for example, the Tkinter window that was just opened (even though you cannot see it) will be closed.
I noticed, however, that it is prudent to place the root.lower() after the attributes for the Tkinter root. For example, if you use root.attributes("-zoomed", True) to expand the window, be sure to place root.lower() after the root.attributes(..) call; it did not work for me the other way around. | 1 | 0 | 0 | 0 | Comically enough, I was really annoyed when tkinter windows opened in the background on Mac. However, now I am on Linux, and I want tkinter to open in the background.
I don't know how to do this, and when I google how to do it, all I can find are a lot of angry Mac users who can't get tkinter to open in the foreground.
I should note that I am using python2.7 and thus Tkinter not tkinter (very confusing). | Start Tkinter in background | 0 | 1.2 | 1 | 0 | 0 | 184 |
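The ordering caveat from the answer above can be sketched like this (a sketch only — the "-zoomed" attribute is Linux-specific, and on Python 2 the module is named Tkinter):

```python
try:
    import tkinter as tk  # Python 2: `import Tkinter as tk`
except ImportError:
    tk = None

def make_background_window():
    root = tk.Tk()
    root.attributes("-zoomed", True)  # set window attributes first (Linux only)
    root.lower()                      # ...then push the window behind the others
    return root

# Calling make_background_window().mainloop() would show the window behind
# all other windows, but note that it may still hold keyboard focus.
```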
25,681,118 | 2014-09-05T07:58:00.000 | 0 | 1 | 0 | 0 | 1 | python,emacs,comments,abbreviation,python-mode | 0 | 25,692,694 | 0 | 1 | 0 | false | 0 | 0 | IIRC this was a bug in python.el and has been fixed in more recent versions. Not sure if the fix is in 24.3 or in 24.4.
If it's still in the 24.4 pretest, please report it as a bug. | 1 | 0 | 0 | 0 | I'm trying to use abbrev-mode in python-mode.
Abbrevs work fine in Python code but don't work in Python comments.
How can I make them work in python comments too?
Thanks. | Emacs: how to make abbrevs work in python comments? | 0 | 0 | 1 | 0 | 0 | 67 |
25,699,308 | 2014-09-06T10:28:00.000 | 0 | 0 | 0 | 1 | 0 | java,python,memory,vnc | 0 | 25,699,358 | 0 | 1 | 0 | true | 1 | 0 | directly access system display memory on Linux
You can't. Linux is a memory-protected, virtual-address-space operating system. Oh, the kernel does give you access to the graphics memory through some node in /dev, but that's not how you normally implement this kind of thing.
Also in Linux you're normally running a display server like X11 (or in the future something based on the Wayland protocol) and there might be no system graphics memory at all.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screen shot), convert it into RAW format, compress it and store it in an ArrayList.
That's exactly how it's done. Use the display system's method to capture the screen. It's the only reliable way to do this. Note that if conversion or compression is your bottleneck, you'd have that with fetching it from graphics memory as well. | 1 | 1 | 0 | 0 | I am trying to create my own VNC client and would like to know how to directly access system display memory on Linux, so that I can send it over a socket or store it in a file locally.
I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screenshot), convert it into RAW format, compress it and store it in an ArrayList.
But I find this method a bit too resource-heavy, so I was searching for alternatives.
Please, let me also know if there are other ways for the same (using Java or Python only)? | How to access system display memory / frame buffer in a Java program? | 0 | 1.2 | 1 | 0 | 0 | 886 |
25,712,856 | 2014-09-07T17:19:00.000 | 1 | 0 | 0 | 0 | 0 | python,size,pixels,turtle-graphics | 0 | 35,849,771 | 0 | 4 | 0 | false | 0 | 0 | I know exactly what you mean, shapesize does not equal the width in pixels and it had me buggered for a day or 2.
I ended up changing the turtle shape to a square and simply using Print Screen in Windows to take a snapshot of the canvas with the square turtle in the middle. I then took that screenshot into Photoshop and zoomed right up to the square until I could see the pixel grid. I found that the default size of the square turtle is 21x21 pixels (who knows why they made it 21), but if I take some value I want the turtle to be in pixels, like 20, and divide that by 21, I get 0.9523...
Rounding the value to 0.95 and putting that into shapesize gave me a square turtle of exactly 20x20 pixels, which made it much easier to work with. You can do this with any pixel size you want the turtle to be, but I have only tried it with the square turtle.
I hope that helps somewhat or gives you an idea of how to find the turtle size in pixels, ms-paint will do the same providing you goto the view tab and turn on grid then zoom in as far as you can. | 1 | 1 | 0 | 0 | I have got a problem. I want to get the pixeled size of my turtle in Python, but how do I get it? | Python: Turtle size in pixels | 0 | 0.049958 | 1 | 0 | 0 | 8,354 |
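The arithmetic from the answer above can be wrapped in a tiny helper (assuming the empirically measured 21x21 px default square stamp):

```python
def stretch_for(target_px, base_px=21):
    """Return the shapesize() stretch factor for a desired pixel size,
    assuming the default square turtle stamp is base_px pixels wide."""
    return round(float(target_px) / base_px, 2)

print(stretch_for(20))  # 0.95 -- i.e. turtle.shapesize(0.95) for a ~20x20 px square
```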
25,728,378 | 2014-09-08T16:03:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,task,customization,celery | 0 | 25,728,619 | 0 | 1 | 0 | false | 1 | 0 | As you know how to create & execute tasks, it's very easy to allow customers to create tasks.
You can create a simple form to get required information from the user. You can also have a profile page or some page where you can show user preferences.
The above form helps to get data(like how frequently he needs to receive eamils) from user. Once the form is submitted, you can trigger a task asynchronously which will does processing & send emails to customer. | 1 | 0 | 0 | 0 | I am new to Celery and I can't figure out at all how I can do what I need. I have seen how to create tasks by myself and change Django's file settings.py in order schedule it. But I want to allow users to create "customized" tasks.
I have a Django application that is supposed to give the user the opportunity to create the products they need (files that gather weather information) and to send them emails at a frequency they choose.
I have a database that contains the parameters of every product like geographical time, other parameters and frequency. I also have a script sendproduct.py that sends the product to the user.
Now, how can I simply create for each product, user created a task that execute my script with the given parameters and the given frequency? | How the user of my Django app can create "customized" tasks? | 0 | 0 | 1 | 0 | 0 | 32 |
25,739,840 | 2014-09-09T08:23:00.000 | 2 | 0 | 0 | 0 | 1 | php,python,django | 0 | 25,740,202 | 0 | 3 | 0 | true | 1 | 0 | You must use one of these possibilities:
Your friend gives you direct access (even only read access) to his database and you represent everything as Django models or use raw SQL. The problem with that approach is that you have a very high-coupling between the two systems. If he changes his table or scheme structure for some reason you will also have to be notified and change stuff on your end. Which is a real headache.
Your friend provides an API end-point from his system that you can access. This protocol can be simple GET requests to retrieve information that return JSON or any other format that suites you both. That's the simplest and best approach for the long run.
You can "fetch" content directly from his site, that returns raw HTML for every request, and then you can scrape the response you receive. That's also a headache in case he changes his site structure, and you'll need to be aware of that. | 1 | 0 | 0 | 1 | I am building a web app in django and I want to integrate it with the php web app that my friend has build.
Php web app is like forum where students can ask question to the teachers. For this they have to log in.
And I am making a app in django that displays a list of colleges and every college has information about teachers like the workshop/classes timing of the teachers. In my django app colleges can make their account and provide information about workshop/classes of the teachers.
Now what I want is the student that are registered to php web app can book for the workshop/classes provided by colleges in django app and colleges can see which students and how many students have booked for the workshop/classes.
Here how can I get information of students from php web app to django so that colleges can see which students have booked for workshop. Students cannot book for workshop untill they are logged in to php web app.
Please give me any idea about this.. How can I make this possible | passing user information from php to django | 1 | 1.2 | 1 | 0 | 0 | 274 |
25,748,396 | 2014-09-09T15:23:00.000 | 7 | 0 | 0 | 0 | 0 | python,django,django-admin | 0 | 25,763,182 | 0 | 5 | 0 | true | 1 | 0 | Worked it out - I set admin.site.index_template = "my_index.html" in admin.py, and then the my_index template can inherit from admin/index.html without a name clash. | 1 | 4 | 0 | 0 | I wish to make some modifications to the Django admin interface (specifically, remove the "change" link, while leaving the Model name as a link to the page for changes to the instances). I can achieve this by copying and pasting index.html from the admin application, and making the modifications to the template, but I would prefer to only override the offending section by extending the template - however I am unsure how to achieve this as the templates have the same name. I am also open to alternative methods of achieving this effect. (django 1.7, python 3.4.1) | Django Extend admin index | 0 | 1.2 | 1 | 0 | 0 | 4,302 |
25,749,566 | 2014-09-09T16:21:00.000 | 1 | 1 | 0 | 0 | 0 | python,go,rabbitmq | 0 | 25,749,840 | 0 | 1 | 0 | true | 0 | 0 | In order for you to test that all of your messages are published you may do it this way:
Stop consumer.
Enable acknowledgements in publisher. In python you can do it by adding extra line to your code: channel.confirm_delivery(). This will basically return a boolean if message was published. Optionally you may want to use mandatory flag in basic_publish.
Send as many messages as you want.
Make sure that all of the basic_publish() methods returnes True.
Count number of messages in Rabbit.
Enable Ack in consumer by doing no_ack = False
Consume all the messages.
This will give you an idea where your messages are getting lost. | 1 | 0 | 0 | 0 | I use Python api to insert message into the RabbitMQ,and then use go api to get message from the RabbitMQ.
Key 1: RabbitMQ ACK is set false because of performance.
I insert into RabbitMQ about over 100,000,000 message by python api,but when I use go api to get
message,I find the insert number of message isn’t equal to the get number.The insert action and the
get action are concurrent.
Key 2:Lost message rate isn’t over 1,000,000 percent 1.
Insert action has log,python api shows that all inserted message is successful.
Get action has log,go api shows that all get message is successful.
But the number isn’t equal.
Question1:I don’t know how to find the place where the message lost.Could anyone give me a suggestion how to find where the message lost?
Question2:Is there any strategy to insure the message not lose? | RabbitMQ message lost | 0 | 1.2 | 1 | 0 | 0 | 2,544 |
25,770,457 | 2014-09-10T16:21:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,cookies | 0 | 72,094,059 | 0 | 2 | 0 | false | 1 | 0 | Sadly, there is no best way you can prevent this from what I know but you can send the owner of an account an email and set some type of 2fa. | 1 | 3 | 0 | 0 | I'm currently developing a site with Python + Django and making the login I started using the request.session[""] variables for the session the user was currently in, and i recently realized that when i do that it generates a cookie with the "sessionid" in it with a value every time the user logs in, something like this "c1cab412bc71e4xxxx1743344b3edbcc" and if I take that string and paste it in the cookie on other computer in other network and everything, i can have acces to the session without login in.
So what i'm asking here actually is if anyone can give me any tips of how can i add some security on my system or if i'm doing something wrong setting session variables?
Can anyone give me any suggestions please? | Django session id security tips? | 0 | 0 | 1 | 0 | 0 | 357 |
25,776,832 | 2014-09-10T23:44:00.000 | 1 | 0 | 0 | 1 | 0 | python,windows,wget,system-calls,mingw32 | 1 | 25,780,451 | 0 | 1 | 0 | true | 0 | 0 | There's no such thing as under "MinGW". You probably mean under MSYS, a Unix emulation environment for Windows. MSYS makes things look like Unix, but you're still running everything under Windows. In particular MSYS maps /bin to the drive and directory where you install MSYS. If you installed MSYS to C:\MSYS then your MSYS /bin directory is really C:\MSYS\bin.
When you add /bin to your MSYS PATH environment variable, MSYS searches the directory C:\MSYS\bin. When you add /bin to the Windows PATH environment using the command SETX, Windows will look in the \bin directory of the current drive.
Presumably your version of Python is the standard Windows port of Python. Since it's a normal Windows application, it doesn't interpret the PATH environment variable the way you're expecting it to. With /bin in the path, it will search the \bin directory of the current drive. Since wget is in C:\MSYS\bin not \bin of the current directory you an error when trying to run it from Python.
Note that if you run a Windows command from the MSYS shell, MSYS will automatically convert its PATH to a Windows compatible format, changing MSYS pathnames into Windows pathnames. This means you should be able to get your Python script to work by running Python from the MSYS shell. | 1 | 0 | 0 | 0 | I am trying to figure out a way to call wget from my python script on a windows machine. I have wget installed under /bin on the machine. Making a call using the subprocess or os modules seems to raise errors no matter what I try. I'm assuming this is related to the fact that I need to route my python system call through minGW so that wget is recognized.
Does anyone know how to handle this?
Thanks | System Call in Python via MINGW32 on Windows | 0 | 1.2 | 1 | 0 | 0 | 1,827 |
25,792,285 | 2014-09-11T16:09:00.000 | 0 | 1 | 0 | 1 | 0 | python,bash,shell | 0 | 25,974,955 | 1 | 2 | 0 | true | 0 | 0 | I combined two things:
ran automated tests with the old and new version of pythona nd compared results
used snakefood to track the dependencies and ran the parent scripts
Thanks for the os.walk and os.getppid suggestion, however, I didn't want to write/use any additional code. | 1 | 1 | 0 | 0 | I need to test whether several .py scripts (all part of a much bigger program) work after updating python. The only thing I have is their path. Is there any intelligent way how to find out from which other scripts are these called? Brute-forece grepping wasn't as good aas I expected. | How to find out from where is a Python script called? | 1 | 1.2 | 1 | 0 | 0 | 497 |
25,797,494 | 2014-09-11T21:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets,real-time,long-polling,webhooks | 0 | 25,800,249 | 0 | 2 | 0 | false | 0 | 0 | There is no way to send data to the client without having some kind of connection, e.g. either websockets or (long) polling done by the client. While it would be possible in theory to open a listener socket on the client and let the web server could connect to and sent the data to this socket, this will not work in reality. The main reason for thi is, that the client is often inside an internal network not reachable from outside, i.e. a typical home setup with multiple computers behind a single IP or a corporate setup with a firewall in between. In this case it is only possible to establish a connection from inside to outside, but not the other way. | 1 | 0 | 0 | 0 | I am trying to create a python application that can continuously receive data from a webserver. This python application will be running on multiple personal computers, and whenever new data is available it will receive it. I realize that I can do this either by long polling or web sockets, but the problem is that sometimes data won't be transferred for days and in that case long polling or websockets seem to be inefficient. I won't be needing that long of a connection but whenever data is available I need to be able to push it to the application. I have been reading about webhooks and it seems that if I can have a url to post that data to, I won't need to poll. But I am confused as to how each client would have a callback url because in that case a client would have to act as a server. Are there any libraries that help in getting this done? Any kind of resources that you can point me to would be really helpful! | Is it possible to implement webhooks on regular clients? | 1 | 0 | 1 | 0 | 1 | 191 |
25,800,481 | 2014-09-12T03:56:00.000 | 0 | 0 | 1 | 1 | 1 | python,visual-studio,cpython | 0 | 25,815,355 | 0 | 1 | 0 | false | 0 | 0 | OK, I figured it out
I was using VS 2013 while Python's build system was designed for VS 2010.
I ended up retargeting everything for 2013 (including a small modification to the tix makefile) and it compiled with all non-static symbols (AST and all) as expected.
Python.org's official pre-built Windows libraries still seem to omit the AST symbols. I don't mind building Python from source myself, but I think the official builds should package the whole shebang. | 1 | 0 | 0 | 0 | I'm writing a C application that makes use of Python's AST API to transform Python code expressions before emitting bytecode. I've been a longtime POSIX developer (currently OS X), but I wish to learn how to port my projects to Windows as well.
I'm using the static libraries (.lib) generated by build.bat in Python's PCBuild directory. The trouble with these libraries is they somehow skip over the symbols in Python/Python-ast.c as well as Python/asdl.c. I need these APIs for their AST constructors, but I'm not sure how to get Visual Studio to export them.
Do I need to add __declspec(dllexport) for static libraries?
EDIT: I do not have this problem with static libraries generated on POSIX platforms | Exporting cpython AST symbols on Windows | 0 | 0 | 1 | 0 | 0 | 33 |
25,814,134 | 2014-09-12T17:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,controls,communication | 0 | 25,814,194 | 0 | 2 | 0 | false | 0 | 0 | You have a lot of choices for exchanging messages between programs or components:
You can write output files that other programs can read and act on. The consumer could watch a directory for a file and react when it arrives.
You could make them distributed components that exchanged messages via sockets or some higher level protocol like HTTP. The communication could be synchronous or asynchronous.
You could connect them as producers writing to message queues or topics and consumers listening to the queue or topic for events. | 1 | 0 | 0 | 0 | I have a (probably) simple question that the internet seems to be of no help with. I would like to make several python programs interact within another python program and have no idea how to get them to put input into each other. My eventual idea is to (as a proof of concept) have one program act as the environment and the others act as creatures in that environment. let me clarify: I am sure you have seen those programs that simulate natural environments with the creatures in them interacting. I would like to do the same kind of thing just on a smaller scale (text in the place of fancy 3d graphics if at all). The ultimate goal of this is not to have a complex ecosystem but to see how far I can push the communication between the programs (and my computer's power along the way).
P.S. I would like to continue to run it from the IDLE or from the command line. | Python programs communicating | 0 | 0 | 1 | 0 | 0 | 60 |
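The socket option from the answer above can be tried out in a few lines. Below is a minimal sketch using `socket.socketpair()` so both "programs" live in one script; in a real setup each program would create its own socket and `bind()`/`listen()` on one side and `connect()` on the other. The variable names and messages are made up for illustration:

```python
import socket

# Two pre-connected sockets standing in for two cooperating programs
# ("environment" and "creature"); socketpair() avoids the bind/connect
# boilerplate for the sake of a short demo.
env_sock, creature_sock = socket.socketpair()

env_sock.sendall(b"food at (3, 4)")          # environment broadcasts an event
print(creature_sock.recv(1024).decode())     # food at (3, 4)

creature_sock.sendall(b"moving to (3, 4)")   # reply goes the other way
print(env_sock.recv(1024).decode())          # moving to (3, 4)

env_sock.close()
creature_sock.close()
```

The same send/recv calls work unchanged once the two endpoints are separate processes connected over localhost.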
25,820,713 | 2014-09-13T06:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,eclipse,pydev | 0 | 25,823,825 | 0 | 2 | 0 | true | 0 | 0 | Hmmm, I am not familiar with the IDE IDLE, nor do I typically run a file via the console, but maybe I understand your question. The core answer is you need a breakpoint so that execution does not terminate and therefore x=10 is resident in memory. If the breakpoint is set post x=10, then when you reach the breakpoint and execution stops and you type "x" you will get 10.
There is documentation online about how to use the console in the context of loading a file from within the console. I tend instead to hit shift-F9 while in the file to run it in debug mode. This leaves you in the debug console rather than the interactive console (you'll see no ">" prompt) but you'll still be able to have x=10 when you enter x at the break.
I may have misunderstood, but I thought I would give it a shot. Good luck! | 1 | 1 | 0 | 0 | So I installed PyDev in Eclipse and started testing it and I have come to an issue.
While using IDLE to run Python I could, for example, create a file, set a variable x = 10 and then make IDLE run said file. I would then be able to ask python for x and it would give me 10. I don't know how to do that in PyDev.
I created a python interactive console and then when prompted chose the "Console for currently active editor" but the console will not recognize x even though the editor has x defined to 10. I did save before creating the console, I also ran the file before I opened the console... I do not know what to do...
Thank you! | PyDev Interactive Console Issue | 0 | 1.2 | 1 | 0 | 0 | 251 |
25,838,717 | 2014-09-14T22:39:00.000 | 0 | 0 | 0 | 0 | 1 | pdf,python-sphinx,rst2pdf | 0 | 37,615,276 | 0 | 1 | 0 | false | 1 | 0 | We had a similar problem: bad PDF output on a project with a lot of chapters and images.
We solved it by disabling the page break: in conf.py, set the pdf_break_level value to 0. | 1 | 0 | 0 | 0 | My Sphinx input is six rst files and a bunch of PNGs and JPGs. Sphinx generates the correct HTML, but when I make pdf I get an output file that comes up blank in Adobe Reader (and comes up at over 5000%!) and does not display at all in Windows Explorer.
The problem goes away if I remove various input files or if I edit out what looks like entirely innocuous sections of the input, but I cannot get a handle on the specific cause. Any ideas on how to track this one down? Running Sphinx build with the -v option shows no errors.
I'm using the latest Sphinx (1.2.3) and the latest rst2pdf (0.93), with the default style. On Win7.
(added) This may help others with the same problem: I tried concatenating the rst files, then running rst2pdf on the concatenated file. That worked, though it gave me a bunch of warnings for bad section hierarchy and could not handle the Sphinx :ref: stuff. Could the bad section hierarchy thing (i.e. ==, --, ~~ in one file, ==, ~~, -- in another) be connected to the hopeless PDFs? Removing the conflict does not solve the problem, but that doesn't mean it's not a clue!
I could explore more if I could capture the output that Sphinx sends to rst2pdf. | Sphinx PDF output is bad. How do I chase down the cause? | 0 | 0 | 1 | 0 | 0 | 1,143 |
25,852,330 | 2014-09-15T16:09:00.000 | 1 | 1 | 1 | 0 | 0 | python,cron,crontab | 0 | 25,852,354 | 0 | 1 | 0 | false | 0 | 0 | That would happen if you're running the python program as root (which would happen if you're using root's crontab).
To fix it, just remove it with sudo rm /path/to/file.pyc, and make sure to run the program as your user next time. If you want to keep using root's crontab, you could use su youruser -c yourprogram, but the cleanest way would be simply to use your user's crontab | 1 | 1 | 0 | 0 | I have a python program, stored on Dropbox, which runs via cron on a couple of different machines. For some reason, recently one of the .pyc files is being created with root as the owner, which means that Dropbox doesn't have permission to sync it anymore.
Why would it do that, and how do I change it? | Python pyc files created with root as owner | 0 | 0.197375 | 1 | 0 | 0 | 935 |
25,858,087 | 2014-09-15T22:52:00.000 | 1 | 0 | 0 | 0 | 0 | android,python,python-2.7,kivy | 0 | 25,858,139 | 0 | 1 | 0 | true | 0 | 1 | I want to find a way to make it so that the screenshot() gets saved in /sdcard/Pictures.
The argument to screenshot is the filepath to save at, just write Window.screenshot('/sdcard/Pictures'). | 1 | 0 | 0 | 0 | Is there a way to use the OS module in python to save a jpeg created by the screenshot() function in Kivy? I am on Android so I want to find a way to make it so that the screenshot() gets saved in /sdcard/Pictures.
If I don't have to use the OS module, how would I do it?
Please use examples and add code snippets that other users and I can use for future reference.
I have been stuck on this issue for a long time.
Thanks in advance!!!! | How to use os module to save a jpeg to a certain path - Using kivy screenshot() | 1 | 1.2 | 1 | 0 | 0 | 77
25,872,043 | 2014-09-16T14:54:00.000 | 1 | 0 | 0 | 0 | 0 | python,graph,networkx | 0 | 25,872,307 | 0 | 3 | 0 | false | 0 | 0 | You don't need to get rid of them, they don't do anything other than specify the encoding type. This can be helpful sometimes, but I can't think of a time when it isn't helpful. | 1 | 1 | 0 | 0 | I created a graph in Networkx by importing edge information in through nx.read_edgelist(). This all works fine and the graph loads.
The problem is when I print the neighbors of a node, I get the following for example...
[u'own', u'record', u'spending', u'companies', u'back', u'shares', u'their', u'amounts', u'are', u'buying']
This happens for all calls to the nodes and edges of the graph. It is obviously not changing the names of the nodes seeing as it is outside of the quotations.
Can someone advise me how to get rid of these 'u's when printing out the graph nodes.
I am a Python novice and I'm sure it is something very obvious and easy. | Networkx appends 'u' before node names after reading in from an edge list. How to get rid? | 0 | 0.066568 | 1 | 0 | 1 | 539 |
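As the answer says, the u prefix is only part of how Python 2 displays unicode strings inside a list's repr; it is not part of the node name itself. A quick stdlib-only illustration (using the strings from the question):

```python
names = [u'own', u'record', u'spending']

print(names[0])           # own  -- printing the string itself shows no prefix
print(', '.join(names))   # own, record, spending
print(names[0] == 'own')  # True -- the u marks the type, not the text
```

So joining or printing the neighbor list directly, rather than printing the raw list repr, already gives the "clean" names.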
25,884,242 | 2014-09-17T07:01:00.000 | 1 | 0 | 0 | 1 | 0 | mongodb,python-2.7,apscheduler | 0 | 25,898,609 | 0 | 1 | 0 | false | 0 | 0 | Simply give the mongodb jobstore a different "database" argument. It seems like the API documentation for this job store was not included in what is available on ReadTheDocs, but you can inspect the source and see how it works. | 1 | 0 | 0 | 0 | I want to store the job in mongodb using python and it should schedule on specific time.
I googled and found that APScheduler will do the job. I downloaded the code and tried to run it.
It schedules the job correctly and runs it, but it stores the job in the apscheduler database of mongodb; I want to store the job in my own database.
Can you please tell me how to store the job in my own database instead of the default db. | APScheduler store job in custom database of mongodb | 0 | 0.197375 | 1 | 1 | 0 | 708
25,891,415 | 2014-09-17T13:05:00.000 | 1 | 0 | 0 | 0 | 1 | python,spss | 0 | 25,896,919 | 0 | 2 | 0 | false | 0 | 0 | Glad that is solved. The 32/64-bit issue has been a regular confusion for Statistics users. | 2 | 0 | 0 | 0 | I originally installed Canopy to use Python, but it would not recognize the SPSS modules, so I removed canopy, re-downloaded python2.7, and changed my PATH to ;C:\Python27. In SPSS, I changed the default python file directory to C:\Python27.
Python still will not import the SPSS modules. I have a copy of SPSS 22, so python is integrated into it...
Any thoughts on what might be causing this, and how to fix it? | How can I get Python to recognize the SPSSClient Module? | 0 | 0.099668 | 1 | 0 | 0 | 399 |
25,891,415 | 2014-09-17T13:05:00.000 | 1 | 0 | 0 | 0 | 1 | python,spss | 0 | 25,892,696 | 0 | 2 | 0 | true | 0 | 0 | I figured this out only with some help from a friend who had a similar issue. I had downloaded python from python.org, without realizing it was 32 bit. All of the SPSS modules are 64 bit! I downloaded the correct version of python, and then copied the spss modules from my spss install (inside the python folder within spss) into my python library. modules are working now! | 2 | 0 | 0 | 0 | I originally installed Canopy to use Python, but it would not recognize the SPSS modules, so I removed canopy, re-downloaded python2.7, and changed my PATH to ;C:\Python27. In SPSS, I changed the default python file directory to C:\Python27.
Python still will not import the SPSS modules. I have a copy of SPSS 22, so python is integrated into it...
Any thoughts on what might be causing this, and how to fix it? | How can I get Python to recognize the SPSSClient Module? | 0 | 1.2 | 1 | 0 | 0 | 399 |
25,902,367 | 2014-09-18T00:24:00.000 | 1 | 0 | 1 | 1 | 1 | python,pycharm,pythonpath | 0 | 26,268,411 | 0 | 3 | 0 | false | 0 | 0 | There are multiple ways to solve this.
In PyCharm go to Run/Edit Configurations and add the environment variable PYTHONPATH to $PYTHONPATH: and hit apply. The problem with this approach is that the imports will still be unresolved but the code will run fine as python knows where to find your modules at run time.
If you are using Mac or Unix systems, use the command "export PYTHONPATH=$PYTHONPATH:". On Windows, you will have to add the directory to the PYTHONPATH environment variable.
This is as plarke suggested. | 1 | 1 | 0 | 0 | I'm using PyCharm, and in the shell, I can't run a file that isn't in the current directory. I know how to change directories in the terminal. But I can't run files from other folders. How can I fix this? Using Mac 2.7.8. Thanks! | In Python, how can I run a module that's not in my path? | 0 | 0.066568 | 1 | 0 | 0 | 1,132 |
25,924,639 | 2014-09-19T00:42:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 25,924,663 | 0 | 2 | 0 | false | 0 | 0 | This is how you build the string: firstname[0] + surname[:4] | 1 | 0 | 0 | 0 | Write a function called getUsername which takes two input parameters, firstname (string) and surname (string), and both returns and prints a username made up of the first character of the firstname and the first four characters of the surname. Assume that the given parameters always have at least four characters. | python idle how to create username? | 0 | 0 | 1 | 0 | 0 | 180 |
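A sketch of the answer's one-liner wrapped in the function the assignment asks for (the sample name below is made up):

```python
def getUsername(firstname, surname):
    # first character of the first name + first four characters of the surname
    username = firstname[0] + surname[:4]
    print(username)
    return username

getUsername("Attila", "Hunyadi")  # prints AHuny
```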
25,935,799 | 2014-09-19T14:02:00.000 | 0 | 0 | 1 | 1 | 0 | python,linux | 0 | 25,936,008 | 0 | 1 | 0 | false | 0 | 0 | If you put something in the background, then it's no longer connected to the current shell (or the terminal). So you would need the background process to open a socket so the command line part could send it the command.
In the end, there is no way around creating a new connection to the server every time you start the command line process and close the connection when the command line process exits.
The only alternative is to use the readline module to simulate the command line inside of your script. That way, you can open the connection, use readline to ask for any number of commands to send to the server. Plus you need an "exit" command which terminates the command line process (which also closes the server connection). | 1 | 0 | 0 | 0 | I'm new to python. I'm trying to write an application with command line interface. The main application is communicating with server using tcp protocol. I want it to work in the background so I won't have to connect with the server every time I use interface. What is a proper approach to such a problem?
I don't want the interface to be an infinite loop. I would like to use it like this:
my_app.py command arguments.
Please note that I have no problems with writing interface (I'm using argparse library right now) but don't know what architecture would suit me best and how to implement it in python. | Command line interface application with background process | 0 | 0 | 1 | 0 | 0 | 59 |
25,943,156 | 2014-09-19T22:18:00.000 | 0 | 0 | 1 | 1 | 0 | python,linux,installation,virtualenv,redhat | 0 | 25,943,276 | 0 | 1 | 0 | true | 0 | 0 | With Linux you don't need to worry about where to install files, the OS takes care of that for you. Google CentOS Yum and read the Yum docs on how to install everything. You probably already have Python 2.7 installed; to check, just open the terminal CTRL + ALT + T, and type python. This will start the python interpreter and display the version. The next step would be to see if pip and virtualenv are installed. You can simply type the command at the command prompt (exit python first). If you get something to the effect of command not found then you need to install them. Install pip with the Yum installer and virtualenv with pip. If everything is installed then you just need to make your virtual environment, e.g. virtualenv name_of_directory; if the directory doesn't exist then virtualenv will create it. And now you're done. | 1 | 0 | 0 | 0 | in what order should I install things? My goal is to have python 2.7.6 running on a virtualenv for a project for work. I am working on a Virtual Box machine in CentOS 6.5.
What folders should I be operating in to install things? I have never used linux before today, and was just kind of thrust into this task of getting a program running that requires python 2.7.6 and a bunch of packages for it. Thanks in advance if you can get me command line entries. I have opened about 3 Virtual Boxes and deleted them because I installed things in the wrong order. Please let me know how things should be installed, with command line entries, if possible. | Redhat Python 2.7.6 installation on virtualenv | 0 | 1.2 | 1 | 0 | 0 | 1,684 |
25,964,881 | 2014-09-21T23:22:00.000 | 0 | 0 | 1 | 0 | 0 | python,indexing,slice | 0 | 25,964,934 | 0 | 2 | 0 | true | 0 | 0 | Instead of using index, use split and split on the various separators. E.g., full_name.split(', ')[0] == 'Hun', full_name.split(', ')[1] == 'Attila The', and full_name.split(' ')[-1] == 'The'. You can then easily recombine them with string formatting or simple concatenation. | 1 | 0 | 0 | 0 | I have recently started learning Python, and I received a question: "Write a Python program that asks the user to enter full name in the following format:
LastName, FirstName MiddleName
(for Example: "Hun, Attila The”).
The input will have a single space after the comma and only a single space between the first name and the middle name. Use string methods and operators in Python to convert this to a new string having the form:
FirstName MiddleInitial Period LastName (for example: "Attila T. Hun") and output it."
It is easy for me to do this if I make three different variables, and then reorder/slice them later. But how do I do this in one variable only. I know I will need to slice up until "," for the LastName, but I can't do "[0:,]" as I am not using an integer value, so how do i find the integer value for it, if the lastName will vary from user to user. | How to use the index method efficiently | 0 | 1.2 | 1 | 0 | 0 | 261 |
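A sketch of the split-and-recombine approach from the answer, using the example name from the question:

```python
full_name = "Hun, Attila The"

last, rest = full_name.split(', ')   # 'Hun', 'Attila The'
first, middle = rest.split(' ')      # 'Attila', 'The'

new_name = first + ' ' + middle[0] + '. ' + last
print(new_name)  # Attila T. Hun
```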
25,965,606 | 2014-09-22T01:33:00.000 | 0 | 0 | 0 | 0 | 1 | python,scrapy | 0 | 25,967,819 | 0 | 1 | 0 | false | 1 | 0 | You can maintain a list of urls that u have crawled , whenever you came across a url that is already in the list you can log it and increment a counter. | 1 | 1 | 0 | 0 | One of the Scrapy spiders (version 0.21) I'm running isn't pulling all the items I'm trying to scrape.
The stats show that 283 items were pulled, but I'm expecting well above 300 here. I suspect that some of the links on the site are duplicates, as the logs show the first duplicate request, but I'd like to know exactly how many duplicates were filtered so I'd have more conclusive proof. Preferably in the form of an additional stat at the end of the crawl.
I know that the latest version of Scrapy already does this, but I'm kinda stuck with 0.21 at the moment and I can't see any way to replicate that functionality with what I've got. There doesn't seem to be a signal emitted when a duplicate url is filtered, and DUPEFILTER_DEBUG doesn't seem to work either.
Any ideas on how I can get what I need? | Display the number of duplicate requests filtered in the post-crawler stats | 0 | 0 | 1 | 0 | 1 | 108 |
26,001,543 | 2014-09-23T17:49:00.000 | 1 | 0 | 0 | 0 | 0 | python,decimal,truncate | 0 | 26,002,508 | 0 | 3 | 0 | false | 0 | 0 | What's wrong with the good old-fashioned value - int(value)? | 1 | 1 | 0 | 0 | I am trying to make a calculator that converts cms into yards, feet, and inches. Example: 127.5 cm is 1 yard, 1 inch, etc. But I am just wondering how I am able to retain the value after the decimal place, is there a way to truncate the number before the decimal place. So if the user inputs a value that results into 3.4231 yards, I want to retain the value ".4231" so that I can convert that into feet, and then the same for inches from feet. Sorry if this is unclear. This is for python 3 | Truncating values before the decimal point | 1 | 0.066568 | 1 | 0 | 0 | 177 |
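If you prefer not to subtract manually, `math.modf` from the standard library splits a float into its fractional and whole parts in one call:

```python
import math

yards = 3.4231
frac, whole = math.modf(yards)

print(int(whole))      # 3       -> whole yards
print(round(frac, 4))  # 0.4231  -> leftover fraction to convert to feet/inches
```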
26,028,253 | 2014-09-24T23:41:00.000 | 1 | 0 | 1 | 1 | 0 | python,windows-8 | 0 | 26,028,450 | 0 | 1 | 0 | false | 0 | 0 | I created a file called myfile.txt. It showed up in Explorer as myfile.txt. If you just see myfile (no extension), then go to Folder Options, Advanced, and uncheck "Hide extensions for known file types".
I Right-clicked myfile.txt, selected Rename, and Windows selected just "myfile", not ".txt". I changed the selection to "txt", overwrote it with "py" and hit enter. Windows popped up a message warning that I was changing the extension. I clicked OK and the file was renamed.
An alternate approach is to open a command prompt, cd to the directory and use "move" to change the name.
Yet another option, if you are doing this from a text editor, click "Save As", change the save dialog's "Save As Type" drop-down to All Files, change the name to .py and hit OK. You end up with a .txt and a .py. | 1 | 1 | 0 | 0 | How do I change file extensions in windows 8? I tried and my system will not recognize the change.
I tried changing from .txt to .py for python so I can use it in IDLE. | how to change file extensions in windows 8? i tried and it will not recognize the change | 0 | 0.197375 | 1 | 0 | 0 | 5,230
26,050,414 | 2014-09-26T00:53:00.000 | 1 | 0 | 0 | 1 | 0 | python-2.7,wxpython | 1 | 26,051,264 | 0 | 1 | 0 | false | 0 | 1 | I don't know if Cloud9 supports it but normally to run a remote GUI application you would have ssh forward the X11 communication over the ssh connection via a tunnel. So basically the application is running on the remote system and it is communicating with a local X11 server which provides you with the display and handling of the mouse and keyboard.
If you run ssh with the -X parameter then it will attempt to set up the X11 tunnel and set $DISPLAY in the remote shell so any GUI applications you run there will know how to connect to the X11 tunnel. Be aware however that this is something that can be turned off on the remote end, so ultimately it is up to Cloud9 whether they will allow you to do this.
Unable to access the X Display, is $DISPLAY set properly?
After searching and searching, I can't seem to find a guide or anything in the docs that detail how to set $DISPLAY in the script. X display is installed and active on my server, but I don't know how to configure the c9 script to access it properly. Any assistance would be appreciated! | Running Python GUI apps on C9.io | 0 | 0.197375 | 1 | 0 | 0 | 686 |
26,054,851 | 2014-09-26T08:04:00.000 | 0 | 0 | 0 | 1 | 0 | c#,python,linux | 0 | 39,438,068 | 0 | 1 | 0 | false | 0 | 0 | For Linux you will find Statvfs in Mono.Unix.Native, in the Mono.Posix assembly. | 1 | 2 | 0 | 0 | I have a question. In my c# application I need to get the free space of a directory. According to my research, GetdiskfreespaceEx is proper and it works for me in my windows xp. Now I'm wondering if it works the same in a linux system. Since I wrote my c# program according to a python one, and for this function the developer of the python one made 2 cases: the windows os or else. For the windows case "ctypes.windll.kernel32.GetDiskFreeSpaceExW" is used while in else case "os.statvfs(folder)" is used.
I did some more research but haven't found anything saying whether GetDiskFreeSpaceEx could be used on Linux. Could anyone tell me? If not, are there any ways to get free disk space on Linux in C#? Thanks in advance! | Get free space of a directory in linux | 0 | 0 | 1 | 0 | 0 | 854
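For reference, the Python side mentioned in the question looks like this on POSIX systems; it mirrors what the Mono.Unix.Native Statvfs call from the answer exposes in C# (this will not run on Windows, where GetDiskFreeSpaceEx is the equivalent):

```python
import os

st = os.statvfs('/')                    # POSIX only
free_bytes = st.f_bavail * st.f_frsize  # space available to unprivileged users
total_bytes = st.f_blocks * st.f_frsize

print("free: %.1f GiB of %.1f GiB" % (free_bytes / 2.0**30, total_bytes / 2.0**30))
```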
26,069,257 | 2014-09-26T22:36:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,wxpython,ipython | 0 | 26,082,455 | 0 | 2 | 0 | false | 0 | 0 | Check out the wxPython demo, I would probably use the wx.lib.masked.numctrl for that. | 1 | 0 | 0 | 0 | Hi I am a beginner on Python and was wondering how you can make a user input a number that contains two decimal places or less | Python How to force user input to be to 2dp | 0 | 0 | 1 | 0 | 0 | 130 |
26,070,040 | 2014-09-27T00:16:00.000 | 12 | 0 | 0 | 0 | 1 | python,postgresql,psycopg2,python-multiprocessing | 0 | 26,072,257 | 0 | 1 | 0 | true | 0 | 0 | You can't sanely share a DB connection across processes like that. You can sort-of share a connection between threads, but only if you make sure the connection is only used by one thread at a time. That won't work between processes because there's client-side state for the connection stored in the client's address space.
If you need large numbers of concurrent workers, but they're not using the DB all the time, you should have a group of database worker processes that handle all database access and exchange data with your other worker processes. Each database worker process has a DB connection. The other processes only talk to the database via your database workers.
Python's multiprocessing queues, fifos, etc offer appropriate messaging features for that. | 1 | 4 | 0 | 0 | I have a Python script running as a daemon. At startup, it spawns 5 processes, each of which connects to a Postgres database. Now, in order to reduce the number of DB connections (which will eventually become really large), I am trying to find a way of sharing a single connection across multiple processes. And for this purpose I am looking at the multiprocessing.sharedctypes.Value API. However, I am not sure how I can pass a psycopg2.connection object using this API across processes. Can anyone tell me how it might be done?
I'm also open to other ideas in order to solve this issue.
The reason why I did not consider passing the connection as part of the constructor to the 5 processes is mutual exclusion handling. I am not sure how I can prevent more than one process from accessing the connection if I follow this approach. Can someone tell me if this is the right thing to do? | Share connection to postgres db across processes in Python | 1 | 1.2 | 1 | 1 | 0 | 5,128 |
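The database-worker pattern from the answer can be sketched with the standard library alone. In this sketch the actual `cursor.execute` call is replaced by a placeholder string so it stays runnable without psycopg2; only the worker process would ever hold the real connection, which is what gives you mutual exclusion for free:

```python
import multiprocessing as mp

def db_worker(tasks, results):
    # In the real program this process would open ONE psycopg2 connection
    # here and reuse it for every task it pulls off the queue.
    for sql in iter(tasks.get, None):    # None is the shutdown sentinel
        results.put("executed: " + sql)  # stand-in for cursor.execute(sql)

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    worker = mp.Process(target=db_worker, args=(tasks, results))
    worker.start()

    tasks.put("SELECT 1")       # any other process may enqueue work
    print(results.get())        # executed: SELECT 1

    tasks.put(None)             # ask the worker to shut down
    worker.join()
```

Because only `db_worker` touches the connection, requests are serialized by the queue and no locking around the connection object is needed.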
26,080,694 | 2014-09-28T01:13:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,django,forms,security | 0 | 26,081,085 | 0 | 1 | 0 | false | 1 | 0 | I have a django app which needs to receive post requests from another
site using html+javascript.
You don't have to! You can build an API instead ;)
You create an API call - small site calls the API of main site. In that situation, the form is handled by a view in small site and the API is called via the server. Check out Django REST Framework.
Note: That solution wouldn't stop you from using AJAX but it would avoid cross domain issues. | 1 | 0 | 0 | 0 | I have a django app which needs to receive post requests from another site using html+javascript.
For example I have mydjangosite.com and smallhtmlsite.com
What I want is: a user visits smallhtmlsite.com and fills in a form, then he pushes the submit button and mydjangosite.com receives the request and creates objects (form-saving models, actually).
So I will have a view which will handle these requests. But how can this be done securely?
26,102,680 | 2014-09-29T14:49:00.000 | 1 | 0 | 0 | 0 | 1 | python,flask,windows-firewall | 1 | 28,746,633 | 0 | 2 | 0 | false | 1 | 0 | When running as a service the program running the service is not python.exe but rather pythonservice.exe. You will have to add that to the allowed programs in the Windows Firewall Setup. In my case it is located under C:\Python33\Lib\site-packages\win32\pythonservice.exe. | 1 | 3 | 0 | 0 | We have developed an app in python and are using flask to expose its api via http requests.
all this on WINDOWS -
Everything works ok and we have tested in-house with no problems and we are now trying to use the app in the real world - we have gotten our IT dept to give us a public facing ip/port address (forwarded through a firewall ??) and now we can't access the server/app at all.
After a bit of digging, we've found the problem has something to do with the Windows Firewall configuration: when it's on it won't work, when it's off everything is fine.
the flask app code is run like so: app.run(debug=False, host='0.0.0.0', port=8080)
the port 8080 is set up in the Firewall Exceptions, as is python.exe in the Program Exceptions
netstat -a shows the app is sitting there awaiting connection.
If I try to access the site though chrome I get the error: ERR_CONNECTION_TIMED_OUT.
With the firewall on i'm never seeing any "hits" come through to the app at all.
Is there some other configuration I'm missing?
Many thanks. | how do i configure python/flask for public access with windows firewall | 0 | 0.099668 | 1 | 0 | 0 | 8,285 |
26,108,160 | 2014-09-29T14:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,sql,csv | 0 | 26,108,522 | 0 | 2 | 0 | true | 0 | 0 | The csv module can easily give you the column names from the first line, and then the values from the other ones. The hard part will be to guess the correct column types. When you load a csv file into an Excel worksheet, you only have a few types: numeric, string, date.
In a database like MySQL, you can define the size of string columns, and you can give the table a primary key and possibly other indexes. You will not be able to guess that part automatically from a csv file.
In the simplest approach, you can treat all columns as varchar(255). It is really uncommon to have fields in a csv file that do not fit in 255 characters. If you want something more clever, you will have to scan the file twice: the first time to check the maximum size of each column, and at the end you could take the smallest power of 2 greater than that.
The next step would be to check whether any column contains only integers or floating point values. It begins to be harder to do that automatically, because the representation of floating point values may differ depending on the locale. For example 12.51 in an English locale would be 12,51 in a French locale. But Python can give you the locale.
The hardest thing would be possible date or datetime fields, because there are many candidate formats, either purely numeric (dd/mm/yyyy or mm/dd/yy) or using plain text (Monday, 29th of September).
For the reading part, the csv module will give you all what you need. | 1 | 0 | 0 | 0 | I have CSV files that I want to make database tables from in mysql. I've searched all over and can't find anything on how to use the header as the column names for the table. I suppose this must be possible. In other words, when creating a new table in MySQL do you really have to define all the columns, their names, their types etc in advance. It would be great if MySQL could do something like Office Access where it converts to the corresponding type depending on how the value looks.
I know this is maybe a too broadly defined question, but any pointers in this matter would be helpful. I am learning Python too, so if it can be done through a python script that would be great too.
Thank you very much. | create database by load a csv files using the header as columnnames (and add a column that has the filename as a name) | 1 | 1.2 | 1 | 1 | 0 | 3,067 |
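A sketch of the "treat everything as varchar(255)" fallback from the answer: read the header with the csv module and emit a CREATE TABLE statement. The table name and sample data here are invented, and `io.StringIO` stands in for an open file:

```python
import csv
import io

# io.StringIO replaces open('data.csv') so the sketch is self-contained
sample = io.StringIO("name,age,city\nAlice,30,Oslo\nBob,25,Bergen\n")

headers = next(csv.reader(sample))
columns = ", ".join("`%s` VARCHAR(255)" % h for h in headers)
ddl = "CREATE TABLE mytable (%s);" % columns

print(ddl)
# CREATE TABLE mytable (`name` VARCHAR(255), `age` VARCHAR(255), `city` VARCHAR(255));
```

The remaining rows from the same reader can then be fed to a parameterized INSERT statement.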
26,109,520 | 2014-09-29T21:52:00.000 | 2 | 0 | 1 | 0 | 0 | rethinkdb,rethinkdb-python | 0 | 26,109,631 | 0 | 1 | 0 | true | 0 | 0 | The process that appends new entries to your json file should probably run a query to insert the same entries in RethinkDB.
Or you can have a cron job that
get the last entry saved from rethinkdb
read your json file for new entries
insert new entries | 1 | 1 | 0 | 0 | I have a log file by the name log.json.
A simple insert in rethinkdb works perfectly.
Now this json file gets updated every second; how do I make sure that rethinkdb gets the new data automatically? Is there a way to achieve this, or do I have to simply use the API and insert into the db as well as log to a file if I want to?
Thanks. | Insert json log files in rethinkdb? | 1 | 1.2 | 1 | 0 | 0 | 167 |
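A rough sketch of the cron-job logic from the answer: remember how far into the log file you have read, and on each run pick up only the lines appended since. File names and the offset-file bookkeeping are illustrative, and it assumes one JSON object per line; the RethinkDB insert itself is shown commented out since it needs a running server.

```python
import json
import os

def read_new_entries(log_path, offset_path):
    """Return JSON entries appended to log_path since the last run,
    remembering our position in a small offset file."""
    offset = 0
    if os.path.exists(offset_path):
        with open(offset_path) as f:
            offset = int(f.read() or 0)
    entries = []
    with open(log_path) as f:
        f.seek(offset)              # skip everything already processed
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
        new_offset = f.tell()
    with open(offset_path, "w") as f:
        f.write(str(new_offset))    # persist position for the next run
    return entries

# Hypothetical insert step with the official Python driver:
# import rethinkdb as r
# conn = r.connect("localhost", 28015)
# entries = read_new_entries("log.json", "log.offset")
# if entries:
#     r.db("test").table("logs").insert(entries).run(conn)
```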
26,113,419 | 2014-09-30T05:33:00.000 | 2 | 0 | 0 | 1 | 0 | python,sockets,udp,padding | 0 | 26,326,206 | 0 | 1 | 0 | true | 0 | 0 | This is pretty much impossible without playing around with the Linux drivers. This isn't the best answer, but it should point anyone else looking to do this in the right direction.
Type
sudo ethtool -d eth0
to see whether your driver has the "pad short packets" feature enabled. | 1 | 0 | 0 | 0 | I am trying to remove null padding from UDP packets sent from a Linux computer. Currently the packet is padded out to 60 bytes.
I am constructing a raw socket using AF_PACKET and SOCK_RAW. I build everything myself: the Ethernet frame header, the IP header (in which I specify a packet size of less than 60), and the UDP packet itself.
I send it over a local network, and the packet observed in Wireshark has null padding.
Any advice on how to overcome this issue? | Removing padding from UDP packets in python (Linux) | 0 | 1.2 | 1 | 0 | 1 | 1,669 |
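For context on why the padding appears at all: Ethernet requires a minimum frame size of 64 bytes including the 4-byte FCS, so any shorter frame is padded up to 60 bytes as seen in a capture, regardless of what the IP header says. A small sketch of the arithmetic (the header size assumes a plain untagged Ethernet II frame):

```python
ETH_HEADER = 14        # dst MAC (6) + src MAC (6) + EtherType (2)
MIN_FRAME_NO_FCS = 60  # 64-byte Ethernet minimum minus the 4-byte FCS

def expected_padding(ip_packet_len):
    """Bytes of null padding the NIC/driver appends so the frame
    reaches the Ethernet minimum size."""
    frame_len = ETH_HEADER + ip_packet_len
    return max(0, MIN_FRAME_NO_FCS - frame_len)

# A UDP packet with 4 bytes of payload: 20 (IP) + 8 (UDP) + 4 = 32 bytes,
# so the frame is 46 bytes on the wire and gets 14 bytes of padding.
```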
26,116,350 | 2014-09-30T08:39:00.000 | 5 | 0 | 0 | 0 | 0 | python,text,plot,formatting,pyqtgraph | 0 | 26,129,313 | 0 | 2 | 0 | true | 0 | 1 | It is not documented, but the method you want is TextItem.setHtml(). | 1 | 1 | 0 | 0 | When creating a TextItem to be added to a PlotItem in PyQtGraph, I know it is possible to format the text using HTML code. However, I was wondering how to format the text (i.e. change the font size) when updating the text through TextItem.setText(). Or do I need to destroy and re-create the TextItem? | How to set font size using TextItem.setText() in PyQtGraph? | 1 | 1.2 | 1 | 0 | 0 | 1,991
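Rather than destroying the item, you can rebuild the HTML markup on every update. A small helper keeps the update call readable (the helper is illustrative; the actual GUI calls are shown commented out since they need a running Qt application):

```python
def html_label(text, size_pt=12, color="#FFFFFF", bold=False):
    """Build an HTML snippet suitable for TextItem.setHtml()."""
    weight = "bold" if bold else "normal"
    style = "font-size: %dpt; color: %s; font-weight: %s;" % (size_pt, color, weight)
    return '<span style="%s">%s</span>' % (style, text)

# In a running pyqtgraph application you would update the item like this:
# import pyqtgraph as pg
# text_item = pg.TextItem()
# plot_item.addItem(text_item)
# text_item.setHtml(html_label("peak: -42 dBm", size_pt=16, bold=True))
```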
26,136,042 | 2014-10-01T07:18:00.000 | 0 | 0 | 1 | 0 | 0 | python,path-finding,breadth-first-search | 0 | 26,190,251 | 0 | 1 | 0 | true | 0 | 0 | I found the answer. The split lists were a bad idea; I used linked lists instead. You can create a class called "Node" and then connect the nodes yourself. One of the properties of the node is its parent node. With the parent node, you can track all the way back up the tree. Look into linked lists. | 1 | 0 | 0 | 0 | So I understand breadth first search, but I am having trouble with the implementation. When there are multiple branches, you are expanding the edges or successors of each branch on every iteration of the function. But when one of your nodes returns GOAL, how do you get all of the previous moves to that location?
Do you create a list of actions you took, and if there is a fork in the path you create two new lists with the actions you took to get to the branch, plus the moves you will take while on the branch?
This is a pain in the butt to implement, especially in Python, which I'm fairly new to. Is there a better way to organize this and get the path I need? | Breadth first search how to find a particular path out of several branches | 0 | 1.2 | 1 | 0 | 0 | 111
26,136,643 | 2014-10-01T07:53:00.000 | 1 | 0 | 0 | 1 | 0 | python,google-app-engine,cmd | 0 | 26,136,777 | 0 | 1 | 0 | true | 1 | 0 | dev_appserver doesn't "run a file" at all. It launches the GAE dev environment, and routes requests using the paths defined in app.yaml just like the production version.
If you need to route your requests to a specific Python file, you should define that in app.yaml. | 1 | 0 | 0 | 0 | In order to start Google App Engine through the cmd I use dev_appserver.py --port=8080, but I can't figure out how it picks a file to run from the current directory.
My question is: is there an argument specifying which file to run on the server, out of all the possible files in the directory? | use dev_appserver.py for specific file in a directory | 1 | 1.2 | 1 | 0 | 0 | 52
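A minimal app.yaml sketch of the routing the answer describes, mapping URL patterns to handler scripts (Python 2 runtime conventions; the module and handler names are illustrative):

```yaml
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /api/.*
  script: api.app    # WSGI app object in api.py (hypothetical)
- url: /.*
  script: main.app   # everything else goes to main.py
```

dev_appserver.py reads this file from the application directory and routes each incoming request accordingly, exactly as the production environment would.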
26,138,460 | 2014-10-01T09:43:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading,pyqt,pyqt4,usrp | 0 | 26,153,975 | 0 | 2 | 0 | false | 0 | 1 | There are several ways to do this, but you can basically either:
Break up your waterfall-sink work into chunks that the GUI can execute periodically. For example, instead of continuously updating the waterfall sink in a function the GUI calls, perform only a "short" update (one "time step") and have the function return right away; then have the function called periodically via a QTimer.
Make the waterfall sink execute in a separate thread, using a QObject instantiated in a QThread instance, and have the sink function emit a signal at regular intervals, say at every "time step" of the waterfall update. | 1 | 0 | 0 | 0 | I am trying to make a GUI in Python using PyQt4 which incorporates a waterfall sink connected to a USRP. The problem is that the data should be shown in the waterfall sink continuously, which makes the GUI freeze, and I cannot use the other buttons in the meantime. I looked into using threads, but so far my understanding is that I can only put functions in a thread that return a result at the end, not functions that produce results continuously which I want to see in the main GUI.
Any idea how to show the continuous results from the waterfall sink without freezing the main GUI? | Using threads with python pyqt? | 0 | 0 | 1 | 0 | 0 | 352
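Qt specifics aside, the core pattern in the answer, a background producer streaming partial results to the main loop instead of returning one value at the end, can be sketched with the standard library alone. In real PyQt code you would replace the queue with signals emitted from a QObject living in a QThread, and the blocking loop with a QTimer slot; the version below is a non-GUI stand-in so the mechanism is visible.

```python
import threading
import queue
import time

def worker(out_q, n_updates, stop_event):
    """Background producer: emits a result every 'time step' instead of
    one result at the end (stand-in for the waterfall sink update)."""
    for i in range(n_updates):
        if stop_event.is_set():
            break
        out_q.put("spectrum row %d" % i)  # in PyQt: self.row_ready.emit(row)
        time.sleep(0.01)
    out_q.put(None)                       # sentinel: producer finished

def run_gui_loop(n_updates=5):
    """Main-thread consumer: drains the queue as results arrive
    (stand-in for a QTimer slot that redraws the plot)."""
    q, stop = queue.Queue(), threading.Event()
    t = threading.Thread(target=worker, args=(q, n_updates, stop))
    t.start()
    rows = []
    while True:
        item = q.get()                    # a QTimer slot would use get_nowait()
        if item is None:
            break
        rows.append(item)                 # here the GUI would update the waterfall
    t.join()
    return rows
```

Because the producer never blocks the consumer for long, the "GUI" side stays responsive while results keep streaming in.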