[Dataset viewer header — one row per answer, with columns: Available Count (int64), AnswerCount (int64), GUI and Desktop Applications (int64, 0/1), Users Score (int64), Q_Score (int64), Python Basics and Environment (int64, 0/1), Score (float64), Networking and APIs (int64, 0/1), Question (string), Database and SQL (int64, 0/1), Tags (string), CreationDate (string), System Administration and DevOps (int64, 0/1), Q_Id (int64), Answer (string), Data Science and Machine Learning (int64, 0/1), ViewCount (int64), is_accepted (bool, 2 classes), Web Development (int64, 0/1), Other (int64, 0/1), Title (string), A_Id (int64).]
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 sites setup using named vhosts.
site1.domain.net (PHP)
site2.domain.net (Python)
site3.domain.net (Ruby)
site4.domain.net (PHP)
site5.domain.net (PHP)
In the vhost for site1 I also have the ServerAlias set to *.domain.net as I want any undefined addresses to go to that address.
When I add the *.domain.net to that vhost, the python and the ruby sites redirect to site1 instead of their named vhost. All the php sites work fine.
My guess is that the Python and Ruby sites using WSGI and Passenger respectively has something to do with why the wrong site is loading.
I was reading something about UseCanonicalName, but I don't see how that affects this.
I am not just interested in a solution but also a reason why (or how) these other two languages handle their vhost config and why such a change makes a difference.
Thank you for your time and help. | 0 | php,python,ruby,apache2,vhosts | 2013-04-09T15:02:00.000 | 0 | 15,905,487 | I don't think it has anything to do with the usage of mod_wsgi and Phusion Passenger. I think that's just how ServerAlias works.
You can try this alternative:
Remove the ServerAlias.
Set up a vhost for '*.domain.net' (or, if that doesn't work, '.domain.net' or 'domain.net') which redirects to site1.domain.net.
This also has the advantage that your users cannot bookmark a non-canonical subdomain name.
By the way did you know that Phusion Passenger also supports WSGI? | 0 | 39 | false | 1 | 1 | Apache2 Ruby and Python load default website when *.domain.net is set in vhost file | 15,921,626 |
1 | 3 | 0 | 1 | 0 | 0 | 1.2 | 0 | I'm just trying to do a simple batch insert test for 2k nodes and this is timing out. I'm sure it's not a memory issue because I'm testing with an EC2 xLarge instance, and I changed the Neo4j Java heap and datastore memory parameters. What could be going wrong? | 0 | python,neo4j,py2neo | 2013-04-10T08:16:00.000 | 0 | 15,920,449 | There is an existing bug with large batches due to Python's handling of the server streaming format. There will be a fix for this released in version 1.5 in a few weeks' time. | 0 | 211 | true | 1 | 1 | py2neo Batch Insert timing out for even 2k nodes | 15,927,439
1 | 1 | 0 | 4 | 0 | 1 | 0.664037 | 0 | It seems the definition of weak typing (not to be confused with dynamic typing) is that a binary operator can work even when its two operands have different types.
Python programmers argue that Python is strongly typed because 1+"hello" will fail instead of silently doing something else. In contrast, other languages which are commonly considered weakly typed (e.g. PHP, JavaScript, Perl) will silently convert one or both of the operands. For example, in JavaScript, 1+"hello" -> "1hello", while in Perl, 1+"hello" -> 1, but 1+"5" -> 6.
Now, I had the impression that Java is considered a strongly typed language, yet auto(un)boxing and widening conversions seem to contradict this. For example, 1+new Integer(1) -> 2, "hello"+1 -> "hello1", 'A'+1 -> 66, and long can be converted into float automatically even though it typically gets truncated. Is Java weakly typed? What's the difference between weak typing, autoboxing, and widening conversions? | 0 | java,php,javascript,python,weak-typing | 2013-04-10T20:23:00.000 | 0 | 15,935,699 | Weak Typing is when certain conversions and ad-hoc polymorphisms are implicitly performed if the compiler/interpreter feels the need for it.
Autoboxing is when literals and non-object types are automatically converted to their respective Object types when needed. (For example, Java will allow you to call methods on a string literal as if it were a string object.) This has nothing to do with the typing system. It's really just syntactic sugar to avoid having to create objects explicitly.
Widening conversions are a form of weak typing. In a very strict strongly typed language, this wouldn't be allowed. But in languages like Java, it is allowed because it has no negative side effects. Something as tiny as this is hardly enough to no longer consider Java a strongly typed language.
Java also overloads the + operator for string concatenation. It's definitely a feature seen in weakly typed languages, but again, not a big enough deal to call Java weakly typed. (Even though I think it's a really stupid idea.) | 0 | 1,195 | false | 0 | 1 | What is the difference between weak typing, autoboxing, widening conversions? | 15,936,117 |
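The Python side of this contrast is easy to demonstrate directly — a minimal sketch:

```python
# Python refuses to mix unrelated types in '+': this is "strong" typing.
try:
    1 + "hello"
    error_message = None
except TypeError as exc:
    error_message = str(exc)

# But numeric widening still happens: the int is promoted to float.
widened = 1 + 2.5  # -> 3.5, an implicit int-to-float conversion

print(error_message)  # the TypeError explains the unsupported operand types
print(widened)
```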
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have installed tramp in my Emacs properly. I can use it to edit remote txt files; however, once I create a *.py file on the remote host and edit it, after I type 2 letters the whole Emacs freezes and doesn't respond. Could anyone give me some hints on this issue? | 0 | python,emacs,tramp | 2013-04-11T03:33:00.000 | 0 | 15,940,277 | I figured out my mistake. Tramp seems incompatible with one of Python's auto-complete packages, and I removed it. Then tramp works well. | 0 | 480 | true | 0 | 1 | How to use emacs tramp to edit remote python file? | 15,959,250
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have a C++ project that is called from Python (via boost-python) and I want to debug the C++ code from the Python process. How can I do that? In Windows with Visual Studio I can use the "attach to process" functionality. How do I achieve the same in Eclipse?
Thanks | 0 | c++,python,eclipse,debugging,eclipse-cdt | 2013-04-11T08:26:00.000 | 1 | 15,944,011 | For me it works great just adding a debug configuration in C/C++ for the program /usr/bin/python (or whatever search path you have to the python interpreter) and then put the python program you want to run as the arguments. Put the breakpoints you want in the C-code and you should be all set for running the debug configuration and opening the debug perspective.
If it still does not work you may also check that you are using Legacy (or Standard) Process Launcher. For some reason the GDB process launcher does not seem to work here. | 0 | 750 | false | 0 | 1 | Debug a Python C++ extension from Eclipse (under Linux) | 30,459,774 |
2 | 4 | 0 | 0 | 0 | 0 | 0 | 1 | I'm writing a simple Twitter bot in Python and was wondering if anybody could answer and explain the question for me.
I'm able to make Tweets, but I haven't had the bot retweet anyone yet. I'm afraid of tweeting a user's tweet multiple times. I plan to have my bot just run based on Windows Scheduled Tasks, so when the script is run (for example) the 3rd time, how do I get it so the script/bot doesn't retweet a tweet again?
To clarify my question:
Say that someone tweeted at 5:59pm "#computer". Now my twitter bot is supposed to retweet anything containing #computer. Say that when the bot runs at 6:03pm it finds that tweet and retweets it. But then when the bot runs again at 6:09pm it retweets that same tweet again. How do I make sure that it doesn't retweet duplicates?
Should I create a separate text file and add in the IDs of the tweets and read through them every time the bot runs? I haven't been able to find any answers regarding this and don't know an efficient way of checking. | 0 | python,twitter | 2013-04-11T21:14:00.000 | 0 | 15,958,980 | Twitter is set such that you can't retweet the same thing more than once. So if your bot gets such a tweet, it will be redirected to an Error 403 page by the API. You can test this policy by reducing the time between each run by the script to about a minute; this will generate the Error 403 link as the current feed of tweets remains unchanged. | 0 | 2,022 | false | 0 | 1 | How do I make sure a twitter bot doesn't retweet the same tweet multiple times? | 30,488,072 |
2 | 4 | 0 | 0 | 0 | 0 | 0 | 1 | (Same question as above.) | 0 | python,twitter | 2013-04-11T21:14:00.000 | 0 | 15,958,980 | You should store somewhere the timestamp of the latest tweet processed; that way you won't go through the same tweets twice, hence not retweeting a tweet twice.
This should also make tweet processing faster (because you only process each tweet once). | 0 | 2,022 | false | 0 | 1 | How do I make sure a twitter bot doesn't retweet the same tweet multiple times? | 15,959,518 |
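A minimal sketch of the state-keeping idea from these answers — persist what you have already handled between scheduled runs. The file name is a placeholder, and the actual fetch/retweet calls are assumed (not shown), since they depend on your Twitter library:

```python
import json
import os

SEEN_FILE = "seen_tweets.json"  # hypothetical state file kept between runs

def load_seen(path=SEEN_FILE):
    """Return the set of tweet IDs already retweeted on earlier runs."""
    if os.path.exists(path):
        with open(path) as f:
            return set(json.load(f))
    return set()

def pick_new(tweet_ids, seen):
    """Keep only IDs that have not been retweeted yet, preserving order."""
    return [t for t in tweet_ids if t not in seen]

def save_seen(seen, path=SEEN_FILE):
    with open(path, "w") as f:
        json.dump(sorted(seen), f)

# On each scheduled run (fetch/retweet are assumed helpers, not shown):
#     seen = load_seen()
#     ids = [t.id for t in fetch_matching_tweets("#computer")]
#     for tid in pick_new(ids, seen):
#         retweet(tid)
#     save_seen(seen | set(ids))
```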
1 | 2 | 0 | 0 | 15 | 0 | 0 | 0 | I am building a two-factor authentication system based on the TOTP/HOTP.
In order to verify the otp both server and the otp device must know the shared secret.
Since HOTP secret is quite similar to the user's password, I assumed that similar best practices should apply. Specifically it is highly recommended to never store unencrypted passwords, only keep a salted hash of the password.
Neither RFCs, nor python implementations of HOTP/TOTP seem to cover this aspect.
Is there a way to use one-way encryption of the OTP shared secret, or is it a stupid idea? | 0 | python,authentication,encryption,google-authenticator | 2013-04-12T02:39:00.000 | 0 | 15,962,195 | Definition: HOTP(K,C) = Truncate(HMAC(K,C)) & 0x7FFFFFFF - where Kis a secret key and C is a counter. It is designed so that hackers cannot obtain K and C if they have the HOTP string since HMAC is a one-way hash (not bidirectional encryption).
K & C needs to be protected since losing that will compromise the entire OTP system. Having said that, if K is found in a dictionary and we know C (eg: current time), we can generate the entire dictionary of HOTP/TOTP and figure out K.
Applying one way encryption to HOTP/TOTP (ie: double encryption) would mathematically make it harder to decode, although it doesn't prevent other forms of attack (eg: keystroke logging) or applying the same encryption to the dictionary list of HOTP/TOTP.
It is human nature to reuse the same set of easily-remembered-password for EVERYTHING and hence the need to hide this password on digital devices or when transmitting over the internet.
Implementation of security procedure or protocol is also crucial, it is like choosing a good password K but leave it lying around the desk for everyone, or the server holding K (for HMAC) is not inside a private network protected by a few layers of firewall. | 0 | 3,075 | false | 0 | 1 | Is it possible to salt and or hash HOTP/TOTP secret on the server? | 16,110,166 |
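The HOTP(K,C) definition above is small enough to implement with the standard library alone. This follows RFC 4226 (HMAC-SHA1 over an 8-byte big-endian counter, dynamic truncation to 31 bits, then the last d decimal digits) and reproduces the RFC's own test vector:

```python
import hashlib
import hmac
import struct

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1, dynamic truncation, last `digits` digits."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % (10 ** digits)

# RFC 4226 test secret "12345678901234567890" gives 755224 for counter 0.
print(hotp(b"12345678901234567890", 0))  # -> 755224
```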
Hi, I am getting an error
"IOError: decoder jpeg not available"
when trying to implement some functions from the PIL.
What I would like to do is remove PIL, install the JPEG decoder, then re-install PIL, but I'm lost as to how to uninstall PIL.
Any help would be greatly appreciated | 0 | jpeg,python-imaging-library,uninstallation,raspberry-pi | 2013-04-12T13:42:00.000 | 0 | 15,972,941 | You can do this to re-install PIL
pip install -I PIL | 0 | 429 | true | 0 | 1 | Remove PIL from raspberry Pi | 16,268,144 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am using a C++ broker with clients written in C++, Python, and Java. If we run the system overnight, it reliably does not send/receive messages by morning. All messages are exchanged over topics with subjects designating the destination. I have 3 questions:
1.) Should we be using queues? Is there an advantage to using queues over topics? What is the design decision that picks a queue over a topic? Queues seem more rigid (i.e. if you know node A sent a request and wants a response, you would send a response right back; pub/sub).
2.) If a message goes unacknowledged, what can happen? I discovered that the Python module was missing a session.acknowledge(). Could this be causing our overnight failures? I discovered this problem today so I will hopefully have more insight tomorrow. The remedy has been to restart the qpidd service. (We are running on x64 Linux).
3.) Is this a good reason to use cluster fail over? | 0 | python,qpid | 2013-04-12T14:21:00.000 | 1 | 15,973,821 | 1) That depends on architecture. Both methods, queues and topics, can get messages from many sources to many destinations. Topics get messages to all listeners, queues get message to one of the listeners - whoever grabs the message first.
2) Are there any error or log messages pertaining to failure? I suspect you are running out of resources.
3) No, you should figure out why your messaging fails before 24 hours. | 0 | 329 | true | 0 | 1 | Qpid reliability | 16,147,555 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | If I take multiple images in different fluorescent channels (after staining the cells with some antibody/marker), how can I automatically quantitate the fraction of cells positive for each marker? Has anyone done something like this in Python?
I can already use Fiji (ImageJ) to count the cells containing only one staining type, but I can't make it run a selective count on merged images which contain two staining types. Since Fiji interacts well with python, I was thinking of writing a script that looks at each respective image containing only one staining type and then obtain the x-y coordinates of the respective image and check for matches between. I am not sure if that's a good idea though and I was wondering, if anyone has done something similar or has a more efficient way of getting the task done?
Thanks for your help! | 0 | python,opencv,imaging,imagej | 2013-04-13T09:03:00.000 | 0 | 15,986,114 | You could use cont = cv2.findcontours to find the almost round shaped cells and count them
with len(cont). | 0 | 1,007 | false | 0 | 1 | Cell Counting: Selective; Only count cells positive for all stainings | 15,991,413 |
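The coordinate-matching idea from the question can be sketched with the standard library alone. Here the centroid lists and the `tol` radius are assumptions, and the per-channel cell detection (e.g. with cv2.findContours) is assumed to have happened already:

```python
import math

def double_positive(coords_a, coords_b, tol=10.0):
    """Count cells detected in both channels: a centroid from channel A
    matches channel B if some B centroid lies within `tol` pixels.
    Each B centroid is consumed at most once (greedy matching)."""
    unused = list(coords_b)
    matched = 0
    for (xa, ya) in coords_a:
        for i, (xb, yb) in enumerate(unused):
            if math.hypot(xa - xb, ya - yb) <= tol:
                matched += 1
                del unused[i]
                break
    return matched

a = [(10, 10), (50, 50), (90, 20)]   # centroids from stain 1
b = [(12, 9), (200, 200), (51, 48)]  # centroids from stain 2
print(double_positive(a, b))  # -> 2 cells are positive for both stains
```

The fraction of double-positive cells is then `double_positive(a, b) / len(a)`.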
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I want to integrate some logic written in Python — the code does some searching — into a website built in PHP. Can anyone tell me whether a Python script can be included in PHP? If yes, how can I do that?
Criteria:
Input to the Python script will come from PHP or HTML [either text or a file], and the output of the Python script is either displayed directly on the page, passed through PHP, or stored in MySQL and shown through PHP. [Please suggest the best option.] | 0 | php,python-2.7 | 2013-04-15T13:38:00.000 | 0 | 16,016,645 | If you have access to exec, you can run the python interpreter. However, that's:
Overkill
Not necessarily wise
A major waste of resources
If your logic is simple, why don't you write it in PHP? Furthermore, if your logic is not simple...why don't you make an API of some sort to access it and favour communication rather than code deduplication? | 0 | 69 | false | 0 | 1 | Can Python Script be included in PHP? | 16,016,728 |
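If you do go the exec route, here is a hedged sketch of what the Python side of such a bridge might look like. The file name, the PHP call in the comment, and `run_search` are all placeholders; the point is the contract — query in via argv, one JSON object out via stdout:

```python
#!/usr/bin/env python
# search_backend.py -- hypothetical Python side of a PHP bridge.
# PHP might call it roughly like:
#   $out = shell_exec('python search_backend.py ' . escapeshellarg($query));
# and then json_decode($out).
import json
import sys

def run_search(query):
    """Stand-in for the real searching logic."""
    corpus = ["python in php", "php basics", "search in python"]
    return [line for line in corpus if query.lower() in line]

def main(argv):
    query = argv[1] if len(argv) > 1 else ""
    # Emit a single JSON object so the PHP side can parse it reliably.
    print(json.dumps({"query": query, "results": run_search(query)}))

if __name__ == "__main__":
    main(sys.argv)
```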
1 | 5 | 0 | 3 | 115 | 0 | 0.119427 | 0 | I'm using pytest for my test suite. While catching bugs in complex inter-components test, I would like to place import ipdb; ipdb.set_trace() in the middle of my code to allow me to debug it.
However, since pytest traps sys.stdin/sys.stdout, ipdb fails. How can I use ipdb while testing with pytest?
I'm not interested in jumping to pdb or ipdb after a failure, but to place breaks anywhere in the code and be able to debug it there before the failure occurs. | 0 | python,pytest | 2013-04-15T19:05:00.000 | 0 | 16,022,915 | This is what I use
py.test tests/ --pdbcls=IPython.core.debugger:Pdb -s | 0 | 36,627 | false | 0 | 1 | How to execute ipdb.set_trace() at will while running pytest tests | 58,883,629 |
1 | 3 | 0 | 2 | 6 | 1 | 0.132549 | 0 | I have a Unicode string in Python. I am looking for a way to determine if there is any Chinese/Japanese character in the string. If possible it'll be better to be able to locate those characters.
It seems this is a bit different from a language detection problem. My string can be a mixture of English and Chinese texts.
My code has Internet access. | 0 | python | 2013-04-16T01:46:00.000 | 0 | 16,027,450 | You can use this regex [\u2E80-\u9FFF] to match CJK characters. | 0 | 4,416 | false | 0 | 1 | Is there a way to know whether a Unicode string contains any Chinese/Japanese character in Python? | 16,027,565 |
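With that range the check is a one-liner using re; note that U+2E80–U+9FFF also spans the kana blocks (U+3040–U+30FF), so it matches Japanese syllabary characters as well as ideographs:

```python
import re

CJK = re.compile(u"[\u2e80-\u9fff]")

def has_cjk(text):
    """True if the string contains at least one CJK character."""
    return CJK.search(text) is not None

def find_cjk(text):
    """Locate every CJK character with its index in the string."""
    return [(m.start(), m.group()) for m in CJK.finditer(text)]

print(has_cjk(u"hello world"))         # -> False
print(has_cjk(u"hello \u4e16\u754c"))  # -> True ("world" in Chinese)
print(find_cjk(u"a\u4e16b"))           # -> [(1, '\u4e16')]
```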
2 | 4 | 0 | 0 | 2 | 0 | 0 | 0 | I've written a high level motor controller in Python, and have got to a point where I want to go a little lower level to get some speed, so I'm interested in coding those bits in C.
I don't have much experience with C, but the math I'm working on is pretty straightforward, so I'm sure I can implement with a minimal amount of banging my head against the wall. What I'm not sure about is how best to invoke this compiled C program in order to pipe it's outputs back into my high-level python controller.
I've used a little bit of ctypes, but only to pull some functions from a manufacturer-supplied DLL...not sure if that is an appropriate path to go down in this case.
Any thoughts? | 0 | python,c | 2013-04-16T02:46:00.000 | 0 | 16,027,942 | You can use Cython for setting the necessary c types and compile your python syntax code. | 0 | 1,391 | false | 0 | 1 | Best way to call C-functions from python? | 16,451,937 |
2 | 4 | 0 | 0 | 2 | 0 | 0 | 0 | (Same question as above.) | python,c | 2013-04-16T02:46:00.000 | 0 | 16,027,942 | You can use SWIG; it is very simple to use. | 0 | 1,391 | false | 0 | 1 | Best way to call C-functions from python? | 16,028,391
1 | 3 | 0 | 1 | 7 | 0 | 0.066568 | 0 | I'm working with custom a build system that manages a large number of git repositories and written primarily in python.
It would save me a lot of time if I could write a command that would report the current branch of all repositories, then report if the head of "branch" is the same as the head of "remotes/origin/branch".
We already have a command that will run a shell command inside every git repository, what I'm looking for is a method of getting some simply formatted information from git with regards to the relative position of branch and remotes/origin/branch. Something which is either going to be number of commits difference or a simple boolean value.
What's the method of getting this information out of git which is going to minimize the amount of parsing and processing I've got to do on the python side? | 0 | python,git | 2013-04-16T12:53:00.000 | 0 | 16,037,623 | git status shows how many commits you are ahead/behind the remote tracking branch. You need to perform git fetch first though, because otherwise git cannot know if anything new went into remote. | 0 | 2,039 | false | 0 | 1 | simplest possible way git can output the number of commits between "branch" and "remotes/origin/branch" | 16,037,728 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am using pydev through Aptana Studio 3 on a mac. Shortly after opening up Aptana, my computer heats way up, the fans go full power, and Aptana uses over 100% cpu even when it's not doing anything. I also have pydev on eclipse, but this spike doesn't occur. Has anyone else seen this? Is there any way to stop it? | 0 | python,aptana,pydev | 2013-04-17T04:34:00.000 | 1 | 16,051,571 | The only way to really know what's going on would be connecting jvisualvm (or some profiler or debugger) to your process to see what's going on (and then report an issue). On jvisualvm you can get a dump with the current processes, which may be enough already if you can say which is the thread that's running.
Note that the title should probably be 'aptana studio 3 massive cpu usage' if you're able to reproduce it there but not in pydev... | 0 | 284 | true | 0 | 1 | pydev massive cpu usage | 16,058,892 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | Is there an easy way to work in binary with Python?
I have a file of data I am receiving (in 1's and 0's) and would like to scan through it and look for certain patterns in binary. It has to be in binary because, due to my system, I might be off by a bit or so, which would throw everything off when converting to hex or ASCII.
For example, I would like to open the file, then search for '0001101010111100110' or some string of binary and have it tell me whether or not it exists in the file, where it is, etc.
Is this doable or would I be better off working with another language? | 0 | python,search,binary | 2013-04-17T22:23:00.000 | 0 | 16,071,286 | You would be better working off another language. Python could do it (if you use for example,
file = open("file", "wb")
(appending the b opens it in binary), and then using a simple search, but to be honest, it is much easier and faster to do it in a lower-level language such as C. | 0 | 105 | false | 0 | 1 | Workin with binary in python | 16,071,379 |
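That said, Python can do the bit-offset search directly. A naive but workable sketch for modestly sized files: expand the bytes into a '0'/'1' string, then use ordinary substring search, which finds the pattern at any bit alignment (exactly the "off by 1 bit" case in the question):

```python
def bits_of(data):
    """Expand bytes into a string of '0'/'1' characters, MSB first."""
    return "".join(format(byte, "08b") for byte in data)

def find_pattern(data, pattern):
    """Return the bit offset of `pattern` (e.g. '0001101') in `data`,
    or -1 if absent. Works at any bit alignment, not just byte edges."""
    return bits_of(data).find(pattern)

blob = bytes([0b10110100, 0b01100011])
print(bits_of(blob))                    # -> 1011010001100011
print(find_pattern(blob, "110100011"))  # -> 2 (spans the byte boundary)
```

For a large file you would read it in chunks and keep `len(pattern) - 1` trailing bits between chunks, but the idea is the same.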
1 | 1 | 0 | 3 | 3 | 1 | 1.2 | 0 | I am just starting out with pyramid and I am doing the tutorial. I would like to use some of the tutorial code as a starting point for the project that I am going to start, but I don't want to keep the project name as tutorial. It seems like once you give a project a name that name is used in many places. Is there a way to easily change the project name? I am sure I will have to manually edit some stuff. Just wondering if there may be an easy way to do this. | 0 | python,pyramid | 2013-04-18T14:08:00.000 | 0 | 16,085,288 | It's not a "project name". It's the name of a python package. Yes, you'll have to search/replace and rename that package everywhere in your code. You're probably better off just starting from a new project with the right name if you are only at the tutorial stage. | 0 | 468 | true | 0 | 1 | How do you change the name of a pyramid project? | 16,088,122 |
1 | 4 | 0 | 0 | 0 | 0 | 0 | 1 | is there a way to trace all the calls made by a web page when loading it? Say for example I went in a video watching site, I would like to trace all the GET calls recursively until I find an mp4/flv file. I know a way to do that would be to follow the URLs recursively, but this solution is not always suitable and quite limitative( say there's a few thousand links, or the links are in a file which can't be read). Is there a way to do this? Ideally, the implementation could be in python, but PHP as well as C is fine too | 0 | php,python,html,networking | 2013-04-19T22:23:00.000 | 0 | 16,114,358 | Chrome provides a built-in tool for seeing the network connections. Press Ctrl+Shift+J to open the JavaScript Console. Then open the Network tab to see all of the GET/POST calls. | 0 | 98 | false | 1 | 1 | Tracing GET/POST calls | 16,115,090 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I connected to a server with PuTTY and ran a Python script available on that server. The script kept printing output to the terminal. Later on, my internet connection dropped, but even then I expected the script to finish its job, since it was running on the server. When the connection resumed, I found that the script had not done its job.
So, is this expected? If yes, what should I do to make sure the script keeps running on the server even if the internet connection drops in between?
Thanks in advance!!! | 0 | python,shell,python-2.7,putty | 2013-04-20T05:37:00.000 | 1 | 16,117,044 | On the server, you can install tmux or screen. These programs keep your program running in the background and let you reattach to its 'window' later. With tmux:
Open tmux: tmux
Detach (run in background): press Ctrl-b d
reattach (open a 'window'): tmux attach | 0 | 289 | false | 0 | 1 | to keep the script running even after internet connection goes off | 16,117,169 |
I have an application with two processes, one in C and one in Python. The C process is where all the heavy lifting is done, while the Python process handles the user interface.
The C program writes to a large-ish buffer 4 times per second, and the Python process reads this data. Up to this point, communication with the Python process has been done over AMQP. I would much rather set up some form of memory sharing between the two processes to reduce overhead and increase performance.
What are my options here? Ideally I would simply have the Python process read the shared memory directly (preferably from memory and not from disk), taking care of race conditions with semaphores or something similar. This is, however, something I have little experience with, so I'd appreciate any help I can get.
I am using Linux btw. | 0 | python,c | 2013-04-20T12:30:00.000 | 1 | 16,120,373 | How about writing the heavy-lifting code as a library in C and then providing a Python module as a wrapper around it? That is actually a pretty common approach; in particular, it allows prototyping and profiling in Python and then moving the performance-critical parts to C.
If you really have a reason to need two processes, there is an XMLRPC package in Python that should facilitate such IPC tasks. In any case, use an existing framework instead of inventing your own IPC, unless you can really prove that performance requires it. | 0 | 12,308 | false | 0 | 1 | How can I handle IPC between C and Python? | 16,121,274 |
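If you do stay with two processes and a shared-memory route, one sketch of the Python reader side uses a memory-mapped file. The float64 layout (4 doubles) and the idea that the C writer maps and writes the same file are assumptions here, and real synchronization (semaphores) is deliberately left out:

```python
import mmap
import os
import struct
import tempfile

BUF_FLOATS = 4  # assumed layout: the C side writes this many float64 values

def read_buffer(path, count=BUF_FLOATS):
    """Map the shared file and read `count` little-endian doubles."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), count * 8, access=mmap.ACCESS_READ)
        try:
            return struct.unpack("<%dd" % count, mm[:count * 8])
        finally:
            mm.close()

# Simulate the C writer for demonstration (it would mmap the same file):
path = os.path.join(tempfile.gettempdir(), "shared_demo.buf")
with open(path, "wb") as f:
    f.write(struct.pack("<4d", 1.0, 2.5, -3.0, 4.25))

print(read_buffer(path))  # -> (1.0, 2.5, -3.0, 4.25)
os.remove(path)
```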
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | In a file a.py, I have the lines:
import gevent
gevent.monkey.patch_all()
import b
# etc, etc
In file b.py is it necessary to monkey patch again? Is there anything wrong with monkey patching multiple times? | 0 | python,gevent,monkeypatching | 2013-04-22T05:22:00.000 | 0 | 16,139,929 | Normally there's just one entry in sys.modules for each module, i.e. the same module object is shared, so the patch affects the module as long as it's imported the same way.
It's possible to have the same module in sys.modules under two or more entries if it is imported differently. | 0 | 1,320 | false | 0 | 1 | Is Python's monkey patching local to the current module? | 16,140,423 |
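The "one entry in sys.modules" behaviour is easy to verify, which is why a patch applied once is visible everywhere the module is imported the same way:

```python
import sys

import json          # first import: creates the sys.modules entry
import json as j2    # second import: returns the very same object

assert json is j2 is sys.modules["json"]

# Patch the module once...
json.my_marker = "patched"
# ...and every other reference sees it, because they share one object.
print(j2.my_marker)  # -> patched
```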
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm trying to install the xlrd 0.9.2 package on Python 3.2 on Windows 7. When I run setup.py install I receive an encoding error: 'utf8' codec can't decode...
The module (licenses.py) where the installer stops has an encoding declaration:
# -*- coding: cp1252 -*-
but it seems the python is ignoring it.
I was using Win cmd but also checked cygwin and have the same problem.
Few days ago I also had a problem with reading txt file that was in cp1252 even though I set this declaration in my script. I was using IDLE to run the script.
I'm not sure now if my Python install is missing something or if this is an operating-system issue | 0 | python,encoding,installation,xlrd | 2013-04-22T13:41:00.000 | 0 | 16,148,615 | Upgraded to Python 3.3 and the library got installed OK. Not sure if that's a problem with Python 3.2 or the instance I had. | 0 | 315 | false | 0 | 1 | python encoding error at xlrd installation | 16,168,365
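On the side issue of reading cp1252 text files: a `# -*- coding: cp1252 -*-` declaration only covers the source file's own bytes, not files the script opens, so those must be decoded explicitly. A small demonstration (file path is a throwaway in the temp directory):

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "cp1252_demo.txt")

# 0x80 is the Euro sign in cp1252, but an invalid start byte in UTF-8.
with open(path, "wb") as f:
    f.write(b"price: \x80 42")

# Decode explicitly instead of relying on the default encoding.
with codecs.open(path, encoding="cp1252") as f:
    text = f.read()

print(text)  # -> price: EUR-sign 42
os.remove(path)
```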
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am trying to compiling an autodiff python library, pyadolc, on Windows with Mingw. It requires boost python to call the underlying c++ library, adol-c.
I first compiled the boost_python library (DLL) with MinGW. The DLLs generated are named libboost_python-mgw46-mt-1_53.dll and libboost_python-mgw46-mt-1_53.dll.a, sitting in /mingw/bin and /mingw/lib respectively.
Then when I build the pyadolc, the build script tries with command -lboost_python. It failed because the dll is named as libboost_python-mgw46-mt-1_53.dll, not libboost_python.dll.
So I renamed the dll as libboost_python.dll in /mingw/bin. It works and everything links fine.
However, when I tried in the Python shell
import adolc
it gave me an error: ImportError: No dll found for _adolc (something like that). Then I found that it was because it was looking for libboost_python-mgw46-mt-1_53.dll.
My question is: how does the DLL naming work? What's the proper way to handle this kind of situation? Should I modify the build script or should I just rename the DLL? I know in Linux I could probably just create a symbolic link from libboost_python.so to libboost_python-xxxx-mt-1_53.so. But in Windows XP, a symbolic link to a file is not that easy. | 0 | python,windows,mingw,msys | 2013-04-22T17:55:00.000 | 0 | 16,153,597 | The best option is to change your build script to point to -lboost_python-mgw46-mt-1_53
If you rename libboost_python-mgw46-mt-1_53.dll, you have to rename libboost_python-mgw46-mt-1_53.dll.a too.
DLLs often reference each other by name; if you only rename the files, the original names can no longer be found.
So do not rename; instead, copy:
copy
libboost_python-mgw46-mt-1_53.dll.a to libboost_python.a
and copy
libboost_python-mgw46-mt-1_53.dll to libboost_python.dll
With this method you have both versions. | 0 | 118 | false | 0 | 1 | How should I handle an "incorrectly" named dll? | 16,485,372 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I am facing an encoding issue while trying to pass a string from two C# modules using Ironpython code as a bridge.
Special characters like € and © get distorted when the string is received by the recipient module.
Can anyone please advise whether this is an IronPython issue, and how to fix this type of issue?
Thanks,
Amit | 0 | c#,special-characters,ironpython | 2013-04-22T22:13:00.000 | 0 | 16,157,636 | Probably you are doing something wrong. There are no issues with encoding and IronPython. Check the encoding of the script you load. | 0 | 387 | false | 1 | 1 | special character encoding C# and Ironpython | 16,273,312
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I have written a python27 module and installed it using python setup.py install.
Part of that module has a script which I put in my bin folder within the module before I installed it. I think the module has installed properly and works (has been added to site-packages and scripts). I have built a simple script "test.py" that just runs functions and the script from the module. The functions work fine (the expected output prints to the console) but the script does not.
I tried from [module_name] import [script_name] in test.py which did not work.
How do I run a script within the bin of a module from the command line? | 0 | python-2.7,module | 2013-04-23T13:11:00.000 | 0 | 16,170,268 | Are you using distutils or setuptools?
I tested right now, and if it's distutils, it's enough to have
scripts=['bin/script_name']
in your setup() call
If instead you're using setuptools you can avoid to have a script inside bin/ altogether and define your entry point by adding
entry_points={'console_scripts': ['script_name = module_name:main']}
inside your setup() call (assuming you have a main function inside module_name)
Are you sure that bin/script_name is marked as executable?
What is the exact error you get when trying to run the script? What are the contents of your setup.py? | 0 | 106 | false | 0 | 1 | How to execute python script from a module I have made | 16,173,774
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I have a numerical matrix of 2500*2500. To calculate the MIC (maximal information coefficient) for each pair of vectors, I am using minepy.MINE, but this is taking forever; can I make it faster? | 0 | python,python-2.7 | 2013-04-23T14:07:00.000 | 0 | 16,171,519 | First, use the latest version of minepy. Second, you can use a smaller value of the "alpha" parameter, say 0.5 or 0.45. This reduces the computational time at the expense of characteristic matrix accuracy.
Davide | 1 | 425 | false | 0 | 1 | how to make minepy.MINE run faster? | 16,401,807 |
2 | 6 | 0 | 29 | 103 | 1 | 1 | 0 | I have my own package in python and I am using it very often. what is the most elegant or conventional directory where i should put my package so it is going to be imported without playing with PYTHONPATH or sys.path?
What about site-packages for example?
/usr/lib/python2.7/site-packages.
Is it common in python to copy and paste the package there ? | 0 | python,python-2.7 | 2013-04-24T15:38:00.000 | 0 | 16,196,268 | So if your a novice like myself and your directories are not very well organized you may want to try this method.
Open your Python terminal. Import a module that you know works, such as numpy in my case, and do the following.
import numpy
numpy.__file__
which results in
'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/numpy/__init__.py'
The result of numpy.__file__ is the location where you should put the python file with your module (excluding the numpy/__init__.py), so for me that would be
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages
To do this just go to your terminal and type
mv "location of your module" "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
Now you should be able to import your module. | 0 | 102,848 | false | 0 | 1 | Where should I put my own python module so that it can be imported | 31,109,017 |
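The standard library can also report the site-packages location directly, without importing numpy first; a small sketch (the paths it prints will of course differ per machine):

```python
import site
import sysconfig

# Directory where third-party packages are installed for this interpreter.
purelib = sysconfig.get_paths()["purelib"]
print(purelib)

# site can list every package directory the interpreter scans
# (not available in some old virtualenv setups, hence the guard).
if hasattr(site, "getsitepackages"):
    for path in site.getsitepackages():
        print(path)
```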
2 | 6 | 0 | 1 | 103 | 1 | 0.033321 | 0 | I have my own package in python and I am using it very often. what is the most elegant or conventional directory where i should put my package so it is going to be imported without playing with PYTHONPATH or sys.path?
What about site-packages for example?
/usr/lib/python2.7/site-packages.
Is it common in python to copy and paste the package there ? | 0 | python,python-2.7 | 2013-04-24T15:38:00.000 | 0 | 16,196,268 | On my Mac, I did a sudo find / -name "site-packages". That gave me a few paths like /Library/Python/2.6/site-packages, /Library/Python/2.7/site-packages, and /opt/X11/lib/python2.6/site-packages.
So, I knew where to put my modules if I was using v2.7 or v2.6.
Hope it helps. | 0 | 102,848 | false | 0 | 1 | Where should I put my own python module so that it can be imported | 38,079,471 |
1 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 | What's the best way to automatically query several dozen MySQL databases with a script on a nightly basis? The script usually returns no results, so I'd ideally have it email or notify me if any are ever returned.
I've looked into PHP, Ruby and Python for this, but I'm a little stumped as to how best to handle this. | 1 | php,python,mysql,sql,ruby | 2013-04-24T23:19:00.000 | 0 | 16,203,859 | I believe the only one who can answer this question is you. All three languages you mentioned can do what you need, with cron to automate the job. But the best scripting language to use is the one you are most comfortable with. | 0 | 307 | false | 0 | 1 | What's the best way to automate running MySQL scripts on several databases on a daily basis? | 16,203,901
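A cron-driven sketch in Python of what this answer suggests. The database names, the check query, and the mysql client invocation are assumptions for illustration, and the alerting is left as a stub; a crontab entry such as `0 2 * * * python /path/to/nightly_check.py` would schedule it nightly:

```python
import subprocess

DATABASES = ["shop_db", "blog_db", "stats_db"]           # assumption: your DB names
QUERY = "SELECT id FROM orders WHERE status = 'stuck';"  # assumption: your check

def build_mysql_cmd(database, query=QUERY):
    """Command line for the mysql client; -N/-B give bare tab-separated rows."""
    return ["mysql", "-N", "-B", "-e", query, database]

def should_alert(output):
    """The query usually returns no rows, so any output at all means trouble."""
    return bool(output.strip())

def run_checks(runner=subprocess.check_output):
    """Run the check against every database; return {db: offending rows}."""
    problems = {}
    for db in DATABASES:
        out = runner(build_mysql_cmd(db)).decode()
        if should_alert(out):
            problems[db] = out
    return problems  # hand this dict to your email/notification code
```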
1 | 2 | 0 | 3 | 2 | 1 | 0.291313 | 0 | I have a (single) .py script. In it, I need to import a library.
In order for this library to be found, I need to call sys.path.append. However, I do not want to hardcode the path to the library, but pass it as a parameter.
So my problem is that if I make a function (set_path) in this file, I need to import the file, and import fails because the path is not yet appended.
What are good ways to solve this problem?
Clarification after comments:
I am using IronPython, and the library path is the path to CPython/lib. This path is (potentially) different on every system.
As far as I know, I cannot pass anything via sys.argv, because the script is run in an embedded python interpreter, and there is no main function. | 0 | python,ironpython,python-import | 2013-04-25T15:03:00.000 | 0 | 16,218,288 | You should not do the import globally, but inside a function which gets called after you appended the path. | 0 | 741 | false | 0 | 1 | Python: sys.path.append vs. import? | 16,218,323 |
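A sketch of that pattern: append the path first, then do the import inside the function. The library directory and module name below are fabricated just for the demo:

```python
import importlib
import os
import sys
import tempfile

def load_library(lib_dir, module_name):
    """Add lib_dir to sys.path, then import module_name lazily."""
    if lib_dir not in sys.path:
        sys.path.append(lib_dir)
    return importlib.import_module(module_name)

# Demo: fabricate a tiny library in a temp directory, then load it.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "mylib.py"), "w") as f:
    f.write("VERSION = '1.0'\n")

mylib = load_library(demo_dir, "mylib")
print(mylib.VERSION)
```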
1 | 3 | 0 | 1 | 2 | 1 | 0.066568 | 0 | When I do next(ByteIter, '')<<8 in python, I got a name error saying
"global name 'next' is not defined"
I'm guessing this function is not recognized because of python version? My version is 2.5. | 0 | python,next | 2013-04-25T21:18:00.000 | 1 | 16,224,901 | though you could call ByteIter.next() in 2.6. This is not recommended however, as the method has been renamed in python 3 to next(). | 0 | 2,002 | false | 0 | 1 | Python: next() is not recognized | 16,225,009 |
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | Is there a way to check what a function or a method does inside of python itself similar to the help function in Matlab. I want to get the definition of the function without having to Google it. | 0 | python | 2013-04-25T22:27:00.000 | 0 | 16,225,782 | The help() function gives you help on almost everything but if your searching for something (like a module to use) then type help('modules') and it will search for available modules.
Then if you need to find information about a module load it and type dir(module_name) to see the methods that are defined in the module. | 0 | 2,004 | false | 0 | 1 | Python: Function Documentation | 16,225,924 |
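A quick illustration of dir() and docstrings (help() itself just renders the same docstrings in a readable form):

```python
# dir() lists the names defined by an object or module.
public_names = [name for name in dir(str) if not name.startswith("_")]
print(public_names[:5])

# help(str.upper) would page through this same docstring.
print(str.upper.__doc__)
```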
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | We have multiple branches in SVN and use Hudson CI jobs to maintain our builds. We use SVN revision number as part of our application version number. The issue is when a Hudson job check out HEAD of a brach, it is getting HEAD number of SVN not last committed revision of that brach. I know, SVN maintains revision numbers globally, but we want to reflect last committed number of particular brach in our version.
is there a way to get last committed revision number of a brach using python script so that I can checkout that branch using that revision number?
or better if there a way to do it in Hudson itself?
Thanks. | 0 | python,svn,jenkins,hudson | 2013-04-26T20:42:00.000 | 0 | 16,244,894 | Besides svn info, you can also use svn log -q -l 1 URL or svn ls -v --depth empty URL | 0 | 1,110 | false | 0 | 1 | Using actual branch head revision number in Hudson | 16,254,689
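To get the branch's last committed revision from a Python script, one way is to shell out to `svn info URL` and parse the "Last Changed Rev" line (Subversion 1.9+ can also print it directly with `svn info --show-item last-changed-revision URL`). A sketch, with the parsing half exercised against a made-up sample of `svn info` output:

```python
import re
import subprocess

def last_changed_rev(info_output):
    """Extract the last committed revision from `svn info` output."""
    match = re.search(r"^Last Changed Rev:\s*(\d+)", info_output, re.MULTILINE)
    if not match:
        raise ValueError("no 'Last Changed Rev' line found")
    return int(match.group(1))

def branch_last_rev(url):
    out = subprocess.check_output(["svn", "info", url]).decode()
    return last_changed_rev(out)

# Sample output in the shape `svn info` produces:
SAMPLE = """Path: feature-x
URL: https://svn.example.com/repo/branches/feature-x
Revision: 5120
Last Changed Author: alice
Last Changed Rev: 4711
Last Changed Date: 2013-04-26 12:00:00 +0000
"""
print(last_changed_rev(SAMPLE))
```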
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am able to easily call a python script from php using system(), although there are several options. They all work fine, except they all fail. Through trial and error I have narrowed it down to it failing on
import MySQLdb
I am not too familiar with PHP, but I am using it in a pinch. I understand that there could be reasons why such a restriction would be in place, but this will be on a local server, used in house, and the information in the MySQL db is backed up and not too critical, meaning such a restriction can be reasonably ignored.
But how to allow php to call a python script that imports mysql? I am on a Linux machine (centOs) if that is relevant. | 1 | php,python,mysql,linux | 2013-04-29T14:55:00.000 | 0 | 16,281,823 | The Apache user (www-data in your case) has a somewhat restricted environment. Check where the Python MySQLdb package is installed and edit the Apache user's env (cf Apache manual and your distrib's one about this) so it has a usable Python environment with the right PYTHONPATH etc. | 0 | 322 | true | 0 | 1 | call python script from php that connects to MySQL | 16,282,538 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I'm using the Python library HSAudioTag, and I'm trying to read the track number in my files, however, without fail, the file returns 0 as the track number, even if it's much higher. Does anybody have any idea how to fix this?
Thanks. | 0 | python | 2013-04-29T21:45:00.000 | 0 | 16,288,787 | The solution was to go into the code, and change the following lines to: Line 118: self.track = u'' Lines 149-152: self.track = int(self._fields.get(TRACK, u'')) + 1 | 0 | 87 | true | 0 | 1 | Python HSAudioTag for WMA files always returns 0? | 21,212,477 |
1 | 3 | 0 | 2 | 4 | 0 | 0.132549 | 1 | There is a node where I ssh into and start a script remotely by Robot Framework (SSHLibrary.Start Command or Execute Command). This remote script starts a telnet connection to another node which is hidden from outside. This telnet call seems to be a blocking event to Robot. I use RIDE for test execution and it simply stops working. I can send stop signals inefficiently. Is it possible to spawn telnet within ssh? | 0 | python,testing,ssh,telnet,robotframework | 2013-04-30T10:47:00.000 | 0 | 16,298,022 | We haven't exactly used the method with telnet but with another ssh session or other shells that we cannot access otherwise...
Open an ssh connection to the first machine.
On this connection, use SSHLibrary keywords like Set Prompt, Write and Read or Read Until Prompt to manually open a telnet connection to the next machine.
The Write and Read keywords can be used a bit like expect and spawn... | 0 | 2,735 | false | 0 | 1 | Is there a way to use telnet within an ssh connection in Robot Framework? | 16,315,259
I have a small Python program which will be used locally by a small group of people (<15 people). But for accountability, I want to have a simple username+password check at the start of the program (it doesn't need to be super secure). For your information, I am just a beginner and this is my first time trying it. When I searched around, I found that Python has passlib for this. But even after looking through it I am still not sure how to implement my password check. So, there are a few things that I want to know.
How do I store the passwords of users locally? The only way I know at the moment is to create a text file and read/write from it, but that would defeat the whole purpose of hashing, as people could just open the text file and read the passwords from there.
What do hash and salt mean here, and how do they work? (A brief and simple explanation will do.)
What is the recommended way to implement username and password check?
I am sorry for the stupid questions, but I will greatly appreciate it if you could answer them. | 0 | python,encryption,passwords | 2013-05-02T09:23:00.000 | 0 | 16,334,482 | You could use htpasswd, which is installed with Apache or can be downloaded separately. Use subprocess.check_output to run it, and you can create Python functions to add users, remove them, verify they have given the correct password, etc. Pass the -B option to use bcrypt, a salted adaptive hash, and you will know that it's secure (unlike if you implement the salting yourself). | 0 | 6,052 | false | 0 | 1 | Password Protection Python | 16,334,819
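If you'd rather stay in pure Python than shell out to htpasswd, the standard library can do salted, slow hashing too; a sketch using PBKDF2 (persisting the salt and digest to a file is omitted):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest). The random salt makes equal passwords hash
    differently, defeating precomputed rainbow tables; the round count
    makes brute force slow."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    # Constant-time comparison avoids leaking where the mismatch occurs.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))   # True
print(verify_password("wrong", salt, digest))    # False
```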
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | How do I fire a Ctrl+C with fabric, in other words is it possible to trigger KeyboardInterrupt manually via bash? | 0 | python,bash,fabric | 2013-05-02T11:28:00.000 | 1 | 16,336,919 | ctrl+c generates a SIGINT signal.
You can send a signal with kill -SIGINT pid, where pid is the process id you wish to signal. kill is a Bash built-in. | 0 | 414 | true | 0 | 1 | Simulate KeyboardInterrupt with fabric | 16,337,848
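On the Python side, SIGINT is exactly what surfaces as KeyboardInterrupt, which you can verify by signalling your own process (POSIX only):

```python
import os
import signal

caught = False
try:
    # Same signal that `kill -SIGINT <pid>` (or Ctrl+C) would deliver.
    os.kill(os.getpid(), signal.SIGINT)
except KeyboardInterrupt:
    caught = True

print("KeyboardInterrupt caught:", caught)
```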
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I have a question about using a hash which has strings as keys. Let's say I have a hash which maps strings to doubles.
The question is: I've heard some say that it is better to tokenize the strings into ints and have the hash map ints to doubles rather than strings to doubles. Will this generally be faster in Python or C++ (two questions), or will it not matter? Let's say that we're using Boost's unordered_map in C++, so it's more like a Python dictionary.
Will this matter if the keys are actually (string, string) -> double, or in C++ unordered_map<pair<string, string>, double>? | 0 | c++,python,string,hash,dictionary | 2013-05-02T17:59:00.000 | 0 | 16,344,677 | If you tokenize strings, you should be careful not to give two different strings the same token. std::unordered_map also uses hashes for quick lookup, but it additionally handles distinct strings that happen to share a hash. Of course, that collision handling takes some time.
If you can tokenize the strings in such a way that two different strings never share a token, using a map with ints as keys is a very good idea. | 0 | 136 | false | 0 | 1 | tokenizing strings to int for faster hash maps | 16,344,775
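The tokenizing idea is essentially string interning; a small Python sketch that guarantees two distinct strings never share a token:

```python
class Interner:
    """Map each distinct string to a unique small int, stably."""
    def __init__(self):
        self._tokens = {}

    def token(self, s):
        # The dict handles hash collisions itself, so equal strings always
        # get the same token and distinct strings always get distinct ones.
        if s not in self._tokens:
            self._tokens[s] = len(self._tokens)
        return self._tokens[s]

interner = Interner()
scores = {}  # (token, token) -> float, instead of (str, str) -> float
scores[(interner.token("alice"), interner.token("bob"))] = 0.5
print(scores)
```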
2 | 2 | 0 | 3 | 3 | 0 | 0.291313 | 0 | I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas? | 0 | python,unit-testing | 2013-05-04T00:31:00.000 | 0 | 16,369,398 | We use a continuous integration server, Jenkins, for such tasks. It has cron-like scheduling and can send an email when the build becomes unstable (a test fails). There is an extension to Python's unittest module that produces JUnit-style XML reports supported by Jenkins. | 0 | 127 | false | 0 | 1 | Custom onFailure Call in Unittest? | 16,394,313
2 | 2 | 0 | 0 | 3 | 0 | 1.2 | 0 | I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas? | 0 | python,unit-testing | 2013-05-04T00:31:00.000 | 0 | 16,369,398 | In the end, I wound up running the tests and returning the TestResult object. I then look at the failures attribute of that object, and run post-processing on each test in the suite that failed. This works well enough for me, and lets me custom-design my post-processing.
For any extra meta data per test that I need, I subclass unittest.TestResult and add to the addFailure method anything extra that I need. | 0 | 127 | true | 0 | 1 | Custom onFailure Call in Unittest? | 16,395,234 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I setup a new Ubuntu 12.10 Server on VPN hosting. I have installed all the required setup like Nginx, Python, MySQL etc. I am configuring this to deploy a Flask + Python app using uWSGI. Its working fine.
But to create a basic app i used Putty tool (from Windows) and created required app .py files.
But I want to set up Git functionality so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make changes to my website.
Since I use both Ubuntu & Windows for development of the app, setting up this kind of Git functionality would help me push changes easily to my cloud server.
How can I set up this Git functionality in Ubuntu? And how could I access it and deploy data using tools like Git Bash etc.?
Please Suggest | 0 | python,windows,git,ubuntu | 2013-05-04T03:28:00.000 | 1 | 16,370,283 | Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server. | 0 | 1,205 | false | 0 | 1 | How to setup Git to deploy python app files into Ubuntu Server? | 16,375,343 |
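Those steps can be exercised end to end; a hedged sketch driven from Python, where all paths are throwaway temp directories and the post-receive hook uses the common read-refs-from-stdin recipe with git archive (requires the git and tar commands):

```python
import os
import stat
import subprocess
import tempfile

def sh(args, cwd):
    subprocess.check_call(args, cwd=cwd)

root = tempfile.mkdtemp()
bare = os.path.join(root, "site.git")  # "server" repository
work = os.path.join(root, "work")      # "workstation" clone
deploy = os.path.join(root, "www")     # where the files should land
os.makedirs(deploy)

# 1. Bare repository on the server.
sh(["git", "init", "-q", "--bare", bare], cwd=root)

# 4. post-receive hook: unpack whatever was pushed into the deploy dir.
hook = os.path.join(bare, "hooks", "post-receive")
with open(hook, "w") as f:
    f.write("#!/bin/sh\n"
            "while read old new ref; do\n"
            '  git archive "$new" | tar -x -C "%s"\n'
            "done\n" % deploy)
os.chmod(hook, os.stat(hook).st_mode | stat.S_IEXEC)

# 2./3. Local repository with the bare repo as a remote; commit and push.
os.makedirs(work)
sh(["git", "init", "-q"], cwd=work)
with open(os.path.join(work, "app.py"), "w") as f:
    f.write("print('hello')\n")
sh(["git", "add", "app.py"], cwd=work)
sh(["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "-q", "-m", "first"], cwd=work)
sh(["git", "push", "-q", bare, "HEAD:refs/heads/master"], cwd=work)

print(sorted(os.listdir(deploy)))
```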
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm running a python CGI script on my localhost that needs to import and use another python module that I wrote. I placed the CGI script in the Apache cgi-bin directory (I'm running this on windows). I've tried placing my custom module in the same directory, but it doesn't seem to be able to import that module. I would prefer to not have the custom module be another CGI script that is called via exec(). | 0 | python,apache,cgi | 2013-05-04T15:42:00.000 | 0 | 16,376,048 | You need to put your Python module somewhere that Python's import can see it. The easy ways to do that are:
Make a directory for the module, and add that directory to your PYTHONPATH environment variable.
Copy the module into your Python site-packages directory, which is under your Python installation directory.
In either case, you will need to make sure your module's name is not the same as the name of some other module that might be imported by Python in your CGI script. | 0 | 520 | false | 0 | 1 | Using custom module with Python CGI script | 16,376,236 |
I am currently trying to run PyDev with PyMongo on a Python 3.3 interpreter.
My problem is, I am not able to get it working :-/
First of all I installed Eclipse with Pydev.
Afterwards I tried installing pip to download my Pymongo-Module.
Problem is: it always installs pip for the default 2.7 Version.
I read that you shouldn't change the default system interpreter (running on Lubuntu 13.04 32-bit), so I tried to install a second Python 3.3 and run it in a virtual environment, but I can't find any detailed information on how to apply all of this to my specific problem.
Maybe there is someone out there, that uses a similar configuration and can help me out to get everything running (in a simple way) ?
Thanks in advance,
Eric | 0 | python,ubuntu,pydev,pymongo | 2013-05-04T21:52:00.000 | 1 | 16,379,321 | You can install packages for a specific version of Python, all you need to do is specify the version of Python you want use from the command-line; e.g. Python2.7 or Python3.
Examples
Python3 pip your_package
Python3 easy_install your_package. | 0 | 1,303 | true | 0 | 1 | Using Python3 with Pymongo in Eclipse Pydev on Ubuntu | 16,379,374 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | After several hours of looking for the answer I've had no luck.
Can anyone point me to an example of how to create a torrent and seed that brand new torrent in python?
So far I can download just fine and I can produce torrent files. However, when I try to start my own torrent I get stuck on downloading rather than seeding. Obviously this is a problem since the swarm contains only my host, which is supposed to be the seeder.
Any advice? | 0 | python,bittorrent,libtorrent | 2013-05-04T23:18:00.000 | 0 | 16,379,844 | Make sure to set the download directory to the place where the original files reside, when adding the torrent to the session. The torrent will detect that the files are already there and hash them to verify that they are correct, and seed any pieces that matched the expected hash.
You can force libtorrent to trust you that the pieces/files are all there by setting the seed_mode in the add_torrent_params when adding the torrent. This will make libtorrent assume the files are there and not check them until they are requested. | 0 | 765 | false | 0 | 1 | How do I seed a directory or file using python-libtorrent? | 16,382,906 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming.
But I cannot choose the python3.3 interpreter; I try to choose it in /usr/lib/python3.3, but:
- when I try to choose the PYTHONPATH by clicking "New folder", the window doesn't open (I can do it only after choosing auto-config, which adds the python2.7 paths);
- I don't know which file in /usr/lib/python3.3 I need to choose as the python3.3 interpreter (auto-config returns only 2.7 objects).
Can you advise me how to choose the python3.3 interpreter? (Maybe the main thing is which file/path I need to choose in /usr/lib/python3.3 as the interpreter file; in Windows Eclipse I see python3.3.exe, so I need to find its equivalent in Ubuntu, I think.)
Thanks! | 0 | eclipse,python-3.x,settings,pydev,interpreter | 2013-05-05T08:30:00.000 | 1 | 16,382,769 | Use the auto-config option. It will automatically find the libraries. | 0 | 986 | false | 0 | 1 | Choosing Python3.3 interpreter in Eclipse problems | 19,039,512 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming.
But I cannot choose python3.3 iterpreter, - I try to choose it in usr\lib\python3.3 , but:
- when I try to choose PYTHONPATH by clicking "New folder" - window doesn't open (I can do it onl after choosing auto-config, which will add python2.7 pates);
- I don't know the file in usr\lib\python3.3, which I need to choose, as python3.3 interpreter (auto-config returns me only 2.7 objects).
Can you advice me how to choose python3.3 interpreter (maybe the main is the file\path I need to choose in " usr\lib\python3.3" as interpreter file - in windows Eclipse I see python3.3.exe, - I need to find its equal in Ubuntu I think)?
Thanks! | 0 | eclipse,python-3.x,settings,pydev,interpreter | 2013-05-05T08:30:00.000 | 1 | 16,382,769 | You set the path usr\lib\python3.3 by typing it directly in the 'Interpreter Executable' field! You don't have to search for the Interpreter file. This will do the Auto Config for you. Afterwards you declare a name and you're done. | 0 | 986 | false | 0 | 1 | Choosing Python3.3 interpreter in Eclipse problems | 19,208,342 |
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | Would like to know if there is a simple, easy way to have uWSGI pretty print exception messages (for Python specifically, not sure if the settings are particular to Python or not).
Thanks very much! | 0 | python,debugging,exception,uwsgi,pretty-print | 2013-05-06T22:07:00.000 | 0 | 16,408,074 | If you mean getting the exception message in the browser, just add --catch-exceptions
IMPORTANT: it could expose sensitive information; do not use it in production!!! | 0 | 823 | true | 0 | 1 | How to get uWSGI Python exception message pretty printing? | 16,416,281
4 | 8 | 0 | 0 | 56 | 0 | 0 | 0 | Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either since running pyflakes across all the .py files is very fast.
The various answers represent different ways pytest can be slow. They helped sometimes, did not in others. I'm adding one more answer that explains a common speed problem. But it's not possible to select "The" answer here. | 0 | python,performance,pytest | 2013-05-07T11:11:00.000 | 0 | 16,417,546 | Pytest imports all modules in the testpaths directories to look for tests. The import itself can be slow. This is the same startup time you'd experience if you ran those tests directly, however, since it imports all of the files it will be a lot longer. It's kind of a worst-case scenario.
This doesn't add time to the whole test run though, as it would need to import those files anyway to execute the tests.
If you narrow down the search on the command line, to specific files or directories, it will only import those ones. This can be a significant speedup while running specific tests.
Speeding up those imports involves modifying those modules. The size of the module, and the transitive imports, slow down the startup. Additionally look for any code that is executed -- code outside of a function. That also needs to be executed during the test collection phase. | 0 | 20,629 | false | 0 | 1 | How to speed up pytest | 67,071,786 |
4 | 8 | 0 | 2 | 56 | 0 | 0.049958 | 0 | Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either since running pyflakes across all the .py files is very fast.
The various answers represent different ways pytest can be slow. They helped sometimes, did not in others. I'm adding one more answer that explains a common speed problem. But it's not possible to select "The" answer here. | 0 | python,performance,pytest | 2013-05-07T11:11:00.000 | 0 | 16,417,546 | If you have some antivirus software running, try turning it off. I had this exact same problem. Collecting tests ran incredibly slow. It turned out to be my antivirus software (Avast) that was causing the problem. When I disabled the antivirus software, test collection ran about five times faster. I tested it several times, turning the antivirus on and off, so I have no doubt that was the cause in my case.
Edit: To be clear, I don't think antivirus should be turned off and left off. I just recommend turning it off temporarily to see if it is the source of the slow down. In my case, it was, so I looked for other antivirus solutions that didn't have the same issue. | 0 | 20,629 | false | 0 | 1 | How to speed up pytest | 45,336,546 |
4 | 8 | 0 | 3 | 56 | 0 | 0.07486 | 0 | Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either since running pyflakes across all the .py files is very fast.
The various answers represent different ways pytest can be slow. They helped sometimes, did not in others. I'm adding one more answer that explains a common speed problem. But it's not possible to select "The" answer here. | 0 | python,performance,pytest | 2013-05-07T11:11:00.000 | 0 | 16,417,546 | For me, adding PYTHONDONTWRITEBYTECODE=1 to my environment variables achieved a massive speedup! Note that I am using network drives which might be a factor.
Windows Batch: set PYTHONDONTWRITEBYTECODE=1
Unix: export PYTHONDONTWRITEBYTECODE=1
subprocess.run: Add keyword env={'PYTHONDONTWRITEBYTECODE': '1'}
PyCharm already set this variable automatically for me.
Note that the first two options only remain active for your current terminal session. | 0 | 20,629 | false | 0 | 1 | How to speed up pytest | 65,135,225 |
4 | 8 | 0 | 3 | 56 | 0 | 0.07486 | 0 | Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either since running pyflakes across all the .py files is very fast.
The various answers represent different ways pytest can be slow. They helped sometimes, did not in others. I'm adding one more answer that explains a common speed problem. But it's not possible to select "The" answer here. | 0 | python,performance,pytest | 2013-05-07T11:11:00.000 | 0 | 16,417,546 | In bash, try { find -name '*_test.py'; find -name 'test_*.py'; } | xargs pytest.
For me, this brings total test time down to a fraction of a second. | 0 | 20,629 | false | 0 | 1 | How to speed up pytest | 62,249,933 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I am using Raspberry Pi to function as a mini web server. At first, i came across web2py and started to learn it. It was tough for a beginner like me. Later, a friend in a forum introduced CherryPy to me and i started to work on the web application skeleton that he gave me. Soon, I abandoned web2py and proceeded with cherrypy as it is fairly straightforward.
Somehow, I think web2py could be a good choice too. Web applications for both are written in Python, with HTML, CSS and JavaScript. So whatever I've done using CherryPy may be possible to transfer over to web2py (is that true?).
I would like to find out what the main differences between those two are, and their respective pros and cons. I hope to find out more about fellow users' experiences in using web2py and CherryPy. That way, future visitors can make a comparison before they proceed in choosing which one to use. Thank you! | 0 | javascript,python,web2py,cherrypy | 2013-05-08T01:27:00.000 | 0 | 16,431,207 | web2py uses the MVC model, and each of the scripts is nicely separated. It can be deployed at PythonAnywhere.com. Not too sure about CherryPy. | 0 | 1,858 | false | 1 | 1 | Which one is better to create a web application? web2py or cherrypy | 16,431,335
1 | 2 | 0 | 1 | 3 | 0 | 1.2 | 1 | I am not sure if this question belongs here as it may be a little too broad. If so, I apologize. Anyway, I am planning to start a project in python and I am trying to figure out how best to implement it, or if it is even possible in any practical way. The system will consist of several "nodes" that are essentially python scripts that translate other protocols for talking to different kinds of hardware related to i/o, relays to control stuff, inputs to measure things, rfid-readers etc, to a common protocol for my system. I am no programming or network expert, but this part I can handle, I have a module from an old alarm system that uses rs-485 that I can successfully control and read. I want to get the nodes talking to each other over the network so I can distribute them to different locations (on the same subnet for now). The obvious way would be to use a server that they all connect to so they can be polled and get orders to flip outputs or do something else. This should not be too hard using twisted or something like it.
The problem with this is that if this server for some reason stops working, everything else does too. I guess what I would like is some kind of serverless communication, that has no single point of failure besides the network itself. Message brokers all seem to require some kind of server, and I can not really find anything else that seems suitable for this. All nodes must know the status of all other nodes as I will need to be able to make functions based on the status of things connected to other nodes, such as, do not open this door if that door is already open. Maybe this could be done by multicast or broadcast, but that seems a bit insecure and just not right. One way I thought of could be to somehow appoint one of the nodes to accept connections from the other nodes and act as a message router and arrange for some kind of backup so that if this node crashes or goes away, another predetermined node takes over and the other nodes connect to it instead. This seems complicated and I am not sure this is any better than just using a message broker.
As I said, I am not sure this is an appropriate question here but if anyone could give me a hint to how this could be done or if there is something that does something similar to this that I can study. If I am beeing stupid, please let me know that too :) | 0 | python,networking | 2013-05-09T01:29:00.000 | 0 | 16,452,913 | There are messaging systems that don't require a central message broker. You might start by looking at ZeroMQ. | 0 | 438 | true | 0 | 1 | Serverless communication between network nodes in python | 16,453,432 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | Will i have any advantage of using Node.js for task queue worker instead of any other language, like PHP/Python/Ruby?
I want to learn Redis for simple task queue tasks like sending big amounts of email, and I do not want to keep users waiting while connections are established etc.
So the question is: does the async nature of Node.js help in this scenario, or is it irrelevant?
P.S. I know that Node is faster than any of these languages in memory consumption and computation because of the efficient V8 engine; maybe it's possible to win on this front? | 0 | php,python,ruby,node.js,redis | 2013-05-09T07:31:00.000 | 1 | 16,456,682 | I have used Node.js as a task worker for jobs that call runnable web pages written in PHP or run commands on certain hosts. In both these instances Node is just initializing (triggering) the job, then waiting for and evaluating the result. The heavy lifting / CPU-intensive work is done by another system / program.
Hope this helps! | 0 | 1,154 | false | 1 | 1 | Any advantage of using node.js for task queue worker instead of other languages? | 16,471,242 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | Any reason to try and replace it with something else? I'm a beginner to Python, but have encountered problems with C and importing CGI. A list of instances where import cgi would not be the best option would be great for further understanding of language use. | 0 | python,python-3.x,cgi | 2013-05-09T17:33:00.000 | 0 | 16,467,684 | You should choose WSGI instead of CGI. CGI applications are generally slow, as they need re-invocation of the interpreter for every request; WSGI, on the other hand, pools them and is much more efficient. WSGI is also more mainstream. Do a little research on the web and you will get better and more detailed answers.
In the past I have used CGI with Python, but generally the usage has been for image/chart generation, where the core lib was implemented in C/C++. | 0 | 91 | false | 0 | 1 | Any reason not to just import CGI in Python? | 16,467,920
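For contrast, the WSGI calling convention the first answer recommends is small enough to show in full. A minimal sketch (the app does nothing but return a fixed body; you could mount it under mod_wsgi or serve it with wsgiref.simple_server):

```python
def application(environ, start_response):
    # A minimal WSGI app: the coding style stays as simple as CGI,
    # but the server keeps the interpreter resident between requests
    # instead of re-invoking it for every hit.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI"]
```

Because a WSGI app is just a callable taking an environ dict and a start_response callback, it can also be invoked directly in tests without any server.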
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | Any reason to try and replace it with something else? I'm a beginner to Python, but have encountered problems with C and importing CGI. A list of instances where import cgi would not be the best option would be great for further understanding of language use. | 0 | python,python-3.x,cgi | 2013-05-09T17:33:00.000 | 0 | 16,467,684 | List of instances where import cgi would not be the best option
If you're writing a script which will be installed and run as a CGI script on a web server, and you're not using some other framework that replaces it, import cgi is always the best option.
So, the cases where it's not the best option:
You're not writing a CGI script.
You don't need access to FieldStorage or anything else from the gateway.
You're using a framework with its own replacement for cgi.
That's about it.
If you're not sure whether you want to use CGI or not in the first place… you probably don't. If you want the same general style of coding as CGI, WSGI is just as simple, more flexible, and usually faster.
But if you're just starting at this stuff, that may not even be the style you want. Start at the high level. Do you want to template web pages, or serve JSON to JavaScript code that does the dynamic stuff in the browser? What features do you need on your user sessions? And so on. Once you know what you want, then see if there's a framework—Django, Tornado, CherryPy, whatever—that looks like it'll make your design easier. Only then ask yourself whether you want WSGI, CGI, mod_python, an embedded server, … | 0 | 91 | false | 0 | 1 | Any reason not to just import CGI in Python? | 16,468,659 |
1 | 1 | 0 | 1 | 5 | 1 | 0.197375 | 0 | I'm writing a Python module that has only about twenty interesting types and global methods, but lots of constants and exceptions (about 70 constants for locales, 60 constants for encodings, 20 formatting attributes, more than 200 exceptions, and so on). As a result help() on this module produces about 16,000 lines of text and is littered with nearly identical descriptions of each exception. The constants are not that demanding, but it's still difficult to navigate them.
What would be a pythonic way to organize such a module? Just leave it as is and rely on other documentation? Move constants into separate dicts? Into submodules? Add them as class-level constants, where appropriate?
Note that this is a C extension, so I cannot easily add a real submodule here. I've heard that sys.modules doesn't really check whether the object there is a module, so one could add dictionaries there; this way I could probably create mymodule.locales, mymodule.encoding, and mymodule.exceptions and add them to sys.modules when my module is imported. Would this be a good idea, or is it too hackish? | 0 | python,exception,constants,organization | 2013-05-10T11:14:00.000 | 0 | 16,481,015 | There are really a few options for solving your problem. The first approach is to classify all the constants and exceptions into a smaller number of broader categories. This would allow you to easily navigate to the categories you want. A dictionary (or probably nested dictionaries) would be a good way to implement this, as you could maintain groups with titles in them. A second way, if you wanted to customize the management a little more, would be to make a class that acts a bit like a dictionary. It would have a list of child objects. This way, you could make unique, easier-to-access methods to navigate through all of your constants and exceptions, such as a new exception class that handles several similar exceptions. The last way to make it cleaner, which would require access to the source, would be to merge all of those exceptions into a smaller group of exceptions that can each handle groups of similar problems. This would probably be a better way to deal with the exceptions, but you may not have access to the source to modify this.
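The sys.modules idea from the question can be prototyped in pure Python before committing to it in the C extension. A sketch; mymodule, locales, and the constant names are all made up for illustration:

```python
import sys
import types

def make_submodule(parent_name, child_name, **attrs):
    # Build a synthetic module object and register it under sys.modules,
    # which is the trick floated in the question.  Note that attribute
    # access as parent.child additionally requires setting the child as
    # an attribute on the parent module object.
    full_name = "%s.%s" % (parent_name, child_name)
    module = types.ModuleType(full_name)
    for key, value in attrs.items():
        setattr(module, key, value)
    sys.modules[full_name] = module
    return module

# e.g. group the locale constants out of the top-level namespace:
locales = make_submodule("mymodule", "locales", EN_US=1, DE_DE=2)
```

This keeps help() on the top-level module short, since the bulk of the constants live on the synthetic submodules.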
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am writing a Windows python program that needs to query WMI. I am planning to do this by using the subprocess module to call WMIC with the arguments I need.
I see a lot of examples online of using WMI via PowerShell, usually using the "commandlet" Get-WmiObject or the equivalent gwmi.
How do you do the equivalent of Get-WmiObject without using PowerShell, but rather with WMIC?
Specifically, from within CMD.EXE, I want to do powershell gwmi Win32_USBControllerDevice, but without using PowerShell; rather, I want to invoke WMIC directly.
Thanks, and sorry for the beginner question! | 0 | python,windows,wmi,wmic | 2013-05-10T21:27:00.000 | 1 | 16,491,077 | From CMD.EXE, I think the command I need is wmic path Win32_USBControllerDevice get *
So most likely the general pattern is:
PowerShell: gwmi MYCLASSNAME
translates into:
CMD.EXE: wmic path MYCLASSNAME get * | 0 | 1,863 | false | 0 | 1 | Get-WmiObject without PowerShell | 16,491,345 |
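Since the question plans to call WMIC via the subprocess module, the pattern above can be assembled like this. A sketch: the helper names are made up, and the actual query only works on Windows, where wmic.exe exists:

```python
import subprocess

def wmic_args(wmi_class, properties=None):
    # Build the WMIC command line equivalent to PowerShell's `gwmi <class>`:
    #   wmic path <class> get *          (all properties)
    #   wmic path <class> get A,B        (selected properties)
    props = ",".join(properties) if properties else "*"
    return ["wmic", "path", wmi_class, "get", props]

def query_wmic(wmi_class):
    # Windows only: run WMIC and return its raw text output.
    return subprocess.check_output(wmic_args(wmi_class)).decode(errors="replace")
```

For example, `query_wmic("Win32_USBControllerDevice")` would issue the same query as `gwmi Win32_USBControllerDevice`.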
1 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I'm trying to get unit test coverage with Sonar. To do so, I have followed these steps:
Generating report with python manage.py jenkins --coverage-html-report=report_coverage
Setting properties in /sonar/sonar-3.5.1/conf/sonar.properties:
sonar.dynamicAnalysis=reuseReports
sonar.cobertura.reportPath=/var/lib/jenkins/workspace/origami/DEV/SRC/origami/reports/coverage.xml
When I launch the tests, the reports are generated in the right place. However, no unit tests are detected by Sonar.
Am I missing a step or is everything just wrong? | 0 | python,django,jenkins,code-coverage,sonarqube | 2013-05-13T08:46:00.000 | 0 | 16,518,002 | On Jenkins I found that coverage.xml has paths that are relative to the directory in which manage.py jenkins is run.
In my case I need to run unit tests on a different machine than Jenkins. To allow Sonar to use the generated coverage.xml, it was necessary for me to run the tests from a folder in the same spot relative to the project as the workspace directory on Jenkins.
Say I have the following on Jenkins
/local/jenkins/tmp/workspace/my_build
+ my_project
+ app1
+ app2
Say on test machine I have the following
/local/test
+ my_project
+ app1
+ app2
I run unit tests from /local/test on the test machine. Then coverage.xml has the correct relative paths, which look like my_project/app1/source1.py or my_project/app2/source2.py | 0 | 6,621 | false | 1 | 1 | How to get tests coverage using Django, Jenkins and Sonar? | 19,887,503 |
1 | 1 | 0 | 3 | 4 | 0 | 1.2 | 0 | I am using ipdb to debug a python script.
I want to print a very long variable. Is there any ipdb pager like more or less used in shells?
Thanks | 0 | python,debugging,printing,pager,pdb | 2013-05-14T11:20:00.000 | 1 | 16,541,847 | You might want to create a function which accepts a text, puts this text into a temporary file, and calls os.system('less %s' % temporary_file_name).
To make it easier for everyday use: Put the function into a file (e.g: ~/.pythonrc) and specify it in your PYTHONSTARTUP.
Alternatively you can just install bpython (pip install bpython), and start the bpython shell using bpython. This shell has a "pager" functionality which executes less with your last output. | 0 | 725 | true | 0 | 1 | Is there any ipdb print pager? | 16,565,699 |
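The temp-file-plus-less idea from the answer can be wrapped in a small helper and dropped into the PYTHONSTARTUP file it mentions. A sketch (`page` is a made-up name; inside ipdb you would then call something like page(repr(very_long_variable))):

```python
import os
import tempfile

def page(text, pager="less"):
    # Write `text` to a temporary file, hand it to an external pager,
    # and clean up afterwards.  Returns the pager's exit status.
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        return os.system("%s %s" % (pager, path))
    finally:
        os.remove(path)
```

The pager is swappable, so any program that accepts a filename argument (less, more, vim -R) works.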
1 | 3 | 0 | 5 | 2 | 1 | 0.321513 | 0 | Well, that's the question. Are there any projects for other languages which try to imitate what Stackless Python is doing for Python? | 0 | python,haskell,compiler-construction,lisp,interpreter | 2013-05-14T12:16:00.000 | 0 | 16,542,897 | If you mean the stackless compilation with lightweight concurrency, Haskell has done that from the very beginning. IIRC the first compilation scheme for Haskell was called the G-machine. Later that was replaced by the STG-machine. This is actually necessary for efficient laziness, but easy concurrency and parallelism come as an additional bonus.
Another notable language in this sector is Erlang and its bad joke imitation language Go, as well as continuation-based languages like Scheme. Unlike Haskell they don't use an STG compilation scheme. | 0 | 372 | false | 0 | 1 | Are there any Stackless Python like projects for other languages (Java, Lisp, Haskell, Go etc) | 16,543,659 |
2 | 3 | 0 | 4 | 2 | 1 | 0.26052 | 0 | Well, that's the question. Are there any projects for other languages which try to imitate what Stackless Python is doing for Python? | 0 | python,haskell,compiler-construction,lisp,interpreter | 2013-05-14T12:16:00.000 | 0 | 16,542,897 | Both Haskell and Erlang contain (in the standard implementation) microthreads/green threads with multi-core support, a preemptive scheduler, and some analogue of channels. The only rather unique feature of Stackless that I can think of is serialization of threads, although you can sometimes fake it by providing a way of serializing function state. | 0 | 372 | false | 0 | 1 | Are there any Stackless Python like projects for other languages (Java, Lisp, Haskell, Go etc) | 16,543,577
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I'd like to know how to perform an action every hour in Python. My Raspberry Pi should send me information about the temperature and so on every hour. Is this possible?
I am new to python and linux, so a detailed explanation would be nice. | 0 | python,raspberry-pi,schedule | 2013-05-14T12:52:00.000 | 0 | 16,543,715 | The easiest way would be to set up a cron job to call a python script every hour. | 0 | 7,805 | false | 0 | 1 | How to schedule an action in python? | 16,543,818 |
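The cron route is the answer's suggestion (a crontab line along the lines of `0 * * * * python /home/pi/report.py`, where the path is hypothetical). If you would rather keep everything inside one long-running Python script instead, a plain loop also works. A sketch:

```python
import time

def run_every(task, interval=3600, iterations=None):
    # Call task() once per `interval` seconds.  `iterations` caps the
    # loop so the function can be exercised in tests; None means forever.
    done = 0
    while iterations is None or done < iterations:
        task()
        done += 1
        if iterations is None or done < iterations:
            time.sleep(interval)

# run_every(report_temperature)   # report_temperature is your own function
```

Note that a sleep loop drifts slightly over time and dies with the process, which is why cron is usually the more robust choice for a Pi.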
2 | 2 | 0 | 13 | 12 | 1 | 1.2 | 0 | My python program consists of several files:
the main execution python script
python modules in *.py files
config file
log files
executables scripts of other languages.
All these files should be available only to root. The main script should run on startup, e.g. via upstart.
Where should I put all these files in the Linux filesystem?
What's the best way to distribute my program? pip, easy_install, deb, ...? I haven't worked with any of these tools, so I want something easy for me.
The minimum supported Linux distribution should be Ubuntu. | 0 | python,linux,open-source | 2013-05-15T12:45:00.000 | 1 | 16,565,363 | For sure, if this program is to be available only to root, then the main execution Python script has to go in /usr/sbin/.
Config files ought to go to /etc/, and log files to /var/log/.
Other python files should be deployed to /usr/share/pyshared/.
Executable scripts of other languages will go either in /usr/bin/ or /usr/sbin/ depending on whether they should be available to all users, or for root only. | 0 | 14,785 | true | 0 | 1 | Where I should put my python scripts in Linux? | 16,565,499 |
2 | 2 | 0 | 1 | 12 | 1 | 0.099668 | 0 | My python program consists of several files:
the main execution python script
python modules in *.py files
config file
log files
executables scripts of other languages.
All these files should be available only to root. The main script should run on startup, e.g. via upstart.
Where should I put all these files in the Linux filesystem?
What's the best way to distribute my program? pip, easy_install, deb, ...? I haven't worked with any of these tools, so I want something easy for me.
The minimum supported Linux distribution should be Ubuntu. | 0 | python,linux,open-source | 2013-05-15T12:45:00.000 | 1 | 16,565,363 | If only root should access the scripts, why not put them in /root/?
Secondly, if you're going to distribute your application you'll probably need easy_install or something similar, otherwise just tar.gz the stuff if only a few people will access it?
It all depends on your scale..
Pyglet, wxPython, and similar have a huge user base; same for BeautifulSoup, but they still tar.gz the stuff and you just use setuptools to deploy it (which is another option). | 0 | 14,785 | false | 0 | 1 | Where I should put my python scripts in Linux? | 16,565,490
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I want to design a website with minimal front end and a backend for periodic processing. Some 200 users will use the site.I have choosen php vs python. I have done few defect fixes in PHP and some automation scription in python and I have absolutely no web development experience. But I have application development experience in c++.
I want to develop the site with ease and minimum effort (no CMS, as I want to learn the language). Can anyone suggest which one to choose? | 0 | php,python,web | 2013-05-15T13:31:00.000 | 0 | 16,566,430 | I personally think that if you have little web development experience you should go with PHP. You can embed it directly in your HTML, and perhaps that will make it easier for you to understand. That's of course if you don't want to make complicated websites (yet).
After you familiarise yourself with web development, you can then decide again whether to use PHP or Python depending on the platform you want to use and what you want to achieve.
Moreover if you have C++ experience, PHP's syntax is IMO closer to C++. | 0 | 164 | true | 0 | 1 | from c++ development to PHP or Python | 16,566,525 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | I want to check if a certain tweet is a reply to the tweet that I sent. Here is how I think I can do it:
Step1: Post a tweet and store id of posted tweet
Step2: Listen to my handle and collect all the tweets that have my handle in it
Step3: Use tweet.in_reply_to_status_id to see if tweet is reply to the stored id
In this logic, I am not sure how to get the status id of the tweet that I am posting in step 1. Is there a way I can get it? If not, is there another way in which I can solve this problem? | 0 | python,twitter,tweepy | 2013-05-15T20:51:00.000 | 0 | 16,574,746 | What one could do is get the last n tweets from a user and then take the tweet.id of the relevant tweet. This can be done like so:
latestTweets = api.user_timeline(screen_name='user', count=n, include_rts=False)
I, however, doubt that it is the most efficient way. | 0 | 1,649 | false | 0 | 1 | How to get id of the tweet posted in tweepy | 16,589,445 |
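To the original question: in tweepy, api.update_status(...) returns a Status object, and its id attribute is the value to store in step 1. A sketch tying the three steps together (the api calls are left as comments since they need live credentials, and the filter below is just the step-3 check):

```python
def replies_to(tweets, posted_id):
    # Step 3: keep only tweets that are direct replies to the stored id.
    return [t for t in tweets
            if getattr(t, "in_reply_to_status_id", None) == posted_id]

# Step 1 (needs live credentials); the posted Status carries its own id:
#   posted = api.update_status("hello world")
#   posted_id = posted.id
# Step 2: collect tweets mentioning your handle, e.g. api.mentions_timeline(),
# then: replies = replies_to(mentions, posted_id)
```

This avoids having to fetch your own timeline afterwards just to recover the id.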
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | In the Visual Event description, it says that it extracts "which elements have events attached to them". I can confirm this by running the bookmarklet and seeing all the colour highlights.
I would like to extract this information without the fancy presentation so that I can play around with it in a script (Ruby/Python/Perl). In other words, I would like to get a list of the divs (and their info, ideally) from Visual Event.
Is there any way to do this without digging through the code on GitHub? Not to say that I'm not willing to do this, I was just wondering if there was an easier way. | 0 | javascript,python,ruby,perl | 2013-05-16T01:43:00.000 | 0 | 16,577,725 | There is no way to accomplish this very oddly specific task without digging through the code, although this isn't as hard as it seems considering it's quite legible and easy to build on your own system, even if you don't have any previous experience with JavaScript. | 0 | 67 | true | 1 | 1 | Extracting Visual Event 2 output into script | 16,579,794 |
1 | 6 | 0 | 2 | 5 | 1 | 0.066568 | 0 | I'm currently looking for a mature GA library for Python 3.x, but the only GA libraries I can find are pyevolve and pygene, and they both support Python 2.x only. I'd appreciate it if anyone could help. | 0 | python,genetic-algorithm | 2013-05-16T12:14:00.000 | 0 | 16,587,145 | Not exactly a GA library, but the book "Genetic Algorithms with Python" by Clinton Sheppard is quite useful, as it helps you build your own GA library tailored to your needs. | 0 | 11,702 | false | 0 | 1 | Any Genetic Algorithms module for python 3.x? | 45,485,156
1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | I am trying to find a Python library that can detect the model of a graphics card. A better graphics card has a higher score associated with it, because I need to configure a game's display based on the performance of the graphics card. | 0 | python,graphic | 2013-05-16T15:25:00.000 | 0 | 16,591,538 | If you take this approach, you will have to maintain an ever-changing database of graphics cards and the performance capabilities of each. Typically, options are available to the user for changing texture quality, terrain/water detail, shadows, lighting, anti-aliasing, etc. There is also usually a test where the game renders a scene, and based on the frame rate the game can set ideal presets for most of the graphics options. | 0 | 252 | false | 0 | 1 | Python Library Can Detect Graphic Card | 22,495,380
1 | 3 | 0 | 18 | 1 | 1 | 1 | 0 | Is it possible to run Python code from within the vim editor?
What is necessary to install the support along with Python syntax highlighting?
How would I install "python.vim : Enhanced version of the python syntax highlighting script" ?
The ~/.vim/syntax directory was not created automatically, and I'm using a Mac; all I downloaded was the .app file, an executable whose purpose I don't know, and a readme file.
I've also tried creating a folder for the python.vim file, but that didn't work out either. | 0 | python,vim | 2013-05-16T20:49:00.000 | 0 | 16,597,216 | Personally:
When inside Vim editing my Python scripts, I simply hit CtrlZ so as to return in console mode.
Run my script with command $ python my_script.py.
When done, I enter $ fg in the command line and that gets me back inside Vim, in the state I was before hitting CtrlZ. (fg as in foreground)
Edit
Recently I have started using the :terminal mode of vim much more frequently.
I tend to prefer it to Ctrl-Z because it may happen that I forget that I used Ctrl-Z and open an additional vim session, which can become messy. Also, having a terminal pane makes it easier to deal with line numbers in error messages, since the two views are available at the same time.
So the workflow I'm using nowadays has become:
:terminal (in my case I have a vim mapping with the leader key, <leader>tm :terminal<cr>, so that I don't even type :terminal manually)
Run my script with command $ python my_script.py.
$ exit in the bash command line if I want to close the terminal pane | 0 | 11,604 | false | 0 | 1 | Can Python be run from within the vim editor? | 16,606,672 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I would like to make an authentication system where users or an administrator can choose which login system they prefer. The problem is that different systems have different Login-Systems and different client-informations.
I thought I would make a simple User class in the C++ application, and an administrator can extend this class with one or more of their own user login systems in Python. Of course this service runs on a server.
How can I organize the different login systems on the server and automatically use the preferred login system with the correct user-information class in the client application? | 0 | c++,python,login,operating-system | 2013-05-17T07:53:00.000 | 0 | 16,604,125 | Assuming that you would like to support login systems such as user/password, LDAP, OpenID, OAuth, etc., you have to model your authentication layer to be able to support all these mechanisms. I usually consider the above authentication methods as strategies.
Let's say you have an Authentication class with an authenticate method which accepts an object that implements an interface "AuthStrategy", and the various authentication methods can implement this interface.
Hope the object model is clear. | 0 | 398 | true | 0 | 1 | handling multiple login systems | 16,604,457 |
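The object model described in the answer might be sketched in Python like this (all class and field names are illustrative only; a real PasswordAuth would talk to an actual user store, and LDAP or OAuth strategies would plug in the same way):

```python
from abc import ABC, abstractmethod

class AuthStrategy(ABC):
    """Interface that every login mechanism implements."""

    @abstractmethod
    def authenticate(self, credentials):
        """Return True if the credentials are valid for this mechanism."""

class PasswordAuth(AuthStrategy):
    def __init__(self, user_store):
        # {username: password} dict as a stand-in for a real backend.
        self._users = user_store

    def authenticate(self, credentials):
        username, password = credentials
        return self._users.get(username) == password

class Authentication(object):
    """Accepts any AuthStrategy; the caller never knows which mechanism runs."""

    def __init__(self, strategy):
        self._strategy = strategy

    def authenticate(self, credentials):
        return self._strategy.authenticate(credentials)
```

An administrator's choice of login system then reduces to which strategy object gets handed to Authentication.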
1 | 1 | 1 | 1 | 0 | 1 | 1.2 | 0 | I've recently discovered IronPython in C# and only tutorials I found were how to use python script in C#, but I've noticed, that IronPython has classes and methods you can use directly in C# like : PythonIterTools.product some_pr = new PythonIterTools.product(); and others, can anyone explain how does this work? | 0 | c#,ironpython | 2013-05-20T09:11:00.000 | 0 | 16,646,135 | Parts of IronPython's standard library are implemented in C#, mainly because the equivalents in CPython are written in C. You can access those parts directly from a C# (or any other static .NET language) directly, but they're not intended to be used that way and may not be easy to use. | 0 | 223 | true | 0 | 1 | Using IronPython in C# | 16,652,356 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I’m writing a cherrypy application that needs to redirect to a particular page and I use HTTPRedirect(‘mynewurl’, status=303) to achieve this. This works inasmuch as the browser (Safari) redirects to ‘mynewurl’ without asking the user. However, when I attempt to unit test using nosetests with assertInBody(), I get a different result; assertInBody reports that ‘This resource can be found at mynewurl’ rather than the actual contents of ‘mynewurl’. My question is how can I get nosetests to behave in the same way as a Safari, that is, redirecting to a page without displaying an ‘ask’ message?
Thanks
Kevin | 0 | python,cherrypy,nose,nosetests | 2013-05-20T15:01:00.000 | 0 | 16,652,406 | With Python unit tests, you are basically testing the server, and the correct response from the server is the redirect exception, not the redirected page itself. I would recommend testing this behaviour in two steps:
test that the first page/url throws a correctly initialized (code, url) HTTPRedirect exception
test the contents of the second page (the one being redirected to)
But of course, if you insist, you can resolve the redirect yourself in a try/except by inspecting the exception attributes and calling the testing method on the target url again. | 0 | 190 | true | 1 | 1 | Unit testing Cherrypy HTTPRedirect. | 16,652,717
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | So I have been taking a few classes on python and the whole time, I was wondering about modules. I can install them and run them with Eclipse but if I compile that program, so if it has an 'exe' extension, how would the module react on a computer that doesn't have it installed.
Example:
Say I made some random little thing with something like pygame: I installed the pygame module on my computer, made an application with it, and compiled it into an executable. What happens on the other computer that I run that file on? Or does it not work at all? | 0 | python,module | 2013-05-22T02:13:00.000 | 0 | 16,682,410 | Python modules are already executable - you don't compile them. If you want to run them on another computer, you can install Python and any other dependent modules, such as pygame, on that computer, copy the scripts over, and run them.
Python has many ways to wrap scripts up into an installer to do the work for you. It's common to use Python's distutils to write a setup.py file which handles the install. From there you can use setup.py to bundle your scripts into zip files, tarballs, executables, rpms, etc., for other machines. You can document what the user needs to make your stuff go, or you can use something like pip or distribute to write dependency files that automatically install pygame (and so on).
There are many ways to handle this, and it's not particularly easy the first time round. For starters, read up on distutils in the standard Python docs and then google for the pip installer. | 0 | 148 | false | 0 | 1 | Python modules on different devices | 16,682,782
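A minimal distutils setup.py along the lines the answer describes might look like the sketch below. This is a config sketch only; every name in it is a placeholder:

```python
# setup.py -- minimal distutils sketch; all names and versions are placeholders.
# Run e.g.:  python setup.py sdist   (or install, bdist, ...)
from distutils.core import setup

setup(
    name="mygame",                  # placeholder project name
    version="0.1",
    py_modules=["mygame"],          # your module(s)
    scripts=["play_mygame.py"],     # entry-point script installed onto PATH
)
```

Running `python setup.py sdist` on such a file produces a tarball that another machine can install with `python setup.py install`.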
2 | 4 | 0 | 0 | 5 | 0 | 0 | 0 | I am trying to come up with a small Python script to monitor the battery state of my Ubuntu laptop and sound alerts if it's not charging, as well as do other stuff (such as suspend, etc.).
I really don't know where to start, and would like to know if there is any Python library I can use.
Any help would be greatly appreciated.
Thanks | 0 | python,linux,ubuntu | 2013-05-22T19:15:00.000 | 1 | 16,699,883 | You do not need to use any module for this.
You can simply navigate to
/sys/class/power_supply/BAT0.
Here you will find a lot of files with information about your battery.
You will get the current charge from the charge_now file and the total charge from the charge_full file.
Then you can calculate battery percentage by using some math.
Note: you may need root access for this. You can use the sudo nautilus command to open directories in root mode. | 0 | 5,179 | false | 0 | 1 | Use Python to Access Battery Status in Ubuntu | 56,511,789
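Reading those sysfs files and doing the math might look like the sketch below. The arithmetic is split into its own function, and the helper names are made up; also note that some machines expose energy_now/energy_full instead of the charge_* files, so check which ones your battery actually provides:

```python
def battery_percent(charge_now, charge_full):
    # Plain percentage, guarded against a zero/absent capacity reading.
    if charge_full <= 0:
        return 0.0
    return 100.0 * charge_now / charge_full

def read_battery(base="/sys/class/power_supply/BAT0"):
    # Read the two files the answer points at and compute the percentage.
    def read_int(name):
        with open("%s/%s" % (base, name)) as f:
            return int(f.read().strip())
    return battery_percent(read_int("charge_now"), read_int("charge_full"))
```

A monitoring script could then call read_battery() in a loop and sound an alert when the value drops below a threshold.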
2 | 4 | 0 | 0 | 5 | 0 | 0 | 0 | I am trying to come up with a small Python script to monitor the battery state of my Ubuntu laptop and sound alerts if it's not charging, as well as do other stuff (such as suspend, etc.).
I really don't know where to start, and would like to know if there is any Python library I can use.
Any help would be greatly appreciated.
Thanks | 0 | python,linux,ubuntu | 2013-05-22T19:15:00.000 | 1 | 16,699,883 | The the "power" library on pypi is a good bet, it's cross platform too. | 0 | 5,179 | false | 0 | 1 | Use Python to Access Battery Status in Ubuntu | 39,884,293 |
1 | 1 | 0 | 0 | 4 | 0 | 0 | 0 | I am a dummy in web apps. I have a doubt regaring the functioning of apache web server. My question is mainly centered on "how apache handles each incoming request"
Q: When Apache is running in mod_python/mod_php mode, does a "fork" happen for each incoming request?
If it forks in the mod_php/mod_python way, then where is the advantage over CGI mode, apart from the fact that the forked process in the mod_php way already contains an interpreter instance?
If it doesn't fork each time, how does it actually handle each incoming request in the mod_php/mod_python way? Does it use threads?
PS: Where does FastCGI stand in the above comparison? | 0 | apache,webserver,cgi,mod-python,mod-php | 2013-05-25T07:06:00.000 | 1 | 16,747,301 | With a modern version of Apache, unless you configure it in prefork mode, it should run threaded (and not fork). mod_python is thread-safe and doesn't require that each instance of it be forked into its own space. | 0 | 459 | false | 0 | 1 | Does Apache really "fork" in mod_php/python way for request handling? | 21,819,195
1 | 2 | 0 | 3 | 1 | 1 | 1.2 | 0 | I have heard many times that C and Python/Ruby code can be integrated.
Now, my question is, can I use, for example a Python/Ruby ORM from within C? | 0 | python,c,ruby | 2013-05-25T16:34:00.000 | 0 | 16,751,639 | Yes, but the API would be unlikely to be very nice, especially because the point of an ORM is to return objects and C doesn't have objects, hence making access to the nice OOP API unwieldy.
Even in C++ it would be problematic, as the objects would be Python/Ruby objects and the values Python/Ruby objects/values, and you would need to convert back and forth.
You would be better off using a nice database layer especially made for C. | 0 | 70 | true | 0 | 1 | Can I use a Python/Ruby ORM inside C? | 16,751,766 |
1 | 2 | 0 | 0 | 5 | 1 | 0 | 0 | When writing unit tests, it often happens that some tests sort of "depend" on other tests.
For example, let's suppose I have a test that checks I can instantiate a class. I have other tests that go right ahead and instantiate it and then test other functionality.
Let's also suppose that the class fails to instantiate, for whatever reason.
This results in a ton of tests giving errors. This is bad, because I can't see where the problem really is. What I need is a way of skipping these tests if my instantiation test has failed.
Is there a way of doing this with Python's unittest module?
If this isn't what I should do, what should I do so as to see where the problem really is when something breaks? | 0 | python,unit-testing | 2013-05-26T15:34:00.000 | 0 | 16,760,786 | I have no suggestion how to avoid running "dependent" tests, but I have a suggestion how you might better live with them: Make the dependencies more apparent and therefore make it easier to analyse test failures later. One simple possibility is the following:
In the test-code, you put the tests for the lower-level aspects at the top of the file, and the more dependent tests further to the bottom. Then, when several tests fail, first look at the test that is closest to the top of the file. | 0 | 500 | false | 0 | 1 | Ignore unittests that depend on success of other tests | 53,884,422 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Im trying to make a algorithm in python to detect if my phone is in the area. Im using this to find my device:
bluetooth.discover_devices()
But it only detects my phone if I set my Bluetooth on my phone to "visible".
Is there a function or command to detect my phone when it's set to hidden?
I'm fairly new to Python, so any form of help is very welcome!
Thanks in advance! | 0 | python,bluetooth,hidden,device | 2013-05-28T14:29:00.000 | 0 | 16,794,658 | You could attempt to connect to your phone. If it's nearby, the connection will succeed. Devices can be connectable when they are not discoverable. You would have to already know the device address of your phone (via discovery when your phone was visible) in order to initiate the connection. | 0 | 1,954 | false | 0 | 1 | How to find bluetooth devices not set to visible in python? | 16,795,191 |
1 | 4 | 0 | 1 | 4 | 1 | 0.049958 | 0 | I am creating an application related to files. And I was looking for ways to compute checksums for files. I want to know what's the best hashing method to calculate checksums of files md5 or SHA-1 or something else based on this criterias
The checksum should be unique. I know it's theoretical, but I still want the probability of collisions to be very, very small.
I can treat two files as equal if their checksums are equal.
Speed (not very important, but still)
Please feel free to be as elaborate as possible. | 0 | python,django,file,checksum | 2013-05-28T18:37:00.000 | 0 | 16,799,088 | MD5 tends to work great for checksums ... same with SHA-1 ... both have a very small probability of collisions, although I think SHA-1 has a slightly smaller collision probability since it uses more bits
if you are really worried about it, you could use both checksums (one MD5 and one SHA-1); the chance that both match while the files differ is infinitesimally small (still not 100% impossible, but very, very unlikely) ... (this seems like bad form and is by far the slowest solution)
typically (read: in every instance I have ever encountered) an MD5 OR an SHA1 match is sufficient to assume uniqueness
there is no way to 100% guarantee uniqueness short of a byte-by-byte comparison | 0 | 2,494 | false | 0 | 1 | File Checksums in Python | 16,799,533
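The "use both" idea from the answer costs almost nothing extra if the file is read once and both digests are fed from the same pass. A sketch using chunked reads so that large files never load into memory:

```python
import hashlib

def file_checksums(path, chunk_size=1 << 20):
    # Read the file once in 1 MiB chunks (memory stays flat on big files)
    # and feed both digests from the same pass.
    md5 = hashlib.md5()
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()
```

Comparing the returned pair for two files then gives the "both must match" equality check discussed above.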
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I'm looking for a game which will allow me to test various artificial intelligence, reinforcement learning, and machine learning algorithms. It would be great if there were good documentation or even a helpful framework for writing AI. I know about TORCS, but do you know of other games? It doesn't matter which language it is written in. It can be any arcade game, simulator, FPS, etc. | 0 | java,c++,python,machine-learning,artificial-intelligence | 2013-05-29T11:57:00.000 | 0 | 16,813,243 | Quake 3 is an ideal candidate for bot design.
open source code base.
Realistic scenario (compared to Robocode, which is a toy domain).
existing bots, and I believe the first bots used in Quake 3 were the output of a Ph.D.
lots of documentation. | 0 | 520 | false | 0 | 1 | Game which allows to test AI algorithms | 16,813,957 |
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | I'm using Python with a Cygwin environment to develop data processing scripts and Python packages I'd like to actively use the scripts while also updating the packages on which those scripts depend. My question is what is the best practice, recommendation for managing the module loading path to isolate and test my development changes but not affect the working of a production script.
Python imports modules in the following order (see M. Lutz, Learning Python)
Home directory.
PYTHONPATH directories.
Standard library directories.
The contents of any *.pth file.
My current solution is to install my packages in a local (not in /usr/lib/python2.x/ ) site-packages directory and add a *.pth file in the global site-packages directory so these are loaded by default. In the development directory I then simply modify PYTHONPATH to load the packages I'm actively working on with local changes.
Is there a more standard way of handling this situation? Setting up a virtualenv or some other way of manipulating the module load path? | 0 | python | 2013-05-29T15:10:00.000 | 0 | 16,817,623 | This is just my opinion, but I would probably use a combination of virtualenvs and Makefiles/scripts in this case. I haven't done it for your specific use case, but I often set up multiple virtualenvs for a project, each with a different python version. Then I can use Makefiles to run my code or tests in one or all of my virtualenvs. Seems like it wouldn't be too hard to set up a makefile that would let you type make devel to run in the development envionment, and make production for the production environment.
Alternatively, you could use git branches to do this. Keep your production scripts on master, and use feature branches to isolate and test changes while still having your production scripts just a git checkout master away. | 0 | 170 | false | 0 | 1 | Separate Python paths for development and production | 16,817,965 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am trying to find out if it is possible to send a message over Bluetooth to consoles like the PlayStation 3 to make it turn on or off, since it is possible to do from controllers. I have been reading around about it, but was wondering if there is any information or examples that could help, as all I could find was Python code, which I am not really a pro in.
Any information will be much appreciated. | 0 | java,android,python,bluetooth,playstation | 2013-05-29T15:17:00.000 | 0 | 16,817,777 | I'm pretty sure you need an emulator for this. Correct me if I'm wrong. | 0 | 150 | false | 1 | 1 | Android to Playstation and other consoles | 16,818,550
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | PyDev has its own jython interpreter, inside pydev.jython.VERSION
that jython has its own python libraries i.e. pydev.jython.VERSION/LIB/zipfile.py
Now if I write a jython script for pydev-jython-scripting, it will load only its internal Lib pydev.jython.VERSION/LIB/
How do I have this PyDev Jython recognize PYTHONPATH? I tried appending to sys.path, but there is some Python version problem (some invalid syntax).
My system python installation has all the .py source, my pydev interpreter configuration has python interpreter setup and NOT jython and NOT ironpython
pydev-jython script does not recognize many of the regular system python modules, why?
If you need to use a different version, you'd need to first update the version used inside PyDev itself (it wasn't updated so far because the current Jython size is too big -- PyDev has currently 7.5 MB and just the newer Jython jar is 10 MB -- with libs it goes to almost 16 MB, so making PyDev have 22 MB just for this upgrade is something I'm trying to avoid... now, I think there's probably too much bloat there in Jython, so, if that can be removed, it's something that may be worth revisiting...). | 0 | 205 | false | 0 | 1 | pydev eclipse, jython scripting , syspath | 18,384,815 |
1 | 2 | 0 | 1 | 4 | 1 | 0.099668 | 0 | my function reads from a file, and a doctest needs to be written in a way independent of an absolute path. What's the best way of wrting a doctest? Writing a temp file is expensive and not failproof. | 0 | python,doctest | 2013-05-30T08:50:00.000 | 0 | 16,831,701 | Your doctest could use module StringIO to provide a file object from a string. | 0 | 1,135 | false | 0 | 1 | How to write a doctest for a function that reads from a file? | 16,831,858 |
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I have a question which has been on my mind for a while. I'm aware that languages like C are faster than Python and are therefore used to write operating systems. I've read somewhere that an operating system written in Python will be very slow. So here is my question:
As processor speed is continuously improved, does the execution speed of a particular language become less of a factor in operating system development? Will it be possible, in the future, to write an operating system solely in Python that will run at almost the same speed as one written in C? Thank you. | 0 | python | 2013-05-31T09:12:00.000 | 0 | 16,853,789
But then... You can write operating systems in dynamic languages. And people do. Once you bootstrap the interpreter. But this won't become mainstream. At least not anytime soon. Because: The mainstream operating systems are already... well... mainstream. And people want to use all that processing power in their new processors for... um... processing stuff. And not for providing the underpinnings to... um... process stuff. | 0 | 190 | false | 0 | 1 | Python's speed in operating system development | 16,853,869 |
I have created a python script that uses selenium to automate an online task. The script works perfectly on my local machine (Windows 7) and gives the output I am looking for. I am now trying to get it up and running from PHP on my HostMonster shared server, which is running Linux, and having no luck.
I have installed this version of selenium on both my win7 comp and the server: pypi.python.org/pypi/selenium
Python version: 2.7.5
The script i wrote gets the following error at "import selenium":ImportError: No module named selenium
When i log into the server through ssh shell, i can type in "import selenium" and receive no errors. I can also type in "from selenium import webdriver" in the ssh shell and receive no errors.
Any help/guidance would be greatly appreciated. | 0 | php,python,selenium,hostmonster | 2013-06-02T09:08:00.000 | 0 | 16,881,335 | when i enter
import sys
and then
print sys.path
into ssh shell I receive the following:
['', '/home2/klickste/python/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/mechanize-0.2.5-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/html2text-3.200.3-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg', '/home2/klickste/python/lib/python27.zip', '/home2/klickste/python/lib/python2.7', '/home2/klickste/python/lib/python2.7/plat-linux2', '/home2/klickste/python/lib/python2.7/lib-tk', '/home2/klickste/python/lib/python2.7/lib-old', '/home2/klickste/python/lib/python2.7/lib-dynload', '/home2/klickste/python/lib/python2.7/site-packages'] | 0 | 238 | false | 0 | 1 | Import selenium error on hostmonster shared linux server | 16,975,487 |
I have created a python script that uses selenium to automate an online task. The script works perfectly on my local machine (Windows 7) and gives the output I am looking for. I am now trying to get it up and running from PHP on my HostMonster shared server, which is running Linux, and having no luck.
I have installed this version of selenium on both my win7 comp and the server: pypi.python.org/pypi/selenium
Python version: 2.7.5
The script i wrote gets the following error at "import selenium":ImportError: No module named selenium
When i log into the server through ssh shell, i can type in "import selenium" and receive no errors. I can also type in "from selenium import webdriver" in the ssh shell and receive no errors.
Any help/guidance would be greatly appreciated. | 0 | php,python,selenium,hostmonster | 2013-06-02T09:08:00.000 | 0 | 16,881,335 | I have resolved the issue. I used the following command to install selenium outside of the python folder.
easy_install --prefix=$HOME/.local/ selenium
I also added these lines at the bottom of my .bashrc file located in my home directory
export PYTHONPATH=$HOME/.local/lib/python/site-packages:$PYTHONPATH
export PYTHONPATH=$HOME/.local/lib/python2.7/site-packages:$PYTHONPATH
export PATH=$HOME/.local/bin:$PATH | 0 | 238 | true | 0 | 1 | Import selenium error on hostmonster shared linux server | 16,991,512 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am building a back end that will handle requests from web apps and mobile device apps.
I am trying to decide if a TCP server is appropriate for this vs. regular HTTP GET and POST requests.
Use case 1:
1. Client on mobile device executes a search on the device for the word "red".
Word sent to server (unclear whether JSON or TCP somehow)
The word red goes to the server and the server pulls all rows from a mysql db that have red as their color (this could be ~5000 results).
Alternate step 2 (maybe TCP should make more sense here): there is a hashmap built with the word red as the key and the value a pointer to an array of all the objects with the word red (I think this will be a faster look up time).
Data is sent to the phone (either JSON or some other way, not sure). I am unclear on this step.
The phone parses, etc...
There is a possibility that I may want to keep the array alive on the server until the user finishes the query (since they could continue to filter down results).
Based on this example, what is the architecture I should be looking at?
Any different way is highly appreciated.
Thank you | 0 | python,http,ftp | 2013-06-03T03:51:00.000 | 0 | 16,889,768 | In your case I would use the HTTP because:
Your service is stateless.
If you use TCP you will have problems scaling up your service, since every request will be directed to the server that established the TCP connection; this relates to your service being stateless. With HTTP you just add more servers behind a load balancer
For TCP you will need to pick some port, which can be blocked due to firewalls etc. - you can use port 80/8080, but I don't think this is good practice
If your service were more like suggestions that change as the user types in his word, you may want to use a TCP/HTTP socket
TCP is used for more long-term connections - like a security system that reports the state of the system every X seconds - which is not the case here | 0 | 110 | true | 0 | 1 | Should I build a TCP server or use simple http messages for a back-end? | 16,889,799
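To make the statelessness argument concrete, here is a minimal standard-library sketch: the in-memory ITEMS list stands in for the MySQL table from the question, and because each GET carries the full query, any server replica behind a load balancer could answer it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Toy in-memory "database"; in the question this would be the MySQL table.
ITEMS = [{"name": "apple", "color": "red"}, {"name": "sky", "color": "blue"}]

class SearchHandler(BaseHTTPRequestHandler):
    # Stateless: each GET carries the whole query, so any replica behind a
    # load balancer can serve it without a long-lived per-client connection.
    def do_GET(self):
        color = parse_qs(urlparse(self.path).query).get("color", [""])[0]
        body = json.dumps([i for i in ITEMS if i["color"] == color]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To serve for real: HTTPServer(("", 8080), SearchHandler).serve_forever()
```

Filtering further (e.g. narrowing "red" results) just means another self-contained GET; no per-client array needs to stay alive on the server.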
1 | 1 | 0 | 0 | 2 | 0 | 0 | 1 | I have a script that downloads a lot of fairly large (20MB+) files. I would like to be able to check if the copy I have locally is identical to the remote version. I realize I can just use a combination of date modified and length, but is there something even more accurate I can use (that is also available via paramiko) that I can use to ensure this? Ideally some sort of checksum?
I should add that the remote system is Windows and I have SFTP access only, no shell access. | 0 | python,sftp,checksum,paramiko | 2013-06-03T16:41:00.000 | 0 | 16,901,650 | I came with a similar scenario. the solution I currently take is to compare the remote file's size by using item.st_size for item in sftp.listdir_attr(remote_dir) with the local file's size by using os.path.getsize(local_file). when the two files are around 1MB or smaller,this solution is fine. However, a weird thing might happen: when the files are around 10MB or larger, the two size might differ slightly,e.g., one is 10000 Byte, another is 10003 Byte. | 0 | 614 | false | 0 | 1 | Python + Paramiko - Checking whether two files are identical without downloading | 68,450,855 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | Python is a relatively new language for me and I already see some of the trouble areas of maintaining a scripting language based project. I am just wondering how the larger community , with a scenario when one has to maintain a fairly large code base written by people who are not around anymore, deals with the following situations:
Return type of a function/method. Assuming past developers didn't document the code very well, this is turning out to be really annoying, as I am basically reading code line by line to figure out what a method/function is supposed to return.
Code refactoring: I figured a lot of code needs to be moved around, edited/deleted, etc. But a lot of the time, simple errors which would otherwise be compile-time errors in other compiled languages (e.g. wrong number of arguments, wrong type of arguments, method not present, etc.) only show up when you run the code and the code reaches the problematic area. Therefore, whether refactored code will work at all can only be known once you run the code thoroughly. I am using PyLint with PyDev but still I find it very lacking in this respect. | 0 | python,scripting | 2013-06-04T10:09:00.000 | 0 | 16,915,118
Linked to this is the Python debugger: just put import pdb;pdb.set_trace() at any point in your code, and when you run it you will be dropped into the interactive debugger where you can inspect the current values of the variables. In fact, the pdb shell is an actual Python shell as well, so you can even change things there. | 0 | 51 | false | 0 | 1 | checking/verifying python code | 16,915,630 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | Python is a relatively new language for me and I already see some of the trouble areas of maintaining a scripting language based project. I am just wondering how the larger community , with a scenario when one has to maintain a fairly large code base written by people who are not around anymore, deals with the following situations:
Return type of a function/method. Assuming past developers didn't document the code very well, this is turning out to be really annoying, as I am basically reading code line by line to figure out what a method/function is supposed to return.
Code refactoring: I figured a lot of code needs to be moved around, edited/deleted, etc. But a lot of the time, simple errors which would otherwise be compile-time errors in other compiled languages (e.g. wrong number of arguments, wrong type of arguments, method not present, etc.) only show up when you run the code and the code reaches the problematic area. Therefore, whether refactored code will work at all can only be known once you run the code thoroughly. I am using PyLint with PyDev but still I find it very lacking in this respect. | 0 | python,scripting | 2013-06-04T10:09:00.000 | 0 | 16,915,118
There are to important things that can help:
Good documentation
Extensive unit-testing.
They apply to other languages as well of course, but here they are especially important. | 0 | 51 | false | 0 | 1 | checking/verifying python code | 16,915,300 |
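A tiny illustration of the unit-testing point: in a dynamic language, tests like these are what pin down a function's return type and failure modes (`parse_port` is a made-up example):

```python
import unittest

def parse_port(value):
    """Return `value` as an int port number, raising ValueError otherwise."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTest(unittest.TestCase):
    # These tests document the expected return type and failure modes,
    # which a dynamic language will not check at compile time.
    def test_returns_int(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")
```

Running such a suite after each refactor catches the wrong-argument and missing-method errors the question describes, which would otherwise only surface at runtime.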
1 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 1 | If this is a stupid question, please don't mind me. But I spent some time trying to find the answer but I couldn't get anything solid. Maybe this is a hardware question, but I figured I'd try here first.
Does Serial Communication only work one to one? The reason this came up is because I had an arduino board listening for communication on its serial port. I had a python script feed bytes to the port as well. However, whenever I opened up the arduino's serial monitor, the connection with the python script failed. The serial monitor also connects to the serial port for communication for its little text input field.
So what's the deal? Does serial communication only work between a single client and a single server? Is there a way to get multiple clients writing to the server? I appreciate your suggestions. | 0 | python,serial-port,arduino,pyserial | 2013-06-05T20:35:00.000 | 0 | 16,949,369 | Edit:
I forgot about RS-485, which 'jdr5ca' was smart enough to recommend. My explanation below is restricted to RS-232, the more "garden variety" serial port. As 'jdr5ca' points out, RS-485 is a much better alternative for the described problem.
Original:
To expand on zmo's answer a bit, it is possible to share serial at the hardware level, and it has been done before, but it is rarely done in practice.
Likewise, at the software driver level, it is again theoretically possible to share, but you run into similar problems as the hardware level, i.e. how to "share" the link to prevent collisions, etc.
A "typical" setup would be two serial (hardware) devices attached to each other 1:1. Each would run a single software process that would manage sending/receiving data on the link.
If it is desired to share the serial link amongst multiple processes (on either side), the software process that manages the link would also need to manage passing the received data to each reading process (keeping track of which data each process had read) and also arbitrate which sending process gets access to the link during "writes".
If there are multiple read/write processes on each end of the link, the handshaking/coordination of all this gets deep as some sort of meta-signaling arrangement may be needed to coordinate the comms between the process on each end.
Either a real mess or a fun challenge, depending on your needs and how you view such things. | 0 | 1,057 | false | 0 | 1 | Serial Communication one to one | 16,951,886 |
1 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | Which one should I use to maximize performance? os.path.isfile(path) or open(path)? | 0 | python,file,exists | 2013-06-06T12:47:00.000 | 0 | 16,962,528 | Afaik isfile() will be faster while open(path) is more secure, in the sence that if open() is able to actually open the file, you can be sure it's there. | 0 | 3,684 | false | 0 | 1 | checking if file exists: performance of isfile Vs open(path) | 16,962,634 |