Dataset columns (name: dtype, observed min to max):

Q_Id: int64 (337 to 49.3M)
CreationDate: stringlengths (23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: stringlengths (6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: stringlengths (6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: stringlengths (15 to 29k)
Title: stringlengths (11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
15,888,186
2013-04-08T20:19:00.000
10
0
0
1
python-2.7,python-3.x,python-idle
24,211,913
11
false
0
0
I'm running Windows 7 64-bit. I saw the same errors today. I tracked down the cause for me; hopefully it'll help you. I had IDLE open in the background for days. Today I tried to run a script in IDLE and got the "IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection." error. So I closed all IDLE windows and tried to restart IDLE. That then caused the same errors to pop up, and now IDLE wouldn't open successfully. The cause was an extra pythonw.exe process running in the background. If I open up an instance of IDLE, then open a second, the second has issues connecting and closes. But it does not close the instances of pythonw.exe that it opened; one is left running in the background. That extra instance then prevents future attempts to open IDLE. Opening up Task Manager and killing all pythonw.exe processes fixed IDLE, and now it functions properly on my machine (one instance open at a time, though!).
11
10
0
Resolved April 15, 2013. In Windows 7 (64-bit) Windows Explorer, when I right-clicked a Python file and selected "Edit with IDLE" the editor opened properly, but when I ran (or F5) the Python 3.3.1 program, it failed with the "IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection." error message. All other methods of starting IDLE for running my Python 3.3.1 programs worked perfectly. Even the "Send to" method worked, but it was unacceptably clunky. I've spent four days (so far) researching this and trying various things, including reinstalling Python many times. And NO, it's not the firewall blocking it: I've tried turning the firewall off completely and it had no effect. Here's an important clue: in the beginning I installed and configured Python 3.3 64-bit and everything worked, including running from "Edit with IDLE", but recently, when I needed a library only available in Python 2, I installed Python 2.7.4, and from that point on the stated problem began. At one point I completely removed all traces of both versions and reinstalled Python 3.3.1 64-bit. The problem remained. Then I tried having both 32-bit versions installed, but still no luck. Then at some point in my muddling around I lost the option to "Edit with IDLE" and spent a day trying everything, including editing in Regedit. No luck there either. I reinstalled Python 3.3.1; still no "Edit with IDLE". Finally I uninstalled all versions of Python, removed Python references from the environment variables PATH and PYTHONPATH, deleted all the Python-related keys in the Windows registry, and deleted the C:\python33 directory that the uninstall didn't bother to delete. Overkill, of course. Then I restarted Windows and installed the Python 3.3.1 64-bit version again, and thankfully the option to "Edit with IDLE" was back. I was momentarily happy: I opened Windows Explorer, right-clicked on a Python program, selected "Edit with IDLE", selected Run (eyes closed) and, you guessed it, got the same original error message: "IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection." I am completely stuck on this issue and really need help. Pretty sure you can see I am not a happy camper. And to top it all off, I guess I don't understand Stack Overflow yet: I have had this plea for help up in various versions for 5 days and not one response from anyone. Believe me, I've looked at everything on Stack Overflow plus other sites and I can't see the answer. It almost seems like I have to answer my own question and post it; trouble is, so far I can't. Anyway, thanks for listening. Yes, I'm pretty new to Python, but I've been programming and overcoming problems for many years (too many perhaps). Anyone? Not personally knowing someone familiar with Python makes this difficult; how can I get in touch with a Python expert for a quick phone conversation?
Can't run Python via IDLE from Explorer [2013] - IDLE's subprocess didn't make connection
1
0
0
72,468
15,889,166
2013-04-08T21:17:00.000
1
0
0
0
python,qt,pyqt
15,905,007
1
true
0
1
Overriding the paint method of the corresponding delegate class solved my problem. Thanks to cmannett85.
1
0
0
I'm writing an application that displays the contents of a database table. My application shall display an image associated with each tuple/row, among textual and numeric data. The trouble is that I am dealing with huge data sets (up to 50k entries). The user will, of course, never see more than a few entries at once, but he should still be able to browse the data in a table-like view. Regarding the visual appearance, QTable would do the job. But, after reading blog posts about this, QTable seems to be pretty slow for such tasks, which is understandable since, for say 50k entries, on the order of 50k widget QObjects need to be created. I thought of using a QSlider as a replacement for the table's scrollbar, with a fixed layout: whenever the slider is changed, the contents of the GUI elements that make up the rows are adjusted. However, is there any widget and/or solution that provides the look and feel of a QTable but allows me to generate only those widgets that are actually displayed? Many thanks in advance.
Qt: Alternatives to QTable for large datasets
1.2
0
0
292
15,889,424
2013-04-08T21:34:00.000
6
0
0
1
python,google-app-engine,amazon-ec2,boto
15,891,258
2
true
1
0
It sounds like you haven't copied the boto code to the root of your App Engine directory. boto works with GAE, but Google doesn't supply you with the code. Once you copy it into the root of your GAE directory, the dev server should work, and after your next upload it will work on the production server as well.
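Once the library is in place, the import the asker wants should resolve. A minimal sketch assuming boto 2.x; the region name and credentials are placeholders:

```python
# Assumes the boto/ package directory has been copied next to app.yaml.
import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-east-1",                         # placeholder region
    aws_access_key_id="YOUR_KEY",        # placeholder credentials
    aws_secret_access_key="YOUR_SECRET")
print conn.get_all_instances()           # list EC2 reservations
```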
1
6
0
I'm new to Python and was hoping for help on how to 'import boto.ec2' in a GAE Python application to control Amazon EC2 instances. I'm using PyDev/Eclipse and have installed boto on my Mac, but simply using 'import boto' does not work (I get: No module named boto.ec2). I've read that boto is supported on GAE, but I haven't been able to find instructions anywhere. Thanks!
Running Boto on Google App Engine (GAE)
1.2
0
0
2,087
15,892,046
2013-04-09T01:57:00.000
1
0
0
1
python,macos,python-2.7,osx-mountain-lion
15,892,084
1
true
0
0
There should be a shell script in /Applications/Python X.X. Run this shell script to set up your PATH variable for whatever shell you use. For more info, check the README.txt.
1
0
0
I am using Mac OS X Mountain Lion and I have installed the "python.org version" of Python 2.7.4. It seems this newly installed version is invoked by the command python2, since it reports its version as 2.7.4, whereas the default (pre-installed Mac) version is invoked with the command python (it reports version 2.7.2). Is this correct? How do I best change the command 'python' to point to the newly installed 'python2'?
Configuring Mac OS X Mountain Lion to use python.org's python 2.7.4
1.2
0
0
337
15,892,291
2013-04-09T02:29:00.000
0
0
1
0
python,file,io
15,892,396
4
false
0
0
Well, you can choose the "r+" mode (read and write), with which you need only open the file once.
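A minimal sketch of that single-open approach (the filename and the processing step are illustrative):

```python
# "r+" opens an existing file for both reading and writing.
with open("data.txt", "r+") as f:
    text = f.read()       # read the whole file
    text = text.upper()   # stand-in for the real processing
    f.seek(0)             # rewind to the start
    f.write(text)         # overwrite the old content
    f.truncate()          # drop leftover bytes if the text shrank
```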
1
0
0
There are a lot of files; for each of them I need to read the text content, do some processing of the text, then write the text back (replacing the old content). I know I can first open a file as 'rt' to read and process the content, and then close and reopen it as 'wt', but obviously this is not a good way. Can I just open a file once to read and write? How?
Ideal way to read, process then write a file in python
0
0
0
143
15,895,788
2013-04-09T07:23:00.000
1
0
0
0
python,apache,postgresql,mod-wsgi
15,897,981
2
false
0
0
No matter what approach you use, other apps running as www-data will be able to read your password and log in as you to the database. Using peer auth won't help you; it will still trust all apps running under www-data. If you want your application to be able to isolate its data from other databases, you'll need to run it as a separate user ID. The main approaches are: use the Apache suexec module to run scripts as a separate user; use FastCGI (fcgi) or SCGI to run the CGI as a different user; or have the app run its own minimal HTTP server with Apache as a reverse proxy in front of it. Of these, by far the best option is usually scgi/fcgi. It lets you easily run your app as a different Unix user but avoids the complexity and overhead of reverse proxying.
1
3
0
I'm writing a web application in Python (on Apache server on a Linux system) that needs to connect to a Postgres database. It therefore needs a valid password for the database server. It seems rather unsatisfactory to hard code the password in my Python files. I did wonder about using a .pgpass file, but it would need to belong to the www-data user, right? By default, there is no /home/www-data directory, which is where I would have expected to store the .pgpass file. Can I just create such a directory and store the .pgpass file there? And if not, then what is the "correct" way to enable my Python scripts to connect to the database?
"Correct" way to store postgres password in python website
0.099668
1
0
1,239
15,896,639
2013-04-09T08:11:00.000
1
0
0
0
python,python-2.7,python-requests
15,896,851
1
true
0
0
The urlopen object has a method info() which gives all kinds of useful header information, including Content-Length. Occasionally this is not correctly set, but it should be in most cases and will help.
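The question is about the requests library, so here is a hedged sketch of the same header-checking idea there, with a streaming fallback for servers that omit or misreport the header (the URL and size cap are illustrative):

```python
import requests

MAX_BYTES = 10 * 1024 * 1024  # 10 MB cap, illustrative

r = requests.get("http://example.com/big-file", stream=True)
length = r.headers.get("Content-Length")
if length is not None and int(length) > MAX_BYTES:
    r.close()  # the header already says it's too big
else:
    total = 0
    for chunk in r.iter_content(chunk_size=8192):
        total += len(chunk)
        if total > MAX_BYTES:
            r.close()  # header missing or wrong; bail out mid-stream
            break
```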
1
0
0
I'm crawling a bunch of web pages using Python's requests library, but occasionally the crawler will stumble upon an absolutely mammoth page, be it a PDF or video or otherwise gargantuan file. Is there a good way to limit the maximum size of file it will download?
Cap download size with Python requests library
1.2
0
1
535
15,897,268
2013-04-09T08:45:00.000
7
0
1
1
python,windows
15,897,564
1
true
0
0
You can install almost any 32 bit software on 64 bit Windows because it has a built-in 32 bit emulator. If you are going to use a 32 bit Python, make sure all the libraries you use are 32 bit too.
1
4
0
Is it possible to install 32-bit Python (2.7 series) on Windows 7/8 64-bit to develop 32-bit applications? I'm sure the answer is yes, but it would be good to get confirmation.
Python 32-bit development on 64-bit Windows
1.2
0
0
13,622
15,898,775
2013-04-09T09:51:00.000
0
1
0
0
google-app-engine,python-2.7,google-talk
15,903,171
2
false
0
0
I think you are confused. Python runs ON App Engine. Also, there's a working Java XMPP example provided.
2
0
0
I have searched a lot about building a web service like Google Talk using Google App Engine and Python. The first step for that is to check the online status of a user on Gmail. I found a lot of Python code for this using the XMPP library, but it works only in plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like user@gmail.com and send the message; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for Gtalk at the domain level; can anyone tell me how to do this?
Gtalk Service On Google App Engine Using Python
0
0
1
323
15,898,775
2013-04-09T09:51:00.000
0
1
0
0
google-app-engine,python-2.7,google-talk
15,904,726
2
false
0
0
You can only send messages from your app. There are two options: your_app_id@appspot.com or anything@your_app_id.appspotchat.com. If you want to behave like an arbitrary XMPP client, you'll have to use a third-party XMPP library running over HTTP and handle the authentication with the user's XMPP server.
2
0
0
I have searched a lot about building a web service like Google Talk using Google App Engine and Python. The first step for that is to check the online status of a user on Gmail. I found a lot of Python code for this using the XMPP library, but it works only in plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like user@gmail.com and send the message; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for Gtalk at the domain level; can anyone tell me how to do this?
Gtalk Service On Google App Engine Using Python
0
0
1
323
15,900,399
2013-04-09T11:16:00.000
0
0
1
0
python
15,900,584
2
true
0
0
At least on Arch Linux, and presumably on other distros, there are two separate packages for pip, which, if both installed, give you two different commands: pip and pip3. Running pip ... will always install to the Python 2 site-packages, and pip3 ... to the Python 3 site-packages. This works both for system-wide packages (running as root) and for installing them into your home directory.
1
1
0
I understand that we can install different versions of Python on the same box - but there are packages that are not supported by both. So if I have two versions of Python (2.x and 3.x) installed, how can I automatically have packages deployed correctly for each version of Python using pip?
Python site packages: How can I maintain for both 2.x and 3.x version
1.2
0
0
65
15,901,098
2013-04-09T11:52:00.000
0
0
1
0
python,excel
15,901,181
2
false
0
0
It is possible to read and write CSV (comma-separated values) files using the Python 2.4 distribution. Like most languages, file operations can be done with Python. Writing a CSV file with Python can be done by importing the csv module and creating a writer object that will be used with the writerow method. Reading a CSV file can be done in a similar way by creating a reader object and using it to print the rows of the file. As file operations require advanced concepts, some knowledge of programming with Python is required to read and write CSV files. Start by importing the CSV module: import csv. Next we define a writer object (named c), which can later be used to write the CSV file: c = csv.writer(open("MYFILE.csv", "wb")). Now we apply the writerow method. The method takes one argument: this argument must be a list, and each list item is equivalent to a column. Here, we try to make an address book: c.writerow(["Name","Address","Telephone","Fax","E-mail","Others"])
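A runnable version of the snippet embedded above, in Python 2 to match the "wb" mode the answer uses (the filename and rows are illustrative):

```python
import csv

# Write an address book: a header row plus one data row.
f = open("MYFILE.csv", "wb")
c = csv.writer(f)
c.writerow(["Name", "Address", "Telephone", "Fax", "E-mail", "Others"])
c.writerow(["Alice", "1 Main St", "555-0100", "", "alice@example.com", ""])
f.close()  # flush to disk before reading the file back

# Read it back; each row comes out as a list of column values.
for row in csv.reader(open("MYFILE.csv", "rb")):
    print row
```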
1
0
0
I am trying to learn enough Python to write a program that extracts data from multiple Excel files and compiles it into one Excel file. I am hoping someone can help me with where to find a basic understanding of this. I know this is vague, but I am not really sure where to start just yet.
Very basic Python to Excel how to
0
0
0
731
15,901,180
2013-04-09T11:55:00.000
1
0
1
0
python,git,python-2.7,gitpython
15,903,002
1
true
0
0
You can look at the commit's parents and compare the contents of the two (or more, depending on the number of parents) trees.
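A sketch of that parent comparison using GitPython's diff API, which does the tree comparison for you (the repo path is illustrative; this assumes a reasonably recent GitPython and a non-merge, non-root commit):

```python
import git

repo = git.Repo("/path/to/repo")  # illustrative path
commit = repo.head.commit
parent = commit.parents[0]

# Diff parent -> commit and keep only deletions (change type 'D').
deleted = [d.a_path for d in parent.diff(commit).iter_change_type("D")]
print(deleted)
```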
1
3
0
Using GitPython, I am trying to get a list of changed paths; that is, of all the added, changed and deleted files. I can retrieve the changed and added files from the commit: check out commit 'X', traverse repo.tree(), and collect all the blobs' abspath. If a file was deleted in a specific commit, it will not show up in the tree anymore. How can I get the names of all the deleted files?
GitPython: Determine files that were deleted in a specific commit
1.2
0
0
960
15,903,189
2013-04-09T13:24:00.000
0
0
0
0
python,proxy
15,903,624
1
false
0
0
For simple authentication (basic, digest), yes. Simply make sure your system's proxy settings (e.g., in IE) are in the format user:password@proxyhost:3128. If your proxy is using some other, more complicated form of authentication (e.g., NTLM, Kerberos), this is not so easy.
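On Windows, the standard library can already read those IE settings; a minimal Python 2 sketch of the simple-auth case:

```python
import urllib  # Python 2

# On Windows, getproxies() reads the proxy settings from the
# registry, i.e. the same ones IE uses.
proxies = urllib.getproxies()
print proxies  # e.g. {'http': 'http://user:password@proxyhost:3128'}

# urlopen picks the same settings up automatically:
data = urllib.urlopen("http://example.com").read()
```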
1
0
0
I'm aware that I can manually code in a proxy with user/password, but is it possible to get Python to just pull the proxy settings AND authentication from IE?
Can I get Python to use an authenticated Proxy without userinput?
0
0
1
90
15,903,357
2013-04-09T13:31:00.000
4
0
0
0
python,sql,sqlalchemy,raspberry-pi
15,904,750
2
false
0
0
What about a cron job instead of creating a loop? I think it's a bit nicer.
1
1
0
I am having trouble finding this answer anywhere on the Internet. I want to be able to monitor a row in a MySQL table for changes and, when one occurs, run a Python function. This Python function has nothing to do with MySQL; it just enables a pin on a Raspberry Pi. I have tried looking into SQLAlchemy; however, I can't tell whether what it offers is a trigger or just data mapping. Is something like this even possible? Thanks in advance!
How to execute Python function when value in SQL table changes?
0.379949
1
0
6,558
15,904,042
2013-04-09T14:02:00.000
135
0
1
0
python,graph,matplotlib,border
15,904,277
2
true
0
0
Set the edgecolor to "none": bar(..., edgecolor = "none")
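In context, a minimal sketch (the data is illustrative: enough bars that the black borders would otherwise dominate):

```python
import matplotlib.pyplot as plt

values = range(1000)
plt.bar(range(len(values)), values, color="blue", edgecolor="none")
plt.show()
```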
1
75
1
I'm using pyplot.bar, but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together, so all you see is the borders (black). Is there a way to remove the bar borders so that I can see the intended color?
matplotlib bar graph black - how do I remove bar borders
1.2
0
0
72,444
15,904,973
2013-04-09T14:40:00.000
2
0
1
0
python,security,memory-management
15,905,534
2
true
0
0
Unless you use custom-coded input methods to get the password, it will be in many more places than just your immutable string. So don't worry too much. The OS should take care that any data from your process is cleared before the memory is allocated to another process. This may of course fail if the page is copied to disk (swapped out or hibernated). Secure password entry is not easy. Maybe you can find a special library or module that handles this.
2
5
0
Say I store a password in plain text in a variable called passWd as a string. How does Python release this variable once I discard it (for instance, with del passWd or passWd = 'new random data')? Is the string stored as a byte array, meaning it can be overwritten in the memory location where it originally existed, or is it fixed in a memory area which can't be modified, so that when assigning a new value a new memory area is created and the old area is discarded but not overwritten with nulls? I'm questioning how Python implements the safety of memory areas and would like to know more about it, mainly because I'm curious :) From what I've gathered so far, using del (or __del__) causes the interpreter to not release memory areas of that variable automatically, which can cause issues, and also I'm not sure how thorough del is at deleting the values. But that's just from what I've gathered and not something in black and white :) The main reason for asking is that I'm intending to write a hand-over application that gets a string, does some I/O, and passes it along to another subsystem (a bootloader for a Raspberry Pi, for instance), and the interface is written in Python (how odd that must sound in some people's ears..). I'm not worried that the data is compromised during the I/O calculations, but that a memory dump might occur in between the two subsystem handovers, or if the system is frozen (say a hibernation) 20 minutes after the system is booted and I removed the variable as fast as I could, but somehow it's still in memory despite me doing a del passWd :) (P.S. I asked on Superuser; they referred me here, and I'm sorry for the poor grammar!)
Python - Releasing/replacing a string variable, how does it get handled?
1.2
0
0
553
15,904,973
2013-04-09T14:40:00.000
0
0
1
0
python,security,memory-management
16,334,037
2
false
0
0
I finally went with two solutions. One was LD_PRELOAD to replace the functionality of Python's string handling at a lower level. The other option, which is a bit easier, was to develop my own C library that has more functionality than what Python offers through its standard string handling. Mainly, the C code has a shread() function that writes over the memory area where the string "was" stored, plus some other error checks. However, @Ber gave me a good enough answer to start developing my own solution, since (as he pointed out) there is no secure method in Python: Python stores strings in way too many places and relies on the OS (which on its own isn't a bad thing, except when you don't trust the OS you are installing your relatively secure application on).
2
5
0
Say I store a password in plain text in a variable called passWd as a string. How does Python release this variable once I discard it (for instance, with del passWd or passWd = 'new random data')? Is the string stored as a byte array, meaning it can be overwritten in the memory location where it originally existed, or is it fixed in a memory area which can't be modified, so that when assigning a new value a new memory area is created and the old area is discarded but not overwritten with nulls? I'm questioning how Python implements the safety of memory areas and would like to know more about it, mainly because I'm curious :) From what I've gathered so far, using del (or __del__) causes the interpreter to not release memory areas of that variable automatically, which can cause issues, and also I'm not sure how thorough del is at deleting the values. But that's just from what I've gathered and not something in black and white :) The main reason for asking is that I'm intending to write a hand-over application that gets a string, does some I/O, and passes it along to another subsystem (a bootloader for a Raspberry Pi, for instance), and the interface is written in Python (how odd that must sound in some people's ears..). I'm not worried that the data is compromised during the I/O calculations, but that a memory dump might occur in between the two subsystem handovers, or if the system is frozen (say a hibernation) 20 minutes after the system is booted and I removed the variable as fast as I could, but somehow it's still in memory despite me doing a del passWd :) (P.S. I asked on Superuser; they referred me here, and I'm sorry for the poor grammar!)
Python - Releasing/replacing a string variable, how does it get handled?
0
0
0
553
15,905,113
2013-04-09T14:47:00.000
0
0
0
0
python
15,907,470
1
false
0
0
Create a DB user with limited access rights, for example to only that table where it uploads data. Hard-code that user in your script or pass it as command-line arguments. There is little else you can do for an automated script, because it has to use some username and password to connect to the DB somehow. You could encrypt the credentials and decrypt them in your script, but once a sufficiently determined attacker gets access to your user account and script, extracting the username and password from a plain-text script should not be too hard. You could use a compiled script to hide the credentials from prying eyes, but again, it depends on how valuable access to your database is.
1
0
0
I have a couple of Python scripts which I plan to put up on a server and run repeatedly, once a day. The scripts do some calculation and finally upload the data to a central database. Of course, to connect to the database a password and username are required. Is it safe to put this username and password in my Python script? If not, is there any better way to do it?
Connecting to a database using python and running it as a cron job
0
1
0
284
15,905,460
2013-04-09T15:00:00.000
1
0
0
0
python,web-applications,3d-secure
15,905,890
1
false
1
0
It probably won't help, but IMO it's a bad idea to try to do that. You could maybe do it with an iframe, but even then it's probably a bad idea, and for a very simple reason: when typing their card security code (or whatever), many users will want to check that they are actually on their bank's website -- URL, favicon, HTTPS certificate, etc. So don't hide all of this by embedding it in your page. I usually hate popups, but for me they are totally acceptable in this situation.
1
1
0
I have a web-based client application which heavily uses JavaScript and jQuery. While the client is using the application, page content changes dynamically, and refreshing the page causes the whole changed content to be lost. Now, I have to add a 3-D Secure payment method to my application. The problem is (as those who have used 3-D Secure systems might know) that after the credit card number is validated, I am redirected to the bank's 3-D Secure page, where the bank wants me to validate by entering the credit card's security code and the PIN code sent via SMS to a predefined phone number. If the information is right, the bank redirects me to my success URL, or to my fail URL if the transaction failed. All is nice, but as I mentioned, I cannot handle this redirection in my application page. Is it possible to start the process within a div? I am using Python and Django as the framework, if it helps.
Handling redirections within a div instead of the page for 3D Secure operations
0.197375
0
0
511
15,905,487
2013-04-09T15:02:00.000
0
1
0
0
php,python,ruby,apache2,vhosts
15,921,626
1
false
1
0
I don't think it has anything to do with the use of mod_wsgi and Phusion Passenger. I think that's just how ServerAlias works. You can try this alternative: remove the ServerAlias and set up a vhost for '*.domain.net' (or, if that doesn't work, '.domain.net' or 'domain.net') which redirects to site1.domain.net. This also has the advantage that your users cannot bookmark a non-canonical subdomain name. By the way, did you know that Phusion Passenger also supports WSGI?
1
0
0
5 sites set up using named vhosts: site1.domain.net (PHP), site2.domain.net (Python), site3.domain.net (Ruby), site4.domain.net (PHP), site5.domain.net (PHP). In the vhost for site1 I also have the ServerAlias set to *.domain.net, as I want any undefined addresses to go to that address. When I add *.domain.net to that vhost, the Python and Ruby sites redirect to site1 instead of their named vhosts. All the PHP sites work fine. My guess is that the fact that the Python and Ruby sites use wsgi and Passenger respectively has something to do with why they load incorrectly. I was reading something about UseCanonicalNames, but I don't see how that impacts this. I am not just interested in a solution but also in the reason why (or how) these other two languages handle their vhost config and why such a change makes a difference. Thank you for your time and help.
Apache2 Ruby and Python load default website when *.domain.net is set in vhost file
0
0
0
39
15,906,368
2013-04-09T15:40:00.000
1
0
0
0
image,image-processing,python-2.7,python-imaging-library
15,957,090
2
false
0
0
The answer above by Mark states his theory regarding what happens when a zero-summing kernel is used with a scale argument of 0, None, or not passed at all. Now, about how PIL handles calculated pixel values that fall outside the [0, 255] range after applying the kernel, scale and offset: my theory about how it normalizes the calculated pixel value is that it simply does the following: any resulting value <= 0 becomes 0, and anything > 255 becomes 255.
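Written out, the clamping that theory describes is just (a one-line sketch):

```python
def clamp(value):
    # Clip a filtered pixel value into the displayable [0, 255] range.
    return min(max(int(value), 0), 255)
```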
1
3
1
How does ImageFilter in PIL normalize the pixel values (not the kernel) between 0 and 255 after filtering with a kernel or mask? (Especially a zero-summing kernel like (-1,-1,-1,0,0,0,1,1,1).) My code was like: import Image; import ImageFilter; Horiz = ImageFilter.Kernel((3, 3), (-1,-2,-1,0,0,0,1,2,1), scale=None, offset=0) # Sobel mask; im_fltd = myimage.filter(Horiz)
how does ImageFilter in PIL normalize the pixel values between 0 and 255 after filtering with Kernel or mask
0.099668
0
0
1,691
15,912,063
2013-04-09T20:40:00.000
0
0
1
0
python,python-2.7,python-3.x
16,027,464
4
false
0
0
You need to edit your environment variable to include your Python 3 or Python 2 path.
2
44
0
Is there a way to install Python 3 over an installation of Python 2 without ruining anything? The main issue is that I have code that runs via "python xxxxx.py abc123". Is there a way to make Python 3 run as "python3 xxxx.py abc123"? The shared command python is the conflict.
How do I run python 2 and 3 in windows 7?
0
0
0
70,904
15,912,063
2013-04-09T20:40:00.000
2
0
1
0
python,python-2.7,python-3.x
15,912,336
4
false
0
0
Assuming that you install python3 in a separate directory, you could also rename the python 3 executable to be python3.exe.
2
44
0
Is there a way to install Python 3 over an installation of Python 2 without ruining anything? The main issue is that I have code that runs via "python xxxxx.py abc123". Is there a way to make Python 3 run as "python3 xxxx.py abc123"? The shared command python is the conflict.
How do I run python 2 and 3 in windows 7?
0.099668
0
0
70,904
15,913,367
2013-04-09T22:01:00.000
4
0
1
0
python,python-3.x
15,913,396
2
false
0
0
In the past, I have used py2exe (Windows) so that I do not have to ask users to install Python. py2exe creates an exe which the user clicks, and it runs without a problem. If you want to go beyond this, package it with something like Inno Setup, which adds an installer, which is better.
1
8
0
I am planning to create a Python program and distribute it bundled with a C# GUI. How can I distribute the Python part of the program without requiring users to have Python?
How can I distribute a Python program without requiring users to have a Python runtime?
0.379949
0
0
1,777
15,914,198
2013-04-09T23:12:00.000
4
0
0
0
python,connection-pooling,zodb
15,919,692
1
true
0
0
The pool size is only a 'guideline'; the warning is logged when you exceed that size. If you were to use double the number of connections, a CRITICAL log message would be registered instead. These are there to indicate that you may be using too many connections in your application. The pool will try to reduce the number of retained connections to the pool size as you close connections. You need to set it to the maximum number of threads in your application. For Tornado, which I believe uses asynchronous events instead of threading almost exclusively, that might be harder to determine; if there is a configurable maximum number of concurrent connections in Tornado, then the pool size needs to be set to that number. I am not sure how the ZODB will perform when your application scales to hundreds or thousands of concurrent connections, though. I've so far only used it with at most 100 or so concurrent connections spread across several processes and even machines (using ZEO or RelStorage to serve the ZODB across those processes). I'd say that if most of these connections only read, you should be fine; it's writing to the same object concurrently that is ZODB's weak point as far as scalability is concerned.
1
2
0
What's a reasonable default for pool_size in a ZODB.DB call in a multi-threaded web application? Leaving the actual default value of 7 gives me some connection WARNINGs even when I'm the only one navigating through db-interacting handlers. Is it possible to set a number that's too high? What factors play into deciding what exactly to set it to?
Reasonable settings for ZODB pool_size
1.2
1
0
485
15,914,992
2013-04-10T00:36:00.000
0
0
0
0
python-2.7,pyqt,pyside
15,953,660
1
true
0
1
I'm guessing it does exactly what every other GUI toolkit does, which is to try to execute that function for every button press. If you don't want that, there are several approaches you can take: disable the button until the function finishes; use a sentinel value, like self.running, and add a check to see if it is True (if so, return; otherwise set it to True and execute, remembering to set it back to False at the end of the function); or do what you mentioned and disconnect the function from the signal. Those are the main ones. I'm sure there are some more esoteric solutions out there too, though; a sketch of the sentinel approach follows.
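A sketch of the sentinel-value option (the class and the slow call are hypothetical):

```python
class Window(object):
    def __init__(self):
        self.running = False

    def on_button_clicked(self):     # connected to the button's clicked signal
        if self.running:             # a click arrived while work is in flight
            return
        self.running = True
        try:
            self.spawn_processes()   # hypothetical slow call
        finally:
            self.running = False     # re-arm even if the call raises
```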
1
0
0
I have a PySide application that connects a number of buttons to a function call that spins off a bunch of external processes (this isn't ideal, but it is necessary). The function call takes less than a quarter of a second to run, but it still presents a chance for the user to outrun my code, causing lag, freezing, or crashing. This leads me to a few questions... What exactly does it mean for a callable to be connected to a signal? Suppose button A's click signal is connected to a Python function f. If A is clicked, and then clicked again before f exits, does PySide simply queue another call to f, or does it do something fancier? One possible solution is to make f disconnect itself from the signal and then reconnect before it exits. Before I go off rewriting things to try this, does anyone know if this is at all efficient, or if there is a more elegant solution?
Preventing PySide from Outrunning Python
1.2
0
0
70
15,915,031
2013-04-10T00:42:00.000
4
0
1
0
python,mongodb,python-2.7,pymongo
15,915,744
4
false
0
0
I don't know about Python, but I had a similar concern with C#. I decided to just run a real instance of Mongo on my workstation pointed at an empty directory. It's not great because the code isn't isolated, but it's fast and easy. Only the data access layer actually calls Mongo during the test; the rest can rely on mocks of the data access layer. I didn't feel like faking Mongo was worth the effort when really I want to verify that the interaction with Mongo is correct anyway.
1
17
0
I have to implement nose tests for Python code using a MongoDB store. Is there any Python library which permits me to initialize a mock in-memory MongoDB server? I am using continuous integration, so I want my tests to be independent of any running MongoDB server. Is there a way to mock a MongoDB server in memory, to test the code independently of connecting to a Mongo server? Thanks in advance!
Use mock MongoDB server for unit test
0.197375
1
0
14,649
15,915,255
2013-04-10T01:08:00.000
0
0
1
0
python,arrays,os.walk
15,915,785
2
false
0
0
Python's lists are dynamic; you can change their length on the fly, so just store the arrays in a list. Or, if you want to reference them by name instead of by number, use a dictionary, whose size can also change on the fly.
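A sketch of both containers, assuming the files hold plain-text numeric data (the paths and loader are illustrative):

```python
import os
import numpy as np

arrays = []   # grows on the fly; no preallocation needed
by_name = {}  # or reference each array by filename instead

for root, dirs, files in os.walk("data"):
    for name in files:
        arr = np.loadtxt(os.path.join(root, name))
        arrays.append(arr)
        by_name[name] = arr
```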
1
0
1
I am currently working with hundreds of files, all of which I want to read in and view as numpy arrays. Right now I am using os.walk to pull all the files from a directory. I have a for loop that goes through the directory and creates each array, but the arrays are not stored anywhere. Is there a way to create arrays "on the go" or to somehow allocate a certain amount of memory for empty arrays?
Creating many arrays at once
0
0
0
57
15,916,324
2013-04-10T03:19:00.000
9
0
0
0
python,opengl,blender,cad
15,924,089
4
true
0
1
Does Blender use OpenGL or DirectX? All graphics output of Blender is done using OpenGL. Or does it use a programming language (Python?) to do everything from scratch? Why "or"? An API is not a substitute for a programming language. Blender is programmed in C, C++ and Python. OpenGL is used to render everything on screen, including the user interface.
1
12
0
Does Blender use OpenGl or DirectX? Or is it all done from scratch?
What was Blender created in?
1.2
0
0
7,585
15,919,063
2013-04-10T07:00:00.000
1
0
0
0
python,mysql,string-matching
15,919,118
2
true
0
0
I highly doubt anything you're looking for exists, so I propose a simpler solution: come up with a simple algorithm for normalizing your text, e.g. normalize whitespace and remove punctuation. Then calculate the hash of the normalized text and store it in a separate column (normalizedHash), or store an ID into a table of normalized hashes. You can then compare two different entries by their normalized content.
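A sketch of that normalize-then-hash idea in Python 2 (the normalization rules are illustrative; tune them to your data):

```python
import hashlib
import string

# Map every punctuation character to a space, then collapse whitespace.
PUNCT_TO_SPACE = string.maketrans(string.punctuation,
                                  " " * len(string.punctuation))

def normalized_hash(text):
    text = text.translate(PUNCT_TO_SPACE)
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return hashlib.sha1(text).hexdigest()

# The two near-duplicates from the question now hash identically:
a = "This is the text which will be stored in the database, and its hash will be generated"
b = "This is the text,which will be stored in the database and its hash will be generated."
assert normalized_hash(a) == normalized_hash(b)
```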
1
1
0
We need to store a text field (say 2000 characters) and its unique hash (say SHA1) in a MySQL table. To test whether a text already exists in the MySQL table, we generate the SHA1 of the text and check whether it exists in the unique hash field. Now let's assume there are two texts: "This is the text which will be stored in the database, and its hash will be generated" and "This is the text,which will be stored in the database and its hash will be generated." Notice the minor differences. Let's say 1 has already been added to the database; the check for 2 will not work, as their SHA1 hashes will be drastically different. One obvious solution is to use Levenshtein distance or difflib to iterate over all already-added text fields to find near matches from the MySQL table. But that is not performance-oriented. Is there a good hashing algorithm which correlates with the text content? I.e., two hashes generated for very similar texts would themselves be very similar. That way it would be easier to detect possible duplicates before adding them to the MySQL table.
Good hashing algorithm with proximity to original text input, less avalanche effect?
1.2
1
0
496
15,920,203
2013-04-10T08:02:00.000
0
0
1
0
python,keylogger,pyhook
24,671,860
2
false
0
0
The unofficial installers mentioned by abarnert seem to work fine. They provide installers for almost every version of Python - except for 3.5, which is currently considered unstable anyway (07/10/14), so that's not a big issue.
1
5
0
I am coding a simple keylogger in Python. I hope to use pyHook to capture keyboard events. I couldn't find any packages of pyHook for Python 3.3, which is what I have installed. Is there any other module for Python 3.3 which provides similar functionality?
pyHook for Python 3.3
0
0
0
15,271
15,920,449
2013-04-10T08:16:00.000
1
1
0
0
python,neo4j,py2neo
15,927,439
3
true
1
0
There is an existing bug with large batches due to Python's handling of the server streaming format. There will be a fix for this released in version 1.5 in a few weeks' time.
1
0
0
I'm just trying to do a simple batch-insert test for 2k nodes, and it is timing out. I'm sure it's not a memory issue because I'm testing with an EC2 xLarge instance, and I changed the neo4j Java heap and datastore memory parameters. What could be going wrong?
py2neo Batch Insert timing out for even 2k nodes
1.2
0
0
211
15,923,789
2013-04-10T10:51:00.000
1
0
1
0
python,virtualenv,ipython,ipython-notebook
15,933,347
2
false
0
0
I would suggest using an all-in-one installer like Anaconda or EPD. I don't think a single repo to clone will work since, I guess, many things like numpy will need a compile step.
1
1
0
I need the following working environment for a project with several developers: python3, ipython (notebook), numpy, networkx, matplotlib. Installing these packages on different systems (OS X, Ubuntu, SUSE) is time-consuming, and problems are quite likely in my experience. Is it possible to package them (maybe with virtualenv) into a single repository which users can simply clone and start working with?
Can I package the IPython Notebook, dependencies and additional modules?
0.099668
0
0
1,674
15,924,060
2013-04-10T11:03:00.000
0
0
0
0
python,opencv
15,925,696
1
false
0
0
You should have a look at Boost.Python. It might help you bind the C++ functions you need to Python.
1
1
1
There are many functions in OpenCV 2.4 that are not available in Python. Please advise me how to convert the C++ functions so that I can use them in Python 2.7. Thanks in advance.
using python for opencv
0
0
0
61
15,925,368
2013-04-10T12:07:00.000
0
0
0
1
python,windows
16,104,252
2
false
0
0
My solution to this problem was to just use the IP address for the referenced machine. It worked like a charm, with no problems with mapped drives. Thanks for the responses.
1
5
0
I have a drive already mapped to a designated letter, 'R:\'. If I run the Python script to access this space while logged on, or with the computer unlocked, it works fine. The problem occurs when I set Task Scheduler to run the script early in the morning before I come in. Basically I stay logged in and lock the machine, but at some point it looks like my network drive mappings time out (they reconnect when I unlock the machine in the morning), and this is why the script isn't able to find them. The error comes when trying to do an os.path.exists() to check for folders on this drive and create them if they don't already exist. From a 'try/except' loop I get the exception "The system cannot find the path specified: 'R:\'". My question: is there a way to force a refresh through Python? I have seen other postings about mapping network drives, but I'm not sure if this applies to my case since I already have the drive mapped. The letter it uses needs to stay the same, as different applications have absolute references to it. I wonder whether mapping the same drive again will cause problems or simply not work, but I also don't want to temporarily map to another letter with a script and unmap when done; that seems like an inefficient way to do this. Using Python 2.6 (which another program requires). Thanks
How to Refresh Network Drive Mappings in Python
0
0
0
3,454
15,930,722
2013-04-10T15:55:00.000
0
0
0
0
python,openerp
15,942,417
3
false
1
0
You can create a new rule for loan deduction in Salary Rules and specify the amount type there. This new rule will take effect in the salary slip structure of any employee who has taken a loan from the company.
2
2
0
Can anyone help me with a formula or process to deduct loans taken by employees from their salary/wages please? Thanks
OpenERP Payroll Loan Deduction
0
0
0
1,052
15,930,722
2013-04-10T15:55:00.000
0
0
0
0
python,openerp
15,942,760
3
false
1
0
Create a contract for the employee by configuring the wage and salary structure. In the salary structure, add a new rule for the home loan with the Deduction type, and assign a fixed / percentage amount with a negative (-) sign since it is a deduction. Go to the Employee Payslip, select an employee and click on Compute Sheet. You will see the salary structure in the Salary Computation tab.
2
2
0
Can anyone help me with a formula or process to deduct loans taken by employees from their salary/wages please? Thanks
OpenERP Payroll Loan Deduction
0
0
0
1,052
15,931,684
2013-04-10T16:42:00.000
2
0
0
0
python,django,django-models
15,932,185
2
true
1
0
The simplest option would be to pass the user ID as a query parameter to the next page; i.e., if the user starts at the page http://myserver/userid.html and enters a user ID of 1234, then they're redirected to the page http://myserver/sec.html?userid=1234. The second page can access the query parameter via the HttpRequest.GET dictionary.
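A sketch of the two views under that scheme (all names are hypothetical, including the model that stores the security question):

```python
# views.py -- all names hypothetical
from django.http import HttpResponseRedirect
from django.shortcuts import render
from myapp.models import Profile  # hypothetical model holding the question

def userid_view(request):
    if request.method == "POST":
        return HttpResponseRedirect(
            "/sec.html?userid=%s" % request.POST["userid"])
    return render(request, "userid.html")

def sec_view(request):
    userid = request.GET.get("userid")  # from ?userid=1234
    profile = Profile.objects.get(user_id=userid)
    return render(request, "sec.html",
                  {"security_question": profile.security_question})
```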
1
0
0
I have a page called userid.html in which a user enters their user ID. If this user ID exists, they are taken to the next page, sec.html, where they are asked a security question which they have already set. This security question is a context variable, and I need to render the page according to the user ID given on the previous page (userid.html), as the security question of each user will be different. How can this be done in Django? Thanks in advance
Django: i want to get a render a page according to specific userid obtained from another page
1.2
0
0
55
15,933,982
2013-04-10T18:48:00.000
3
0
0
1
python,google-app-engine
15,934,118
1
true
1
0
In the Launcher, go to Edit -> Preferences and set the Python Path to match your Python 2.7 path.
1
0
0
When I run my app using the Google App Engine Launcher, it gives me a warning sign. In the log console I found that it is using Python 3.3. How can I configure it to use Python 2.7?
google app engine python configuration
1.2
0
0
181
15,935,699
2013-04-10T20:23:00.000
4
1
1
0
java,php,javascript,python,weak-typing
15,936,117
1
false
0
0
Weak typing is when certain conversions and ad-hoc polymorphisms are implicitly performed if the compiler/interpreter feels the need for it. Autoboxing is when literals and non-object types are automatically converted to their respective object types when needed. (For example, Java will allow you to call methods on a string literal as if it were a String object.) This has nothing to do with the typing system; it's really just syntactic sugar to avoid having to create objects explicitly. Widening conversions are a form of weak typing. In a very strict strongly typed language, this wouldn't be allowed. But in languages like Java, it is allowed because it has no negative side effects. Something as tiny as this is hardly enough to no longer consider Java a strongly typed language. Java also overloads the + operator for string concatenation. That's definitely a feature seen in weakly typed languages, but again, not a big enough deal to call Java weakly typed. (Even though I think it's a really stupid idea.)
1
0
0
It seems the definition of weak typing (not to be confused with dynamic typing) is that a binary operator can work when the two values are of different types. Python programmers argue that Python is strongly typed because 1+"hello" will fail instead of silently doing something else. In contrast, other languages which are commonly considered weakly typed (e.g. PHP, JavaScript, Perl) will silently convert one or both of the operands. For example, in JavaScript, 1+"hello" -> "1hello", while in Perl, 1+"hello" -> 1, but 1+"5" -> 6. Now, I had the impression that Java is considered a strongly typed language, yet auto(un)boxing and widening conversions seem to contradict this. For example, 1+new Integer(1) -> 2, "hello"+1 -> "hello1", 'A'+1 -> 66, and long can be converted into float automatically even though it typically gets truncated. Is Java weakly typed? What's the difference between weak typing, autoboxing, and widening conversions?
What is the difference between weak typing, autoboxing, widening conversions?
0.664037
0
0
1,195
15,936,042
2013-04-10T20:46:00.000
2
0
1
0
ipython,ipython-notebook
15,936,508
1
false
0
0
Possible, probably not too hard, but not trivial. It might need an update to the underlying library (in progress), and some digging into the CodeMirror docs to see if there is already such an option. If you are interested in diving into the code / submitting a patch, I would suggest asking for directions on the mailing list / on GitHub.
1
1
0
Is there a way to autoclose brackets and quotes in IPython, just like in RStudio? When you type an opening bracket like [, it automatically types the closing bracket ] and moves the cursor inside the brackets. Same thing with quotes.
Autoclose brackets and quotes in IPython
0.379949
0
0
385
15,937,965
2013-04-10T22:56:00.000
0
0
1
0
python,multithreading,python-2.7
15,940,228
1
true
0
0
Yes, you can open the file multiple times and get independent access to it. Each file object has its own buffers and position, so for instance a seek on one will not mess up the other. It works pretty much like multiple programs accessing the same file, and you have to be careful when reading/writing the same area of the file. For instance, a write that appends to the end of the file won't be seen by the reader until the writing object flushes. Rewrites of existing data won't be seen by the reader until both the reader and writer flush. Writes won't be atomic, so if you are writing records the reader may see partial records. Asynchronous select or poll events on the reader may be funky... not sure about that one. An alternative is mmap, but I haven't used it enough to know the gotchas.
1
0
0
I have a thread writing to a file (writeThread) periodically and another (readThread) that reads from the file asynchronously. Can readThread access the file using a different handle and not mess anything up? If not, does Python have a shared lock that can be used by writeThread but does not block readThread? I wouldn't prefer a simple non-shared lock because file access takes on the order of a millisecond and the writeThread write period is of the same order (the period depends on some external parameters). Thus, a situation may arise where even though writeThread releases the lock, it re-acquires it immediately and thus causes starvation. A solution I can think of is to maintain multiple copies of the file, one for reading and another for writing, and avoid the whole situation altogether. However, the file sizes involved may become huge, making this method unattractive. Are there any other alternatives, or is this a bad design? Thanks
Python shared file access between threads
1.2
0
0
525
15,938,025
2013-04-10T23:02:00.000
5
0
0
0
python-2.7,scipy,sparse-matrix,scikit-learn
15,949,294
1
true
0
0
I think the easiest would be to create a new sparse matrix with your custom features and then use scipy.sparse.hstack to stack the features. You might also find the "FeatureUnion" from the pipeline module helpful.
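A sketch of that hstack approach (shapes are illustrative; the domain features are assumed to be one row per document):

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack

X_tfidf = csr_matrix(np.eye(2))              # stand-in for the TfidfVectorizer output
custom = np.array([[0.5, 1.0], [0.0, 2.0]])  # illustrative domain features

# Stack column-wise without ever densifying the tf-idf matrix.
X = hstack([X_tfidf, csr_matrix(custom)]).tocsr()
```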
1
5
1
I am working on a text classification problem using scikit-learn classifiers and its text feature extraction, particularly the TfidfVectorizer class. The problem is that I have two kinds of features: the first are captured by the n-grams obtained from TfidfVectorizer, and the others are domain-specific features that I extract from each document. I need to combine both kinds of features into a single feature vector for each document; to do this I need to update the scipy sparse matrix returned by TfidfVectorizer by adding new dimensions to each row holding the domain features for that document. However, I can't find a neat way to do this; by neat I mean without converting the sparse matrix into a dense one, since it simply won't fit in memory. Probably I am missing a feature in scikit-learn or something, since I am new to both scipy and scikit-learn.
How to Extend Scipy Sparse Matrix returned by sklearn TfIdfVectorizer to hold more features
1.2
0
0
692
15,939,357
2013-04-11T01:37:00.000
3
0
1
0
python,hex
15,939,391
2
false
0
0
If the first character has its high bit set (8-F), then subtract the value 0x1000... (with the number of 0s equal to the number of hexadecimal characters in the word) from your value after parsing it. E.g., FF20 has its high bit set; convert it to decimal and subtract 0x10000 from it (65312 - 65536 = -224). You can compute the value 0x100... as 2 to the power of 4 times the number of hexadecimal characters in the word, e.g. 2^(4*4) = 65536.
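That subtraction, wrapped in a small function:

```python
def signed_word(hex_str):
    value = int(hex_str, 16)
    bits = 4 * len(hex_str)        # 4 bits per hex digit
    if value >= 1 << (bits - 1):   # high bit set -> negative
        value -= 1 << bits         # subtract 0x1000... (2 ** bits)
    return value

print(signed_word("FF20"))  # -224
```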
1
3
0
Crucial terminology aside, the word value of 0xFF20 is -224. From int('FF20', 16) I get 65312. Considering the high-level nature of the language, what is the method for getting the signed "word" value of a hex string in Python?
Word of Hexadecimal Value in Python
0.291313
0
0
1,097
15,940,277
2013-04-11T03:33:00.000
1
1
0
0
python,emacs,tramp
15,959,250
1
true
0
0
I figured out my mistake. Tramp seems incompatible with one of the Python auto-complete packages, so I removed it. Then tramp works well.
1
0
0
I have installed tramp in my emacs properly. I can use it to edit remote txt files; however, once I create a *.py file on the remote host and edit it, after I input 2 letters the whole emacs freezes and doesn't respond. Could anyone give me some hints on this issue?
How to use emacs tramp to edit remote python file?
1.2
0
0
480
15,943,084
2013-04-11T07:34:00.000
0
0
1
0
python
15,943,189
1
false
0
0
A Python version number is in x.y.z format, e.g. 2.7.4. x is the major release; it often breaks a lot of things. y is the minor release; it often changes a few APIs. z is the bug-fix release; it just fixes some bugs, and all APIs are compatible with the earlier version. So, the quick answer is: Python 2.7.4 is just a bug-fix release of the 2.7 series; you could and should upgrade to that version without breaking anything. Generally, you needn't do any extra work either. Unfortunately, if Python had released a version with a different y number a few days ago, then although the modules for the earlier version should work with some modifications (if you are lucky enough, you don't need to modify the code), the modules would have to be rebuilt to work on the latest Python. That means you would have to wait for a new build, which takes about a month, depending on the maintainers and packagers. Or you would have to build from source yourself. So you could just keep the current version of Python you use, and upgrade later. If they released a version with a different x number, Python may have changed a lot of APIs, breaking almost everything. You would need to think about it carefully. Good luck.
1
2
0
The reason I ask is that some of the packages I have installed were specific to 2.7.3, installed as Windows installers from PyPI. Would these need to be re-installed, or would pip update them? Thanks.
Should I upgrade python 2.7.3 to 2.7.4
0
0
0
406
15,943,769
2013-04-11T08:14:00.000
4
0
0
0
python,pandas,dataframe
61,413,025
15
false
0
0
Either of these can do it (df is the name of the DataFrame): Method 1: using the len function: len(df) will give the number of rows in a DataFrame named df. Method 2: using the count function: df[col].count() will count the non-NA rows in a given column col, and df.count() will give those counts for all the columns.
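Both methods side by side (a minimal example; note that count() skips NA values):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, None], "b": [4, 5, 6]})
print(len(df))          # 3: total number of rows
print(df["a"].count())  # 2: non-NA values in column 'a'
print(df.count())       # per-column non-NA counts
```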
1
1,569
1
How do I get the number of rows of a pandas dataframe df?
How do I get the row count of a Pandas DataFrame?
0.053283
0
0
3,270,072
15,944,011
2013-04-11T08:26:00.000
0
1
0
1
c++,python,eclipse,debugging,eclipse-cdt
30,459,774
1
false
0
0
For me it works great to just add a C/C++ debug configuration for the program /usr/bin/python (or whatever path you have to the Python interpreter) and then put the Python program you want to run as the arguments. Put the breakpoints you want in the C code and you should be all set for running the debug configuration and opening the debug perspective. If it still does not work, check that you are using the Legacy (or Standard) Process Launcher. For some reason the GDB process launcher does not seem to work here.
1
1
0
I have a C++ project that is called from Python (via boost-python) and I want to debug the C++ code from the Python process. How can I do that? On Windows with Visual Studio I can use the attach-to-process functionality. How do I achieve the same in Eclipse? Thanks
Debug a Python C++ extension from Eclipse (under Linux)
0
0
0
750
15,947,024
2013-04-11T10:59:00.000
1
0
0
0
python,django
15,947,251
1
true
1
0
The models for a given app usually live in the app's "models.py" module. Now, Rails and Django might not have the same definition of what an "app" is. In Django you have a "project", which consists of one or more (usually more) "apps", and it's considered good practice to try to make apps as independent (hence potentially reusable) as possible; there are indeed quite a few reusable apps available, so it's pretty uncommon to have all of a project's models in a single models.py module. But anyway: if what you really want is "to see the entire database schema", then the best solution is to ask the database itself, whatever framework you use.
1
1
0
I come from Ruby on rails world. In rails, there is a file called schema.rb. It lists all the tables, columns and their types of the entire rails app. Is there anyway in django to see the entire database schema at one place?
How to see django website schema
1.2
0
0
60
15,947,092
2013-04-11T11:03:00.000
1
0
1
0
python,python-3.x,parallel-processing,thread-safety,python-multithreading
16,046,229
1
false
0
0
You cannot use the same class for both in-process locks, and cross-process locks. The implementations are quite different. Your current strategy is the correct one.
1
4
0
I have been trying to code a cache in python 3 and I want to avoid concurrency issues for both, threads and process. I have been using threading for thread-safe code, and multiprocessing for process safety. I can solve my problem using Lock from threading and Lock from multiprocessing at the same time. But I was wondering if there is a "generic" Lock to do this stuff or something like that. Thank you in advance ;-)
Python 3 "Lock" for both: threads and processes
0.197375
0
0
619
15,948,358
2013-04-11T12:06:00.000
3
0
1
0
python,circusd
15,948,568
1
true
0
0
Found it: it's not in the config section of the docs, but you can set respawn to False in the config, and if/when the process dies it won't be restarted. This is documented in the method signature of circus.watcher.Watcher, under "Circus Library".
1
0
0
I'm using circus to manage a number of loosely coupled processes: a main process that needs to run once, and then a number of secondary processes that start a few moments later. The secondary processes need to be restarted until the work is complete, but the primary process must only be executed once. It would seem that once the process finishes, despite various settings, it's re-run. I've tried setting max_retry to: -1, which has no effect and continuously re-runs the process; 0, which doesn't run the process at all; and 1, which also has no effect and continuously re-runs the process. Is there any way I can safely and successfully end the main process after its first run?
Circus: run a process once?
1.2
0
0
414
15,948,887
2013-04-11T12:32:00.000
3
0
1
0
ipython-notebook
15,951,011
1
true
0
0
Each cell in an IPython notebook is independent and is run individually. When you shift-enter in the first cell, you fully define my_function with a one-line body: i=0 in your case. Adding code to a function after it has been defined is not possible. The same thing happens when you enter something at the (I)Python prompt: you don't define a function across multiple input prompts. As for why i is undefined, it is because the scope of i is restricted to the function, which is classical in most programming languages. So don't think of the IPython notebook as one continuous text file with markdown mixed in, but as separate things to be done in each cell, executed sequentially.
1
1
0
I'm using the IPython notebook. To comment my function I would like to insert a markdown cell in the middle of the function's definition. But the second code cell starts as if there were no connection with what I wrote before. Example: Cell 1: def my_function(one,two): i=0 Markdown: i is used for the index Cell 2: i+=2 First I get an 'IndentationError: unexpected indent' and secondly a 'NameError: name 'i' is not defined'
Ipython Notebook: use of different cell for one function
1.2
0
0
4,771
15,951,036
2013-04-11T14:11:00.000
2
0
0
0
python,django,module,tablename
15,951,290
4
false
1
0
There's really no correct answer to this. In general, the way in which you break down any programming task into 'modules' is very much a matter of personal taste. My own view on the subject is to start with a single module, and only break it into smaller modules when it becomes 'necessary', e.g. when a single module becomes excessively large. With respect to the apps, if all the apps share the same database tables, you'll probably find it easier to do everything in a single app. I think using multiple Django apps is only really necessary when you want to share the same app between multiple projects.
1
3
0
I'm very new to Django, and to Python as well. I want to try out a project written in Django. Let's say the project has 3 modules: User (CRUD, forgot password, login), Booking (CRUD, search), and Default (basically for web users to view: home page, about us). All of these have different business logic for the same entities. Should I create 3 apps for this? If I create 3 different apps, then the table names will all be different, since Django automatically adds the app name as a prefix to each table name. Any suggestions?
django structure for multiple modules
0.099668
0
0
1,660
15,951,228
2013-04-11T14:20:00.000
1
0
0
1
python,sync,google-sheets,google-spreadsheet-api
16,007,383
1
false
0
0
Google Apps Script will let you create a "web app", and you can pass params to it. There are good docs on how to do this; it should be a quick and easy approach compared to using the GData-style spreadsheet API (no OAuth2 etc.).
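On the Python side, the background script could then hit that web app with a plain HTTP request; a sketch where the deployment URL and parameter names are hypothetical:

    import datetime
    import urllib.parse
    import urllib.request

    URL = "https://script.google.com/macros/s/DEPLOYMENT_ID/exec"  # hypothetical

    # pass the power-on time as query parameters; the Apps Script doGet() reads them
    params = urllib.parse.urlencode({"event": "boot",
                                     "time": datetime.datetime.now().isoformat()})
    urllib.request.urlopen(URL + "?" + params)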
1
1
0
I would like to create a system that runs in the background on the computer: when the computer is turned on, it sends the power-on time to a spreadsheet in Google Spreadsheet, and when it is turned off, it also syncs the power-off time to the same spreadsheet. How could I build this?
How to sync the computer local data in Google Spreadsheet?
0.197375
0
0
270
15,953,303
2013-04-11T15:49:00.000
13
0
1
1
python,python-idle
15,954,961
3
true
0
0
If you're running IDLE from a Windows shortcut, you can just right-click on the shortcut, choose "Properties", and change the field "Start in" to any directory you like.
1
8
0
The installation directory is "d:\python2.7", and every time I open IDLE and click File and then Open in the menu, the default directory is also "d:\python2.7", so I have to navigate to the directory I want. Is there any way I can change the default? Via a configuration file or an environment variable? I tried adding PYTHONPATH as an environment variable, but it doesn't work. I also imported os and used os.chdir(), but that only changes the working directory, which is not what I want. Thank you.
How to change default directory for IDLE in windows?
1.2
0
0
11,874
15,956,099
2013-04-11T18:24:00.000
3
0
0
0
python,amazon-web-services,amazon-s3,boto,data-transfer
15,957,021
1
true
1
0
If you are using the copy_key method in boto then you are doing server-side copying. There is a very small per-request charge for COPY operations just as there are for all S3 operations but if you are copying between two buckets in the same region, there is no network transfer charges. This is true whether you run the copy operations on your local machine or on an EC2 instance.
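A sketch of that server-side copy with boto, using hypothetical bucket and key names:

    import boto

    conn = boto.connect_s3()
    dst = conn.get_bucket("destination-bucket")
    # copy_key(new_key_name, src_bucket_name, src_key_name) issues an S3 COPY;
    # the object bytes never pass through your machine
    dst.copy_key("path/to/object", "source-bucket", "path/to/object")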
1
1
0
I wrote a little script that copies files from a bucket on one S3 account to a bucket in another S3 account. In this script I use the bucket.copy_key() function to copy a key from one bucket into the other. I tested it and it works fine, but the question is: do I get charged for copying files from S3 to S3 in the same region? What I'm worried about is that I may have missed something in the boto source code; I hope it doesn't store the file on my machine and then send it to the other S3 account. Also (sorry if it's too many questions in one topic): if I upload and run this script from an EC2 instance, will I get charged for bandwidth?
Will I get charged for transferring files between S3 accounts using boto's bucket.copy_key() function?
1.2
1
1
322
15,957,071
2013-04-11T19:21:00.000
3
0
1
0
python,numpy,pip,easy-install
59,262,156
2
false
0
0
I was facing the same error while installing the requirements for my django project. This worked for me. Upgrade your setuptools version via pip install --upgrade setuptools and run the command for installing the packages again.
1
3
1
I am running Python 2.7.2 on my machine. I am trying to install numpy with easy_install and pip, but none of them are able to do so. So, when I try: sudo easy_install-2.7 numpy I get this error: "The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available." Moreover, when I try with pip: sudo pip-2.7 install numpy I get this error: RuntimeError: Broken toolchain: cannot link a simple C program Is there any fix available for this?
easy_install and pip giving errors when trying to install numpy
0.291313
0
0
5,561
15,958,249
2013-04-11T20:27:00.000
0
0
0
0
python,sql,phpmyadmin,mysql-python,host
16,370,493
1
false
0
0
If your phpMyAdmin runs on the same machine as the MySQL server, 127.0.0.1 is enough (and safer, if your MySQL server binds to 127.0.0.1 rather than 0.0.0.0), provided you use TCP rather than a Unix socket.
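For the remote case, HOST is just the server machine's reachable address; a sketch with made-up credentials (the /phpmyadmin/ URL is only a web front end, not something MySQLdb understands):

    import MySQLdb

    conn = MySQLdb.connect(
        host="203.0.113.7",  # the server's external IP, not the phpMyAdmin URL
        port=3306,
        user="remote_user",
        passwd="secret",
        db="mydb",
    )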
1
0
0
Things to note in advance: I am using WampServer 2.2; I've forwarded port 80; I added a rule to my firewall to accept traffic through port 3306; I have added "Allow from all" in the directory section of a config file whose name I forget; my friend can access my phpMyAdmin server through his browser. I am quite the novice, so bear with me. I am trying to get my friend to be able to alter the databases on my phpMyAdmin server through Python. I am able to do so on the host machine using "127.0.0.1" as the HOST. My question is: does he have to use my external IP as the HOST, or my external IP followed by /phpmyadmin/ as the HOST? And if using the external IP is correct, what could the problem be?
What do I use for HOST to connect to a remote server with mysqldb python?
0
1
0
120
15,958,678
2013-04-11T20:53:00.000
1
0
1
0
python,database,singleton
15,960,691
2
false
0
0
If you're using an object oriented approach, then abamet's suggestion of attaching the database connection parameters as class attributes makes sense to me. The class can then establish a single database connection which all methods of the class refer to as self.db_connection, for example. If you're not using an object oriented approach, a separate database connection module can provide a functional-style equivalent. Devote a module to establishing a database connection, and simply import that module everywhere you want to use it. Your code can then refer to the connection as db.connection, for example. Since modules are effectively singletons, and the module code is only run on the first import, you will be re-using the same database connection each time.
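A minimal sketch of that module approach, with sqlite3 standing in for whatever driver is in use:

    # db.py -- the connection is established once, on first import
    import sqlite3

    connection = sqlite3.connect("app.db")  # hypothetical connection parameters

Everywhere else you just do import db and call db.connection.execute(...); repeated imports reuse the same module object, and hence the same connection.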
2
7
0
So there has been a lot of hating on singletons in Python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using/querying a database? Why would I make a new instance for every simple query when I could reuse an existing connection that is already set up? What would be a Pythonic approach/alternative to this? Thank you!
DB-Connections Class as a Singleton in Python
0.099668
1
0
6,703
15,958,678
2013-04-11T20:53:00.000
7
0
1
0
python,database,singleton
15,958,721
2
true
0
0
Normally, you have some kind of object representing the thing that uses a database (e.g., an instance of MyWebServer), and you make the database connection a member of that object. If you instead have all your logic inside some kind of function, make the connection local to that function. (This isn't too common in many other languages, but in Python, there are often good ways to wrap up multi-stage stateful work in a single generator function.) If you have all the database stuff spread out all over the place, then just use a global variable instead of a singleton. Yes, globals are bad, but singletons are just as bad, and more complicated. There are a few cases where they're useful, but very rare. (That's not necessarily true for other languages, but it is for Python.) And the way to get rid of the global is to rethink your design. There's a good chance you're effectively using a module as a (singleton) object, and if you think it through, you can probably come up with a good class or function to wrap it up in. Obviously just moving all of your globals into class attributes and @classmethods is just giving you globals under a different namespace. But moving them into instance attributes and methods is a different story. That gives you an object you can pass around, and, if necessary, an object you can have 2 of (or maybe even 0 under some circumstances), attach a lock to, serialize, etc. In many types of applications, you're still going to end up with a single instance of something: every Qt GUI app has exactly one MyQApplication, nearly every web server has exactly one MyWebServer, etc. No matter what you call it, that's effectively a singleton or global. And if you want to, you can just move everything into attributes of that god object. But just because you can do so doesn't mean you should. You've still got function parameters, local variables, globals in each module, other (non-megalithic) classes with their own instance attributes, etc., and you should use whatever is appropriate for each value. For example, say your MyWebServer creates a new ClientConnection instance for each new client that connects to you. You could make the connections write MyWebServer.instance.db.execute whenever they want to execute a SQL query... but you could also just pass self.db to the ClientConnection constructor, and each connection then just does self.db.execute. So, which one is better? Well, if you do it the latter way, it makes your code a lot easier to extend and refactor. If you want to load-balance across 4 databases, you only need to change code in one place (where the MyWebServer initializes each ClientConnection) instead of 100 (every time the ClientConnection accesses the database). If you want to convert your monolithic web app into a WSGI container, you don't have to change any of the ClientConnection code except maybe the constructor. And so on.
2
7
0
So there has been a lot of hating on singletons in Python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using/querying a database? Why would I make a new instance for every simple query when I could reuse an existing connection that is already set up? What would be a Pythonic approach/alternative to this? Thank you!
DB-Connections Class as a Singleton in Python
1.2
1
0
6,703
15,958,980
2013-04-11T21:14:00.000
0
1
0
0
python,twitter
30,488,072
4
false
0
0
Twitter is set up so that you can't retweet the same tweet more than once. So if your bot tries to retweet something again, the API will redirect it to an Error 403 page. You can test this policy by reducing the time between runs of the script to about a minute; this will generate the Error 403 as long as the current feed of tweets remains unchanged.
2
0
0
I'm writing a simple Twitter bot in Python and was wondering if anybody could answer and explain the question for me. I'm able to make Tweets, but I haven't had the bot retweet anyone yet. I'm afraid of tweeting a user's tweet multiple times. I plan to have my bot just run based on Windows Scheduled Tasks, so when the script is run (for example) the 3rd time, how do I get it so the script/bot doesn't retweet a tweet again? To clarify my question: Say that someone tweeted at 5:59pm "#computer". Now my twitter bot is supposed to retweet anything containing #computer. Say that when the bot runs at 6:03pm it finds that tweet and retweets it. But then when the bot runs again at 6:09pm it retweets that same tweet again. How do I make sure that it doesn't retweet duplicates? Should I create a separate text file and add in the IDs of the tweets and read through them every time the bot runs? I haven't been able to find any answers regarding this and don't know an efficient way of checking.
How do I make sure a twitter bot doesn't retweet the same tweet multiple times?
0
0
1
2,022
15,958,980
2013-04-11T21:14:00.000
0
1
0
0
python,twitter
15,959,518
4
false
0
0
You should store somewhere the timestamp of the latest tweet processed; that way you won't go through the same tweets twice, and hence won't retweet a tweet twice. This should also make tweet processing faster (because you only process each tweet once).
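A minimal sketch of persisting such a marker between runs; here the newest tweet ID is stored (Twitter APIs conventionally accept a since_id), and the file name is made up:

    import os

    STATE_FILE = "last_retweeted_id.txt"

    def load_last_id():
        # returns None on the very first run
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return int(f.read().strip())
        return None

    def save_last_id(tweet_id):
        with open(STATE_FILE, "w") as f:
            f.write(str(tweet_id))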
2
0
0
I'm writing a simple Twitter bot in Python and was wondering if anybody could answer and explain the question for me. I'm able to make Tweets, but I haven't had the bot retweet anyone yet. I'm afraid of tweeting a user's tweet multiple times. I plan to have my bot just run based on Windows Scheduled Tasks, so when the script is run (for example) the 3rd time, how do I get it so the script/bot doesn't retweet a tweet again? To clarify my question: Say that someone tweeted at 5:59pm "#computer". Now my twitter bot is supposed to retweet anything containing #computer. Say that when the bot runs at 6:03pm it finds that tweet and retweets it. But then when the bot runs again at 6:09pm it retweets that same tweet again. How do I make sure that it doesn't retweet duplicates? Should I create a separate text file and add in the IDs of the tweets and read through them every time the bot runs? I haven't been able to find any answers regarding this and don't know an efficient way of checking.
How do I make sure a twitter bot doesn't retweet the same tweet multiple times?
0
0
1
2,022
15,959,936
2013-04-11T22:25:00.000
1
0
0
0
python,django,web-applications,view
15,960,018
3
false
1
0
After following the django tutorial, as suggested in a comment above, you'll want to create a view that has a text field and a submit button. On submission of the form, your view can run the script that you wrote (either imported from another file or copy and pasted; importing is probably preferable if it's complicated, but yours sounds like it's just a few lines), then return the number that you calculated. If you want to get really fancy, you could do this with some javascript and an ajax request, but if you're just starting, you should do it with a simple form first.
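A sketch of such a view, with hypothetical template and field names; the counting line stands in for importing your own function:

    # views.py
    from django.shortcuts import render

    def word_count(request):
        text = request.POST.get("text", "")
        count = len(text.split())  # or: from mymodule import count_words
        return render(request, "word_count.html", {"text": text, "count": count})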
2
5
0
I have written a short Python script which takes a text and does a few things with it. For example, it has a function which counts the words in the text and returns the number. How can I run this script within Django? I want to take that text from the view (a text field or something) and return a result back to the view. I want to use Django only to give the script a web interface. And it is only for me, maybe for a few people, not for a big audience. No deployment. Edit: When I first thought the solution would be "Django", I asked for it explicitly. That was of course a mistake, because of my ignorance of WSGI. Unfortunately nobody advised me of this mistake.
How can I run my python script from within a web browser and process the results?
0.066568
0
0
20,845
15,959,936
2013-04-11T22:25:00.000
2
0
0
0
python,django,web-applications,view
21,333,600
3
false
1
0
What nobody told me here, since I asked about Django: what I really needed was a simple solution called WSGI. In order to make your Python script accessible from the web browser you need neither Django nor Flask; much easier is a solution like Werkzeug or CherryPy.
2
5
0
I have written a short Python script which takes a text and does a few things with it. For example, it has a function which counts the words in the text and returns the number. How can I run this script within Django? I want to take that text from the view (a text field or something) and return a result back to the view. I want to use Django only to give the script a web interface. And it is only for me, maybe for a few people, not for a big audience. No deployment. Edit: When I first thought the solution would be "Django", I asked for it explicitly. That was of course a mistake, because of my ignorance of WSGI. Unfortunately nobody advised me of this mistake.
How can I run my python script from within a web browser and process the results?
0.132549
0
0
20,845
15,960,436
2013-04-11T23:11:00.000
1
0
1
0
ipython,ipython-notebook
15,969,554
1
false
0
0
Cells/notebooks have no knowledge of dependencies, but you can use a kernel restart together with "Run All" or "Run All Below" to refresh the whole notebook.
1
0
0
In Mathcad Prime 2 you have this feature where, if you change anything at the top of a worksheet, it automatically recomputes all subsequent formulas that are affected by the change. It figures out which of the formulas are now affected and recomputes only those, leaving everything else intact. Is there a way to do the same thing in IPython? If I change a cell, IPython should first find out which other cells now contain wrong results and have to be recomputed, then automatically recompute only those cells.
How to autoexecute ipython notebook cells just like in Mathcad
0.197375
0
0
633
15,962,195
2013-04-12T02:39:00.000
0
1
0
0
python,authentication,encryption,google-authenticator
16,110,166
2
false
0
0
Definition: HOTP(K,C) = Truncate(HMAC(K,C)) & 0x7FFFFFFF, where K is a secret key and C is a counter. It is designed so that attackers cannot obtain K and C if they have the HOTP string, since HMAC is a one-way hash (not bidirectional encryption). K and C need to be protected, since losing them will compromise the entire OTP system. Having said that, if K comes from a dictionary and we know C (e.g. the current time), we can generate the entire dictionary of HOTP/TOTP values and figure out K. Applying one-way encryption to the HOTP/TOTP value (i.e. double encryption) would mathematically make it harder to decode, although it doesn't prevent other forms of attack (e.g. keystroke logging) or applying the same encryption to the dictionary list of HOTP/TOTP values. It is human nature to reuse the same set of easily remembered passwords for EVERYTHING, hence the need to hide this password on digital devices or when transmitting it over the internet. Implementation of the security procedure or protocol is also crucial; it is like choosing a good password K but leaving it lying around the desk for everyone, or the server holding K (for HMAC) not being inside a private network protected by a few layers of firewall.
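For concreteness, a minimal sketch of HOTP per RFC 4226 using only the standard library (the final line checks against the RFC's first test vector):

    import hashlib
    import hmac
    import struct

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()  # one-way HMAC-SHA1
        offset = digest[-1] & 0x0F                          # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(hotp(b"12345678901234567890", 0))  # "755224", the RFC 4226 test vector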
1
15
0
I am building a two-factor authentication system based on the TOTP/HOTP. In order to verify the otp both server and the otp device must know the shared secret. Since HOTP secret is quite similar to the user's password, I assumed that similar best practices should apply. Specifically it is highly recommended to never store unencrypted passwords, only keep a salted hash of the password. Neither RFCs, nor python implementations of HOTP/TOTP seem to cover this aspect. Is there a way to use one-way encryption of the OTP shared secret, or is it a stupid idea?
Is it possible to salt and or hash HOTP/TOTP secret on the server?
0
0
0
3,075
15,962,315
2013-04-12T02:57:00.000
1
0
0
1
python,google-app-engine
15,987,618
2
true
0
0
I think if you can sequence the orders on a commodity, then you can clear orders offline. By "offline", I mean you can come to me at the end of the day, tell me the sequence of the orders, and I can tell you which trades happened. The one snag is: what if a buyer does not have the funds when a transaction should have cleared? You can address this in two ways: put the funds in escrow so that any orders that can clear do, or drop buy orders when you try to clear them if the funds are not available. As you suggested, you'll probably need cross-entity group transactions in order to make sure the fund transfers are correct (i.e. funds are neither created nor destroyed). You can sequence orders by time (e.g. placed_at = db.DateTimeProperty(auto_now_add=True)). If two orders have the same time, then use something (preferably something fair and deterministic) to break the tie; a hash (for fairness) of the numeric id (for determinism) might not be a bad choice here. App Engine is inherently multithreaded in the sense that an app can have many concurrent instances (instances are not necessarily threads within the same process, though). Instances are created automatically, and there is currently no way to cap the number of instances at 1. If the state of your app is in the Datastore (as opposed to local memory, memcache, or somewhere else), and your transactions are correct, then your app will be multithreaded "for free". Of course, your transactions being correct is not trivial. Another tool to keep in mind is that tasks can be transactional. This may come in handy if you want to do offline book clearing using tasks.
1
0
0
I'm working on a small project on Google App Engine, trying to implement a site that allows participants to buy and sell fake goods, similarly to a stock market where the system will show real time-ish BID/ASK spreads. As a quick example: A Seller places and order to sell 10 Boxes for 8.00 (Order 1) A Buyer places then an order to purchase 5 boxes for up to 9.00 (Order 2) When the second order is placed, the system will need to do multiple tasks, all contingent on all of the tasks completing successfully. Take the funds (8.00 x 5) to pay for the boxes from the Buyer and give them to the Seller Take the boxes (5) from the Seller and give them to the Buyer Update the orders as complete (OID 2 ) or update as partially filled (OID 1) so that they cannot be double matched Take a fee from each of the participants and add it to a system account If all that I needed was to move funds from one participant to another, I can do that safely even if the system were to fail in the middle. But to ensure that all of the tasks above complete correctly, and roll-back if any of them fail seems overly complex in App Engine. Additionally, my "Order Book" and order matching engine are single threaded right now (using Mutexes for locking.) This seems to go against the whole point of using App Engine, but I'm not sure I see a way around it. So (finally) - My questions are: Is there a best practice when using App Engine where there are multiple steps that all depend on every step completing correctly? Does anyone have any suggestions as how to either, allow the order book to be multi-threaded, or if it remains single threaded - is there a best practice to not have this core piece block the use of the site as it scales? I've thought about using tasks to queue the order adds/updates/cancels to keep the book separate from direct participant input. Thank you, I look forward to your help!
How to implement bank/cross-order type updates scalably and safely
1.2
0
0
140
15,965,699
2013-04-12T07:40:00.000
1
0
1
0
python
15,965,846
2
true
0
0
Type help() in the shell, then at the help prompt type modules to see a complete list of all installed modules.
1
0
0
OS: Windows XP SP3. Situation: I installed three Pythons on my machine: Python 2.6, Python 2.7, and Python EPD (for pylab). Problem: I installed wxPython, and during setup I chose to install it to "Python in system registry"; I don't know which Python this package was installed to. What I tried: I tried writing import wx in all the shells and found that it was installed to EPD Python. Bigger issue: I don't want to keep doing this each time I install a package. So is there a command that can be used in the shell, or any other way, to find out which packages are installed where? Please help me with this issue.
What all packages do I have?
1.2
0
0
71
15,966,368
2013-04-12T08:19:00.000
2
0
1
0
python,python-3.x,syntax-error
15,966,435
2
false
0
0
In Python 2.x, a leading zero in an integer literal means it is interpreted as octal. This was dropped for Python 3, which requires the 0o prefix. A leading zero in a literal was left as a syntax error so that old code relying on the old behavior breaks loudly instead of silently giving the "wrong" answer.
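For example, in a Python 3 session:

    >>> 0o10          # octal needs the 0o prefix
    8
    >>> 010           # bare leading zero: deliberately a loud failure
      File "<stdin>", line 1
        010
          ^
    SyntaxError: invalid token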
1
5
0
Why does a number like 01 give a syntax error when typed in Python interactive mode? When 00 is entered, the interpreter evaluates it to 0; however, numbers like 01, 001, or anything starting with 0 produce "SyntaxError: invalid token". Entering 1,000 at the prompt evaluates to the tuple (1, 0), but 1,001 doesn't evaluate to (1, 1); instead a syntax error is displayed. Why does the Python interpreter behave this way?
Why does a number like 01 give a syntax error in Python interactive mode
0.197375
0
0
1,856
15,967,035
2013-04-12T08:56:00.000
0
0
0
0
python,macos,keyboard-shortcuts,macports,python-idle
15,967,261
2
true
0
0
Make sure that you have the cursor more than one space after the last >>>. It won't paste unless you do.
1
2
0
I am using Python 2.7.3 on OS X 10.8.3, installed with MacPorts. If I hit command + c or command + x on content in IDLE, I can paste it with command + v into other apps. However, I cannot use command + v to paste into IDLE itself, either from other apps or from within IDLE. If I highlight content in IDLE, a right-click on the mouse will duplicate (i.e. paste) that content. This is sort of a workaround, but it is annoying not to be able to paste URL's or code from outside of IDLE.
Python IDLE: cmd-v to paste not working on OS X
1.2
0
0
597
15,969,213
2013-04-12T10:38:00.000
2
0
1
1
python,asynchronous,flask,gevent
15,970,840
2
false
0
0
How about simply using a thread pool and a Queue? You can then process your work in a separate thread in a synchronous manner and you won't have to worry about blocking at all. That said, Python is not well suited to CPU-bound tasks in the first place, so you should also think about spawning subprocesses.
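A sketch of the subprocess route using only the standard library; highlight() is a placeholder for the expensive Pygments call:

    from multiprocessing import Pool

    def highlight(source):
        # stand-in for the CPU-bound syntax-highlighting work
        return source.upper()

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            result = pool.apply_async(highlight, ("print('hello')",))
            print(result.get(timeout=10))  # blocks this worker, not the whole server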
1
16
0
I have recently been working on a pet project in Python using Flask. It is a simple pastebin with server-side syntax highlighting support via Pygments. Because this is a costly task, I delegated the syntax highlighting to a Celery task queue, and in the request handler I'm waiting for it to finish. Needless to say this does no more than offload CPU usage to another worker, because waiting for a result still locks the connection to the webserver. Despite my instincts telling me to avoid premature optimization like the plague, I still couldn't help myself from looking into async. Async: If you have been following Python web development lately, you surely have seen that async is everywhere. What async does is bring back cooperative multitasking, meaning each "thread" decides when and where to yield to another. This non-preemptive process is more efficient than OS threads, but still has its drawbacks. At the moment there seem to be 2 major approaches: event/callback style multitasking, and coroutines. The first one provides concurrency through loosely-coupled components executed in an event loop. Although this is safer with respect to race conditions and provides for more consistency, it is considerably less intuitive and harder to code than preemptive multitasking. The other one is a more traditional solution, closer to threaded programming style, the programmer only having to manually switch context. Although more prone to race conditions and deadlocks, it provides an easy drop-in solution. Most async work at the moment is done on what is known as IO-bound tasks, tasks that block to wait for input or output. This is usually accomplished through the use of polling and timeout-based functions that can be called, and if they return negatively, context can be switched. Despite the name, this could be applied to CPU-bound tasks too, which can be delegated to another worker (thread, process, etc.) and then non-blockingly waited on to yield. Ideally, these tasks would be written in an async-friendly manner, but realistically this would imply separating code into small enough chunks not to block, preferably without scattering context switches after every line of code. This is especially inconvenient for existing synchronous libraries. Due to the convenience, I settled on using gevent for async work and was wondering how CPU-bound tasks are to be dealt with in an async environment (using futures, celery, etc.?). How do I use async execution models (gevent in this case) with traditional web frameworks such as Flask? What are some commonly agreed-upon solutions to these problems in Python (futures, task queues)? EDIT: To be more specific: how to use gevent with Flask, and how to deal with CPU-bound tasks in this context? EDIT2: Considering how Python has the GIL, which prevents optimal execution of threaded code, this leaves only the multiprocessing option, in my case at least. This means either using concurrent.futures or some other external service dealing with the processing (which can open the doors for even something language-agnostic). What would, in this case, be some popular or often-used solutions with gevent (i.e. celery)? Best practices?
Python async and CPU-bound tasks?
0.197375
0
0
6,728
15,971,821
2013-04-12T12:47:00.000
3
0
0
0
python,django,signals
16,020,917
1
true
1
0
I feel like the only option is to process the data after every m2m_changed signal, since there doesn't appear to be an event or signal that maps to "all related data on this model has finished saving." If this is high cost, you could handle the processing asynchronously. When I encountered a similar situation, I added a boolean field to the model to track its processing state (e.g., MyModel.needs_processing), and a separate asynchronous task queue (Celery, in my case) would sweep through every minute and handle the processing/state resetting. In your case, if m2m_changed fires and needs_processing is False, you could set needs_processing to True and save the model, marking it for processing by your task queue. Then, even when a second m2m_changed fires for the other M2M field, it wouldn't incur duplicate processing costs.
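A sketch of that flag-setting step, assuming a hypothetical MyModel with a needs_processing BooleanField and two M2M fields named tags and authors:

    from django.db.models.signals import m2m_changed

    from myapp.models import MyModel  # hypothetical app and model

    def mark_dirty(sender, instance, action, **kwargs):
        # fires for either M2M field; setting the flag twice is harmless
        if action in ("post_add", "post_remove", "post_clear") and not instance.needs_processing:
            instance.needs_processing = True
            instance.save()

    m2m_changed.connect(mark_dirty, sender=MyModel.tags.through)
    m2m_changed.connect(mark_dirty, sender=MyModel.authors.through)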
1
7
0
I have a Django model with 2 ManyToMany fields. I want to process the data from the model each time it has been saved. The post_save signal is sent before it saves the ManyToMany relations, so I can't use that one. Then you have the m2m_changed signal, but since I have 2 ManyToMany fields I cannot be sure on which ManyToMany field I should put the signal. Isn't there a signal that is triggered after all the ManyToMany fields have been saved?
Django signal after whole model has been saved
1.2
0
0
588
15,972,941
2013-04-12T13:42:00.000
0
1
1
0
jpeg,python-imaging-library,uninstallation,raspberry-pi
16,268,144
1
true
0
0
You can force a re-install of PIL with: pip install -I PIL
1
0
0
Hi, I am getting the error "IOError: decoder jpeg not available" when trying to use some functions from PIL. What I would like to do is remove PIL, install the jpeg decoder, then re-install PIL, but I'm lost as to how to uninstall PIL. Any help would be greatly appreciated.
Remove PIL from raspberry Pi
1.2
0
0
429
15,973,821
2013-04-12T14:21:00.000
0
1
0
1
python,qpid
16,147,555
1
true
0
0
1) That depends on the architecture. Both methods, queues and topics, can get messages from many sources to many destinations. Topics deliver messages to all listeners; queues deliver a message to one of the listeners, whoever grabs it first. 2) Are there any error or log messages pertaining to the failure? I suspect you are running out of resources. 3) No; you should figure out why your messaging fails before 24 hours.
1
0
0
I am using a C++ broker with clients written in C++, Python, and Java. If we run the system overnight, it reliably does not send/receive messages by morning. All messages are exchanged over topics with subjects designating the destination. I have 3 questions: 1.) Should we be using queues? Is there an advantage to using queues over topics? What is the design decision that picks a queue over a topic? Queues seem more rigid (i.e. if you know node A sent a request and wants a response, you would send a response right back; pub/sub). 2.) If a message goes unacknowledged, what can happen? I discovered that the Python module was missing a session.acknowledge(). Could this be causing our overnight failures? I discovered this problem today so I will hopefully have more insight tomorrow. The remedy has been to restart the qpidd service. (We are running on x64 Linux). 3.) Is this a good reason to use cluster fail over?
Qpid reliability
1.2
0
0
329
15,974,476
2013-04-12T14:50:00.000
1
0
0
0
wxpython
15,974,678
2
true
0
1
Have a look at wx.lib.agw.flatnotebook; its FlatNotebook widget supports closable tabs.
1
0
0
Is there any wxPython control that looks like tabbed panels? I can use Notebook, but I'm not able to close the pages. I need something similar to IE, where you can open new tabs and close them when they're no longer needed.
wxpython control similar to tabbed panes
1.2
0
0
648
15,976,101
2013-04-12T16:09:00.000
2
0
1
0
python,security
15,976,835
2
false
0
0
I wouldn't worry about these. If a hostile agent is on your machine, you have bigger issues to worry about than terminal buffers and private memory. I do know that there are already similar solutions that are much slicker than what you describe; browser plugins that combine a master password with the domain name to make a unique plugin, with nice auto-completion features. But if this is mostly a programming exercise, go for it! "Normal" users won't be able to access your terminal buffer. They also shouldn't be able to examine the memory of your process.
1
2
0
I want to try out a system where I use a key and salt it with the name of a website, then hash it and use the hash as my password on the site that it's salted with. But, I'd like to do this securely. My concerns are: The hash (my password for a given site) being printed to the terminal The hash, as well as my universal key used to generate the hash, being in memory. Would it be safe to print the password to the terminal, and just close the terminal after? Would the key and password be gone from memory and disk once Python has completed? I'm going to use getpass, but does that provide any actual security against anything but over-shoulder lookers? Is there a way to securely overwrite the raw key and the hash/password in RAM?
Safely handling passwords in Python?
0.197375
0
0
7,560
15,977,072
2013-04-12T17:05:00.000
-1
0
1
0
python,import
15,977,341
2
false
1
0
If you want to use from module import * and not include everything imported within module then you should define the variable __all__ in module. This variable should be a list of strings naming the classes, variables, modules, etc. that should be imported. From the python documentation If the list of identifiers is replaced by a star (*), all public names defined in the module are bound in the local namespace of the import statement and The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character (_) [...] It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module). (emphasis mine)
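A minimal sketch of a module that controls its star-exports this way:

    # module.py
    import os  # used internally, but NOT exported by "from module import *"

    __all__ = ["f1", "f2"]  # the public API

    def f1():
        return os.getcwd()

    def f2():
        return 42

    def _helper():  # leading underscore: private either way
        pass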
1
1
0
I am trying to import a Python module without importing the imports of that module. I dug around a bit, but the only way to exclude commands from being run when a file is imported is the if __name__ == "__main__": guard. But the module is also imported by various other modules that need that module's imports, so I can't place the imports below the if __name__ == "__main__": guard. Any idea how to solve that? The reason I don't want to import this module's imports is that those modules also get run from a Jython environment (in a jar) and import java.lang functions. I just need to access a few functions in that file, and importing those modules breaks my script. The functions that I am trying to access don't need any of the dependencies that module has. I import via 'from moduleX import f1,f2,f3'
Importing a module without importing that module's imports
-0.099668
0
0
1,523
15,982,362
2013-04-12T23:20:00.000
0
0
1
0
python,functional-programming,interop
15,994,435
4
false
0
0
I think that any answer to this question would be remiss without considering the inertia of object-oriented and imperative languages relative to functional ones. Consider the following situation, beginning with the fact that functional languages are not taught at nearly the frequency that object-oriented or imperative languages are at the secondary, university, or graduate level. As User mentions, there is significant momentum involved in a programmer's choice of language. For example, during the course of a typical CS degree at a four-year university, one may learn a handful of languages and, more than likely, not one of them is a functional language. This typical graduate will then proceed to work in industry, where, after programming for 40+ hours per week for one's job, it is very difficult to take the time not only to learn an entirely new language, but to learn a language that operates completely differently from the ones one already knows. On top of all of this, there is the drawback that functional languages are not nearly as useful in industry as object-oriented or imperative ones are. One can see that, given this current state, it is understandable that the interoperability between Python and Haskell is not what some programmers would like it to be.
3
8
0
My current primary programming language is Python. There are lots of things I like about it, but I also like functional languages; not enough to do an entire program in them, but definitely for certain functionality that fits the functional mould well. Of course .NET is amazing in this regard, having both IronPython and F#. But considering that IronPython support for the scientific Python ecosystem was still dodgy last time I checked, .NET is not much of an option for me. I am a bit shocked at the apparent lack of tools to facilitate interop between CPython and, say, Haskell. They are both mature languages with large communities, and they seem like such a nice match to me. Is there something about their architecture that makes them ill-compatible that I'm missing, or is this just something awesome that is still waiting to happen? To clarify: there are some half-baked projects out there, but I am thinking of something that parallels the awesomeness of Weave, PyCUDA, or Boost; something that automates all the plumbing inherent in interop with just a few annotations.
Python and functional language interop
0
0
0
729
15,982,362
2013-04-12T23:20:00.000
2
0
1
0
python,functional-programming,interop
17,864,825
4
false
0
0
Another approach is to use unix pipes, and just write a Haskell program, and also write a Python program, and have them communicate over text. Haskell and Python even share the same syntax for lists, so it's really easy to pipe data from one to the other.
3
8
0
My current primary programming language is Python. There are lots of things I like about it, but I also like functional languages; not enough to do an entire program in them, but definitely for certain functionality that fits the functional mould well. Of course .NET is amazing in this regard, having both IronPython and F#. But considering that IronPython support for the scientific Python ecosystem was still dodgy last time I checked, .NET is not much of an option for me. I am a bit shocked at the apparent lack of tools to facilitate interop between CPython and, say, Haskell. They are both mature languages with large communities, and they seem like such a nice match to me. Is there something about their architecture that makes them ill-compatible that I'm missing, or is this just something awesome that is still waiting to happen? To clarify: there are some half-baked projects out there, but I am thinking of something that parallels the awesomeness of Weave, PyCUDA, or Boost; something that automates all the plumbing inherent in interop with just a few annotations.
Python and functional language interop
0.099668
0
0
729
15,982,362
2013-04-12T23:20:00.000
0
0
1
0
python,functional-programming,interop
15,991,468
4
false
0
0
Referring to: "Is there something about their architecture that makes them ill-compatible that I'm missing, or is this just something awesome that is still waiting to happen?" I think it is about the people using these languages: there are not many people who want to do Haskell and Python at the same time. To make use of both languages (like Haskell and Python) at the same time, you could either go via the C interface or create a protocol both languages speak. Both are fairly advanced tasks, limiting the number of people who could do it. Sure, there would also be trade-offs which make it difficult to use the full power of both languages. I am using Python, and although I know some Haskell, I do not program in it. I am a bit stuck in object-orientation, telling myself that some day a problem will decide for me that it can be better solved in Haskell than in Python. That problem has not yet occurred, because my first thought is: I know how to do it in Python. I think other 'experts' face that problem too. You start thinking in a language. I think that Haskell has a totally different way of thinking, with its functional style and absence of side effects. To get into this thinking I need to forget, or not use, my Python knowledge. Switching between these two ways of thinking requires some strength. Wrapping it up: because the two languages are not so close, they stay apart. It is hard to do both, and there are not many easy ways to practice doing both.
3
8
0
My current primary programming language is Python. There are lots of things I like about it, but I also like functional languages; not enough to do an entire program in them, but definitely for certain functionality that fits the functional mould well. Of course .NET is amazing in this regard, having both IronPython and F#. But considering that IronPython support for the scientific Python ecosystem was still dodgy last time I checked, .NET is not much of an option for me. I am a bit shocked at the apparent lack of tools to facilitate interop between CPython and, say, Haskell. They are both mature languages with large communities, and they seem like such a nice match to me. Is there something about their architecture that makes them ill-compatible that I'm missing, or is this just something awesome that is still waiting to happen? To clarify: there are some half-baked projects out there, but I am thinking of something that parallels the awesomeness of Weave, PyCUDA, or Boost; something that automates all the plumbing inherent in interop with just a few annotations.
Python and functional language interop
0
0
0
729
15,982,612
2013-04-12T23:50:00.000
0
0
1
0
python,path,enthought
15,995,409
4
false
0
0
The problem described also occurs in a Win 7 Canopy installation. I tried to place files to be imported in several of the locations provided in sys.path(). ['', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\User\Scripts\python27.zip', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\DLLs', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\lib', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\lib\plat-win', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\lib\lib-tk', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\User', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\User\lib\site-packages', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System\lib\site-packages', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System\lib\site-packages\PIL', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System\lib\site-packages\win32', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System\lib\site-packages\win32\lib', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\System\lib\site-packages\Pythonwin', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\lib\site-packages', 'C:\Users\Owner\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\lib\site-packages\IPython\extensions'] The only solution I found was to use: sys.path.append()
2
3
0
Running Enthought Canopy appears to de-activate the normal .profile PATH information (OS X) for python programs run within the Canopy environment. I need to make locations searchable for user files. How to do this is not explained in the user manual. There are several possible places to enter such information (eg the two 'activate' files) but adding extra PATH information in them has no effect. So how is it done? DN
Enthought Canopy: how do I add to the PATH?
0
0
0
6,880
15,982,612
2013-04-12T23:50:00.000
1
0
1
0
python,path,enthought
16,281,145
4
false
0
0
On Mac OSX 10.6.8 this worked % launchctl setenv PYTHONPATH /my/directory:/my/other/directory then launch Canopy and you should see /my/directory and /my/other/directory on sys.path
2
3
0
Running Enthought Canopy appears to de-activate the normal .profile PATH information (OS X) for python programs run within the Canopy environment. I need to make locations searchable for user files. How to do this is not explained in the user manual. There are several possible places to enter such information (eg the two 'activate' files) but adding extra PATH information in them has no effect. So how is it done? DN
Enthought Canopy: how do I add to the PATH?
0.049958
0
0
6,880
15,985,339
2013-04-13T07:20:00.000
2
0
0
0
python,selenium,selenium-webdriver
55,485,019
4
false
0
0
Another way to do it would be to inspect the URL bar in Chrome to find the id of that element, have your WebDriver click it, send the copy-and-paste key chords using Selenium's common Keys helpers, and then print the value out or store it in a variable.
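For comparison, the Python bindings also expose the URL directly via the driver's current_url property, which avoids the clipboard dance; a minimal sketch:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("http://stackoverflow.com")
    print(driver.current_url)  # the URL after any redirects/navigation
    driver.quit()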
1
227
0
I'm trying to get the current url after a series of navigations in Selenium. I know there's a command called getLocation for ruby, but I can't find the syntax for Python.
How do I get current URL in Selenium Webdriver 2 Python?
0.099668
0
1
291,478
15,986,114
2013-04-13T09:03:00.000
0
1
0
0
python,opencv,imaging,imagej
15,991,413
1
false
0
0
You could use contours, hierarchy = cv2.findContours(...) to find the roughly round-shaped cells and then count them with len(contours).
1
0
0
If I take multiple images in different fluorescent channels (after staining the cells with some antibody/maker), how can I automatically quantitate the fraction of cells positive for each marker? Has anyone done something like this in Python? I can already use Fiji (ImageJ) to count the cells containing only one staining type, but I can't make it run a selective count on merged images which contain two staining types. Since Fiji interacts well with python, I was thinking of writing a script that looks at each respective image containing only one staining type and then obtain the x-y coordinates of the respective image and check for matches between. I am not sure if that's a good idea though and I was wondering, if anyone has done something similar or has a more efficient way of getting the task done? Thanks for your help!
Cell Counting: Selective; Only count cells positive for all stainings
0
0
0
1,007
15,987,672
2013-04-13T12:14:00.000
0
0
0
0
python-2.7,speech-recognition,speech-to-text
25,560,560
1
false
0
0
If you can somehow get an API for Google's speech recognition, which I think is available on the web somewhere, then using it in Python won't be a problem. You could use SL4A for programming in Python and make an .apk by bundling your app with the Python interpreter.
1
1
0
Can JellyBean's offline speech recognition be used in Python?
Jellybean Offline Recognition in Python
0
0
0
345
15,989,610
2013-04-13T15:44:00.000
5
0
0
0
python,scrapy,classification,scikit-learn
15,998,577
2
false
1
0
Use the HashingVectorizer and one of the linear classification modules that support the partial_fit API, for instance SGDClassifier, Perceptron or PassiveAggressiveClassifier, to incrementally learn the model without having to vectorize and load all the data in memory upfront; you should not have any issue learning a classifier on hundreds of millions of documents with hundreds of thousands of (hashed) features. You should, however, load a small subsample that fits in memory (e.g. 100k documents) and grid search good parameters for the vectorizer using a Pipeline object and the RandomizedSearchCV class of the master branch. You can also fine-tune the value of the regularization parameter (e.g. C for PassiveAggressiveClassifier or alpha for SGDClassifier) using the same RandomizedSearchCV or a larger, pre-vectorized dataset that fits in memory (e.g. a couple of million documents). Also, linear models can be averaged (average the coef_ and intercept_ of 2 linear models), so that you can partition the dataset, learn linear models independently, and then average the models to get the final model.
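A minimal out-of-core sketch of that first suggestion; stream_batches() is a hypothetical generator yielding (texts, labels) chunks:

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2 ** 20)  # stateless: no fit needed
    clf = SGDClassifier()
    all_classes = [0, 1]  # must be given on the first partial_fit call

    for texts, labels in stream_batches():
        X = vectorizer.transform(texts)
        clf.partial_fit(X, labels, classes=all_classes)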
1
5
1
I am working on a relatively large text-based web classification problem and I am planning on using the multinomial Naive Bayes classifier in sklearn in Python and the scrapy framework for the crawling. However, I am a little concerned that sklearn/Python might be too slow for a problem that could involve classifying millions of websites. I have already trained the classifier on several thousand websites from DMOZ. The research framework is as follows: 1) The crawler lands on a domain name and scrapes the text from 20 links on the site (of depth no larger than one). (The number of tokenized words here seems to vary between a few thousand and up to 150K for a sample run of the crawler.) 2) Run the sklearn multinomial NB classifier with around 50,000 features and record the domain name depending on the result. My question is whether a Python-based classifier would be up to the task for such a large-scale application, or should I try re-writing the classifier (and maybe the scraper and word tokenizer as well) in a faster environment? If yes, what might that environment be? Or perhaps Python is enough if accompanied by some parallelization of the code? Thanks
Using sklearn and Python for a large application classification/scraping exercise
0.462117
0
0
940
15,990,456
2013-04-13T17:00:00.000
12
0
1
0
python,arrays,list,math,dictionary
15,990,493
2
false
0
0
When the keys of the dictionary are 0, 1, ..., n, a list will be faster, since no hashing is involved. As soon as the keys are not such a sequence, you need to use a dict.
1
14
1
In Python, are there any advantages / disadvantages of working with a list of lists versus working with a dictionary, more specifically when doing numerical operations with them? I'm writing a class of functions to solve simple matrix operations for my linear algebra class. I was using dictionaries, but then I saw that numpy uses list of lists instead, so I guess there must be some advantages in it. Example: [[1,2,3],[4,5,6],[7,8,9]] as opposed to {0:[1,2,3],1:[4,5,6],2:[7,8,9]}
List of lists vs dictionary
1
0
0
20,631
15,990,889
2013-04-13T17:43:00.000
0
0
0
1
python,google-app-engine,google-chrome,pycharm,webapp2
30,849,574
1
true
1
0
It turns out that this wasn't a refresh or caching issue but a timing issue. Under some circumstances, GAE uses an update algorithm that incurs a delay before transactions are applied. In Run mode, the new page was being requested before the update was completed; in Debug mode, enough time passed for the update to be completed. One solution would have been to change the datastore architecture to eliminate reading an obsolete version of the data, but that caused other, more serious problems. Another solution was to include a split-second delay, after an update but before displaying the updated record. Not ideal, since it's impossible to know how long that delay has to be, but for now this has been satisfactory.
1
1
0
I'm writing a GAE webapp using Python 2.7, webapp2, and Jinja. In development, I run the app under PyCharm 2.7.1 on Mac OS X 10.7.5 (Lion). I'm currently using Chrome 26.0.1410.43 as my browser. I don't know for sure that this is a PyCharm issue, but that's my best guess. Here's a description: When I use the "Debug" control to start the app, webpages refresh automatically as I navigate from one page to another. That is, if I start at page A, navigate to page B, take some action that changes what A should look like, and navigate back to A, the change appears. However, when I use the "Run" control to start the app, with no other changes, webpages do not automatically refresh. In that same scenario, when I navigate back to A, the old version of that webpage appears. I need to click my browser's Refresh control to see the updated page. Please tell me how to stop the browser from displaying cached pages in Run mode. I haven't tried publishing this to our GAE website yet, and hopefully it won't happen there, but I need Run mode for performance on the video tutorial I'm creating. Thanks for any suggestions!
In PyCharm, webpages refresh in debug mode, not in run mode
1.2
0
0
550
15,993,287
2013-04-13T21:48:00.000
0
0
0
0
python,user-interface,pygame
60,500,908
2
false
0
1
Use the pygame.display.toggle_fullscreen() function.
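A minimal sketch wiring the toggle to a key inside the event loop:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key == pygame.K_F11:
                pygame.display.toggle_fullscreen()  # flips windowed <-> fullscreen

    pygame.quit()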
1
1
0
I have looked at the documentation but I can't seem to implement the full screen toggle. Could someone please show me an example of the correct syntax?
What is the correct syntax to enable fullscreen toggle in Pygame?
0
0
0
2,962
15,994,251
2013-04-14T00:01:00.000
-1
0
0
0
android,python,apk,sl4a
28,605,884
2
false
0
1
I think you can use py2exe to convert your program to an .exe file and then convert this .exe file to an .apk file.
1
2
0
Is it possible to build an APK using Python for Android that doesn't need Python for Android to be there, i.e. it includes Python? If so, how?
Python for Android APK without dependencies
-0.099668
0
0
5,413
15,994,989
2013-04-14T02:06:00.000
5
0
1
0
ipython,startupscript
16,007,159
1
true
0
0
If you drop your startup.py into the directory found at $(ipython locate profile)/startup/, it will run on every IPython startup (you may need to create the startup directory first).
1
3
0
I had set up a startup.py script for Python that imported commonly used modules like re, os and sys. IPython, however, does not seem to run it at startup, because trying to use one of those modules raises an error.
Make IPython use my original startup script
1.2
0
0
1,251
15,999,307
2013-04-14T12:59:00.000
1
0
0
0
python,slider,tkinter
15,999,600
1
true
0
1
First, you will need to save a reference to each of your sliders, which are instances of the Scale widget. Next, you will need to associate a command with the checkbutton (using the command attribute) which will be called whenever the checkbutton is checked or unchecked. In that command you can call the set method of each slider.
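A minimal sketch of that wiring; here each slider's command does the synchronizing, which is one way to implement the advice above (the widget layout is made up):

    import tkinter as tk

    root = tk.Tk()
    sliders = []
    linked = tk.BooleanVar()

    def sync(value):
        # when linked, push the moved slider's value to every slider;
        # set() is a no-op on sliders already at that value, so this converges
        if linked.get():
            for s in sliders:
                s.set(value)

    tk.Checkbutton(root, text="Move together", variable=linked).pack()
    for _ in range(3):
        s = tk.Scale(root, from_=0, to=100, orient="horizontal", command=sync)
        s.pack()
        sliders.append(s)

    root.mainloop()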
1
0
0
How can I make it so that, when a Checkbutton is checked, all sliders move as one, using Python 3 and tkinter? Thanks for any help! :)
Sliders and checkbutton in Python 3 and tkinter
1.2
0
0
132
16,001,064
2013-04-14T15:58:00.000
0
0
0
0
c#,python,buffer
16,002,014
1
false
0
0
I'd create a log file on the HDD and write in the last recorded data along with a timestamp. Then just read it back out when the connection is available again.
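As a rough sketch of that approach in Python (the file name and send_to_server() are hypothetical placeholders), you could append newline-delimited JSON while offline and replay it once the server is reachable:

```python
import json
import os
import time

SPOOL = "metrics.spool"  # assumed local buffer file

def record(sample):
    # Append the sample with a timestamp so nothing is lost while offline.
    entry = {"ts": time.time(), "data": sample}
    with open(SPOOL, "a") as f:
        f.write(json.dumps(entry) + "\n")

def flush(send_to_server):
    # Replay buffered entries once the connection is back, then clear the file.
    if not os.path.exists(SPOOL):
        return
    with open(SPOOL) as f:
        for line in f:
            send_to_server(json.loads(line))
    os.remove(SPOOL)
```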
1
0
0
I have a client (currently in C#, with a Python version in progress) which gathers computer data such as CPU %, disk space, etc. and sends it to a server. I don't know how to cope when my client loses its connection to the server. I have to continue collecting information, but where should I store it? Just a buffer? Is using a log file a better solution? Any ideas?
How to store data while the connection is lost?
0
0
1
57
16,005,707
2013-04-14T23:52:00.000
49
0
1
0
python,indexing
16,005,716
1
false
0
0
Generally it means that you are providing an index for which a list element does not exist. E.g., if your list was [1, 3, 5, 7] and you asked for the element at index 10, you would be well out of bounds and receive an error, as only indexes 0 through 3 exist.
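To make that concrete:

```python
values = [1, 3, 5, 7]   # valid indexes are 0 through 3
print(values[3])        # 7 -- fine
print(values[10])       # IndexError: list index out of range
```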
1
19
0
I am a beginner programmer and I'm not sure what this means: IndexError: list index out of range
Index Error: list index out of range (Python)
1
0
0
376,778
16,009,440
2013-04-15T07:03:00.000
0
0
1
0
python,c
16,009,488
1
false
0
0
A simple implementation uses a preallocated buffer and a counter for the number of elements. When the buffer fills up and you want to append another element, you allocate a bigger buffer (e.g. twice as big) and copy the old one's data into the new one. Thus a single append is not strictly O(1), but it is amortized O(1): on average, each append takes constant time.
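Here is a sketch of that doubling strategy, written in Python purely for illustration (real CPython lists do this in C, with a less aggressive growth factor than 2):

```python
class DynamicArray:
    def __init__(self):
        self._capacity = 4
        self._size = 0
        self._buffer = [None] * self._capacity  # preallocated storage

    def append(self, item):
        if self._size == self._capacity:
            # Buffer full: allocate a bigger one and copy the old data over.
            self._capacity *= 2
            new_buffer = [None] * self._capacity
            new_buffer[:self._size] = self._buffer[:self._size]
            self._buffer = new_buffer
        self._buffer[self._size] = item   # amortized O(1)
        self._size += 1

    def __getitem__(self, index):
        # O(1) access, like indexing a C array.
        if not 0 <= index < self._size:
            raise IndexError("index out of range")
        return self._buffer[index]
```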
1
0
0
How are lists implemented in Python? I mean, how is it possible to append an element in constant time and also get an item in constant time? Can anyone please explain how to do it in C?
Python list implementation
0
0
0
937
16,010,492
2013-04-15T08:12:00.000
1
0
1
0
python,windows,python-2.7,pip
16,010,621
2
false
0
0
Many module developers provide different versions of their modules for Python 2.x and Python 3.x. But yes, you will need to re-install your modules individually if you're calling them from a different version of Python. How easy that is depends on which modules you're using. That being said, if you're only upgrading to a new micro version (such as 2.7.3 -> 2.7.4 or 3.3.0 -> 3.3.1), you won't need to worry about reinstalling the modules, since those releases share the same installation directory.
1
0
0
Do we need to re-install all the third-party modules if we upgrade to a higher version of Python, or is there an easier way out?
Managing third party modules after upgrading Python
0.099668
0
0
200
16,011,056
2013-04-15T08:45:00.000
12
0
1
0
python,global-variables
16,011,147
2
false
0
1
Global variables should be avoided because they inhibit code reuse. Multiple widgets/applications can nicely live within the same main loop. This allows you to abstract what you now think of as a single GUI into a library that creates such a GUI on request, so that (for instance) a single launcher can launch multiple top-level GUIs sharing the same process. If you use global variables, this is impossible, because multiple GUI instances will clobber each other's state. The alternative to global variables is to associate the needed attributes with a top-level widget, and to create sub-widgets that point to that same top-level widget. Then, for example, a menu action will use its top-level widget to reach the currently opened file in order to operate on it.
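A minimal sketch of that alternative (all names here are illustrative, not from the question): the registry of loaded files lives on the top-level application object, and each dialog holds a reference to it instead of consulting a module-level global:

```python
import weakref

class App:
    def __init__(self):
        self.loaded_files = {}  # filename -> file object

    def load(self, name, fileobj):
        self.loaded_files[name] = fileobj

class Dialog:
    def __init__(self, app):
        # A weak reference, so the dialog doesn't keep the app alive.
        self._app = weakref.ref(app)

    def current_files(self):
        app = self._app()
        return list(app.loaded_files) if app is not None else []

app = App()
dialog = Dialog(app)
app.load("image.png", object())
print(dialog.current_files())  # ['image.png']
```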
1
12
0
When reading the Python documentation and various mailing lists, I always see what looks a little like a dogma: global variables should be avoided like hell, they are poor design... OK, why not? But there are some real-life situations where I do not know how to avoid such a pattern. Say that I have a GUI from which several files can be loaded from the main menu. The file objects corresponding to the loaded files may be used throughout the whole GUI (e.g. an image viewer that will display an image and on which various actions can be performed via different dialogs/plugins). Is there anything really wrong with the following design: Menu.py --> the file will be loaded from here Main.py --> the loaded file objects can be used here Dialog1.py --> or here Dialog2.py --> or there Dialog3.py --> or there ... Globals.py where Globals.py stores a dictionary whose keys are the names of the loaded files and whose values are the corresponding file objects. Then, from there, the various parts of the code that need those data would access them via weak references. Sorry if my question looks (or is) stupid, but do you see any elegant, global-free alternatives? One way would be to encapsulate the loaded-data dictionary in the main application class of Main.py, treating it as the central access point of the GUI. However, that would also bring some complications, as this class should be easily accessible from all the dialogs that need the data, even if they are not necessarily direct children of it. Thanks a lot for your help.
How to avoid global variables
1
0
0
12,634
16,012,245
2013-04-15T09:52:00.000
3
0
1
1
python,python-2.7
16,012,268
3
true
0
0
If you're seeing a Command Prompt open and immediately close when you double-click your .py file, that's to be expected - it's not how you're supposed to run a console-based Python script. What you should do is start a Command Prompt via the Start menu, then run your program by typing c:\python27\python.exe myscript.py or similar. Alternatively, use a Python IDE (eg. Idle) or an editor (eg. Scite) that can run Python scripts.
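One common trick (an option, not the only one) is to put a blocking prompt at the very end of the script, so the window stays open until Enter is pressed:

```python
# Last line of the script; use input() instead on Python 3.
raw_input("Press Enter to exit...")
```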
1
0
0
I need your help. I'm a newbie and I'm learning Python. I know how to write basic code, but when I execute the code in Python (command line), the window closes immediately. Is there any piece of code, or a trick, that can prevent this from happening? Please help me out. Cheers! P.S.: I use Python 2.7 on Windows.
Is there a command or a piece of code that prevents the Python (command line) window from closing immediately after executing the code?
1.2
0
0
77
16,013,843
2013-04-15T11:19:00.000
7
0
1
0
python,python-import
16,013,905
1
true
0
0
Since it is a mapping, there can be no identically named modules in sys.modules. That is the point. If you use the statement import foo and sys.modules['foo'] exists, that module is returned. No file access is needed, and no top-level code for that module needs to be run. If foo is not present, then sys.path determines where foo will be found first. That value is a list, so it has order, and the search for the foo module is conducted according to that ordering.
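A small illustration of both points, using the standard-library json module as the example:

```python
import sys
import json

# Once imported, the module is cached: re-importing returns the same object
# without touching the filesystem or re-running its top-level code.
assert sys.modules['json'] is json
import json as json_again
assert json_again is json

# First-time imports search sys.path in order; earlier entries win.
print(sys.path[:3])
```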
1
1
0
When Python wants to import a module, it first looks in sys.modules. But since the key-value pairs of dictionaries are not in a fixed order, how can you ever know for certain which of two identically named modules in sys.modules will be imported first?
Why was sys.modules chosen to be a dictionary?
1.2
0
0
92