Columns (dtype, observed range):
Q_Id (int64): 337 to 49.3M
CreationDate (string lengths): 23 to 23
Users Score (int64): -42 to 1.15k
Other (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
System Administration and DevOps (int64): 0 to 1
Tags (string lengths): 6 to 105
A_Id (int64): 518 to 72.5M
AnswerCount (int64): 1 to 64
is_accepted (bool): 2 classes
Web Development (int64): 0 to 1
GUI and Desktop Applications (int64): 0 to 1
Answer (string lengths): 6 to 11.6k
Available Count (int64): 1 to 31
Q_Score (int64): 0 to 6.79k
Data Science and Machine Learning (int64): 0 to 1
Question (string lengths): 15 to 29k
Title (string lengths): 11 to 150
Score (float64): -1 to 1.2
Database and SQL (int64): 0 to 1
Networking and APIs (int64): 0 to 1
ViewCount (int64): 8 to 6.81M
12,068,723
2012-08-22T08:01:00.000
0
0
1
0
python,windows-7
18,947,368
3
false
0
0
I have an easy way to switch. Install both python27 and python33 directly under C:\. Then there will be two folders, python27 and python33. Set the system PATH to python27 by default. If you want to use python33, rename the python27 folder to something like python27_274 and rename the python33 folder to python27 :)
2
1
0
I need to use both Python 2 and Python 3. The only way to change the default Python used when opening a .py file is to change the PATH environment variable, and the steps are troublesome. Can I have some Windows batch script which modifies the PATH variable for me? Thanks.
How do you switch between Python 2 and 3 quickly?
0
0
0
182
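Since Python 3.3, the official Windows installer also ships the py launcher, which makes "py -2 script.py" / "py -3 script.py" work without touching PATH. For the approach above, here is a hedged sketch automating the folder-renaming trick; the paths assume the hypothetical C:\python27 / C:\python33 layout the answer describes, and no interpreter from either folder may be running while you rename:

    import os

    def use_python33():
        os.rename(r'C:\python27', r'C:\python27_274')  # park the 2.7 install
        os.rename(r'C:\python33', r'C:\python27')      # 3.3 now answers to the default PATH entry

    def use_python27():
        os.rename(r'C:\python27', r'C:\python33')
        os.rename(r'C:\python27_274', r'C:\python27')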
12,070,031
2012-08-22T09:24:00.000
1
0
0
0
python,django,redundancy,webfaction,django-orm
12,934,130
1
false
1
0
I was looking for something similar. What I found is: 1) Try something like the Xeround cloud DB - it's built on MySQL and is compatible, but doesn't support savepoints; you have to disable those in (a custom) DB engine. The good thing is that they replicate at the DB level and provide automatic scalability and failover, so your app works as if there's a single DB. They are having some connectivity issues at the moment, though, which are blocking my migration. 2) The django-synchro package - it looks promising for replication at the app layer, but I have some concerns about it: it doesn't work with objects.update(), which I use a lot in my code.
1
1
0
I am currently sitting in front of a more specific problem which has to do with fail-over support / redundancy for a specific web site which will be hosted at WebFaction. Unfortunately, replication at the DB level is not an option, as I would have to install my own local PostgreSQL instances for every account, and I am worried about performance, amongst other things. So I am thinking about using Django's multi-db feature, routing all writes to all (shared) databases and then balancing the reads to the nearest db. My problem is that all the docs I've read seem to indicate that this would most likely not be possible. To be more precise, what I would need: route all writes to a specific set of dbs (same type, version, ...); if one write fails, all the others are rolled back (transactions); route all reads to the nearest db (could be statically configured). Is this currently possible with Django's multi-db support? Thanks a lot in advance for any help/hints...
Django multi-db: Route all writes to multiple databases
0.197375
1
0
345
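For the routing half of the question, Django's documented database-router hooks are enough; a minimal sketch, assuming aliases named 'primary' and 'replica' exist in settings.DATABASES (fanning one write out to several databases is not something a router alone can do, which matches the answer above):

    # myapp/routers.py -- activate with DATABASE_ROUTERS = ['myapp.routers.ReadReplicaRouter']
    class ReadReplicaRouter(object):
        def db_for_read(self, model, **hints):
            return 'replica'   # send reads to the nearest/replica alias

        def db_for_write(self, model, **hints):
            return 'primary'   # all writes go to the single authoritative alias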
12,072,506
2012-08-22T11:53:00.000
0
0
0
0
python,c,networking
12,075,452
4
false
0
0
Using pcap you cannot stop packets; on Windows you would have to go down to the driver level, and even then you can stop only packets that your own machine sends. A solution is to act as a pipe to the destination machine: you need two network interfaces (possibly without addresses), and when you get a packet on the source network card that you do not find interesting, you simply resend it on the destination network card. If the packet is interesting, you do not resend it, so you act as a filter. I have done this for multimedia performance testing (adding jitter, noise, etc. to video streaming).
1
0
0
This is the problem I'm trying to solve: I want to write an application that will read outbound HTTP request packets on the same machine's network card, and extract the GET URL from them. On the basis of this information, I want to be able to stop the packet, redirect it, or let it pass. However, I want my application to run in promiscuous mode (like Wireshark does) and yet be able to swallow (stop) the outbound packet. I have searched around a bit on this. libpcap / pcap.h allows me to read packets at the network card, but I haven't yet been able to figure out a way to stop these packets or inject new ones into the network. Tools like Twisted or Scapy in Python let me set up a server listening on some local port, and I can then configure my browser to connect to it using proxy settings. That app can then do the work, but my main purpose of being promiscuous is defeated there. Any help on how I could achieve this would be greatly appreciated.
Stop packets at the network card
0
0
1
1,109
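A rough sketch of the two-interface "pipe" idea from the answer above, using scapy; the interface names and the notion of "interesting" are placeholders. Packets judged uninteresting are re-sent on the far interface, and interesting ones are dropped simply by not forwarding them:

    from scapy.all import Raw, sendp, sniff

    def interesting(pkt):
        # placeholder test: outbound HTTP GETs, per the question
        return pkt.haslayer(Raw) and b'GET ' in bytes(pkt[Raw].load)

    def forward(pkt):
        if not interesting(pkt):
            sendp(pkt, iface='eth1', verbose=False)  # pass it through

    sniff(iface='eth0', prn=forward, store=0)        # dropping = not forwarding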
12,076,445
2012-08-22T15:24:00.000
1
0
1
0
python,class,immutability,mutable
12,076,559
4
false
0
0
All objects (with the exception of a few in the standard library, some that implement special access mechanisms using things like descriptors and decorators, or some implemented in C) are mutable. This includes instances of user defined classes, classes themselves, and even the type objects that define the classes. You can even mutate a class object at runtime and have the modifications manifest in instances of the class created before the modification. By and large, things are only immutable by convention in Python if you dig deep enough.
1
30
0
Say I want to create classes for car, tractor and boat. All these classes have an instance of Engine, and I want to keep track of all the engines in a single list. If I understand correctly, if the engine object is mutable I can store it as an attribute of the car and also store the same instance in a list. I can't track down any solid info on whether user-defined classes are mutable, and whether there is a choice when you define them. Can anybody shed some light?
Are user-defined classes mutable?
0.049958
0
0
20,111
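A quick demonstration of both points: the shared Engine instance is the same object wherever it is referenced, and even the class itself can be mutated at runtime, with the change visible on pre-existing instances:

    class Engine(object):
        pass

    engines = []
    car_engine = Engine()        # one instance...
    engines.append(car_engine)   # ...shared by the car and the tracking list
    car_engine.rpm = 3000        # mutate it through one reference
    print(engines[0].rpm)        # 3000 -- the list sees the same object

    Engine.cylinders = 6         # mutate the class object itself
    print(car_engine.cylinders)  # 6 -- visible on an instance created earlier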
12,078,046
2012-08-22T17:04:00.000
3
0
0
1
python,gstreamer
12,079,062
1
false
0
0
It depends on what you mean by synchronization, what your sources are, and what your pipeline is. If the two pipelines are getting data from different sources, then unless the sources were synchronized in some form, there is no real meaning to synchronizing the two pipelines. If all you want is that they stay in lock step with each other irrespective of their source offsets, then as long as you have a clock-based pipeline they will remain so (say you are capturing from two USB cameras); as long as the system is fast enough to run the two pipelines in real time, they will remain in real time. If you just want to display the two side by side irrespective of the initial offsets between them, use videomixer and place them side by side; it will automatically ensure the two are synchronized in the sense that the videos move in lock step with each other. If you want them synchronized on the basis of their timestamps, then you have to use RTSP: send the output from both pipelines to a gstrtpbin, and from the single gstrtpbin you can get synchronized streams. This is slightly non-trivial.
1
0
0
I am playing 2 videos in two different GStreamer pipelines. I would like to synchronize both videos. Do any of you have any tips?
synchronize two pipelines in gstreamer
0.53705
0
0
3,138
12,078,928
2012-08-22T18:07:00.000
1
0
0
0
python,sql,django,nosql
12,078,992
4
false
1
0
Postgres is a great database for Django in production, and sqlite is amazing to develop with. You will be doing a lot of work to try not to use an RDBMS on your first Django site. One of Django's greatest strengths is the smooth full-stack integration: great docs, contrib apps, and the app ecosystem. Choosing Mongo, you lose a lot of this. GeoDjango also assumes SQL and really favors postgres/postgis above others - and GeoDjango is really awesome. If you want to use Mongo, I might recommend that you start with something like bottle, flask, tornado, or cyclone, which are less about full-stack integration and assume less about you using a certain ORM. The Django tutorial, for instance, assumes that you are using the ORM with a SQL DB.
2
3
0
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo? I've read all about both but there's a lot I don't understand. Mongo sounds better/easier but at the same time it sounds better/easier for those that already know Relational Databases very well and are looking for something more agile.
First time Django database SQL or NoSQL?
0.049958
1
0
5,188
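For reference, the sqlite-first route costs almost nothing to set up; a minimal Django settings fragment (1.4-era layout; the file name is illustrative):

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': 'dev.sqlite3',   # file-based, zero server setup for development
        }
    }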
12,078,928
2012-08-22T18:07:00.000
0
0
0
0
python,sql,django,nosql
12,079,233
4
false
1
0
sqlite is the simplest to start with. If you already know SQL, toss a coin to choose between MySQL and Postgres for your first project!
2
3
0
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo? I've read all about both but there's a lot I don't understand. Mongo sounds better/easier but at the same time it sounds better/easier for those that already know Relational Databases very well and are looking for something more agile.
First time Django database SQL or NoSQL?
0
1
0
5,188
12,079,226
2012-08-22T18:29:00.000
0
0
1
0
python,hadoop,hadoop-streaming
12,079,485
2
true
0
0
You could convert the pickled object to base64 using the base64 module.
2
0
0
I want to process a large amount of pickled data with Hadoop using Python. What I am trying to do is represent my data as some key (file id) and a compressed pickle as the value in a large file. If I simply try to put the binary data as ASCII in the file which I want to process with Hadoop, I get a lot of '\t' and '\n' values which interfere with the (key, value) structure of a Hadoop file. My question is: how can I compress some data using Python and represent it as a string in an ASCII file, avoiding certain characters (such as '\t' and '\n')? Or maybe my approach is inherently invalid? I would really appreciate any help!
ASCII representation of compressed data without certain characters
1.2
0
0
96
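A minimal sketch of the accepted suggestion: compress the pickle and base64-encode it, so the value bytes can never contain '\t' or '\n' (the base64 alphabet excludes both):

    import base64
    import pickle
    import zlib

    def encode_value(obj):
        return base64.b64encode(zlib.compress(pickle.dumps(obj)))

    def decode_value(s):
        return pickle.loads(zlib.decompress(base64.b64decode(s)))

    record = {'file_id': 42, 'payload': list(range(10))}
    line = b'42\t' + encode_value(record) + b'\n'   # a safe (key, value) line
    assert decode_value(line.split(b'\t', 1)[1].rstrip(b'\n')) == record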
12,079,226
2012-08-22T18:29:00.000
0
0
1
0
python,hadoop,hadoop-streaming
12,079,479
2
false
0
0
For compression you could use the zlib or bz2 modules. For representation you can use the base64 module.
2
0
0
I want to process a large amount of pickled data with Hadoop using Python. What I am trying to do is represent my data as some key (file id) and a compressed pickle as the value in a large file. If I simply try to put the binary data as ASCII in the file which I want to process with Hadoop, I get a lot of '\t' and '\n' values which interfere with the (key, value) structure of a Hadoop file. My question is: how can I compress some data using Python and represent it as a string in an ASCII file, avoiding certain characters (such as '\t' and '\n')? Or maybe my approach is inherently invalid? I would really appreciate any help!
ASCII representation of compressed data without certain characters
0
0
0
96
12,082,723
2012-08-22T23:21:00.000
0
0
1
0
python,module
12,082,749
5
false
0
0
You could create a function that tries to load a specific module; if it is unable to do so, it can then provide directions for the user to download the module manually, or your script can download it automatically.
1
1
0
Is there any way to have a Python program tell the user it needs a module to run, and then have the program install it for the user?
Installing Modules from within a python program
0
0
0
193
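A hedged sketch of the "try to import, then install" pattern the answer describes; it shells out to pip through the running interpreter, which assumes pip is present on the user's machine:

    import importlib
    import subprocess
    import sys

    def require(module_name, package_name=None):
        try:
            return importlib.import_module(module_name)
        except ImportError:
            pkg = package_name or module_name
            print("Missing module %r; attempting to install %s" % (module_name, pkg))
            subprocess.check_call([sys.executable, '-m', 'pip', 'install', pkg])
            return importlib.import_module(module_name)

    requests = require('requests')  # example third-party module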
12,082,918
2012-08-22T23:46:00.000
1
0
1
1
python,terminal,progress-bar,command-prompt
12,083,132
2
false
0
0
Each IDE is in fact interacting with the command line and redirecting streams into its own implementation for showing that output; each IDE has its own way of doing this. The command prompt is more powerful if you are experienced, and it is easy for trying one-off scripts. Try IPython, which is great for beginners and experienced users alike for fast access to the programming environment and for trying out modules.
1
0
0
What is the benefit of running code through the command prompt/terminal vs an IDE? I've noticed recently, when using the progressbar module of Python, that the progress text is updated on the same line in the command prompt window, while the IDE prints each update on a new line. Why are these different? Are they not running through the same interpreter?
Command Prompt Python
0.099668
0
0
271
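The single-line behaviour comes down to the carriage return: '\r' moves the cursor back to the start of the line in a real terminal, while many IDE output panes do not honor it. A minimal progress line:

    import sys
    import time

    for i in range(101):
        sys.stdout.write('\rProgress: %3d%%' % i)
        sys.stdout.flush()              # push the partial line out immediately
        time.sleep(0.02)
    sys.stdout.write('\n')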
12,083,628
2012-08-23T01:37:00.000
-1
0
0
0
python,dns,size
12,087,906
2
true
0
0
The 512-byte limit applies to a DNS message, not specifically to a request, and in practice it only matters for responses, which can contain resource records. For a request you are limited to the 253 bytes of the domain name. You might be able to manually create a query containing resource records, but it would probably be dropped by your local DNS server.
1
0
0
I have a question. I have read various RFCs and a lot of info on the internet. I read that DNS over UDP has a 512-byte limit. I want to write a Python program that uses this max limit to create a well-formed DNS request. It is very important to use UDP and not the TCP DNS implementation. I have tried using public libraries, but they did not use the full 512 bytes that can be used as the RFC says. It is also very important to use the ~512 bytes to send as much data as possible in a single request. Thank you for your help! Let's make it happen!! ;)
Make a 512-byte UDP DNS request
1.2
0
1
988
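A hedged sketch of hand-building a DNS query over UDP with only the standard library (header layout per RFC 1035; the resolver address is a placeholder). It shows where the bytes go, not how to fill a message to the 512-byte ceiling:

    import socket
    import struct

    def build_query(name, qtype=1, qclass=1):   # 1/1 = A record, IN class
        # ID, flags (recursion desired), QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT
        header = struct.pack('>HHHHHH', 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b''.join(struct.pack('B', len(p)) + p
                         for p in name.encode('ascii').split(b'.')) + b'\x00'
        return header + qname + struct.pack('>HH', qtype, qclass)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(build_query('example.com'), ('8.8.8.8', 53))
    print(sock.recv(512))   # the classic 512-byte UDP ceiling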
12,083,640
2012-08-23T01:39:00.000
0
0
1
1
python,macos,python-2.7
12,083,839
2
false
0
0
If I remember correctly, you may want to do a "sudo port activate python". What does "which python" tell you? If it's /usr/bin/python, you're running OSX Python. If, OTOH, it's /usr/local/bin/python you're probably using the ports version.
1
3
0
I installed Python on Mac OS (Mountain Lion) with MacPorts. When I run $ python, it gives a "cannot import urandom" error when I try to import pandas or matplotlib. If I run $ python2.7, everything runs perfectly. I want Python to always use Python 2.7. I tried using sudo port select python python27, but that didn't help. Please help me with this; I'm new to Mac.
Set up the Python path in Mac OS X?
0
0
0
14,852
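Complementing the "which python" advice above, the interpreter can report on itself, which removes any guessing about what the bare python command resolves to:

    import sys

    print(sys.version)      # e.g. 2.7.x
    print(sys.executable)   # the actual binary path, e.g. an Apple vs. MacPorts install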
12,086,224
2012-08-23T06:50:00.000
1
0
1
0
python,path
12,086,577
5
false
0
0
I would not recommend doing this. Note that while Windows does accept the slash / as a path separator too, it has a different meaning in some contexts; with cd, for instance, / is resolved against the root of the current drive. At the command line, c:\Users\YourUser> cd /FooBar lands you in c:\FooBar. Also, I don't see a problem at all with copying the strings, since if you print the string it is displayed as you wish: in the Python interpreter, print os.path.join("c:\\", "foo", "bar") displays c:\foo\bar.
1
4
0
As we know, Windows accepts both "\" and "/" as path separators, but in Python "\" is used. For example, call os.path.join("foo", "bar") and 'foo\\bar' will be returned. What's annoying is that "\" is an escape character, so you cannot just copy the path string and paste it into your Explorer location bar. I wonder whether there is any way to make Python use "/" as the default separator. I've tried changing the values of os.path.sep and os.sep to "/", but os.path.join still uses "\\". What's the right way? PS: I just don't understand why Python uses "\" as the default separator on Windows; maybe old versions of Windows didn't support "/"?
Why doesn't os.path.join use os.path.sep or os.sep?
0.039979
0
0
15,621
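If the goal is '/'-joined paths regardless of platform, the standard library already ships the POSIX flavor of the path module, which sidesteps patching os.sep (os.path.join ignores it; each flavor hard-codes its own separator):

    import ntpath
    import posixpath

    print(posixpath.join('foo', 'bar'))   # 'foo/bar' on any OS
    print(ntpath.join('foo', 'bar'))      # 'foo\\bar' on any OS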
12,090,204
2012-08-23T11:07:00.000
0
0
0
0
php,python,mongodb,size,limit
12,090,898
1
true
0
0
You can store up to 16 MB of data per MongoDB BSON document (e.g. using the pymongo Binary datatype). For arbitrarily large data you want to use GridFS, which basically stores your data as chunks plus extra metadata. When you use MongoDB with its replication features (replica sets), you get a kind of distributed binary store (don't mix this up with a distributed filesystem; there is no integration with the local filesystem).
1
2
0
I need Python and PHP support. I am currently using MongoDB and it is great for my data (test results), but I need to store results of a different type of test which are over 32 MB and exceed Mongo's limit of 16 MB. Currently each test is a big Python dictionary, and I retrieve and present them with PHP.
NoSQL database for document sizes over 32 MB?
1.2
1
0
128
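A minimal sketch of the GridFS suggestion, using pymongo's gridfs module; the host and names are placeholders:

    import gridfs
    from pymongo import MongoClient

    db = MongoClient('localhost', 27017)['testdb']
    fs = gridfs.GridFS(db)

    big_blob = b'x' * (32 * 1024 * 1024)       # 32 MB, over the 16 MB document cap
    file_id = fs.put(big_blob, filename='results.bin')
    assert fs.get(file_id).read() == big_blob  # chunking is transparent on read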
12,091,353
2012-08-23T12:14:00.000
1
0
1
0
python,django,django-templates,persian,hindi
12,091,958
7
false
1
0
You can use Django's internationalization support; there is a localization (l10n) framework built into Django.
1
6
0
I want to print {{forloop.counter}} with Persian or Hindi digits, meaning "۱ ۲ ۳ ۴ ..." instead of "1 2 3 4 ...". I searched a lot but I couldn't find any related functions. Would you mind helping me? Regards
Hindi or Farsi numbers in django templating engine
0.028564
0
0
1,525
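The l10n hint above is terse; one common concrete approach is a tiny custom template filter that maps ASCII digits to Eastern Arabic (Persian) ones. A Python 2-era sketch, with the standard templatetags registration layout assumed:

    # myapp/templatetags/digits.py
    from django import template

    register = template.Library()

    PERSIAN_DIGITS = dict(zip(u'0123456789', u'۰۱۲۳۴۵۶۷۸۹'))

    @register.filter
    def persian_numbers(value):
        return u''.join(PERSIAN_DIGITS.get(ch, ch) for ch in unicode(value))

    # in a template: {% load digits %} ... {{ forloop.counter|persian_numbers }}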
12,091,413
2012-08-23T12:17:00.000
0
0
0
0
python,database
12,091,455
1
false
0
0
When you establish the MySQL connection, use the remote machine's IP address / hostname and the corresponding credentials (username, password).
1
0
0
I am using MySQLdb and developing a simple GUI application using Rpy2. What does my program do? The user can input static data, and mathematical operations will be computed using those data. The part where I am lost is this: the user will give the location of their database, and the program will compute the maths using data from that remote database. I have accomplished this using localhost. How can I do it with a remote database? Any ideas? Thanks in advance!
How to extract data from a remote database in Python?
0
1
0
187
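The answer, in code: point MySQLdb at the remote host instead of localhost (all credentials and names below are placeholders):

    import MySQLdb

    conn = MySQLdb.connect(host='203.0.113.10', user='appuser',
                           passwd='secret', db='measurements')
    cur = conn.cursor()
    cur.execute('SELECT reading FROM samples WHERE id = %s', (1,))
    print(cur.fetchone())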
12,094,148
2012-08-23T14:38:00.000
4
0
0
0
python,sip,decode,pcap
12,708,904
3
true
0
0
Finally I did this with the help of pyshark from sharktools (http://www.mit.edu/~armenb/sharktools/). In order to sniff IP packets I used scapy instead of libpcap.
1
3
0
I need a Python script that can sniff and decode SIP messages in order to check their correctness. As a base for this script I use the python-libpcap library for packet sniffing. I can catch a UDP packet and extract the SIP payload from it, but I don't know how to decode it. Does Python have any libraries for packet decoding? I've found only dpkt, but as I understand it, it can't decode SIP. If there are no such libraries, how can I do this by hand? Thank you in advance!
Decode SIP messages with Python
1.2
0
1
5,300
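Since SIP is plain text over UDP (conventionally port 5060), decoding can start with simple header parsing once the payload is captured; a hedged scapy sketch:

    from scapy.all import Raw, UDP, sniff

    def show_sip(pkt):
        if pkt.haslayer(UDP) and pkt.haslayer(Raw):
            text = bytes(pkt[Raw].load).decode('utf-8', 'replace')
            print('start line:', text.split('\r\n', 1)[0])  # e.g. 'INVITE sip:... SIP/2.0'

    sniff(filter='udp port 5060', prn=show_sip, store=0)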
12,094,663
2012-08-23T15:02:00.000
5
0
0
1
python
12,094,678
2
false
0
0
It just deletes them from the hard drive.
2
13
0
I've been using Python for a long time and have numerous scripts running all over my office. I use a few scripts in particular to back up and then delete data. In these scripts I use the os.remove function. My question is: where does the os.remove function delete items to? Does it delete them right off the HD? I know they don't go to the recycle bin. Does it simply remove the item's link, but keep it on the HD somehow?
Where does os.remove go?
0.462117
0
0
7,445
12,094,663
2012-08-23T15:02:00.000
24
0
0
1
python
12,094,722
2
true
0
0
os.remove will call the operating system's unlink functionality, and delete the file from the disk. Technically the OS/filesystem probably just marks the sectors as free, and removes the file entry from the directory, but that's up to the filesystem implementation.
2
13
0
I've been using Python for a long time and have numerous scripts running all over my office. I use a few scripts in particular to back up and then delete data. In these scripts I use the os.remove function. My question is: where does the os.remove function delete items to? Does it delete them right off the HD? I know they don't go to the recycle bin. Does it simply remove the item's link, but keep it on the HD somehow?
Where does os.remove go?
1.2
0
0
7,445
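The behaviour is easy to confirm: the directory entry disappears immediately, with no trip through any recycle bin:

    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    os.close(fd)
    print(os.path.exists(path))   # True
    os.remove(path)               # unlink(2) on POSIX, DeleteFile on Windows
    print(os.path.exists(path))   # False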
12,095,507
2012-08-23T15:48:00.000
0
1
0
0
python,ruby,amazon-ec2,amazon-web-services,xmpp
12,095,630
2
false
1
0
As an employee of ProcessOne, the makers of ejabberd, I can tell you we run a lot of services over AWS, including mobile chat apps. We have industrialized our procedures.
2
0
0
I'm building an Android IM chat app for fun. I can develop the Android side well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM service. I've looked at Openfire and ejabberd, which I could use. Does anyone have experience with them, or know a better one? I'm mostly looking at sending direct IMs between friends and group IMs with friends.
Running XMPP on Amazon for a chat app
0
0
1
365
12,095,507
2012-08-23T15:48:00.000
1
1
0
0
python,ruby,amazon-ec2,amazon-web-services,xmpp
12,095,743
2
false
1
0
Try exploring Amazon SQS (Simple Queue Service). It might come in handy for your requirement.
2
0
0
I'm building an Android IM chat app for fun. I can develop the Android side well, but I'm not so good with the networking side, so I am looking at using XMPP on AWS servers to run the actual IM service. I've looked at Openfire and ejabberd, which I could use. Does anyone have experience with them, or know a better one? I'm mostly looking at sending direct IMs between friends and group IMs with friends.
Running XMPP on Amazon for a chat app
0.099668
0
1
365
12,097,293
2012-08-23T17:44:00.000
1
0
1
0
python,configuration
12,097,427
4
false
0
0
Yes, this is standard practice on most Unix systems. For transparency, it's often a good idea to print an informative message like Creating directory .dir to store script state the first time you create the storage location. If you are expecting to store significant amounts of data, it's a good idea to confirm the location with the user. This is also the standard place for any additional configuration files for your application.
2
3
0
I'd like to save my script's data to disk to load next time the script runs. For simplicity, is it a good idea to use os.path.expanduser('~') and save a directory named ".myscript_data" there? It would only need to be read by the script, and to avoid clutter for the user, I'd like it to be hidden. Is it acceptable practice to place hidden ".files" on the users computer?
".directory" in home acceptable?
0.049958
0
0
257
12,097,293
2012-08-23T17:44:00.000
1
0
1
0
python,configuration
12,097,591
4
false
0
0
On Linux, I suggest a file or directory (not a dotfile) in os.environ['XDG_CONFIG_HOME'], which is in most cases the directory $HOME/.config. A dotfile in $HOME, however, is also often used.
2
3
0
I'd like to save my script's data to disk to load next time the script runs. For simplicity, is it a good idea to use os.path.expanduser('~') and save a directory named ".myscript_data" there? It would only need to be read by the script, and to avoid clutter for the user, I'd like it to be hidden. Is it acceptable practice to place hidden ".files" on the users computer?
".directory" in home acceptable?
0.049958
0
0
257
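A sketch of the layout both answers gesture at: prefer $XDG_CONFIG_HOME on Linux and fall back to ~/.config; the application name is a placeholder:

    import os

    base = os.environ.get('XDG_CONFIG_HOME',
                          os.path.join(os.path.expanduser('~'), '.config'))
    data_dir = os.path.join(base, 'myscript')
    if not os.path.isdir(data_dir):
        os.makedirs(data_dir)   # created on first run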
12,098,358
2012-08-23T19:01:00.000
0
0
0
1
python,google-app-engine
12,116,756
1
false
1
0
I am trying to be pretty general here as I don't know whether you are using the default users service or not and I don't know how you are uniquely linking your SessionSupplemental entities to users or whether you even have a way to identify users at this point. I am also assuming you are using some version of webapp as that is the standard request handling library on App Engine. Let me know a bit more and I can update the answer to be more specific. Subclass the default RequestHandler in webapp with a new class (such as MyRequestHandler). In your subclass override the initialize() method. In your new initialize() method get the current user from your session system (or the users service or whatever you are using). Test to see if a SessionSupplemental entity already exists for this user and if not create a new one. For all your other request handlers you now want to subclass MyRequestHandler (instead of the default RequestHandler). Whenever a request happens webapp will automatically call the initialize() method. This is going to cost you a read for every request and also a write for every request by a new user. If you use the ndb library (instead of db) then a lot of the requests will just hit memcache instead of the datastore. Now if you are just starting creating a new AppEngine app I would recommend using the Python27 runtime and webapp2 and trying to leverage as much of the webapp2 Auth module as you can so you don't have to write so much session stuff yourself. Also, ndb can be much nicer than the default db library.
1
0
0
I am a newbie to Google App Engine and Python. I want to create an entry in a SessionSupplemental table (kind) any time a new user accesses the site (regardless of which page they access initially). How can I do this? I imagine there is a list of standard event triggers in GAE; where would I find these documented? I also imagine there are a lot of system/application attributes; where can I find these documented, and how do I use them? Thanks.
How can I trigger a function anytime there is a new session in a GAE/Python Application?
0
0
0
412
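A condensed sketch of the subclass-and-override pattern the answer describes, for webapp2; the user lookup and the ndb-style SessionSupplemental model here are schematic stand-ins, not a known API of the questioner's app:

    import webapp2

    class MyRequestHandler(webapp2.RequestHandler):
        def initialize(self, request, response):
            super(MyRequestHandler, self).initialize(request, response)
            user = get_current_user_somehow()   # hypothetical helper
            if user and not SessionSupplemental.get_by_id(user.user_id()):
                SessionSupplemental(id=user.user_id()).put()

    class HomePage(MyRequestHandler):           # every handler inherits the hook
        def get(self):
            self.response.write('hello')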
12,098,554
2012-08-23T19:17:00.000
0
0
0
0
python,c,api
12,098,669
1
false
0
1
Try this: create a 'template' PyTypeObject, and use struct copying (or memcpy) to clone the basic template. Then you can fill it in with the requisite field definitions after that. This solves (2), since you only have to declare the full PyTypeObject once. For your first point, you just set the static variable from your module init instead of doing it in the static variable declaration. So, it won't be set until your module actually initializes. If you plan on doing this often, it may be worth looking at Boost::Python, which simplifies the process of generating CPython wrappers from C++ classes.
1
0
0
I am using the Python C API and trying to create a function that will allocate new instances of PyTypeObjects to use in several C++ classes. The idea is that each class would have a pointer to a PyTypeObject that would get instantiated with this factory. The pointers must be static. However, I'm having issues with this approach. In the class that contains the pointer to the PyTypeObject, I get the "undefined reference" linker error when I try to set that static variable equal to the result of the factory function (which is in another class but is static). I suppose this makes sense because the function wouldn't happen until runtime but I don't know another way to do this. I don't know how to set the PyTypeObject fields dynamically because the first field is always a macro: PyObject_VAR_HEAD. Hope this makes sense. Basically, I'm trying to make it so several classes don't have to redefine PyTypeObject statically, but can instead instantiate their PyTypeObject variables from a factory function.
Static factory for PyTypeObject
0
0
0
461
12,098,732
2012-08-23T19:30:00.000
1
0
0
0
python,windows,debugging,internet-explorer-9,pipeline
12,098,951
2
false
1
0
You need to specify what Python "Web server" you're using (e.g. bottle? Maybe Tornado? CherryPy?), but more importantly, you need to supply the request headers and the HTTP response that go in and out when IE9 is involved. You may lift them off the wire using e.g. ngrep, or I think you can use the Developer Tools in IE9 (F12 key). The most common quirks with IE9 that often do not bother other browsers are mismatches in Content-Length (well, this DID bother Safari last time I looked), possibly Content-Type (this acts in reverse - IE9 sometimes correctly gleans the HTML mimetype even if the Content-Type is wrong), and Connection: Close. So yes, it could be a problem with HTTP pipelining: specifically, if you pipeline a request with an invalid Content-Length and no chunked transfer encoding, IE might wait for the request to "finish". This would happen in other browsers too, but it could be that this behavior in IE overrides the connection being flushed and closed, while in other browsers it does not. These two hypotheses might match your observed symptoms. To fix it, you either switch to chunked transfer encoding, which replaces Content-Length in a way, or correctly compute its value; how to do this depends on the server. To verify quickly, you could issue a Content-Length that is surely too short (e.g. 100 bytes?) to see whether this results in IE un-hanging and displaying a partial web page.
1
0
0
I am trying to debug a website in IE9. I am running it via Python. In Chrome, Safari, Firefox, and Opera the site loads immediately, but in IE9 it seems to hang and never actually loads. Could this possibly be an issue with HTTP pipelining? Or something else? And how might I fix it?
IE9 and Python issues?
0.099668
0
1
408
12,102,061
2012-08-24T01:15:00.000
1
0
1
1
python,windows-7
12,102,132
1
false
0
0
The IDLE context menu plug-in is registered when you install Python and points to the version of IDLE supplied with the Python installed. (IDLE itself has significant code changes between Python 2 and 3 because it's written in Python and the language changed a lot.) To change it, simply re-install the version of Python you wish the IDLE context menu to invoke.
1
5
0
I installed Python 3.2 and later installed Python 2.7. Somehow IDLE, which I open by right-clicking a Python file -> Edit with IDLE, is using Python 2.7 instead of Python 3.2. It seems that Python 2.7 was set as the default for IDLE. Even after I changed the PATH environment variable in Windows' advanced settings back to Python 3.2, the default Python shell was still 2.7. I am sure that there was no more Python 2.7 in the path. In the end I had to uninstall Python 2.7 and reinstall Python 3.2.
How to set Python IDLE's default Python?
0.197375
0
0
1,564
12,102,327
2012-08-24T02:02:00.000
1
0
1
0
python,string
12,102,623
4
true
0
0
I'm partial to string.rstrip('\n') - this way it only strips newlines, and only from the right end of the string.
1
0
0
I have a problem. I save an expression from a website, for example "ab", into a dictionary... the problem is that the expression comes with a \n and alters my whole key. What can I do?
"Clean" a python expression that has a \n
1.2
0
0
106
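rstrip('\n') in action; it touches only trailing newlines and nothing inside the string:

    key = 'ab\n'
    print(repr(key.rstrip('\n')))        # 'ab'
    print(repr(' a\nb '.rstrip('\n')))   # ' a\nb ' -- inner \n and spaces untouched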
12,102,392
2012-08-24T02:14:00.000
0
0
1
0
python,amazon-simpledb,boto
14,759,079
1
true
0
0
Simply, YES: SimpleDB provides only a first level of keys. So if you want to store data with a deeper level of key nesting, you will have to serialize the data to a string, and you will not have simple select commands for queries on the deeper nested data (you can still test it as a string, but you will not have simple access to subkey values). Note that one key (in one record) can store multiple values, but this is a sort of list (often used to store multiple tags), not a dictionary.
1
0
0
Am I correct to assume that nested dictionaries are not supported in AWS SimpleDB? Should I just serialize everything into JSON and push it to the database? For example, test = dict(company='test company', users={'username': 'joe', 'password': 'test'}). This returns test with keys of 'company' and 'users'; however, 'users' is stored as just a string.
python boto simpledb with nested dictionaries
1.2
0
0
306
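Serializing the nested part to JSON, as the question itself suggests, keeps top-level keys queryable while letting the nesting survive the round trip:

    import json

    test = dict(company='test company',
                users={'username': 'joe', 'password': 'test'})
    flat = {'company': test['company'], 'users': json.dumps(test['users'])}
    restored = dict(flat, users=json.loads(flat['users']))
    assert restored == test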
12,105,775
2012-08-24T08:32:00.000
3
1
1
0
c++,python,compilation
12,105,989
3
true
0
1
The only way to do that in C++ is to unload the DLL containing the code to be modified, modify the sources, invoke the compiler to regenerate the DLL, and reload the DLL. It's very, very heavyweight, and it only works if the compiler is present on the machines where the code is to be run (usually the case under Unix, rarely the case with Windows). Interpreted languages like Python are considerably more dynamic; Python has a built-in function to execute a string as Python code, for example. If you need dynamically modifiable code, I'd suggest embedding Python in your application and using it for the dynamic parts.
1
2
0
My question is a little bit stupid, but I decided to ask programmers more advanced than me. I want to make a "dynamic" C++ program. My idea is to compile it and then, after compilation (maybe with a scripting language like Python), to somehow change its code. I know you will tell me that I cannot change the code after compilation, but is there a way of doing that? Thank you!
Using scripting language in C++
1.2
0
0
315
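The dynamic half of the suggestion, shown from the Python side: code that arrives as a string can be executed at runtime with the built-in exec, with no recompilation step involved:

    source = "def greet(name):\n    return 'hello ' + name\n"
    namespace = {}
    exec(source, namespace)              # compile-and-run at runtime
    print(namespace['greet']('world'))   # hello world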
12,105,855
2012-08-24T08:38:00.000
1
0
1
1
python,wxpython
12,106,760
4
false
0
0
I don't know much about wx; I work with Jython (Python implemented in Java, so you can use Java libraries) and Swing. Swing has its own worker thread, and if you do GUI updates you wrap your code in a Runnable and invoke it with invokeLater. You could see if wx has something like that. If, however, you are only allowed to manipulate the GUI from the thread in which you created it, try something similar: create a proxy object for your GUI which forwards all your calls to your GUI thread, which in turn forwards them to the GUI. But proxying like this gets messy. How about you let them define classes with an 'updateGui' function, which they hand back to you over a queue and which you then execute in your GUI thread?
1
0
0
I've seen a lot of stuff about running code in subprocesses or threads, and using the multiprocessing and threading modules it's been really easy. However, doing this in a GUI adds an extra layer of complication. From what I understand, the GUI classes don't like it if you try and manipulate them from multiple threads (or processes). The workaround is to send the data from whatever thread you created it in to the thread responsible for the graphics and then render it there. Unfortunately, for the scenario I have in mind this is not an option: The gui I've created allows users to write their own plotting code which is then executed. This means I have no control over how they plot exactly, nor do I want to have it. (Update: these plots are displayed in separate windows and don't need to be embedded anywhere in the main GUI. What I want is for them to exist separated from the main GUI, without sharing any of the underlying stack of graphics libraries.) So what I'm wondering now is Is there some clean(ish) way of executing a string of python code in a whole new interpreter instance with its own ties to the windowing system? In response to the comments: The current application is set up as follows: A simple python script loads a wxPython gui (a wx.App). Using this gui users can set up a simulation, part of which involves creating a script in plain python that runs the simulation and post-processes the results (which usually involves making plots and displaying them). At the moment I'm doing this by simply calling exec() on the script code. This works fine, but the gui freezes while the simulation is running. I've experimented with running the embedded script in a subprocess, which also works fine, right up until you try to display the created graphs (usually using matplotlib's show()). At this point some library deep down in the stack of wxPython, wx, gtk etc starts complaining because you cannot manipulate it from multiple threads. The set-up I would like to have is roughly the same, but instead of the embedded script sharing a GUI with the main application, I would like it to show graphics in an environment of its own. And just to clarify: This is not a question about "how do I do multithreading/multiprocessing" or even "how do I do multithreading/multiprocessing within a single wxpython gui". The question is how I can start a script from a gui that loads an entirely new gui. How do I get the window manager to see this script as an entirely separate application? The easiest way would be to generate it in a temporary folder somewhere and then make a non-blocking call to the python interpreter, but this makes communication more difficult and it'd be quite hard to know when I could delete the temp files again. I was hoping there was a cleaner, dynamical way of doing this.
Executing a python script in a subprocess - with graphics
0.049958
0
0
973
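The "whole new interpreter" idea in miniature: hand the user's plotting code to a fresh python process, which gets its own GUI event loop and its own window-manager identity, while the parent stays responsive (the user code below is a stand-in):

    import subprocess
    import sys

    user_code = "print('plotting in a separate process')"
    proc = subprocess.Popen([sys.executable, '-c', user_code])
    # ... the parent GUI keeps running; poll() or wait() when convenient
    proc.wait()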
12,106,515
2012-08-24T09:21:00.000
3
0
1
0
python,interpreted-language
12,106,561
6
false
0
0
If by the remark "languages like Python" you mean interpreted languages, then yes, it will make a difference, as the parsing might take somewhat longer. The difference is unnoticeable, I'd say. I do completely agree with nightcracker though: don't do it. Make your code readable for a human being, and let the parser/compiler handle readability for the machine. Remember the rules of optimization: Don't do it. (Experts only) Don't do it yet.
3
17
0
Is there anything to be gained memorywise and speedwise by having shorter variable-names in a language like python? And if so, what kind of situations would it be reasonable to consider this? Note I'm in no way advocating short variable names, I'm just wondering, please (re)read the question. Note 2 Please, I do understand the value of descriptive variable names. I've looked at enough code to prefer descriptive names over shorter names, and understand the value of it. A plain No doesn't really help.
Is there anything to be gained from short variable names?
0.099668
0
0
3,992
12,106,515
2012-08-24T09:21:00.000
1
0
1
0
python,interpreted-language
12,106,608
6
false
0
0
Pretty much none. Admittedly, it might slow down finding the variable name the first time, while Python is precompiling your script. However, the time lost to the confusion that results from short variable names generally far exceeds the time saved in the execution of the script.
3
17
0
Is there anything to be gained memorywise and speedwise by having shorter variable-names in a language like python? And if so, what kind of situations would it be reasonable to consider this? Note I'm in no way advocating short variable names, I'm just wondering, please (re)read the question. Note 2 Please, I do understand the value of descriptive variable names. I've looked at enough code to prefer descriptive names over shorter names, and understand the value of it. A plain No doesn't really help.
Is there anything to be gained from short variable names?
0.033321
0
0
3,992
12,106,515
2012-08-24T09:21:00.000
15
0
1
0
python,interpreted-language
12,107,102
6
true
0
0
There's a problem with "like Python", because not all interpreted languages are the same. With a purely interpreted language it would have more of an impact than with one like Python that has a pre-compile step. Strictly this isn't a language difference (you could have one JavaScript engine that precompiles and one that doesn't), but it does affect the answer to this question. Stretching "like Python" to include every interpreted language, I'd say the answer is "yes, for some of them, at least some of the time". The next question is "how much". In 1997 through early 1998 I was working on some rather complicated JavaScript code that made use of some of the new features of Netscape Navigator 4 and Internet Explorer 4. This was a humongous JavaScript file for the time - when the prevalence of dial-up meant that every kilobyte counted in terms of site speed. For this reason, we used a minimizer script. The main thing it did was rewrite variables to be shorter (lastHeight becomes a, userSel becomes b and so on). Not only did it reduce the download time, it also made one of the heavier functions appreciably faster - but only appreciably if you were someone who spent their whole working day looking at nothing else, which pretty much meant me and one other colleague. So yes, if we put JavaScript into the "like Python" category as far as interpretation goes, then it can make a difference, under the following conditions: It was running on Pentium, Pentium Pro and 486s (the Pentium II was out then, but we didn't have any); I got a new machine part-way through the project, which meant I went from 133MHz to 166MHz. It was a rather large piece of nasty looping (most of the script showed no appreciable difference). It was running on a script engine from 15 years ago, with none of the improvements in script-engine performance that have been made since. And it still didn't make that much difference. We can assume from that that other interpreted languages are also affected to a similarly minute degree. Even in 1997, though, I wouldn't have bothered were it not that it coincidentally gave me another advantage, and I certainly wasn't working on the minimized version.
3
17
0
Is there anything to be gained memorywise and speedwise by having shorter variable-names in a language like python? And if so, what kind of situations would it be reasonable to consider this? Note I'm in no way advocating short variable names, I'm just wondering, please (re)read the question. Note 2 Please, I do understand the value of descriptive variable names. I've looked at enough code to prefer descriptive names over shorter names, and understand the value of it. A plain No doesn't really help.
Is there anything to be gained from short variable names?
1.2
0
0
3,992
12,108,816
2012-08-24T11:45:00.000
0
0
0
1
python,sql-server,google-app-engine
12,116,542
2
false
1
0
You could, at least in theory, replicate your data from the MS-SQL database to the Google Cloud SQL database. It is possible to create triggers in the MS-SQL database so that every transaction is reflected in your App Engine application via a REST API you will have to build.
1
3
0
I use Python 2.7, the pyodbc module, and Google App Engine 1.7.1. I can use pyodbc with plain Python, but Google App Engine can't load the module; I get a "no module named pyodbc" error. How can I fix this error, or how can I use an MS-SQL database with my local Google App Engine?
How can I use Google App Engine with MS-SQL?
0
1
0
2,793
12,109,795
2012-08-24T12:54:00.000
0
0
0
0
c#,javascript,python,tidesdk
12,211,713
9
false
1
0
It is strange that Qt is not for you. You may be surprised to hear that Sencha's Architect and Animator products use Qt and QWebView for cross-platform JavaScript applications with full menus, icons, executables, system dialog boxes, and file I/O. It currently works on Windows, OS X, and Linux. They use an in-house library called ion to load and run a JavaScript application, and they provide some helper classes for JS to use: a simple skeleton C++ application uses Qt to create and load a window, create a web view in that window, and load HTML and other content from file into that view. Another solution is Adobe AIR, which is like a browser with native support; it also provides deployment.
1
9
0
I want to develop a desktop app to be used cross-platform (Windows, Mac, Linux). Is there a GUI framework that would allow me to write code once for all 3 platforms and have a fully scriptable embedded web component? I need it to have an API for communication between the app and the web page's JavaScript. I know C#, JavaScript and a little bit of Python.
Is there a cross-OS GUI framework that supports embedding HTML pages?
0
0
0
4,766
12,110,312
2012-08-24T13:26:00.000
0
0
1
1
python,windows,powershell,command-line,named-pipes
12,110,714
2
false
0
0
The client can connect to a Windows named pipe as if it were any other file, provided it has been created by another program. The low-level API is CreateFile with OPEN_EXISTING, but ordinary language routines, like Python's open, should work. The filename will be \\server\pipe\name. Unlike disk files, pipes are temporary: when every handle to a pipe has been closed, the pipe and all the data it contains are deleted, so personally I maintain the pipe using a service where necessary.
1
1
0
I'm looking for a quick way to check which data comes into a named pipe (on Windows). Is there any way to do it from cmd.exe, PowerShell, or Python? So far I have found only ways to create a named pipe and then manipulate it. But how can I open a named pipe created by another program?
Read from windows named pipe with cmd.exe or PowerShell or Python
0
0
0
4,243
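A hedged sketch of the "open it like a file" point above; the pipe name is a placeholder, and the pipe must already have been created by the other program:

    pipe_path = r'\\.\pipe\mypipe'   # local server; \\server\pipe\name for a remote one
    with open(pipe_path, 'rb') as pipe:
        print(pipe.read(64))         # blocks until the writer sends bytes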
12,110,610
2012-08-24T13:41:00.000
4
1
1
0
python,unit-testing,tdd
12,110,670
3
false
0
0
I would not let them pass or show OK, because then you will not easily find them again. Maybe just let them fail with the reason (not written yet), which seems logical because you have a test that is not finished.
1
5
0
I'm doing TDD using Python and the unittest module. In NUnit you can Assert.Inconclusive("This test hasn't been written yet"). So far I haven't been able to find anything similar in Python to indicate that "These tests are just placeholders, I need to come back and actually put the code in them." Is there a Pythonic pattern for this?
How should I indicate that a test hasn't been written yet in Python?
0.26052
0
0
984
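For what it's worth, the standard library has grown exactly this since Python 2.7/3.1: unittest's skip decorators report placeholders without letting them pass or fail silently (a different mechanism from the fail-first advice above):

    import unittest

    class TestPayroll(unittest.TestCase):
        @unittest.skip("This test hasn't been written yet")
        def test_overtime_rate(self):
            self.fail('placeholder')

    if __name__ == '__main__':
        unittest.main()   # reports the test as skipped, with the reason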
12,111,983
2012-08-24T15:05:00.000
7
0
0
0
python,mysql,django,multithreading
12,115,563
2
true
1
0
You can perform actions from different threads manually (e.g. with a Queue and a pool of executors), but you should note that Django's ORM manages database connections in thread-local variables. So each new thread means a new connection to the database (which will not be a good idea with 50-100 threads for one task: too many connections). On the other hand, you should check the database's bandwidth.
1
7
0
In one model I've got an update() method which updates a few fields and creates one object of some other model. The problem is that the data I use for the update is fetched from another host (unique for each object) and it can take a moment (the host may be offline, and the timeout is set to 3 sec). And now I need to update a couple of hundred objects, 3-4 times per hour; of course updating them one after another is not an option, because it could take all day. My first thought was to split the work across 50-100 threads so each one could update its own share of the objects. 99% of the update function's time is spent waiting for the server to respond (there are only a few bytes of data, so latency is the problem), so I don't think the CPU will be an issue. I'm more worried about: Django's ORM. Can it handle it? Getting all objects, splitting them up, and updating from >50 threads? Is this a good way to solve it? If it is, how do I do it without wrecking the database? Or maybe I shouldn't care about so few records? If it isn't a good way, how do I do it right?
Django databases and threads
1.2
0
0
5,194
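A sketch of the fan-out shape under discussion: slow network I/O in worker threads, with ORM writes funneled back into one place so worker threads never hold their own database connections (fetch_status and MyModel are hypothetical stand-ins for the question's code):

    import threading
    import Queue   # renamed 'queue' in Python 3

    jobs, results = Queue.Queue(), Queue.Queue()

    def worker():
        while True:
            pk, host = jobs.get()
            try:
                results.put((pk, fetch_status(host)))   # hypothetical 3 s network call
            finally:
                jobs.task_done()

    for _ in range(50):
        t = threading.Thread(target=worker)
        t.daemon = True
        t.start()

    for obj in MyModel.objects.all():
        jobs.put((obj.pk, obj.host))
    jobs.join()

    while not results.empty():                # ORM writes stay in one thread
        pk, data = results.get()
        MyModel.objects.filter(pk=pk).update(**data)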
12,113,554
2012-08-24T16:49:00.000
0
0
0
1
google-app-engine,python-2.7
12,185,853
1
true
1
0
The issue resolved itself after a few days; now the app is returning the correct data. It may have been just a glitch from the migration. I have another GAE app that's stuck in the middle of the migration; searching on SO, I have found others who are experiencing the same problem.
1
0
0
After following the instructions to migrate a GAE app from Master/Slave to the High Replication Datastore (HRD), the app is returning nothing for datastore reads. I am able to see the data using the "Datastore Viewer" and it is there (migrated successfully). I have not changed any code. Just wondering if there's anything I need to set or configure for datastore reads to work. I don't see any error in the "Log Console" on my dev machine, and no error in the server's "Logs".
Google App Engine HRD Migration - Data Read Returns Nothing
1.2
0
0
104
12,114,229
2012-08-24T17:42:00.000
2
0
0
1
python,ssl,apache2,cherrypy
12,114,468
2
false
0
0
You cannot do that (nor would I try to). Firstly, Apache will be better at terminating the SSL than CherryPy (if for no other reason than performance). And secondly, it will simply not work, because Apache speaks HTTP, and HTTPS is actually HTTP encrypted with SSL, so you need to handle the SSL before you get any HTTP that Apache can understand.
2
0
0
Is there a way to set up CherryPy to use SSL when running behind Apache2 without configuring Apache2 to do SSL for CherryPy? I have found multiple tutorials about using SSL with CherryPy and configuring Apache2 to do the SSL work for CherryPy, but I have not been able to find a tutorial that deals with using SSL with CherryPy behind Apache2 without configuring Apache2 to do the SSL work.
CherryPy SSL behind Apache
0.197375
0
0
435
12,114,229
2012-08-24T17:42:00.000
3
0
0
1
python,ssl,apache2,cherrypy
12,115,325
2
true
0
0
To expound a bit on gcbrizan's answer: you cannot, because the first step required to understand an HTTPS request is to decrypt the connection. SSL/TLS work in two modes: tunneling and STARTTLS. In the latter, a normal connection is started, and at some point, once the two parties have established whatever they want to do with the connection, one peer asks the other to start encrypting it; ESMTP (email) uses this mechanism. HTTP, however, does not have a STARTTLS feature, so tunneling is used instead. Before any HTTP traffic is transferred, both parties start a secure tunnel; the client verifies the correctness of the server's certificate, and the server may do the same for the client (if required/requested). Only once all of this has happened does the client send the page request. Were Apache (or any other proxy) to do this, it would have to pass all encrypted traffic to the origin server (CherryPy in your question); since the traffic is encrypted, the proxy has no opportunity to "send this request here, but that request there". If it's just passing all traffic unmodified, then it's not really doing anything helpful at all, and you may as well expose the origin server directly.
2
0
0
Is there a way to set up CherryPy to use SSL when running behind Apache2 without configuring Apache2 to do SSL for CherryPy? I have found multiple tutorials about using SSL with CherryPy and configuring Apache2 to do the SSL work for CherryPy, but I have not been able to find a tutorial that deals with using SSL with CherryPy behind Apache2 without configuring Apache2 to do the SSL work.
CherryPy SSL behind Apache
1.2
0
0
435
12,115,073
2012-08-24T18:51:00.000
3
0
0
0
python,django
12,115,267
2
false
1
0
ForeignKey means that you are referencing an element that exists inside another table. OneToOne is a type of ForeignKey in which an element of table1 and an element of table2 are uniquely bound together. Your favorite-fruit example would be OneToMany, because each person has a unique favorite fruit, but each fruit can have multiple people who list it as their favorite. A OneToOne relationship would work for your Car example: Cars.VIN could have a OneToOne relationship with CarInfo.VIN, since one car will only ever have one CarInfo associated with it (and vice versa).
2
7
0
My understanding is that OneToOneField is used for just 1 row of data from Table2 (Favorite fruit) linked to one row of data in Table1 (Person's name), and ForeignKey is for multiple rows of data in Table2 (Car models) to 1 row of data in Table1 (Brand/Manufacturer). My question is what should I use if I have multiple tables but only one row of data from each table that links back to Table1. For example: I have Table1 as "Cars", my other tables are "Insurance Info", "Car Info", "Repair History". Should I use ForeignKey or OneToOne?
Which to use: OneToOne vs ForeignKey?
0.291313
0
0
3,900
12,115,073
2012-08-24T18:51:00.000
13
0
0
0
python,django
12,115,281
2
true
1
0
You just need to ask yourself "Can object A have many object B, or object B many object A's"? Those table relations each could be different: A Car could have 1 or many insurance policies, and an insurance policy only applies to one car. If the car can only have one, then it could be a one-to-one. A Car can have many repair history rows, so this would be a foreign key on the repair history, with a back relation to the Car as a set. Car Info is similar to the UserProfile concept in django. If it is truly unique information, then it too would be a one-to-one. But if you define Car Info as a general description that could apply to similar Car models, then it would be a foreign key on the Car Table to refer to the Car Info
2
7
0
My understanding is that OneToOneField is used for just 1 row of data from Table2 (Favorite fruit) linked to one row of data in Table1 (Person's name), and ForeignKey is for multiple rows of data in Table2 (Car models) to 1 row of data in Table1 (Brand/Manufacturer). My question is what should I use if I have multiple tables but only one row of data from each table that links back to Table1. For example: I have Table1 as "Cars", my other tables are "Insurance Info", "Car Info", "Repair History". Should I use ForeignKey or OneToOne?
Which to use: OneToOne vs ForeignKey?
1.2
0
0
3,900
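The distinction in model form (field names are illustrative; pre-2.0 signatures, newer Django also requires an on_delete argument):

    from django.db import models

    class Car(models.Model):
        vin = models.CharField(max_length=17, unique=True)

    class CarInfo(models.Model):
        car = models.OneToOneField(Car)   # exactly one per car, and vice versa

    class RepairHistory(models.Model):
        car = models.ForeignKey(Car)      # many history rows may point at one car
        note = models.TextField()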
12,116,390
2012-08-24T20:44:00.000
3
0
1
0
python,c,callback,pthreads
12,118,259
1
false
0
0
In general situations, the C library needs to call PyEval_InitThreads() to acquire the GIL before spawning any thread that invokes Python callbacks, and the callbacks need to be surrounded with PyGILState_Ensure() and PyGILState_Release() to ensure safe execution. However, if the C library is running within the context of, say, a Python C extension, then there are simple cases where it is safe to omit GIL manipulation entirely. Think of this calling sequence: 1) Python code calls a C function foo(), 2) foo() spawns one and only one thread that runs another C function bar(), which calls back into Python code, and 3) foo() always joins or cancels the thread running bar() before it returns. In such a case, it is safe to omit GIL manipulation, because foo() owns the GIL (implicitly borrowed from the Python code that calls it) during its lifecycle, and the execution of Python callbacks within the lifetime of foo() is serialized (i.e. there is only one callback thread, and the Python code does not itself use threading).
1
6
0
If the one and only Python interpreter is in the middle of executing a bytecode when the OS dispatches another thread, which calls a Python callback - what happens? Am I right to be concerned about the reliability of this design?
Python code calls C library that create OS threads, which eventually call Python callbacks
0.53705
0
0
1,830
12,116,491
2012-08-24T20:53:00.000
1
0
1
0
python,nonetype
12,116,552
3
false
0
0
Python doesn't enforce return types the way Java does, so it is actually quite easy to forget to put a return statement at the end of a function. This will cause the function to return None.
1
1
0
I don't quite understand why Python will automatically convert any 0 returned to me by a function into a None object. I've programmed in almost all of the common languages, yet I've never come across this before. I would expect that if I set 0 in the result, I would get 0 back from the function. Could anyone please explain why Python does this? EDIT: To give some more information, we have a wrapper around a C++ class and the return value type is a void pointer. So if it is returning an integer with a value of 0, it gives me a None type. Does this make sense to anyone? I'm just new to Python and trying to figure out when I might expect None types rather than the return value.
Why does Python convert 0's into None objects?
0.066568
0
0
3,176
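A missing return statement is the usual culprit; any code path that falls off the end of a function yields None:

    def lookup(flag):
        if flag:
            return 0
        # no return here, so this path yields None

    print(lookup(True))    # 0
    print(lookup(False))   # None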
12,118,162
2012-08-25T00:18:00.000
0
0
0
0
python,directory,dropbox
12,118,287
8
false
0
0
One option is to go searching for the .dropbox.cache directory, which (at least on Mac and Linux) is a hidden folder inside the Dropbox directory. I am fairly certain that Dropbox stores its preferences in an encrypted .dbx container, so extracting the location the same way Dropbox does is not trivial.
1
20
0
I have a script that is intended to be run by multiple users on multiple computers, and they don't all have their Dropbox folders in their respective home directories. I'd hate to have to hard code paths in the script. I'd much rather figure out the path programatically. Any suggestions welcome. EDIT: I am not using the Dropbox API in the script, the script simply reads files in a specific Dropbox folder shared between the users. The only thing I need is the path to the Dropbox folder, as I of course already know the relative path within the Dropbox file structure. EDIT: If it matters, I am using Windows 7.
How to determine the Dropbox folder location programmatically?
0
0
1
12,977
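A heavily hedged sketch of the lookup that was common at the time of the question: older Dropbox clients kept a host.db file whose second line is the base64-encoded folder path. Treat the file name and layout as assumptions (newer clients switched to info.json); the APPDATA location matches the Windows 7 case mentioned above:

    import base64
    import os

    host_db = os.path.join(os.environ['APPDATA'], 'Dropbox', 'host.db')
    with open(host_db) as f:
        encoded = f.read().splitlines()[1]   # second line holds the path
    print(base64.b64decode(encoded).decode('utf-8'))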
12,119,171
2012-08-25T04:33:00.000
2
0
1
0
python,python-3.x,python-2.7
12,119,210
2
true
0
0
You can easily answer this question for yourself using the timeit module. But the entire point of a dictionary is near-instant access to any desired element by key, so I would not expect a large difference between the two scenarios.
1
0
0
I am designing a piece of software in Python, and I was getting a little curious about whether there is any time difference between popping items out of a very short dictionary and popping items out of a very long one, or whether it is the same in all cases.
Time differences when popping items from dictionaries of different lengths
1.2
0
0
75
12,120,539
2012-08-25T08:47:00.000
0
0
0
0
python,session,login,web.py
12,137,859
1
false
1
0
Okay, I was able to figure out what I did wrong. Total newbie stuff and all part of the learning process. This code now works, well mostly. The part that I was stuck on is now working. See my comments in the code Thanks import web web.config.debug = False render = web.template.render('templates/', base='layout') urls = ( '/', 'index', '/add', 'add', '/login', 'Login', '/reset', 'Reset' ) app = web.application(urls, globals()) db = web.database(blah, blah, blah) store = web.session.DiskStore('sessions') session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0}) class index: def GET(self): todos = db.select('todo') return render.index(todos) class add: def POST(self): i = web.input() n = db.insert('todo', title=i.title) raise web.seeother('/') def logged(): if session.get('login', False): return True else: return False def create_render(privilege): if logged(): if privilege == 0: render = web.template.render('templates/reader') elif privilege == 1: render = web.template.render('templates/user') elif privilege == 2: render = web.template.render('templates/admin') else: render = web.template.render('templates/communs') else: ## This line is key, i do not have a communs folder, thus returning an unusable object #render = web.template.render('templates/communs') #Original code from example render = web.template.render('templates/', base='layout') return render class Login: def GET(self): if logged(): ## Using session.get('something') instead of session.something does not blow up when it does not exit render = create_render(session.get('privilege')) return '%s' % render.login_double() else: render = create_render(session.get('privilege')) return '%s' % render.login() def POST(self): name, passwd = web.input().name, web.input().passwd ident = db.select('users', where='name=$name', vars=locals())[0] try: if hashlib.sha1("sAlT754-"+passwd).hexdigest() == ident['pass']: session.login = 1 session.privilege = ident['privilege'] render = create_render(session.get('privilege')) return render.login_ok() else: session.login = 0 session.privilege = 0 render = create_render(session.get('privilege')) return render.login_error() except: session.login = 0 session.privilege = 0 render = create_render(session.get('privilege')) return render.login_error() class Reset: def GET(self): session.login = 0 session.kill() render = create_render(session.get('privilege')) return render.logout() if __name__ == "__main__": app.run()
1
0
0
I am trying to copy and use the example 'User Authentication with PostgreSQL database' from the web.py cookbook. I can not figure out why I am getting the following errors. at /login 'ThreadedDict' object has no attribute 'login' at /login 'ThreadedDict' object has no attribute 'privilege' Here is the error output to the terminal for the second error. (the first is almost identical) Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 239, in process return self.handle() File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 230, in handle return self._delegate(fn, self.fvars, args) File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 420, in _delegate return handle_class(cls) File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/application.py", line 396, in handle_class return tocall(*args) File "/home/erik/Dropbox/Python/Web.py/Code.py", line 44, in GET render = create_render(session.privilege) File "/usr/local/lib/python2.7/dist-packages/web.py-0.37-py2.7.egg/web/session.py", line 71, in __getattr__ return getattr(self._data, name) AttributeError: 'ThreadedDict' object has no attribute 'privilege' 127.0.0.1:36420 - - [25/Aug/2012 01:12:38] "HTTP/1.1 GET /login" - 500 Internal Server Error Here is my code.py file. Pretty much cut-n-paste from the cookbook. I tried putting all of the class and def on top of the main code. I have also tried launching python with sudo as mentioned in another post. import web class index: def GET(self): todos = db.select('todo') return render.index(todos) class add: def POST(self): i = web.input() n = db.insert('todo', title=i.title) raise web.seeother('/') def logged(): return False #I added this to test error #1, Now I get error #2 #if session.login==1: # return True #else: # return False def create_render(privilege): if logged(): if privilege == 0: render = web.template.render('templates/reader') elif privilege == 1: render = web.template.render('templates/user') elif privilege == 2: render = web.template.render('templates/admin') else: render = web.template.render('templates/communs') else: render = web.template.render('templates/communs') return render class Login: def GET(self): if logged(): render = create_render(session.privilege) return '%s' % render.login_double() else: # This is where error #2 is render = create_render(session.privilege) return '%s' % render.login() def POST(self): name, passwd = web.input().name, web.input().passwd ident = db.select('users', where='name=$name', vars=locals())[0] try: if hashlib.sha1("sAlT754-"+passwd).hexdigest() == ident['pass']: session.login = 1 session.privilege = ident['privilege'] render = create_render(session.privilege) return render.login_ok() else: session.login = 0 session.privilege = 0 render = create_render(session.privilege) return render.login_error() except: session.login = 0 session.privilege = 0 render = create_render(session.privilege) return render.login_error() class Reset: def GET(self): session.login = 0 session.kill() render = create_render(session.privilege) return render.logout() #web.config.debug = False render = web.template.render('templates/', base='layout') urls = ( '/', 'index', '/add', 'add', '/login', 'Login', '/reset', 'Reset' ) app = web.application(urls, globals()) db = web.database(dbn='postgres', user='hdsfgsdfgsd', pw='dfgsdfgsdfg', db='postgres', host='fdfgdfgd.com') store = 
web.session.DiskStore('sessions') # Too me, it seems this is being ignored, at least the 'initializer' part session = web.session.Session(app, store, initializer={'login': 0, 'privilege': 0}) if __name__ == "__main__": app.run()
web.py User Authentication with PostgreSQL database example
0
1
0
2,157
12,122,671
2012-08-25T14:05:00.000
2
1
0
1
python,ssh,pyqt4,xserver
12,123,998
4
false
0
0
Similar to your xclock solution, I like to run xdpyinfo and see if it returns an error.
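A small sketch of that check from Python, assuming xdpyinfo is on the PATH; a non-zero exit status is taken to mean there is no usable X display:

import os
import subprocess

def x_available():
    try:
        with open(os.devnull, 'w') as devnull:
            return subprocess.call(['xdpyinfo'], stdout=devnull, stderr=devnull) == 0
    except OSError:            # xdpyinfo itself is not installed
        return False

print(x_available())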
2
8
0
I'm writing a linux application which uses PyQt4 for GUI and which will only be used during remote sessions (ssh -XY / vnc). So sometimes it may occur that a user will forget to run ssh with X forwarding parameters or X forwarding will be unavailable for some reason. In this case the application crashes badly (unfortunately I am force to use an old C++ library wrapped into python and it completely messes user's current session if the application crashes). I cannot use something else so my idea is to check if X forwarding is available before loading that library. However I have no idea how to do that. I usually use xclock to check if my session has X forwarding enabled, but using xclock sounds like a big workaround. ADDED If possible I would like to use another way than creating an empty PyQt window and catching an exception.
How to determine from a python application if X server/X forwarding is running?
0.099668
0
0
8,184
12,122,671
2012-08-25T14:05:00.000
8
1
0
1
python,ssh,pyqt4,xserver
12,123,396
4
true
0
0
Check to see that the $DISPLAY environment variable is set - if they didn't use ssh -X, it will be empty (instead of containing something like localhost:10).
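A one-line guard based on this, to run before loading the fragile library:

import os

if not os.environ.get('DISPLAY'):
    raise SystemExit("No X display: $DISPLAY is not set (did you forget ssh -X/-Y?)")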
2
8
0
I'm writing a linux application which uses PyQt4 for GUI and which will only be used during remote sessions (ssh -XY / vnc). So sometimes it may occur that a user will forget to run ssh with X forwarding parameters or X forwarding will be unavailable for some reason. In this case the application crashes badly (unfortunately I am force to use an old C++ library wrapped into python and it completely messes user's current session if the application crashes). I cannot use something else so my idea is to check if X forwarding is available before loading that library. However I have no idea how to do that. I usually use xclock to check if my session has X forwarding enabled, but using xclock sounds like a big workaround. ADDED If possible I would like to use another way than creating an empty PyQt window and catching an exception.
How to determine from a python application if X server/X forwarding is running?
1.2
0
0
8,184
12,122,980
2012-08-25T14:51:00.000
4
0
0
0
python,wxpython,sublimetext2
21,536,926
3
false
0
1
Find the file named python.sublime-build under C:\Users\[USER NAME]\AppData\Roaming\Sublime Text 2\Packages\Python\, add the setting "shell": true, save the file, and CTRL+B will then run your wxPython GUI app in Sublime Text 2.
1
8
0
I just started to use Sublime Text 2. I use Sublime for python, but when I use CTRL+B it does not run my wxPython GUI app. It can run a Tkinter app. Why is this? What do I need to do to run a wxPython app from Sublime?
How to run a wxPython GUI app in Sublime Text 2
0.26052
0
0
3,053
12,124,280
2012-08-25T17:46:00.000
0
0
1
0
python
12,124,300
2
false
0
0
Your Python is probably a 32-bit build, so it cannot address more than 4 GB of memory. To solve the problem, install 64-bit Python on a 64-bit OS. P.S. It gives up at 3.4 GB (not 4 GB) because part of the address space is reserved.
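A quick probe to confirm which build is running (prints 32 or 64, the pointer size in bits):

import struct
print(struct.calcsize("P") * 8)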
1
0
0
I try to use loadtxt('x.txt', delimiter=' ') on a file that is 6.8 GB in size. This gives a memory error. My computer has 8 GB of memory. When I look at my computer's performance meter, I see that Python gives the error message when just 3.4 GB of the memory is used. Why doesn't Python try to use the remaining 4.6 GB before giving in? Yours! Per P.
out of memory while using loadtxt
0
0
0
144
12,124,284
2012-08-25T17:47:00.000
4
0
1
0
python,superclass
12,124,305
2
false
0
0
That's because type is not the supertype of all builtin types. object is.
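A short demonstration of the distinction:

print(isinstance(1, object))   # True: object is the root of the class hierarchy
print(isinstance(1, type))     # False: 1 is an int instance, not a class
print(isinstance(int, type))   # True: classes themselves are instances of type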
1
6
0
Given that type is the superclass of all classes, why isinstance(1, type) is False? Am I understanding the concept wrong?
Is type the super class of all classes in Python?
0.379949
0
0
1,607
12,124,852
2012-08-25T19:10:00.000
1
0
1
1
django,virtualenv,mysql-python
12,537,780
1
true
0
0
Those files aren't important for running MySQLdb, but they should be included, and I'll fix this for the next release if possible. (Fixed in the 1.2.4 betas)
1
1
0
When using pip to install mysql-python in virtualenv on ubuntu, the install goes through successfully but with the following warnings: warning: no files found matching 'MANIFEST' warning: no files found matching 'ChangeLog' warning: no files found matching 'GPL' Anyone know why? is it something I need to worry about?
why am I getting warning: no files found matching 'MANIFEST', 'ChangeLog', 'GPL when mysql-python installs successfully
1.2
0
0
1,106
12,126,964
2012-08-26T01:15:00.000
0
0
1
1
python,development-environment
12,126,980
3
false
0
0
Don't expect what you'd get from a classical programming-language IDE from anything having to do with Python. It can't be done, due to the dynamic nature of the language: in order to figure out details such as autocompletion, parameter info, or members, an IDE would have, at some point, to run the code, and it can't do that because of the possible side effects. I'm using Emacs and Sublime Text 2 myself.
1
0
0
I'm wondering if there's a more beginner-friendly environment to write Python than a terminal shell. Any suggestions?
What are some choice Python programming environments?
0
0
0
2,952
12,129,329
2012-08-26T10:21:00.000
0
0
1
0
python,design-patterns,encapsulation
12,130,429
2
true
0
0
If the function is only used by one class, and especially if the module has more classes with potentially more utility functions (used only by one class), it might clarify things a bit if you kept the functions as static methods instead to make it obvious which class they belong to. Also, automated refactorings (using the e.g. the rope library, or PyCharm or PyDev etc) then automatically move the static method along with the class to wherever the class is moved. P.S. @staticmethods, unlike module-level functions, can be overridden in subclasses, e.g. in case of a mathematical formula that doesn't depend on the object but does depend on the type of the object.
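A minimal sketch of that last point, with hypothetical class names:

import math

class Shape(object):
    @staticmethod
    def area(r):               # formula tied to the class, not to an instance
        return 3.14 * r * r

class PreciseShape(Shape):
    @staticmethod
    def area(r):               # overridden in the subclass, unlike a module-level function
        return math.pi * r * r

print(Shape.area(1.0))
print(PreciseShape.area(1.0))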
1
0
0
Suppose I have an instance method that contains a lot of nested conditionals. What would be a good way to encapsulate that code? Put in another instance method of the same class or a function? Could you say why a certain approach is preferred?
Code design: Instance method with deeply nested conditionals, put in another instance method of the same class or put it in a function?
1.2
0
0
89
12,133,857
2012-08-26T20:57:00.000
2
1
0
1
python,linux,passwords
12,134,029
3
false
0
0
You want the equivalent of a "modal" window, but this is not (directly) possible in a multiuser, multitasking environment. The next best thing is to prevent the user from accessing the system. For example, if you create an invisible window as large as the display, it will intercept any mouse events, and whatever is "behind" it will be inaccessible. At that point you have the problem of preventing the user from using the keyboard to terminate the application, switch to another application, or switch to another virtual console (this last is maybe the most difficult). So you need to access and lock the keyboard, not only the "standard" keyboard but the low-level keys as well. And to do this, your application needs to have administrative rights, yet run in the user environment. Which starts to look like a recipe for disaster, unless you really know what you are doing. What you want should be done through a Pluggable Authentication Module (PAM) that integrates with your display manager. Maybe you can find some PAM module that will "outsource" or "call back" to some external program, i.e., your Python script.
2
0
0
I have written a simple python script that runs as soon as a certain user on my linux system logs in. It ask's for a password... however the problem is they just exit out of the terminal or minimize it and continue using the computer. So basically it is a password authentication script. So what I am curious about is how to make the python script stay up and not let them exit or do anything else until they entered the correct password. Is there some module I need to import or some command that can pause the system functions until my python script is done? Thanks I am doing it just out of interest and I know a lot could go wrong but I think it would be a fun thing to do. It can even protect 1 specific system process. I am just curious how to pause the system and make the user do the python script before anything else.
pause system functionality until my python script is done
0.132549
0
0
223
12,133,857
2012-08-26T20:57:00.000
3
1
0
1
python,linux,passwords
12,134,000
3
false
0
0
There will always be a way for the user to get past your script. Let's assume for a moment that you actually manage to block the X-server, without blocking input to your program (so the user can still enter the password). The user could just alt-f1 out of the X-server to a console and kill "your weird app". If you manage to block that too he could ssh to the box and kill your app. There is most certainly no generic way to do something like this; this is what the login commands for the console and the session managers (like gdm) for the graphical display are for: they require a user to enter his password before giving him some form of interactive session. After that, why would you want yet another password to do the same thing? the system is designed to not let users use it without a password (or another form of authentication), but there is no API to let programs block the system whenever they feel like it.
2
0
0
I have written a simple python script that runs as soon as a certain user on my linux system logs in. It ask's for a password... however the problem is they just exit out of the terminal or minimize it and continue using the computer. So basically it is a password authentication script. So what I am curious about is how to make the python script stay up and not let them exit or do anything else until they entered the correct password. Is there some module I need to import or some command that can pause the system functions until my python script is done? Thanks I am doing it just out of interest and I know a lot could go wrong but I think it would be a fun thing to do. It can even protect 1 specific system process. I am just curious how to pause the system and make the user do the python script before anything else.
pause system functionality until my python script is done
0.197375
0
0
223
12,134,782
2012-08-26T23:41:00.000
2
0
1
0
python,flask,virtualenv,yolk
12,134,916
1
true
1
0
As long as you're sourcing the virtualenv correctly and installing the packages correctly, your virtualenv should not be affected by a reboot. It's completely independent of that. I can think of three possibilities that would explain your issue: the incorrect virtualenv was sourced; you installed flask and yolk onto the system python; or you used some kind of ephemeral storage. (The third is the least likely.)
1
1
0
I just started learning Flask (and as a result, getting into virtualenv as well). I followed a tutorial on Flask's documentation and created a small application. I installed Flask and yolk using venv and everything was working fine. I restarted my computer and when I activated virtualenv again, flask and yolk were no longer recognised. I had to reinstall them via easy_install. Does venv remove any installed packages once the computer has been restarted? What happened here? Is there anything I need to do from my side?
virtualenv removing libraries (flask / yolk) on restart
1.2
0
0
333
12,135,458
2012-08-27T01:53:00.000
1
0
1
0
python
12,135,665
2
false
0
0
Extended or extension, i.e. features / functionalities beyond the core.
1
3
0
In a lot of Python libraries I see a module called "ext", for example sqlalchemy.ext. I was just curious what the abbreviation means and what the module is usually used for.
What does the abbreviation "ext" mean in Python libraries?
0.099668
0
0
875
12,135,671
2012-08-27T02:40:00.000
2
0
0
0
python,django
12,136,794
1
true
1
0
1) Of course Django can make a request to another server. I don't know much about django-socketio, but one more suggestion: why are you using httplib? You can use a more advanced library like httplib2 or requests. Apart from that, Django-Piston is dedicated to REST requests, so you can also try that.
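A sketch of the same call with the requests library; the URL and payload simply mirror the question's push_msg endpoint:

import requests

msg = {"from_id": 1, "to_id": 2, "type": "chat", "content": "hi"}
resp = requests.post("http://127.0.0.1:8000/push_msg/", data={"msg": str(msg)})
print(resp.status_code)
print(resp.text)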
1
0
0
I'm looking for help. My django server has instant messaging function achieved by django-socketio. If I run the server by cmd 'runserver_socketio' then there is no problems. But now I want to run server by 'runfcgi' but that will make my socketio no working. So I want the socketio server handles the request which is conveyed by fcgi server. Can it work? Following is my code: def push_msg(msg): params = urllib.urlencode({"msg":str(msg)}) '''headers = {"Content-type":"text/html;charset=utf8"} conn = httplib.HTTPConnection("http://127.0.0.1:8000") print conn conn.request("POST", "/push_msg/", data=params, headers=headers) response = conn.getresponse() print response''' h = httplib2.http() print h resp, content = h.request("http://127.0.0.1:8000/push_msg/", method="POST", body=params) url(r'^push_msg/$', 'chat.events.on_message') chat.events.on_message: def on_message(request): msg = request.POST.get('msg') msg = eval(msg) try: print 'handle messages' from_id = int(msg['from_id']) to_id = int(msg['to_id']) user_to = UserProfile.objects.get(id = msg['to_id']) django_socketio.broadcast_channel(msg, user_to.channel) if msg.get('type', '') == 'chat': ct = Chat.objects.send_msg(from_id=from_id,to_id=to_id,content=data['content'],type=1) ct.read = 1 ct.save() except: pass return HttpResponse("success") I have tried many times, but it can't work, why?
Can django server send request to another server
1.2
0
0
1,855
12,138,656
2012-08-27T08:36:00.000
0
0
1
0
python,nltk,nlp
24,087,161
1
false
0
0
Since posting this question I have found the punkt sentence tokenizer to be incredibly useful for isolating sentences. I've then used a simple find method to find number strings that begin with '18', '19' or '20'. Thanks all for your comments.
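A sketch of that approach; sent_tokenize uses the punkt model under the hood (the punkt data must be downloaded first), and the sample text is made up:

import re
from nltk.tokenize import sent_tokenize

text = "War broke out in 1914. It ended in 1918. Peace followed."
for sentence in sent_tokenize(text):
    if re.search(r'\b(18|19|20)\d{2}\b', sentence):
        print(sentence)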
1
0
0
I've searched through methods in the NLTK library for methods that grab dates, but I don't know which would be the best for grabbing dates and the sentences they belong to. I know that I should be parsing for DATE name entities, but what method should I use? I simply need the date and the sentence it belongs to.
Using python's NLTK library to grab dates and the sentences they belong to.
0
0
0
163
12,139,167
2012-08-27T09:14:00.000
1
0
1
0
python,unit-testing,coding-style
12,140,396
2
false
0
0
It really depends on what you want to test. If you want to test that a dictionary contains certain keys with certain values, then I would suggest separate assertions to check each key. This way your test will still be valid if the dictionary is extended, and test failures should clearly identify the problem (an error message telling you that one 50-line long dictionary is not equal to a second 50 line long dictionary is not exactly clear). If you really do want to verify that the dictionary contains only the given keys, then a single assertion might be appropriate. Define the object you are comparing against where it is most clear. If defining it in a separate file (as Constantinius's answer suggests) makes things more readable then consider doing that. In both cases, the guiding principle is to only test the behaviour you care about. If you test behaviour you don't care about, you may find your test suite more obstructive than helpful when refactoring.
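A sketch of the per-key style; build_mapping is a hypothetical function under test:

import unittest

class MappingTest(unittest.TestCase):
    def test_expected_keys(self):
        result = build_mapping()
        self.assertEqual(sorted(result), ['alpha', 'beta'])    # exactly these keys
        self.assertEqual(result['alpha'], [(1, 2), (3, 4)])    # one assertion per key
        self.assertEqual(result['beta'], [(5, 6)])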
1
7
0
I'm unit testing my application. What most of the tests do is calling a function with specific arguments and asserting the equality of the return value with an expected value. In some tests the expected return value is a relatively big object. One of them, for example, is a dictionary which maps 5 strings to lists of tuples. It takes 40-50 repetitive lines of code to define that object, but that object is an expected value of one of the functions I'm testing. I don't want to have a 40-50 lines of code defining an expected return value inside a test function because most of my test functions consist of 3-6 lines of code. I'm looking for a best practice for such situations. What is the right way of putting lengthy definitions inside a test? Here are the ideas I was thinking of to address the issue, ranked from the best to the worst as I see it: Testing samples of the object: Making a few equality assertions based on a subset of the keys. This will sacrifice the thoroughness of the test for the sake of code elegance. Defining the object in a separate module: Writing the lengthy 40-50 lines of code in a separate .py file, importing the module in the test and then make the equality assertion. This will make the test short and concise but I don't like having a separate file as a supplement to a test; the object definition is part of the test after all. Defining the object inside the test function: This is the trivial solution which I wish to avoid. My tests are pretty simple and straightforward and the lengthy definition of that object doesn't fit. Maybe I'm too obsessed with clean code, but I like none of the above solutions. Is there another common practice I haven't thought of?
A long definition of an object inside a Python unit test
0.099668
0
0
407
12,140,608
2012-08-27T10:52:00.000
1
1
0
1
python,windows,bash,mingw
12,141,142
2
true
0
0
Sure you can communicate between them, just read/write from a file or pair of files (one for Python to write to and the bash script to read from, and the other for the visa-versa situation).
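A minimal sketch of the Python side; the file name is an assumption both scripts would agree on (the bash SetMaxTime function would do something like echo 120 > max_time.txt):

import os

def read_timeout(path='max_time.txt', default=60):
    if os.path.exists(path):
        with open(path) as f:
            return int(f.read().strip())
    return default

print(read_timeout())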
1
0
0
I am running some shell test scripts from a python script under Windows. The shell scripts are testing the functionality of various modules. The problem that I faced is that some scripts can hang. For this I added a timeout for each script. This timeout has a default value. But this timeout value can be changed by the bash script - from a bash function ( SetMaxTime ) - I can modify SetMaxTime. When the default value is used I wait for that period of time in python and if the bash script is not done I will consider that test as failed due to timeout. The problem is when the default value of timeout is changed from bash. Is there a way to communicate with a bash script (ran with mingw) from python? NOTE: The scripts are ran under Windows.
Communicating with bash scripts from python
1.2
0
0
479
12,145,688
2012-08-27T16:17:00.000
1
1
1
0
python,testing,pyramid
12,145,889
3
false
0
0
I have finally found the way to do it. I just created a directory named tests, put my tests inside it, and created an empty file __init__.py. I also needed to fix the relative imports, or it made strange errors like: AttributeError: 'module' object has no attribute 'tests'. I do not really understand what is going on, or what nosetest's role is here, but it works. If someone is able to explain this in more depth, it would be nice.
1
4
0
I am using pyramids framework for large project and I find it messy to have all my tests in one tests.py file. So I have decided to create directory that would contain files with my tests. Problem is, I have no idea, how to tell pyramids, to run my tests from this directory. I am running the tests using python setup.py test -q. But this of course do not work, after I have moved my tests into tests directory. What to do, to make it work?
Having tests in multiple files
0.066568
0
0
4,414
12,147,224
2012-08-27T18:11:00.000
1
0
0
1
python,monitoring,snmp
12,147,545
2
true
0
0
SNMP is a standard monitoring (and configuration) protocol used widely in managing network devices (but not only). I don't fully understand your question - is the problem that you cannot use SNMP because the device does not support it (what does it support then)? To script anything you have to know what interface is exposed to you (if not MIB files, then what?). Did you read about NETCONF?
2
0
0
is snmp really required to manage devices ? i'd like to script something with python to manage devices (mainly servers), such as disk usage, process list etc. i'm learning how to do and many article speak about snmp protocole. I can't use, for example, psutil, or subprocess or os modules, and send information via udp ? Thanks a lot
is snmp required
1.2
0
1
197
12,147,224
2012-08-27T18:11:00.000
1
0
0
1
python,monitoring,snmp
12,148,746
2
false
0
0
No, it's not required, but your question is sort of like asking if you're required to use http to serve web pages. Technically you don't need it, but if you don't use it you're giving up interoperability with a lot of existing client software.
2
0
0
is snmp really required to manage devices ? i'd like to script something with python to manage devices (mainly servers), such as disk usage, process list etc. i'm learning how to do and many article speak about snmp protocole. I can't use, for example, psutil, or subprocess or os modules, and send information via udp ? Thanks a lot
is snmp required
0.099668
0
1
197
12,150,321
2012-08-27T22:09:00.000
3
0
0
1
python,linux,background
12,150,416
1
true
0
0
You can use Python's os.kill to kill a process by its pid, sending a mostly-arbitrary signal to it like SIGTERM or SIGINT. If it won't die, try SIGKILL. You can look up a process' pid using the pidof program, or if you use subprocess.Popen you can get the pid from the popen object without needing to spawn another subprogram. os.system is no longer in great favor, at least not compared to subprocess.Popen.
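A sketch tying this together for the question's programs (test_echo_prog is the questioner's binary):

import os
import signal
import subprocess

log = open('logfile.log', 'w')
proc = subprocess.Popen(['test_echo_prog'], stdout=log)   # no shell or '&' needed
# ... do a bunch of stuff ...
os.kill(proc.pid, signal.SIGTERM)   # proc.terminate() is equivalent
proc.wait()                         # reap the child
log.close()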
1
1
0
So I'm running a python script: It starts a logging program in the background Does a bunch of stuff. Then stops the logging program. So I have two questions: 1) I understand to run a background program, I could call: os.system("test_log_prog &") but can I also do: os.system("test_echo_prog > logfile.log &") or something equivalent? 2) How can I close test_echo_prog? for the program, I was using pkill "test_log_prog" but for some reason it doesn't work when using > logfile.log..... Thanks in advance! Cheers!
Kill a background program from another program
1.2
0
0
121
12,150,405
2012-08-27T22:18:00.000
0
1
0
1
php,python,exec
12,151,910
1
true
0
0
In the case of CGI, the server starts a copy of the PHP interpreter every time it gets a request. PHP in turn starts a Python process, which is killed after exec(). There is a huge overhead in starting two processes and doing all the imports on every request. In the case of FastCGI or WSGI, the server keeps a couple of processes warmed up (the minimum and maximum number of running processes is configurable), so at the price of some memory you get rid of starting a new process every time. However, you still have to start/stop a Python process on every exec() call. If you can run the Python app without exec(), e.g. by porting the PHP part to Python, it would boost performance a lot. But as you mentioned, this is a small project, so the only important criterion is whether your current server can sustain the current load.
1
4
0
I have a php application that executes Python scripts via exec() and cgi. I have a number of pages that do this and while I know WSGI is the better way to go long-term, I'm wondering if for a small/medium amount of traffic this arrangement is acceptable. I ask because a few posts mentioned that Apache has to spawn a new process for each instance of the Python interpreter which increases overhead, but I don't know how significant it is for a smaller project. Thank you.
PHP Exec() and Python script scaleability
1.2
0
0
266
12,150,513
2012-08-27T22:31:00.000
0
0
0
0
python,numpy,scipy
12,150,625
1
false
0
0
According to the doc, np.mod(x1,x2)=x1-floor(x1/x2)*x2. The problem here is that you are working with very small values, a dark domain where floating point errors (truncation...) happen quite often and results are often unpredictable... I don't think you should spend a lot of time worrying about that.
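Given that, a tolerance-based check is a more robust way to test for an integral multiple at this scale than np.mod; the tolerance below is arbitrary:

ratio = 121e-12 / 1e-12                    # 120.99999999999999, not 121.0
print(abs(ratio - round(ratio)) < 1e-9)    # True: treat it as a multiple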
1
0
1
The Numpy 'modulus' function is used in a code to check if a certain time is an integral multiple of the time-step. But some weird behavior is seen: numpy.mod(121e-12, 1e-12) returns 1e-12, while numpy.mod(60e-12, 1e-12) returns a very small value (compared to 1e-12). If you play around with numpy.mod(x, 1e-12) for x from 122e-12 to 126e-12, it randomly gives 0 and 1e-12. Can someone please explain why? Thanks much
Numpy/Scipy modulus function
0
0
0
4,240
12,150,908
2012-08-27T23:27:00.000
0
0
1
0
python-2.7
12,151,167
2
false
0
0
Are you familiar with NumPy ? Once you have your data in a numpy ndarray, it's a breeze to shuffle the rows while keeping the column orders, without the hurdle of creating many temporaries. You could use a function like np.genfromtxt to read your data file and create a ndarray with different named fields. You could then use the np.random.shuffle function to reorganize the rows.
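A sketch of that recipe; the input file name is an assumption, and shuffling the 2D array permutes whole rows, so each year keeps its corresponding values:

import numpy as np

data = np.genfromtxt('climate.txt')    # 30 rows x 5 columns: year, Tmax, Tmin, Precip, Solar
for i in range(10):
    shuffled = data.copy()
    np.random.shuffle(shuffled)        # shuffles along the first axis (rows)
    np.savetxt('shuffled_%d.txt' % i, shuffled, fmt='%g')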
1
2
1
I have a text file with five columns. First column has year(2011 to 2040), 2nd has Tmax, 3rd has Tmin, 4th has Precip, and fifth has Solar for 30 years. I would like to write a python code which shuffles the first column (year) 10 times with remaining columns having the corresponding original values in them, that is: I want to shuffle year columns only for 10 times so that year 1 will have the corresponding values.
How do I shuffle in Python for a column (years) with keeping the corrosponding column values?
0
0
0
1,120
12,150,973
2012-08-27T23:36:00.000
11
0
0
0
python,django,sqlite,django-postgresql
15,858,398
3
false
1
0
Adding this as an answer. Django maps this to serial columns which means that the maximum value is in the 2 billion range ( 2,147,483,647 to be exact). While that is unlikely to be an issue for most applications, if you do, you could alter the type to become a bigint instead and this would make it highly unlikely you will ever reach the end of 64-bit int space.
1
14
0
Is there a limit to the AutoField in a Django model or the database back-ends? The Django project I am working on could potentially see a lot of objects in certain database tables which would be in excess of 40000 within a short amount of time. I am using Sqlite for dev and Postgresql for production.
django id integer limit
1
0
0
7,009
12,154,188
2012-08-28T06:54:00.000
2
0
1
0
python
12,154,250
2
false
0
0
A reference to the module is stored in sys.modules, so no it's not released. Consider using execfile or similar if you don't want to load the module
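A quick demonstration of that caching, using a stdlib module:

import sys

def one_shot():
    import json                       # first call imports; later calls hit the sys.modules cache
    return json.loads('{"ready": true}')

one_shot()
print('json' in sys.modules)          # True: the module object outlives the function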
1
0
0
I'm wondering about a few things concerning importing modules. I have a module that contains nothing but a list of variables, so that I can use these across 3 or 4 scripts that run either once or daily. This same module I would like to use in another script of mine, but I only need to load it once, and afterwards, I don't need the module anymore, because I would copy the variables to a list in my script(for comparison purposes). My questions: 1. if I import the module in a method, is it discarded when the function ends? 2. what is the memory-impact on importing a module? Good to know is that the function is one-shot. Greetings
memory impact and scope/lifespan of imported modules python
0.197375
0
0
132
12,154,854
2012-08-28T07:44:00.000
0
0
1
0
python
12,154,925
1
false
0
0
They will probably both be about the same. I/O is generally a lot slower than CPU, so the entire process of reading and writing files will depend on how fast your disk can handle the requests. It also will depend on the data-processing approach you take. If you opt to read the whole file in at once, then of course it will use more memory than if you choose to read the file piece-by-piece. So, the answer is: the performance will only (very minimally) depend on your choice of language. Choice of algorithm and I/O performance will easily account for the majority of CPU or RAM usage.
1
0
0
I need to know what mechanism is more efficient (less RAM/CPU) to read and write files, especially write. Possibly a JSON data structure. The idea is perform these operations in a context of WebSockets (client -> server -> read/write file with data of the actual session -> response to client).... Best idea is to store the data in temporal variables and destroy vars when are not useful? Any idea?
Write/Read files: NodeJS vs Python
0
0
1
335
12,155,573
2012-08-28T08:38:00.000
0
0
0
0
java,python,algorithm,google-maps
12,155,673
6
false
0
0
Simple: Generate two random numbers, one for latitude and one for longitude, inside the bounding rectangle of the map, for each point.
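A minimal sketch (uniform lat/lon is fine for stress testing a bounding box, though note it is not area-uniform on the sphere):

import random

def random_point(lat_min, lat_max, lon_min, lon_max):
    return (random.uniform(lat_min, lat_max),
            random.uniform(lon_min, lon_max))

for _ in range(3):
    print(random_point(40.0, 41.0, -74.5, -73.5))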
1
0
0
I need it to stress test some location based web service. The input is 4 pairs of lat/lon defining a bounding rectangle or a set of points defining a polygon. Are there any libraries/algorithms for generating random point on a map? (Python/java)
Generate random lat lon
0
0
0
5,533
12,156,293
2012-08-28T09:22:00.000
0
1
0
0
python,html,web,cgi,mysql-python
12,156,338
6
false
1
0
If nothing else it will show you why you want to use a framework, should be a really valuable learning experience. I say go for it.
3
0
0
I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object oriented features. I have basic knowledge of python and its syntax and data strucures. I want to start with making basic web pages - forms, file upload and then move on to more dynamic websites using MYSQl. As of now I do not want to try Django or for that matter any other frameworks. Is python with cgi and MySQLdb modules a good start? Thanks
Python Web Programming - Not Using Django
0
0
0
350
12,156,293
2012-08-28T09:22:00.000
1
1
0
0
python,html,web,cgi,mysql-python
12,160,352
6
false
1
0
Having used both Flask and Django for a bit now, I must say that I much prefer Flask for most things. I would recommend giving it a try. Flask-Uploads and WTForms are two nice extensions for the Flask framework that make it easy to do the things you mentioned. Lots of other extensions available. If you go on to work with dynamic site attached to a database, Flask + SQL Alchemy make a very powerful combination. I much prefer the SQLAlchemy ORM to the django model ORM.
3
0
0
I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object oriented features. I have basic knowledge of python and its syntax and data strucures. I want to start with making basic web pages - forms, file upload and then move on to more dynamic websites using MYSQl. As of now I do not want to try Django or for that matter any other frameworks. Is python with cgi and MySQLdb modules a good start? Thanks
Python Web Programming - Not Using Django
0.033321
0
0
350
12,156,293
2012-08-28T09:22:00.000
2
1
0
0
python,html,web,cgi,mysql-python
12,268,540
6
false
1
0
I recommend Pyramid Framework!
3
0
0
I want to learn Python for Web Programming. As of now I work on PHP and want to try Python and its object oriented features. I have basic knowledge of python and its syntax and data strucures. I want to start with making basic web pages - forms, file upload and then move on to more dynamic websites using MYSQl. As of now I do not want to try Django or for that matter any other frameworks. Is python with cgi and MySQLdb modules a good start? Thanks
Python Web Programming - Not Using Django
0.066568
0
0
350
12,157,350
2012-08-28T10:26:00.000
4
0
0
0
python,mongodb,pymongo,gevent,greenlets
12,163,744
1
false
1
0
I found out what the problem was. By default PyMongo has no network timeout defined on its connections, so what was happening is that the connections in the pool got disconnected (because they weren't used for a while). Then, when I tried to reuse a connection and perform a "find", it took a very long time for the connection to be detected as dead (something like 15 minutes). When the connection was finally detected as dead, the "find" call threw an AutoReconnectError, and a new connection was spawned to replace the stale one. The solution is to set a small network timeout (15 seconds), so that the call to "find" blocks the greenlet for 15 seconds, raises an AutoReconnectError, and when the "find" is retried, it gets a new connection, and the operation succeeds.
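A sketch for the PyMongo 2.x API of the time, assuming Connection still accepts the network_timeout keyword in your version:

from pymongo import Connection

conn = Connection('localhost', 27017, network_timeout=15)   # seconds; a dead socket now fails fast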
1
3
0
I am using PyMongo and gevent together, from a Django application. In production, it is hosted on Gunicorn. I am creating a single Connection object at startup of my application. I have some background task running continuously and performing a database operation every few seconds. The application also serves HTTP requests as any Django app. The problem I have is the following. It only happens in production, I have not been able to reproduce it on my dev environment. When I let the application idle for a little while (although the background task is still running), on the first HTTP request (actually the first few), the first "find" operation I perform never completes. The greenlet actually never resumes. This causes the first few HTTP requests to time-out. How can I fix that? Is that a bug in gevent and/or PyMongo?
Deadlock with PyMongo and gevent
0.664037
1
0
862
12,160,673
2012-08-28T13:47:00.000
3
0
0
0
python,scrapy
12,161,314
3
false
1
0
1) In the BaseSpider, there is an __init__ method that can be overridden in subclasses. This is where the start_urls and allowed_domains variables are declared. If you have a list of urls in mind prior to running the spider, then you can insert them dynamically here. For example, in a few of the spiders I have built, I pull in preformatted groups of URLs from MongoDB and insert them into the start_urls list in one bulk insert. 2) This might be a little bit more tricky, but you can easily see the crawled URL by looking in the response object (response.url). You should be able to check whether the url contains 'google', 'bing', or 'yahoo', and then use the prespecified selectors for a url of that type. 3) I am not so sure that #3 is possible, or at least not without some difficulty. As far as I know, the urls in the start_urls list are not crawled in order, and they each arrive in the pipeline independently. I am not sure that, without some serious core hacking, you will be able to collect a group of response objects and pass them into a pipeline together. However, you might consider serializing the data to disk temporarily and then bulk-saving the data to your database later on. One of the crawlers I built receives groups of URLs that number around 10000. Rather than making 10000 single-item database insertions, I store the urls (and collected data) in BSON and then insert them into MongoDB later.
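A sketch of point 1 (and the URL branching of point 2) for a 2012-era Scrapy; the search URLs are illustrative:

from scrapy.spider import BaseSpider

class SearchSpider(BaseSpider):
    name = 'search'

    def __init__(self, keyword='python', *args, **kwargs):
        super(SearchSpider, self).__init__(*args, **kwargs)
        self.start_urls = [
            'http://www.google.co.uk/search?q=%s' % keyword,
            'http://www.bing.com/search?q=%s' % keyword,
            'http://search.yahoo.com/search?p=%s' % keyword,
        ]

    def parse(self, response):
        if 'google' in response.url:      # pick per-engine selectors by crawled URL
            pass                          # Google-specific XPath here
        elif 'bing' in response.url:
            pass                          # Bing-specific XPath here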
1
1
0
I have a project which requires a great deal of data scraping to be done. I've been looking at Scrapy which so far I am very impressed with but I am looking for the best approach to do the following: 1) I want to scrape multiple URL's and pass in the same variable for each URL to be scraped, for example, lets assume I am wanting to return the top result for the keyword "python" from Bing, Google and Yahoo. I would want to scrape http://www.google.co.uk/q=python, http://www.yahoo.com?q=python and http://www.bing.com/?q=python (not the actual URLs but you get the idea) I can't find a way to specify dynamic URLs using the keyword, the only option I can think of is to generate a file in PHP or other which builds the URL and specify scrapy to crawl the links in the URL. 2) Obviously each search engine would have its own mark-up so I would need to differentiate between each result to find the corresponding XPath to extract the relevant data from 3) Lastly, I would like to write the results of the scraped Item to a database (probably redis), but only when all 3 URLs have finished scraping, essentially I am wanting to build up a "profile" from the 3 search engines and save the outputted result in one transaction. If anyone has any thoughts on any of these points I would be very grateful. Thank you
Scrapy approach to scraping multiple URLs
0.197375
0
1
2,710
12,161,140
2012-08-28T14:10:00.000
1
0
0
0
python,web-scraping,beautifulsoup
12,163,450
1
true
1
0
BeautifulSoup is a tool for parsing and analyzing HTML. It cannot talk to web servers, so you'd need another library to do that, like urllib2 (builtin, but low-level) or requests (high-level, handles cookies, redirection, https etc. out of the box). Alternatively, you can look at mechanize or windmill, or if you also require JavaScript code to be executed, phantomjs.
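A sketch of pairing BeautifulSoup 3 with urllib2 for a form submission; the URL and field names are placeholders, and a relative form action would need urljoin first:

import urllib
import urllib2
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3 import style

page = urllib2.urlopen('http://example.com/login')
soup = BeautifulSoup(page.read())
form = soup.find('form')
action = form['action']                          # where the form posts to
data = urllib.urlencode({'user': 'me', 'pass': 'secret'})
response = urllib2.urlopen(action, data)         # submitting the form is just a POST
print(response.getcode())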
1
0
0
Is there any way to generate various events like: filling an input field submitting a form clicking a link Handling redirection etc via python beautiful soup library. If not what's the best way to do above (basic functionality).
Generate various events in 'Web scraping with beautiful soup'
1.2
0
1
102
12,161,182
2012-08-28T14:12:00.000
2
1
0
0
c++,python,c,algorithm,matrix
12,161,433
2
true
0
0
How to avoid thrashing CPU caches greatly depends on how the matrix is stored/loaded/transmitted, a point that you did not address. There are a few generic recommendations: divide the problem into worker threads, each addressing a contiguous block of rows; increment pointers (in C) to traverse rows and keep the count on a per-thread basis; consolidate the per-thread results once all worker threads have finished. If your matrix cells are made of bits (instead of bytes, ints, or arrays) then you can read words (either 4-byte or 8-byte on 32-bit/64-bit platforms) to speed up the count. There are too many questions left unanswered in the problem description to give you any further guidance.
2
0
1
I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated!
Computing the null space of a large matrix
1.2
0
0
766
12,161,182
2012-08-28T14:12:00.000
-1
1
0
0
c++,python,c,algorithm,matrix
12,161,500
2
false
0
0
In what kind of data structure is your matrix represented? If you use an element list to represent the matrix, i.e. a "column, row, value" tuple for each matrix element, then the solution would be just to count the number of tuples (and subtract that from the matrix size).
2
0
1
I'm looking for the fastest algorithm/package i could use to compute the null space of an extremely large (millions of elements, and not necessarily square) matrix. Any language would be alright, preferably something in Python/C/C++/Java. Your help would be greatly appreciated!
Computing the null space of a large matrix
-0.099668
0
0
766
12,161,271
2012-08-28T14:15:00.000
91
0
0
0
python,django,django-views,django-staticfiles
12,161,409
1
true
1
0
No. In fact, the file django/contrib/staticfiles/finders.py even checks for this and raises an ImproperlyConfigured exception when you do so: "The STATICFILES_DIRS setting should not contain the STATIC_ROOT setting" The STATICFILES_DIRS can contain other directories (not necessarily app directories) with static files and these static files will be collected into your STATIC_ROOT when you run collectstatic. These static files will then be served by your web server and they will be served from your STATIC_ROOT. If you have files currently in your STATIC_ROOT that you wish to serve then you need to move these to a different directory and put that other directory in STATICFILES_DIRS. Your STATIC_ROOT directory should be empty and all static files should be collected into that directory (i.e., it should not already contain static files).
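A settings.py sketch of the resulting layout; all paths are placeholders:

STATIC_URL = '/static/'
STATIC_ROOT = '/var/www/example.com/static/'   # collectstatic target; starts out empty
STATICFILES_DIRS = (
    '/home/me/project/assets/',                # sources; must not include STATIC_ROOT
)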
1
45
0
I'm using Django 1.3 and I realize it has a collectstatic command to collect static files into STATIC_ROOT. Here I have some other global files that need to be served using STATICFILES_DIR. Can I make them use the same dir ? Thanks.
Can I make STATICFILES_DIR same as STATIC_ROOT in Django 1.3?
1.2
0
0
37,149
12,162,629
2012-08-28T15:30:00.000
2
0
1
0
python,printing,python-2.x,function-call
12,162,672
3
false
0
0
It is still evaluated as a statement; you are simply printing ("Hello SO!"), which evaluates to "Hello SO!" since it is not a tuple (as mentioned by delnan).
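Two lines under Python 2.x make the distinction visible:

print("Hello SO!")       # parentheses just group one expression: Hello SO!
print("Hello", "SO!")    # this one really is a tuple: ('Hello', 'SO!')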
1
57
0
I understand the difference between a statement and an expression, and I understand that Python3 turned print() into a function. However I ran a print() statement surrounded with parenthesis on various Python2.x interpreters and it ran flawlessly, I didn't even have to import any module. My question: Is the following code print("Hello SO!") evaluated as a statement or an expression in Python2.x?
Using print() (the function version) in Python2.x
0.132549
0
0
60,321
12,162,793
2012-08-28T15:39:00.000
20
1
0
0
python,boost,cross-compiling
12,168,033
1
true
0
0
Look at --without-* bjam option e.g. --without-python
1
21
0
I have a gcc 4.3.3 toolchain for my embedded device but I have no python (and don't need it). I'am looking for a way to configure boostbuild without python (compilation and cross-compilation). Is python mandatory ? Must I compile every single parts but boost-python ? (I hope not). Thanks in advance. What I did thanks to Jupiter ./bootstrap.sh --without-libraries=python ./b2 and I got Component configuration: - chrono : building - context : building - date_time : building - exception : building - filesystem : building - graph : building - graph_parallel : building - iostreams : building - locale : building - math : building - mpi : building - program_options : building - python : not building - random : building - regex : building - serialization : building - signals : building - system : building - test : building - thread : building - timer : building - wave : building
How to (cross-)compile boost WITHOUT python?
1.2
0
0
7,027
12,162,976
2012-08-28T15:49:00.000
0
0
1
1
python,macos,python-2.6,dylib
23,327,147
1
false
0
0
On a Mac, it is stored under /usr/lib
1
1
0
I'm looking for libpython2.6.dylib in my frameworks folder but for all my instals I can only find the libpython2.7.dylib. I'm looking in 'System/Library/Frameworks/Python.framework/Versions/2.x'/lib . I also notice that libpython2.7.dylib is actually just an alias for '/System/Library/Frameworks/Python.framework/Versions/2.7/Python', does this mean i can just make my own alias of the other 'Python' binaries that are on all the install directories?
where can i find libpython2.6.dylib on osx
0
0
0
2,408
12,163,104
2012-08-28T15:57:00.000
1
0
0
0
java,c++,python,osgi,modularity
12,191,011
2
false
0
1
Peter already mentioned Apache Celix; it might be well worth checking out. Part of Celix is a Remote Service Admin (RSA) implementation, making it possible to use it in a distributed environment. Eventually this implementation will also make it possible to communicate with a Java-based OSGi framework, making Celix+RSA an alternative to JNI. This has the additional benefit that the native and Java code don't share the same process: if one end runs into a problem, the other end stays running. In line with Celix, you could also look at Native-OSGi. This is an effort by Celix and some C++ OSGi-like frameworks (CTK Plugin Framework and nOSGi) to come up with a combined approach for native OSGi-like implementations. This includes stuff like a well-defined API, bundle format, code sharing, etc. Looking at your requirements, I think Celix and Native-OSGi might be a good fit.
2
3
0
I'm thinking about Python and C/C++ combination to replace the original concept about OSGi + Java + JNI + C/C++ in our SW architecture. I definitely don't need to replace all features of such OSGi frameworks as Felix or Equinox. What I really will need in my Python code: Enabler of modularity for the application layer Component-based framework for applications Central registry of services/components Very lightweight framework, it will run on embedded devices (though pretty enough of RAM) Can you please advise on such Python framework?
Python framework as an alternative for "Java + OSGi" combination
0.099668
0
0
2,706
12,163,104
2012-08-28T15:57:00.000
3
0
0
0
java,c++,python,osgi,modularity
12,172,339
2
false
0
1
I think much of what OSGi offers is closely related to the architecture of Java: its class loaders and type safety. Implementing a service registry should not be too hard but managing it with the accuracy of OSGi will be virtually impossible. Obviously the power of multiple namespace that OSGi offers will not work in Python unless you move the modules into separate processes which will require more expensive inter process communication for inter module communication. You could start with Apache Celix, which is native based but I have similar doubts about its utility since native code does not provide a lot of information about its dependencies. A more general solution is the original idea of Universal OSGi. In this model you keep the OSGi framework as it is for the deployment and management. However, you create handler bundles that can map a bundle written with other languages. E.g. a Python handler or C++ native. The handlers would map a native service registry model to the OSGi service registry. This is surprisingly easy to do since the OSGi service registry is properly evented. The native handlers would map the bundle events like start/stop to instruct the operating system to start/stop the native code.
2
3
0
I'm thinking about Python and C/C++ combination to replace the original concept about OSGi + Java + JNI + C/C++ in our SW architecture. I definitely don't need to replace all features of such OSGi frameworks as Felix or Equinox. What I really will need in my Python code: Enabler of modularity for the application layer Component-based framework for applications Central registry of services/components Very lightweight framework, it will run on embedded devices (though pretty enough of RAM) Can you please advise on such Python framework?
Python framework as an alternative for "Java + OSGi" combination
0.291313
0
0
2,706
12,163,918
2012-08-28T16:50:00.000
1
0
1
0
python,refactoring,pickle,zodb
12,164,152
3
false
1
0
Unfortunately, there is no easy solution. You can convert your old-style objects to the refactored ones (I mean classes which now live in another file/module) by the following scheme: add the refactored classes to your code without removing the old ones; walk through your DB starting from the root, replacing all old objects with new equivalents; pack (compress) your database (that's important); now you can remove your old classes from the sources.
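A rough outline of the walk-and-replace step, with hypothetical class names and a root-level mapping; the real traversal depends on how your objects are stored:

import transaction

def migrate(root):
    # 'root' is conn.root() from an open ZODB connection
    for key, old in list(root.items()):
        if isinstance(old, OldThing):                        # hypothetical old class
            root[key] = NewThing(old.field_a, old.field_b)   # hypothetical refactored class
    transaction.commit()
    # afterwards, pack the database, e.g. db.pack()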
2
7
0
I'm using ZODB which, as I understand it, uses pickle to store class instances. I'm doing a bit of refactoring where I want to split my models.py file into several files. However, if I do this, I don't think pickle will be able to find the class definitions, and thus won't be able to load the objects that I already have stored in the database. What's the best way to handle this problem?
pickle/zodb: how to handle moving .py files with class definitions?
0.066568
0
0
745
12,163,918
2012-08-28T16:50:00.000
3
0
1
0
python,refactoring,pickle,zodb
12,164,218
3
false
1
0
As long as you want to keep the pickles loadable without performing a migration to the new class model structure, you can use alias imports of the refactored classes at the location of the old models.py.
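A sketch of such an alias module; all names are placeholders:

# models.py -- kept at the old location purely so existing pickles resolve
from newapp.documents import Document   # pickle looks up 'models.Document'
from newapp.folders import Folder       # ...and finds the relocated class here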
2
7
0
I'm using ZODB which, as I understand it, uses pickle to store class instances. I'm doing a bit of refactoring where I want to split my models.py file into several files. However, if I do this, I don't think pickle will be able to find the class definitions, and thus won't be able to load the objects that I already have stored in the database. What's the best way to handle this problem?
pickle/zodb: how to handle moving .py files with class definitions?
0.197375
0
0
745
12,164,910
2012-08-28T18:01:00.000
1
0
0
0
python,django
12,164,955
1
false
1
0
You can use Javascript, if you don't have to do it on the backend. Just read the facebook likes using the API, and sort the divs.
1
1
0
I've a requirement for my django website. Is there any way to order contents of my website based on its fb likes + twitter share count + gplus count + etc.. Any api's that I can use. I saw this feature on the new digg site. They seem to have aggregated the counts fb + twitter + digg) for the stories.
Order a website's content based on its social share count (fb+ twitter + gplus)
0.197375
0
0
112
12,165,652
2012-08-28T18:55:00.000
2
0
1
0
python,multithreading,while-loop,data-acquisition
12,166,052
2
false
0
0
I'm thinking that it would be good for you to use a client-server model. It would be nicely separated, so one script would not affect the other (status check vs. data collecting). Basically, you would run a server for data collecting on the main machine, which could have some terminal input for maintenance (logging, graceful exit, etc.), and the data-collecting PCs would act as clients with a while True loop (which can run indefinitely unless killed). Each data-collecting PC would also run a server/client (depending on your point of view) for status checks, and would send data to the MAIN PC, where you decide what to do. Also, whether you use Unix/Linux or maybe even Windows, for status checks you can just ssh to the machine and check its status (manually or via a script from the main machine)... it depends on your specific needs. Enjoy
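A bare-bones sketch of the collecting server on the main machine; the port and "protocol" are arbitrary choices:

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 9999))                 # listen on an agreed port
server.listen(5)
while True:
    conn, addr = server.accept()
    data = conn.recv(1024)              # e.g. a status report or a reading from a client
    conn.sendall(b'OK')                 # acknowledge; decide what to do with 'data' here
    conn.close()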
2
0
0
I will have 4 hardware data acquisition units connected to a single control PC over a hard-wired Ethernet LAN. The coding for this application will reside on the PC and is entirely Python-based. Each data acquisition unit is identically configured and will be polled from the PC in identical fashion. The test boxes they are connected to provide the variable output we seek to do our testing. These tests are long-term (8-16 months or better), with relatively low data acquisition rates (less than 500 samples per minute, likely closer to 200). The general process flow is simple, too. I'll loop over each data acquisition device and: Read the data off the device; Do some calc on the data; If the calcs say one thing, turn on a heater; If they say anything else, do nothing; Write the data to disk and file for subsequent processing I'll wait some amount of time, then repeat the process all over again. Here are my questions: I plan to use a while TRUE: loop to start the execution of the sequence I outlined above, and to allow the loop to be exited via exceptions, but I'd welcome any advice on the specific exceptions I should check for -- or even, is this the best approach to take AT ALL? Another approach might be this: Once inside the while loop, I could use the try: - except: - finally: construct to exit the loop. The process I've outlined above is for the main data acquisition stuff, but given the length of the collection period, I need to be able to do other things as well: check the hardware units are running OK, take test stands on and offline as required, etc. These 'management' functions ar distinct from the main loop, so I'd like to keep them distinct. Should I set this activity up in separate threads within the same script, or are there better approaches? Thanks in advance, folks. All feedback is welcome!
Long term instrument data acquisition with Python - Using "While" loops and threaded processes
0.197375
0
0
975
12,165,652
2012-08-28T18:55:00.000
1
0
1
0
python,multithreading,while-loop,data-acquisition
12,166,110
2
false
0
0
You may need more than one loop. If the instruments are TCP servers, you may want to catch a 'disconnected' exception in an inner loop and try to reconnect, rather than terminating the instrument thread permanently. I'm not sure about Python, but in C++, C#, or Delphi I would probably implement the wait by waiting on a producer-consumer queue with a timeout. If nothing gets posted, the sequence you outlined would be repeated as you wish. If some of that other, occasional stuff needs to happen, you can queue up a message that instructs the thread to issue the necessary commands to the instruments, or to disconnect and set an internal 'don't poll, wait until instructed to reconnect and poll again' flag, or whatever needs to be done. This sort of approach is going to be cleaner than stopping the thread and connecting from some other thread just to do the occasional stuff. Stopping/terminating/recreating threads is best avoided in any language.
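That wait-on-a-queue-with-a-timeout pattern maps directly onto Python's queue module. Here is a sketch, with poll_instruments() and handle_command() as hypothetical stand-ins for the real instrument I/O; the interval value is assumed:

import queue

def poll_instruments():
    pass  # read each DAQ unit, run the calcs, switch the heater, log to disk

def handle_command(cmd):
    pass  # occasional management work (status checks, test-stand changes, ...)

commands = queue.Queue()   # other threads post management messages here
POLL_INTERVAL = 0.3        # seconds between sweeps; an assumed value
polling = True

while True:
    try:
        cmd = commands.get(timeout=POLL_INTERVAL)
    except queue.Empty:
        if polling:
            poll_instruments()    # nothing posted: do the normal sweep
        continue
    if cmd == 'stop':
        break                     # clean exit instead of killing the thread
    elif cmd == 'pause':
        polling = False           # the 'don't poll until told to' flag
    elif cmd == 'resume':
        polling = True
    else:
        handle_command(cmd)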
2
0
0
I will have 4 hardware data acquisition units connected to a single control PC over a hard-wired Ethernet LAN. The coding for this application will reside on the PC and is entirely Python-based. Each data acquisition unit is identically configured and will be polled from the PC in identical fashion. The test boxes they are connected to provide the variable output we seek to do our testing. These tests are long-term (8-16 months or better), with relatively low data acquisition rates (less than 500 samples per minute, likely closer to 200). The general process flow is simple, too. I'll loop over each data acquisition device and: Read the data off the device; Do some calc on the data; If the calcs say one thing, turn on a heater; If they say anything else, do nothing; Write the data to disk and file for subsequent processing. I'll wait some amount of time, then repeat the process all over again. Here are my questions: I plan to use a while True: loop to start the execution of the sequence I outlined above, and to allow the loop to be exited via exceptions, but I'd welcome any advice on the specific exceptions I should check for -- or even, is this the best approach to take AT ALL? Another approach might be this: once inside the while loop, I could use the try: - except: - finally: construct to exit the loop. The process I've outlined above is for the main data acquisition stuff, but given the length of the collection period, I need to be able to do other things as well: check that the hardware units are running OK, take test stands on and offline as required, etc. These 'management' functions are distinct from the main loop, so I'd like to keep them distinct. Should I set this activity up in separate threads within the same script, or are there better approaches? Thanks in advance, folks. All feedback is welcome!
Long term instrument data acquisition with Python - Using "While" loops and threaded processes
0.099668
0
0
975
12,166,268
2012-08-28T19:33:00.000
10
0
1
0
pypy,gil,rpython
12,210,808
1
true
0
0
The GIL handling is inserted by module/thread/gil.py in your PyPy checkout. It's an optional translation feature, and it's only added when the thread module is enabled. That said, RPython itself is not a thread-safe language (like, for example, C), so you would need to take care to lock objects correctly yourself so they don't end up in an inconsistent state. The main issue would be providing a thread-aware garbage collector, because the one we use right now is not thread-safe, and just adding a lock would remove a whole lot of the benefits of a free-threading model. Cheers, fijal
1
13
0
Is the PyPy GIL part of the PyPy interpreter implementation in RPython, or is it something that translate.py automatically adds? i.e., if I were to write my own new language interpreter in RPython and ran it through translate.py, would it be subject to the GIL a priori, or would that be up to my interpreter code?
Where's the GIL in PyPy?
1.2
0
0
1,593
12,166,819
2012-08-28T20:11:00.000
2
0
1
0
python,nlp,nltk
12,272,524
3
false
0
0
You don't need system-install support, just the right modules where Python can find them. I've set up NLTK without system install rights with relatively little trouble -- but I did have command-line access so I could see what I was doing. To get this working, you should put together a local install on a computer you do control -- ideally one that has never had NLTK installed, since you may have forgotten (or not know) what was configured for you. Once you figure out what you need, copy the bundle to the hosting computer. But at that point, check that you're using the module versions appropriate for the webserver's architecture; NumPy in particular has different 32/64-bit versions, IIRC. It's also worth your while to figure out how to see the error messages from the hosting computer. If you can't see them by default, you could catch ImportError and display the message it contains, or you could redirect stderr... it depends on your configuration.
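A sketch of that import shim on the host; the directory path and the CGI-style error reporting are assumptions, not details from the answer:

import sys
sys.path.insert(0, '/home/youruser/pylibs')  # hypothetical dir holding nltk/, numpy/, ...

try:
    import nltk
except ImportError as e:
    # Surface the failure in the browser instead of a blank 500 page.
    print('Content-Type: text/plain\n')
    print('import failed: %s' % e)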
1
5
0
Learning Python with the Natural Language Toolkit has been great fun, and they work fine on my local machine, though I had to install several packages in order to use it. Exactly how the NLTK resources are now integrated on my system remains a mystery to me, though it seems clear that the NLTK source code is not simply sitting someplace where the Python interpreter knows to find it. I would like to use the Toolkit on my website, which is hosted by another company. Simply uploading the NLTK source code files to my server and telling scripts in the root directory to "import nltk" has not worked; I kind of doubted it would. What, then, is the difference between whatever the NLTK install routine does and straightforward imports, and why should the toolkit be inaccessible to straightforward imports? Is there a way to use the NLTK source files without essentially altering my host's Python? Many thanks for your thoughts and notes. -G
Use NLTK without installing
0.132549
0
0
2,390
12,167,261
2012-08-28T20:45:00.000
1
0
1
0
python,windows,copy,installation
12,168,463
2
true
0
0
The short answer to this question is "no", since packages can execute arbitrary code on installation and do whatever the heck they want wherever they want on your system. Just reinstall all of them.
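Not from the answer, but one hedged helper for the reinstall: pkg_resources can enumerate the distributions sitting in the old site-packages, giving you a list to feed back to pip (the path below is hypothetical):

# Sketch: list name==version pairs found in the old install so the
# packages can be reinstalled with pip on the new system.
import pkg_resources

OLD_SITE_PACKAGES = r'D:\old\Python27\Lib\site-packages'  # assumed location
for dist in pkg_resources.find_distributions(OLD_SITE_PACKAGES):
    print('%s==%s' % (dist.project_name, dist.version))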
1
4
0
I have reinstalled my operating system (moved from Windows XP to Windows 7). I have reinstalled Python 2.7, but I had a lot of packages installed in my old environment (Django, SciPy, Jinja2, matplotlib, NumPy, NetworkX, to name just a few). I still have my old Python installation lying around on a data partition, so I wondered: can I just copy-paste the old Python library folders onto the new installation, or do I need to reinstall every package? Do the packages keep any information in the registry, system variables, or similar? Does it depend on the package?
Moving a Python environment over to a new OS install
1.2
0
0
2,273
12,167,324
2012-08-28T20:49:00.000
2
0
0
0
python,arrays,pandas,multi-index
12,170,479
1
false
0
0
If you just want to do simple arithmetic operations, I think something like A.div(B, level='date') should work. Alternatively, you can do something like B.reindex(A.index, level='date') to manually match the indices.
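A toy reproduction of both suggestions; the index values and level names are made up to match the question's description:

import pandas as pd

# A: (quantile, date) MultiIndex; B: plain date index.
idx = pd.MultiIndex.from_tuples(
    [(0.25, '2012-01'), (0.25, '2012-02'), (0.75, '2012-01'), (0.75, '2012-02')],
    names=['quantile', 'date'])
A = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)
B = pd.Series([10.0, 20.0], index=pd.Index(['2012-01', '2012-02'], name='date'))

# Level-aware arithmetic broadcasts B across the quantile level:
print(A.div(B, level='date'))

# Or build the broadcast version of B explicitly:
B_on_A = B.reindex(A.index, level='date')
print(A / B_on_A)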
1
6
1
I have two pandas arrays, A and B, that result from groupby operations. A has a 2-level multi-index consisting of both quantile and date. B just has an index for date. Between the two of them, the date indices match up (within each quantile index for A). Is there a standard Pandas function or idiom to "broadcast" B such that it will have an extra level to its multi-index that matches the first multi-index level of A?
How to broadcast to a multiindex
0.379949
0
0
1,335