Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
19,676,538
2013-10-30T07:44:00.000
33
0
1
0
python,arrays,numpy
19,676,652
3
false
0
0
B = A creates a reference. B[:] = A makes a copy, and so does numpy.copyto(B, A) (the in-place, two-argument form; plain numpy.copy takes only the source array); the last two need additional memory. To make a deep copy of an array holding Python objects you need B = copy.deepcopy(A).
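A sketch of the copy semantics described above (array names are illustrative; note that the two-argument in-place copy is numpy.copyto, not numpy.copy):

```python
import numpy as np

A = np.arange(4)

# B = A: B is just another name for the same array; no new memory.
B = A
B[0] = 99
assert A[0] == 99           # the change is visible through A

# B[:] = A: copies A's elements into an existing array B, which has its own memory.
A = np.arange(4)
B = np.empty_like(A)
B[:] = A
B[0] = 99
assert A[0] == 0            # A is untouched this time

# np.copyto(C, A) does the same element-wise copy in place.
C = np.empty_like(A)
np.copyto(C, A)
assert (C == A).all()
```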
2
127
1
For example, if we have a numpy array A, and we want a numpy array B with the same elements. What is the difference between the following (see below) methods? When is additional memory allocated, and when is it not? B = A B[:] = A (same as B[:]=A[:]?) numpy.copy(B, A)
Numpy array assignment with copy
1
0
0
76,805
19,679,256
2013-10-30T10:05:00.000
4
0
0
0
python,python-3.x,tkinter
19,679,336
1
true
0
1
Start your Python program using pythonw.exe instead of python.exe; pythonw.exe is specifically designed to run GUI programs without opening a console window.
1
2
0
Is there any way to hide python shell while using a tkinter window or turtle window? While opening py file by clicking the py shell shows every time but I don't need it.
Hide python shell while using tkinter window
1.2
0
0
550
19,680,487
2013-10-30T11:04:00.000
1
1
1
0
python
19,684,314
2
false
0
0
Basically instead of trying things out in the console, try things out by writing tests. It's almost the same amount of typing, but repeatable (you can check if things still work after you make changes) and it works as a rudimentary form of documentation.
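The "tests instead of console experiments" idea above can be as small as a doctest: the throwaway snippet you would have typed into the console becomes a repeatable, documented example (the function and values here are made up for illustration):

```python
def nearest_ten(x):
    """Round to the nearest multiple of ten.

    The examples below are exactly what you would have typed in the
    console, but now they re-run automatically after every change.

    >>> nearest_ten(13)
    10
    >>> nearest_ten(17)
    20
    """
    return round(x / 10) * 10

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```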
1
0
0
I see from other discussions that reload is considered an unnecessary operation and a bad way to develop programmes. People say to use doctest and unittest. I must be missing something. To develop a programme I write it in a module, load the module into an eclipse console, experiment in the console by running little samples of code, then I spot an error in the module or decide to change something. Surely the quickest thing to do is save the module, reload it and carry on working in the console. Is there a better way?
Best way to develop a programme without using reload
0.099668
0
0
119
19,687,421
2013-10-30T15:43:00.000
1
0
0
0
python,beautifulsoup,scrapy,web-crawler
66,479,925
9
false
1
0
BeautifulSoup is a small web scraping library. It does the job, but sometimes it does not satisfy your needs: if you scrape websites with a large amount of data, BeautifulSoup falls short. In that case you should use Scrapy, which is a complete scraping framework that will do the job. Scrapy also has support for databases (all kinds of databases), which is a huge advantage of Scrapy over other web scraping libraries.
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
0.022219
0
1
84,428
19,687,421
2013-10-30T15:43:00.000
0
0
0
0
python,beautifulsoup,scrapy,web-crawler
54,838,886
9
false
1
0
The differences are many, and the selection of any tool/technology depends on individual needs. A few major differences are: BeautifulSoup is comparatively easier to learn than Scrapy. The extensions, support, and community are larger for Scrapy than for BeautifulSoup. Scrapy should be considered a spider, while BeautifulSoup is a parser.
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
0
0
1
84,428
19,687,421
2013-10-30T15:43:00.000
1
0
0
0
python,beautifulsoup,scrapy,web-crawler
49,187,707
9
false
1
0
Using Scrapy you can save tons of code and start with structured programming. If you don't like any of Scrapy's pre-written methods, BeautifulSoup can be used in place of a Scrapy method. A big project takes advantage of both.
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
0.022219
0
1
84,428
19,687,421
2013-10-30T15:43:00.000
3
0
0
0
python,beautifulsoup,scrapy,web-crawler
46,601,960
9
false
1
0
Both are used to parse data. Scrapy: Scrapy is a fast, high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It has some limitations when data comes from JavaScript or is loaded dynamically, but we can overcome that by using packages like Splash, Selenium, etc. BeautifulSoup: Beautiful Soup is a Python library for pulling data out of HTML and XML files; paired with something that renders the page, it can be used on dynamically loaded pages too. Scrapy with BeautifulSoup is one of the best combinations we can work with for scraping static and dynamic content.
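The spider/parser distinction in these answers comes down to this: a parser only extracts data from HTML you already have. A minimal sketch of the parsing half, using the standard library's html.parser (BeautifulSoup offers a friendlier API for the same job; the HTML and class name here are invented):

```python
from html.parser import HTMLParser

# A made-up product page fragment, standing in for fetched HTML.
PAGE = ('<html><body>'
        '<span class="price">19.99</span>'
        '<span class="price">24.50</span>'
        '</body></html>')

class PriceExtractor(HTMLParser):
    """Collect the text of every <span class="price"> element."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data)
            self.in_price = False

extractor = PriceExtractor()
extractor.feed(PAGE)
```

Crawling (following links, scheduling requests, pipelines) is the part Scrapy adds on top of this.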
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
0.066568
0
1
84,428
19,687,421
2013-10-30T15:43:00.000
21
0
0
0
python,beautifulsoup,scrapy,web-crawler
19,687,572
9
false
1
0
I think both are good; I'm doing a project right now that uses both. First I scrape all the pages using Scrapy and save them to a MongoDB collection using its pipelines, also downloading the images that exist on each page. After that I use BeautifulSoup4 for a post-processing pass where I change attribute values and extract some special tags. If you don't know which product pages you want, Scrapy is a good tool, since you can use its crawlers to run over the whole Amazon/eBay website looking for products without writing an explicit for loop. Take a look at the Scrapy documentation; it's very simple to use.
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
1
0
1
84,428
19,687,421
2013-10-30T15:43:00.000
2
0
0
0
python,beautifulsoup,scrapy,web-crawler
24,040,613
9
false
1
0
The way I do it is to use the eBay/Amazon APIs rather than Scrapy, and then parse the results using BeautifulSoup. The APIs give you an official way of getting the same data that you would have got from a Scrapy crawler, with no need to worry about hiding your identity, messing about with proxies, etc.
6
150
0
I want to make a website that shows the comparison between amazon and e-bay product price. Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler.
Difference between BeautifulSoup and Scrapy crawler?
0.044415
0
1
84,428
19,689,091
2013-10-30T16:54:00.000
1
0
1
0
python-3.x,decorator
19,695,169
1
true
0
0
Thanks to Blender's advice, I found the C source using the command grep -lr "class property" * in the directory of the Python source.
1
0
0
It's all in the title: I would like to see the code of the @property decorator. Where is it?
How to have easily the source code of @property?
1.2
0
0
24
19,690,110
2013-10-30T17:41:00.000
0
0
1
0
windows,python-3.x
19,692,328
4
false
0
0
I think the best thing is to uninstall Python 2.7 first and then try again.
1
1
0
Does Python 3 support Win XP? Because I'm switching from Python 2 to 3, but I cannot install it. I've downloaded Python 3.3.2 Windows x86 MSI Installer, I open it and it says "Preparing to install...", later I select "Install just for me (not available on Vista)", click "Next" and the installer suddenly closes (also if I select "Install for all users"). When closed, a message appears saying "Send report errors? - Don't send". I'm running Windows XP SP 3 32 bits. I have also installed Python 2.7, that's a problem? Thanks in advance.
Can't install Python 3 on Windows XP
0
0
0
3,116
19,692,437
2013-10-30T19:44:00.000
0
0
1
1
python,networking,synchronization,wmi,apscheduler
19,737,941
1
false
0
0
I tried to include functionality like this in APScheduler 2.0, but it didn't pan out. The biggest issue is handling concurrent access to jobs and making sure jobs get run even if a particular node crashes. The nodes also need to communicate somehow. Are you sure you don't want to use Celery instead?
1
1
0
I am using apscheduler and wmi to create and install new python based windows services where the service determines the type of job to be run. The services are installed across all the machines on the same network. Given this scenario I want to make sure that these services run only on one machine and not all the machines. If a machine goes down I still want the job to be run from another machine on the same network. How would I accomplish this task? I know I need to do some kind of synchronization across machines but not sure how to address it?
synchronizing across machines for a python apscheduler and wmi based windows service
0
0
0
170
19,696,973
2013-10-31T01:40:00.000
1
1
0
1
python,ubuntu,python-2.7,python-3.x,pyside
19,697,689
1
true
0
0
You have two independent Python 2.7 installations, one in /usr and one in /usr/local. (And that's on top of the Python 3.x installation you also have.) This is bound to cause confusion, especially for novices, and it has caused exactly the kind of confusion it was bound to cause. You've installed PySide into the /usr installation, so it ended up in /usr/lib/python2.7/dist-packages. If you run /usr/bin/python, that import PySide will probably work fine. (If not, see below.) But the default thing called python and python2.7 on your PATH is the /usr/local installation, hence which python says /usr/local/bin/python, so it can't see PySide at all. So you need to get it installed for the other Python as well. Unless you know that you need a second Python 2.7 in /usr/local for some reason, the simplest thing to do would be to scrap it. Don't uninstall it and reinstall it; just uninstall it. You've already got a Python 2.7 in /usr, and you don't need two of them. If you really need to get PySide working with the second 2.7: since you still haven't explained how you've been installing PySide despite being asked repeatedly, I can't tell you exactly how to do that. But generally, the key is to use explicit paths for all Python programs (python itself, python-config, pip, easy_install, etc.) that you have to run. For example, if the docs or a blog or the voices in your head tell you to run easy_install at some step, run /usr/local/bin/easy_install instead. If there is no such program, then you need to install it. The fact that you already have /usr/bin/easy_install doesn't help; in fact, it hurts. If you get rid of the second Python but that doesn't fix PySide yet, uninstall, rebuild, and reinstall PySide. Or, even simpler: PySide has pre-made, working binary Ubuntu packages for all of the major Python versions that have Ubuntu packages. Just install it that way.
1
1
0
I had PyQt4 running fine with python2 on Ubuntu 12.04. I then installed python-PySide. But the installation test would give me a module not found error. Then I installed python3-PySide and it works fine. So obviously something to do with my environment paths, but I'm not sure what I need to do. I'm guessing PySide is automatically checking if python3 exists and if it does then it'll use it regardless. I need PySide to work with python2.7 because of Qt4.8 compatibility issues. Any suggestions? some info about my system: which python /usr/bin/local/python which python3 /usr/bin/python3 EDIT: More details about installation test. After installation, I bring up the python console and try import PySide, as follows: python import PySide ImportError: No module name PySide But it works fine for python3: python3 import PySide PySide.version '1.1.2'
Ubuntu - PySide module not found for python2 but works fine for python3
1.2
0
0
1,553
19,697,730
2013-10-31T03:00:00.000
0
0
1
0
python,pip
19,698,648
3
false
0
0
I don't think pip can do this. If you are in a virtualenv, you can just delete it and reinstall into a new one. If you are installing into the system Python, you should not use pip at all but the distribution's package manager.
1
2
0
For example, I may have installed pkg1, which requires pkg2 and pkg3. No other packages I have installed require these two. So, during or after pip uninstall pkg1, how can I make pip uninstall pkg2 and pkg3?
How can I make pip uninstall packages that are no longer required by other packages?
0
0
0
290
19,698,151
2013-10-31T03:52:00.000
0
1
0
1
php,python,bash,command-line-interface
19,698,635
2
true
0
0
Writing the data out to a text file in Python and then loading that text file in PHP is definitely the easiest way. If you're willing to modify the PHP script, you could make it read the data from stdin and set up a pipe between the two processes, but this is going to be a little trickier.
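The "write a file, then invoke the CLI tool on it" approach above might be sketched like this; since no PHP interpreter is assumed here, `cat` stands in for the real `php myscript.php` invocation, and all names are illustrative:

```python
import os
import subprocess
import tempfile

def run_cli_with_list(items, command=("cat",)):
    """Write `items` to a temp file (one per line), then hand the file's
    path to a command-line tool, just as you would pass it to the PHP
    script. `cat` is a stand-in; in the real setup the command would be
    something like ("php", "myscript.php")."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("\n".join(items))
        path = f.name
    try:
        result = subprocess.run(
            list(command) + [path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
    finally:
        os.remove(path)  # clean up the temp file either way

output = run_cli_with_list(["alpha", "beta", "gamma"])
```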
1
0
0
I'm new to all things programming, but am trying to build up some functionality for my team. I have a script in python that performs some useful analysis, and now I need it to communicate to a PHP script that I usually call from the command line with an argument that is a text file, which the script parses and operates on line by line. What I'm trying to do is pass to the script in the CLI a list variable from Python. Is the best way to do this to write the list to a text file on my server and then call the script with subprocess from Python or is there a more streamlined way to make this happen?
Can I pass a list from python to a php script that usually takes a .txt file line-by-line to operate on?
1.2
0
0
74
19,698,469
2013-10-31T04:30:00.000
1
0
0
0
python,machine-learning,couchdb,bayesian,cloudant
19,776,577
3
false
0
0
I also thought the approach using svm.OneClassSVM from sklearn was going to produce a good outlier detector. However, I put together some representative data based upon the example in the question and it simply could not detect an outlier. I swept the nu and gamma parameters from .01 to .99 and found no satisfactory SVM predictor. My theory is that because the samples have categorical data (cities, states, countries, web browsers) the SVM algorithm is not the right approach. (I did, BTW, first convert the data into binary feature vectors with the DictVectorizer.fit_transform method). I believe @sullivanmatt is on the right track when he suggests using a Bayesian classifier. Bayesian classifiers are used for supervised learning but, at least on the surface, this problem was cast as an unsupervised learning problem, ie we don't know a priori which observations are normal and which are outliers. Because the outliers you want to detect are very rare in the stream of web site visits, I believe you could train the Bayesian classifier by labeling every observation in your training set as a positive/normal observation. The classifier should predict that true normal observations have higher probability simply because the majority of the observations really are normal. A true outlier should stand out as receiving a low predicted probability.
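The idea in the answer above (train on observations labeled normal; flag anything that receives a low predicted probability) can be sketched without any ML library as a naive per-feature frequency model. This is an illustrative toy, not the suggested Bayesian classifier itself, and all feature names and values are made up:

```python
from collections import Counter

def train_frequencies(observations):
    """observations: list of dicts like {"country": "USA", "browser": "Chrome"},
    all assumed to be normal. Returns per-feature value counts and the
    number of observations seen."""
    counts = {}
    for obs in observations:
        for feature, value in obs.items():
            counts.setdefault(feature, Counter())[value] += 1
    return counts, len(observations)

def normality_score(obs, counts, n, smoothing=1):
    """Naive product of per-feature relative frequencies, with add-one
    smoothing so never-before-seen values don't zero the whole score."""
    score = 1.0
    for feature, value in obs.items():
        seen = counts.get(feature, Counter())[value]
        score *= (seen + smoothing) / (n + smoothing)
    return score

# Jane has only ever logged in from the USA on Chrome.
history = [{"country": "USA", "browser": "Chrome"}] * 20
counts, n = train_frequencies(history)

usual = normality_score({"country": "USA", "browser": "Chrome"}, counts, n)
unusual = normality_score({"country": "Ukraine", "browser": "Firefox"}, counts, n)
# `unusual` comes out far below `usual`, so it would trigger the notification.
```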
1
15
0
I am collecting a lot of really interesting data points as users come to my Python web service. For example, I have their current city, state, country, user-agent, etc. What I'd like to be able to do is run these through some type of machine learning system / algorithm (maybe a Bayesian classifier?), with the eventual goal of getting e-mail notifications when something out-of-the-ordinary occurs (anomaly detection). For example, Jane Doe has only ever logged in from USA on Chrome. So if she suddenly logs into my web service from the Ukraine on Firefox, I want to see that as a highly 'unusual' event and fire off a notification. I am using CouchDB (specifically with Cloudant) already, and I see people often saying here and there online that Cloudant / CouchDB is perfect for this sort of thing (big data analysis). However I am at a complete loss for where to start. I have not found much in terms of documentation regarding relatively simple tracking of outlying events for a web service, let alone storing previously 'learned' data using CouchDB. I see several dedicated systems for doing this type of data crunching (PredictionIO comes to mind), but I can't help but feel that they are overkill given the nature of CouchDB in the first place. Any insight would be much appreciated. Thanks!
Detecting 'unusual behavior' using machine learning with CouchDB and Python?
0.066568
0
0
7,240
19,702,170
2013-10-31T08:59:00.000
2
0
0
1
python,django,raspberry-pi
19,702,189
1
true
1
0
You don't have to run the web browser as root, but rather your Django app (the web server). Of course, running a web application as root is an incredibly bad idea (even on a Pi), so you might want to use a separate worker process (e.g. using Celery) that runs as root and accesses the GPIOs.
1
0
0
I am using Django 1.5.4 to design a web page in which I want to use GPIO, but I got the following error in the browser: "No access to /dev/mem. Try running as root!" Since the web browser itself is an application, how can I assign "root" privileges to it when it tries to render a web page? If it can be done without needing to install anything, that would be better, as other frameworks/applications that are able to use GPIO in a web page must have made some tweaks. I tried searching for similar questions in this area but couldn't find this specific case (django + gpio access). Any help would be greatly appreciated. Thanks
Using GPIO in webpage
1.2
0
0
467
19,707,181
2013-10-31T12:59:00.000
0
0
1
0
python,image,imagemagick
19,707,257
1
false
0
0
Compute the sum for each row. If a row is all black (sum = 0), split the image at that row and extract each part into a new image, and you are done.
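The row-sum idea above can be sketched with numpy on a grayscale array (this assumes black pixels are exactly 0; with JPEG noise you would threshold the row sums instead):

```python
import numpy as np

def split_on_black_rows(img):
    """Split a 2-D grayscale array into the horizontal bands lying
    between all-black rows (rows whose pixel sum is 0)."""
    black = img.sum(axis=1) == 0
    parts, start = [], None
    for i, is_black in enumerate(black):
        if not is_black and start is None:
            start = i                    # a band begins
        elif is_black and start is not None:
            parts.append(img[start:i])   # a band ends at a black row
            start = None
    if start is not None:
        parts.append(img[start:])        # band running to the bottom edge
    return parts

# Two white bands separated by one black row.
img = np.array([[255, 255],
                [255, 255],
                [0,   0],
                [255, 255]])
parts = split_on_black_rows(img)
```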
1
0
0
I have image which is divided by black lines. Is is possible to split image into multiple parts divided by black lines using Imagemagick (Python or any other solution) Could not fine any solution.
Imagemagick splitting image by black lines
0
0
0
217
19,707,783
2013-10-31T13:25:00.000
2
0
1
0
python,ipython,ipython-notebook
19,709,140
1
false
0
0
In IPython 1.1.0 I just ran "!gvim a.py" in the notebook, which opened the gvim editor in a window. After saving the edits to the a.py file, I was able to successfully execute "%run a.py".
1
4
0
I am scratching my head on what the standard workflow is for opening, editing and executing a scripts directly from within the ipython notebook? I know that you can use %edit from ipython terminal but this doesn't seem to work from notebook. thank you
Ipython Notebook: Open & Edit Files
0.379949
0
0
2,256
19,713,141
2013-10-31T17:35:00.000
0
0
0
1
python,amazon-web-services,amazon-s3,amazon-ec2,cluster-computing
19,714,085
1
false
1
0
You do not need to use S3, you would likely want to use EBS for storing the code if you need it to be preserved between instance launches. When you launch an instance you have the option to add an ebs storage volume to the drive. That drive will automatically be mounted to the instance and you can access it just like you would on any physical machine. ssh your code up to the amazon machine and fire away.
1
0
0
I have never used amazon web services so I apologize for the naive question. I am looking to run my code on a cluster as the quad-core architecture on my local machine doesn't seem to be doing the job. The documentation seems overwhelming and I don't even know which AWS services are going to be used for running my script on EC2. Would I have to use their storage facility (S3) because I guess if I have to run my script, I'm going to have to store it on the cloud in a place where the cluster instance has access to the files or do I upload my files somewhere else while working with EC2? If this is true is it possible for me to upload my entire directory which has all the contents of the files required by my application onto s3. Any guidance would be much appreciated. So I guess my question is do I have to use S3 to store my code in a place accessible by the cluster? If so is there an easy way to do it? Meaning I have only seen examples of creating buckets wherein one file can be transferred per bucket. Can you transfer an entire folder into a bucket? If we don't require to use S3 then which other service should I use to give the cluster access to my scripts to be executed? Thanks in advance!
how to run python code on amazon ec2 webservice?
0
0
0
718
19,714,108
2013-10-31T18:33:00.000
0
0
0
0
python,machine-learning,scikit-learn,dimensionality-reduction
19,724,359
5
false
0
0
Use a multi-layer neural net for classification. If you want to see what the representation of the input is in the reduced dimension, look at the activations of the hidden layer. The hidden layer's representation is by definition optimised to distinguish between the classes, since that is what is directly optimised when the weights are set. Remember to use a softmax activation on the output layer and something non-linear on the hidden layer (tanh or sigmoid).
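Shape-wise, the suggestion above looks like the following numpy sketch: the hidden activations are the reduced representation. The weights here are random (untrained) and all sizes are invented; this only shows the plumbing, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden, n_classes = 1000, 50, 10   # made-up dimensions
X = rng.random((32, n_features))                 # 32 BoW-like input rows

W1 = rng.standard_normal((n_features, n_hidden)) # input -> hidden
W2 = rng.standard_normal((n_hidden, n_classes))  # hidden -> output

# Non-linear hidden layer: this 50-dim activation is the
# class-discriminative low-dimensional representation.
hidden = np.tanh(X @ W1)

# Softmax output layer (shifted by the row max for numerical stability).
logits = hidden @ W2
shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = shifted / shifted.sum(axis=1, keepdims=True)
```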
2
12
1
I'm trying to use scikit-learn to do some machine learning on natural language data. I've got my corpus transformed into bag-of-words vectors (which take the form of a sparse CSR matrix) and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn capable of taking high-dimensional, supervised data and projecting it into a lower dimensional space which preserves the variance between these classes. The high-level problem description is that I have a collection of documents, each of which can have multiple labels on it, and I want to predict which of those labels will get slapped on a new document based on the content of the document. At it's core, this is a supervised, multi-label, multi-class problem using a sparse representation of BoW vectors. Is there a dimensionality reduction technique in sklearn that can handle that sort of data? Are there other sorts of techniques people have used in working with supervised, BoW data in scikit-learn? Thanks!
Supervised Dimensionality Reduction for Text Data in scikit-learn
0
0
0
4,068
19,714,108
2013-10-31T18:33:00.000
0
0
0
0
python,machine-learning,scikit-learn,dimensionality-reduction
19,714,792
5
false
0
0
Try Isomap. There's a super simple built-in function for it in scikit-learn. Even if it doesn't have some of the preservation properties you're looking for, it's worth a try.
2
12
1
I'm trying to use scikit-learn to do some machine learning on natural language data. I've got my corpus transformed into bag-of-words vectors (which take the form of a sparse CSR matrix) and I'm wondering if there's a supervised dimensionality reduction algorithm in sklearn capable of taking high-dimensional, supervised data and projecting it into a lower dimensional space which preserves the variance between these classes. The high-level problem description is that I have a collection of documents, each of which can have multiple labels on it, and I want to predict which of those labels will get slapped on a new document based on the content of the document. At it's core, this is a supervised, multi-label, multi-class problem using a sparse representation of BoW vectors. Is there a dimensionality reduction technique in sklearn that can handle that sort of data? Are there other sorts of techniques people have used in working with supervised, BoW data in scikit-learn? Thanks!
Supervised Dimensionality Reduction for Text Data in scikit-learn
0
0
0
4,068
19,714,463
2013-10-31T18:53:00.000
1
0
0
0
python,mysql,django,apache
19,714,529
2
false
1
0
Caching is probably your answer. I don't know how Django runs on Apache, as I run a Gunicorn setup, but it makes round trips for every database call. If you add some memcached caching to handle common result sets, you should see a large improvement, since you won't have to make a trip for each request. Also, 50 concurrent requests at a time seems like a lot; try toning it down to 5 or 10, then 25, instead of starting at 50. Just my opinion.
2
0
0
Today I was doing a stress test to a new Django (1.5.4) running on an Ubuntu Server 12.04 site before going into production and I've found some unexpected results: Doing 50 requests per second, htop showed a CPU usage of ~50% and RAM also ~50%. I'm not currently using Django cache and doing a normal browser request while doing the stress test it took ~30s to load each page (without any load it takes <= 2s). The server didn't crash during the test, but I dont understad why if there are almost 50% of the resources free it lasts so much... I expected to see a higher CPU and memory usage! So, my question is: Is there any Django default setting that limits the number of requests per second? Or does Apache or mod_wsgi have any kind of limit? Do I have to change some MySQL config? (Note: I'm a software engineer, not sysadmin).
Stress test on Django makes it slow while there are free system resources
0.099668
0
0
389
19,714,463
2013-10-31T18:53:00.000
0
0
0
0
python,mysql,django,apache
19,719,423
2
true
1
0
How many Django workers are you running? Maybe the server can't load to 100% because you are running a small number of workers while there are many database connections, so most of the time the workers are waiting on the database and blocking new connections.
2
0
0
Today I was doing a stress test to a new Django (1.5.4) running on an Ubuntu Server 12.04 site before going into production and I've found some unexpected results: Doing 50 requests per second, htop showed a CPU usage of ~50% and RAM also ~50%. I'm not currently using Django cache and doing a normal browser request while doing the stress test it took ~30s to load each page (without any load it takes <= 2s). The server didn't crash during the test, but I dont understad why if there are almost 50% of the resources free it lasts so much... I expected to see a higher CPU and memory usage! So, my question is: Is there any Django default setting that limits the number of requests per second? Or does Apache or mod_wsgi have any kind of limit? Do I have to change some MySQL config? (Note: I'm a software engineer, not sysadmin).
Stress test on Django makes it slow while there are free system resources
1.2
0
0
389
19,717,288
2013-10-31T21:56:00.000
0
1
1
0
python,backup,archive,cd,dvd
19,717,555
2
false
0
0
Writing your own backup system is not fun. Have you considered looking at ready-to-use backup solutions? There are plenty, including many free ones. If you are still bound to write your own, answering your specific questions: With CD/DVD you typically first have to master the image (using a tool like mkisofs), then write the image to the medium. There are tools that wrap both operations for you (growisofs, I believe), but this is typically the process. To verify the backup quality, you'll have to read back all written files (by mounting a newly written CD) and compare their checksums against those of the original files. In order to do incremental backups, you'll have to keep archives of checksums for each file you save (with backup date, etc.).
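The verification step described above (read everything back and compare checksums) might look roughly like this sketch; the directory layout is hypothetical:

```python
import hashlib
import os

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_tree(source_dir, copy_dir):
    """Compare every file under source_dir against its copy under
    copy_dir (e.g. a mounted, newly written disc). Returns the relative
    paths whose checksums differ or whose copies are missing."""
    bad = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(copy_dir, rel)
            if not os.path.exists(dst) or file_sha256(src) != file_sha256(dst):
                bad.append(rel)
    return bad
```

An incremental-backup pass would store the per-file digests (with dates) instead of discarding them after the comparison.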
2
3
0
I have to archive a large amount of data off of CDs and DVDs, and I thought it was an interesting problem that people might have useful input on. Here's the setup: The script will be running on multiple boxes on multiple platforms, so I thought python would be the best language to use. If the logic creates a bottleneck, any other language works. We need to archive ~1000 CDs and ~500 DVDs, so speed is a critical issue The data is very valuable, so verification would be useful The discs are pretty old, so a lot of them will be hard or impossible to read Right now, I was planning on using shutil.copytree to dump the files into a directory, and compare file trees and sizes. Maybe throw in a quick hash, although that will probably slow things down too much. So my specific questions are: What is the fastest way to copy files off a slow medium like CD/DVDs? (or does the method even matter) Any suggestions of how to deal with potentially failing discs? How do you detect discs that have issues?
What is the best way to archive a data CD/DVD in python?
0
0
0
1,815
19,717,288
2013-10-31T21:56:00.000
1
1
1
0
python,backup,archive,cd,dvd
19,775,252
2
true
0
0
When you read file by file, you're seeking randomly around the disc, which is a lot slower than a bulk transfer of contiguous data. And, since the fastest CD drives are several dozen times slower than the slowest hard drives (and that's not even counting the speed hit for doing multiple reads on each bad sector for error correction), you want to get the data off the CD as soon as possible. Also, of course, having an archive as a .iso file or similar means that, if you improve your software later, you can re-scan the filesystem without needing to dig out the CD again (which may have further degraded in storage). Meanwhile, trying to recover damaged CDs, and damaged filesystems, is a lot more complicated than you'd expect. So, here's what I'd do: Block-copy the discs directly to .iso files (whether in Python, or with dd), and log all the ones that fail. Hash the .iso files, not the filesystems. If you really need to hash the filesystems, keep in mind that the common optimization of compressing the data before hashing (that is, tar czf - | shasum instead of just tar cf - | shasum) usually slows things down, even for easily compressible data, but you might as well test it both ways on a couple of discs. If you need your verification to be legally useful, you may have to use a timestamped signature provided by an online service instead, in which case compressing probably will be worthwhile. For each successful .iso file, mount it and use basic file copy operations (whether in Python, or with standard Unix tools), and again log all the ones that fail. Get a free or commercial CD recovery tool like IsoBuster (not an endorsement, just the first one that came up in a search, although I have used it successfully before) and use it to manually recover all of the damaged discs. You can do a lot of this work in parallel: when each block copy finishes, kick off the filesystem dump in the background while you're block-copying the next drive.
Finally, if you've got 1500 discs to recover, you might want to invest in a DVD jukebox or auto-loader. I'm guessing new ones are still pretty expensive, but there must be people out there selling older ones for a lot cheaper. (From a quick search online, the first thing that came up was $2500 new and $240 used…)
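The block-copy-with-failure-logging step from the answer above, sketched in Python; the device path would be something like /dev/cdrom in practice, and a regular file stands in for it here. This is a simplified sketch: on a read error it zero-fills the block and skips ahead rather than retrying, which real recovery tools handle far more carefully:

```python
def block_copy(device_path, iso_path, block_size=2048 * 16):
    """Copy a device (or file) to an image file in fixed-size blocks.
    Blocks that fail to read are zero-filled, skipped past, and their
    indices returned so failures can be logged."""
    bad_blocks = []
    with open(device_path, "rb") as src, open(iso_path, "wb") as dst:
        index = 0
        while True:
            try:
                block = src.read(block_size)
            except OSError:
                bad_blocks.append(index)
                src.seek((index + 1) * block_size)   # skip past the bad block
                block = b"\0" * block_size           # placeholder in the image
            if not block:
                break                                # end of device/file
            dst.write(block)
            index += 1
    return bad_blocks
```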
2
3
0
I have to archive a large amount of data off of CDs and DVDs, and I thought it was an interesting problem that people might have useful input on. Here's the setup: The script will be running on multiple boxes on multiple platforms, so I thought python would be the best language to use. If the logic creates a bottleneck, any other language works. We need to archive ~1000 CDs and ~500 DVDs, so speed is a critical issue The data is very valuable, so verification would be useful The discs are pretty old, so a lot of them will be hard or impossible to read Right now, I was planning on using shutil.copytree to dump the files into a directory, and compare file trees and sizes. Maybe throw in a quick hash, although that will probably slow things down too much. So my specific questions are: What is the fastest way to copy files off a slow medium like CD/DVDs? (or does the method even matter) Any suggestions of how to deal with potentially failing discs? How do you detect discs that have issues?
What is the best way to archive a data CD/DVD in python?
1.2
0
0
1,815
19,718,673
2013-10-31T23:57:00.000
2
0
0
0
python,nginx,flask,uwsgi
19,728,271
2
true
1
0
Sessions are signed against the app.secret_key so perhaps you're automatically generating a new secret key each time you launch your app?
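Flask signs the session cookie with app.secret_key (via itsdangerous), so a key that changes on every launch invalidates every existing session. This stdlib-only sketch of HMAC signing (the function names here are illustrative, not Flask's API) shows why:

```python
import hashlib
import hmac

def sign(payload: bytes, secret: bytes) -> bytes:
    """Sign a session payload the way Flask does in spirit:
    an HMAC keyed on app.secret_key."""
    return hmac.new(secret, payload, hashlib.sha256).digest()

def is_valid(payload: bytes, signature: bytes, secret: bytes) -> bool:
    return hmac.compare_digest(sign(payload, secret), signature)

# A cookie signed under the key from the first deploy...
sig = sign(b"user_id=42", b"key-from-first-deploy")
assert is_valid(b"user_id=42", sig, b"key-from-first-deploy")

# ...fails verification after a redeploy that regenerated the key,
# e.g. app.secret_key = os.urandom(24) run at module import time.
assert not is_valid(b"user_id=42", sig, b"key-from-second-deploy")
```

The fix is to load a fixed app.secret_key from a config file or environment variable rather than generating it at startup.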
1
1
0
I am using Flask and Nginx on my production server and Flask seems to log everyone out whenever I make a change to the code. I realize the reason for this, but I was wondering if there is any way to prevent this. I am using a proxy with Nginx if that makes any difference, I could easily switch back to uwsgi if that will fix the problem but I would prefer to keep my configuration the way it is. Thanks for your help. EDIT: If there is any confusion, I am trying to find a way to keep everyone logged in when I make changes to my code.
Flask logs everyone out whenever I make changes to the code
1.2
0
0
130
19,719,746
2013-11-01T02:07:00.000
3
0
0
0
python,numpy
19,719,936
3
true
0
0
Because of the strided data structure that defines a numpy array, what you want will not be possible without using a masked array. Your best option might be to use a masked array (or perhaps your own boolean array) to mask the deleted rows, and then do a single real delete operation of all the rows to be deleted before passing the array downstream.
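A minimal sketch of that idea (the array contents here are arbitrary): keep a boolean mask of live rows, mark ranges as deleted cheaply and independently, then pay for one real copy at the end:

```python
import numpy as np

# Record row deletions in a boolean mask and perform one real delete
# at the end, instead of copying the array on every numpy.delete().
arr = np.arange(60).reshape(20, 3)
keep = np.ones(len(arr), dtype=bool)

# "Delete" two ranges cheaply -- these are O(range) mask writes,
# not array copies, and each range can be marked independently
# (which also makes the marking step easy to parallelize).
keep[5:8] = False
keep[12:14] = False

# One real copy at the very end, before the downstream operations.
compacted = arr[keep]
assert compacted.shape == (15, 3)
```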
1
8
1
Given a large 2d numpy array, I would like to remove a range of rows, say rows 10000:10010 efficiently. I have to do this multiple times with different ranges, so I would like to also make it parallelizable. Using something like numpy.delete() is not efficient, since it needs to copy the array, taking too much time and memory. Ideally I would want to do something like create a view, but I am not sure how I could do this in this case. A masked array is also not an option since the downstream operations are not supported on masked arrays. Any ideas?
How can one efficiently remove a range of rows from a large numpy array?
1.2
0
0
2,948
19,724,167
2013-11-01T09:37:00.000
0
0
0
0
android,python,eclipse,workspace,sl4a
20,184,174
1
false
0
1
I had the same problem. I removed the project(s) from the Package Explorer and re-imported them: right-click your project and select Delete (make sure you don't delete it from disk), then go to File > Import and browse to it again. If your project is in your workspace, just cut and paste it to a different location and then import it from there. Hope this helps.
1
1
0
I already import the projects but when I cleaned them, there's an error. Please help me to fix it. I'm stuck. Errors occurred during the build. Errors running builder 'Android Resource Manager' on project 'PyDroid'. java.lang.NullPointerException Errors running builder 'Android Resource Manager' on project 'Python32APK'. java.lang.NullPointerException Errors running builder 'Android Resource Manager' on project 'PythonAPK'. java.lang.NullPointerException
Error in building workspace For Eclipse for Python Android Programming
0
0
0
418
19,724,784
2013-11-01T10:18:00.000
1
0
0
0
python,performance,winapi,httplib
21,978,052
1
true
0
0
Turns out it's a VM-related problem. I was running my Python code on a VM, but when I copy the same code to a physical machine running the same Windows edition, the problem disappears. As I'm totally unfamiliar with VM mechanisms, it would be great if someone could explain why such a problem exists in a VM.
1
1
0
I'm using python and httplib to implement a really simple file uploader for my file sharing server. Files are chunked and uploaded one chunk at a time if they are larger than 1MB. The network connection between my client and server is quite good (100mbps, <3ms latency). When chunk size is small (below 128kB or so), everything works fine (>200kB/s). But when I increase the chunk size to 256kB or above, it takes about 10 times more time to complete a chunk comparing to 128kB chunking (<20kB/s). To make the thing even stranger, this only happens in my win32 machine (win8 x86, running 32b python) but not in my amd64 one (win8 amd64, running 64b python). After some profilings, I've narrowed down my search to request() and getresponse() functions of httplib.HttpConnection, as these are the cause of blocking. My first guess is something about socket buffering. But changing SO_SNDBUF and TCP_NODELAY options does not help much. I've also checked my server side, but everything's normal. I really hope someone can help me out here. Changing the http library (to pycurl) is the last thing I want to do. Thanks in advance!
Slow http upload using httplib of python2.7 at win32
1.2
0
1
210
19,731,349
2013-11-01T16:48:00.000
8
0
0
0
python,django,queue,signals
19,731,987
1
true
1
0
Not sure synchronicity is your real concern here. Django's signals are always executed in-process: that is, they will be executed by the process that did the rest of that request, before returning a response. However, if your server itself is asynchronous, there's a possibility that one request will finish processing after a second one that was received later, and therefore the signals will be processed in the wrong order. Celery, of course, is definitely asynchronous, but might well be a better bet if reliable ordering is a priority for you.
1
4
0
I'm developing a django IPN plugin that saves IPN data to a model and then calls a post_save signal. I'm worried that under this use case (gunicorn, gevent, etc) that the signals may be called/completed asynchronously. The IPN often sends more than 1 request to the ipn url, and I need to be able to process those requests in order. Should I use a queue for this? Would simple python queue's work better, or should I use something like kombu + celery (with 1 worker)?
Are django signals always synchronous?
1.2
0
0
3,964
19,731,409
2013-11-01T16:52:00.000
0
0
0
0
web-services,apache,python-2.7,postgresql-9.2
21,433,269
1
false
1
0
Note that you cannot get a list of "all users logged into a web application". You can get a list of "all users who have recently authenticated", or possibly "all users with a valid authentication token", but due to the stateless nature of web connections, the concept of "currently logged in" is a tricky one (if I close my browser window, am I "logged out"? What if I open a connection in a new window -- have I logged back in, or was I logged in the whole time?). Having said that, all of the web applications you've named have their own mechanism for handling user authentication. Unless you have configured some sort of single sign-on (SSO) solution, you'll have to query each application individually to get information about currently authenticated users. If you are working with a number of web applications, setting up single sign-on can be incredibly convenient, but depending on the web applications in question can be tricky to configure. Be careful of thinking in terms of "client machines", because that is something else that's a tricky concept. A bunch of people using a residential or hotel internet connection may all appear to come from the same ip address, even though there are a number of clients. Several people may share a single computer, or someone may open multiple browser sessions to the application.
1
0
0
I am planning to install bugzilla/trac, mediawiki and some other web services on a server via apache. I would like to know if there is a way to track which all users(username of the specific web service from different client machines ie) are logged into the server. I am thinking of creating a database, and doing some logic with that. Before that if there is any simpler method then I would like to go for that. This is my first time with server stuff and apache. Thanks in advance
How to get the list of users currently logged in to a server for any web service?
0
0
0
1,048
19,732,097
2013-11-01T17:34:00.000
0
0
0
0
python,opencv,numpy
19,732,333
1
true
0
0
Use matrix.reshape((-1, 1)) to turn the n-element 1D matrix into an n-by-1 2D one before converting it.
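For illustration (using a plain NumPy array standing in for the cvMat data): ravel() gives the 1D result, and reshape((-1, 1)) restores a 2D shape that cv.fromarray() will accept:

```python
import numpy as np

# Flatten a 2D array to 1D, then restore a 2D shape before
# converting back, since cv.fromarray() rejects 1D input.
mat = np.arange(6).reshape(2, 3)
flat = mat.ravel()            # 1D: [0, 1, 2, 3, 4, 5]
column = flat.reshape(-1, 1)  # n-by-1, acceptable to cv.fromarray()

assert flat.shape == (6,)
assert column.shape == (6, 1)
```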
1
0
1
How can I convert 2D cvMat to 1D? I have tried converting 2D cvMat to Numpy array then used ravel() (I want that kind of resultant matrix).When I tried converting it back to cvMat using cv.fromarray() it gives an error that the matrix must be 2D or 3D.
Conversion of 2D cvMat to 1D
1.2
0
0
606
19,735,409
2013-11-01T21:08:00.000
-1
0
1
0
python,regex
19,735,461
2
false
0
0
I think this works: try [^0-9]+ (a negated character class, so digits are excluded from the match).
1
2
0
This is, I fear, frighteningly simple, but I can't make it work (and I can't find the answer through a search). I am scraping a website for all words in italics (the ones I want are in groups of two words--they are binomial scientific names), but I don't want any numbers returned. The regex I used : <i>(.+?)</i> worked great but it pulled the numbers. I thought using \D would work, but it didn't. What am I doing wrong?
Omitting Numbers with regex
-0.099668
0
0
81
19,736,164
2013-11-01T22:05:00.000
1
0
1
0
python,string,list
19,736,233
2
false
0
0
I'd be inclined to have just a single function taking an iterable of strings. If you need to call it with a single string s, you could always call it as f([s]) or as f((s,)).
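A sketch of that convention; total_length is a made-up library function, not anything from the question:

```python
def total_length(strings):
    """A library function that standardizes on one input type:
    an iterable of strings."""
    return sum(len(s) for s in strings)

# A caller with a list passes it directly; a caller with a single
# string s wraps it as [s] or (s,) -- no isinstance() needed anywhere.
assert total_length(["ab", "cde"]) == 5
assert total_length(["hello"]) == 5    # f([s])
assert total_length(("hi",)) == 2      # f((s,))
```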
2
0
0
I am writing some library functions in which I find myself wanting the functions to accept as input either a string, or a list-of-strings, and return the same type. I have thought of a couple of approaches to this, but both seem clumsy. For each conceptual function I could write: a separate function for string and for string list. But that requires the calling code to know the type of the variable, and select the appropriate function. just one function, which decides what to do based on detecting the type of input using 'isinstance(inputarg, str):'. That at least hides the multiple choices inside the library instead of in the caller. But I'm wondering if there's some more elegant approach or idiom that I've overlooked? Clarification: Of these two solutions I much prefer the second, making a single function which is polymorphic to the extent of accepting (and returning) string or list-of-string. However, even in that case, switching based on isinstance() seems like an inelegance that perhaps has a better alternative.
Python "string-or-string-list" type convention?
0.099668
0
0
96
19,736,164
2013-11-01T22:05:00.000
1
0
1
0
python,string,list
19,736,584
2
true
0
0
isinstance is OK. Better is to test for the presence of some method you require for strings (and not lists), and test again for some method you require for lists (and not strings), and raise an AssertionError if you get neither. You can test for methods using hasattr().
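A sketch of that duck-typing test; upcase and its behavior are hypothetical, but .lower() is a method strings have and lists don't, while .append() is one lists have and strings don't:

```python
def upcase(value):
    """Accept a string or a list of strings and return the same type."""
    if hasattr(value, "lower"):       # method strings have, lists don't
        return value.upper()
    if hasattr(value, "append"):      # method lists have, strings don't
        return [s.upper() for s in value]
    raise AssertionError("expected a string or a list of strings")

assert upcase("abc") == "ABC"
assert upcase(["a", "b"]) == ["A", "B"]
```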
2
0
0
I am writing some library functions in which I find myself wanting the functions to accept as input either a string, or a list-of-strings, and return the same type. I have thought of a couple of approaches to this, but both seem clumsy. For each conceptual function I could write: a separate function for string and for string list. But that requires the calling code to know the type of the variable, and select the appropriate function. just one function, which decides what to do based on detecting the type of input using 'isinstance(inputarg, str):'. That at least hides the multiple choices inside the library instead of in the caller. But I'm wondering if there's some more elegant approach or idiom that I've overlooked? Clarification: Of these two solutions I much prefer the second, making a single function which is polymorphic to the extent of accepting (and returning) string or list-of-string. However, even in that case, switching based on isinstance() seems like an inelegance that perhaps has a better alternative.
Python "string-or-string-list" type convention?
1.2
0
0
96
19,737,084
2013-11-01T23:30:00.000
0
1
0
0
python,python-2.7
19,737,140
2
false
0
0
Your approach looks like quite bad practice, but you can do it; for example: exec 'print 1 + 2'
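In Python 3 syntax (where exec is a function rather than the statement shown above), a sketch of reading code text from a file and executing it in a dedicated namespace; the temp file here is just a stand-in for whatever your authenticated user downloads from the web server:

```python
import os
import tempfile

# Write some code text to a file, as if it had been downloaded
# after authentication.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("result = 1 + 2")

# Read the text back and execute it in a controlled namespace.
with open(path) as f:
    source = f.read()

namespace = {}
exec(compile(source, path, "exec"), namespace)
assert namespace["result"] == 3
os.remove(path)
```

Keep in mind that exec-ing code fetched over the network is inherently dangerous; only do this over an authenticated, trusted channel.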
1
1
0
i have this necessity and i would like to know if it's possible to acomplish: I want to put part from my python code inside a WebServer. And, only after a authentication process, the user who is executing my script, will be able to read a file that is in my WebServer and use that content to execute it as part of script. Is there any function in Python that can read a text from file as a variable and execute that code as if it was written inside the script?
Execute a code read from file
0
0
0
6,263
19,737,943
2013-11-02T01:19:00.000
0
0
1
0
python,compiler-construction
19,738,125
2
false
0
0
First, I would recommend using your file manager to find the path to your file; for example, on OS X you simply right-click on your Python file and select Get Info. If you discover the path to your file is ~/foobar/foo/bar.py, you should open up Terminal and run python ~/foobar/foo/bar.py. Hope this explains it.
1
0
0
Sorry for the really basic question, but my web searches have yielded poor results. I'm trying to make a simple text game right now and I know that my code is definitely wrong, but I want to try and run it and see what kind of feedback it gives to me. I've searched and found that ctrl + B is supposed to compile and run the code, but I get an error saying "The system cannot find the file specified". I have other programs such as the Python cmnd line and Vim but I'm not sure if they don't seem to help either. I'm the absolute epitome of the word beginner, so any help is definitely appreciated. Thank you so much!
How do I run the code I've written in my text editor?(sublime2)
0
0
0
190
19,738,373
2013-11-02T02:32:00.000
2
1
0
0
python
19,738,662
1
false
0
0
Have you tried adding './src' to the PYTHONPATH ahead of the other paths that you want to "override"? That ought to work. (Haven't tried it with Python, but most "path" lists allow relative paths to be used.)
1
0
0
Is there a way to modify PYTHONPATH automatically whenever I cd into a directory. I usually have multiple projects on my workstation, and whenever I am in one of those directories, I want that projects' src/ to override other src directories in PYTHONPATH.
Modify PYTHONPATH automatically
0.379949
0
0
54
19,741,370
2013-11-02T11:23:00.000
1
0
1
0
python,user-data
19,741,407
2
false
0
0
I suggest you go with JSON. It's the standard way to share, store and manipulate any form of user data these days. It's cross-platform and avoids the extra overhead that comes with using XML. JSON is also programmer-friendly, as you don't have to fully parse the user data as you would have to if you used XML.
1
0
0
I am in the process of writing a Python application. Now I want to allow the user to save his or her project data to a file. Now I am stuck with a design decision, what is the best format to save data in Python? As far as I know, the built-in alternatives are JSON and XML, from which XML is known to never be the answer, not even the question. I would like to have an easy to use format with easy backwards compatibility if I'd add more data to the file later on. Any ideas?
File format for Python project data
0.099668
0
0
117
19,743,525
2013-11-02T15:32:00.000
0
0
0
0
python,numpy,matrix,linear-algebra,svd
38,743,037
2
false
0
0
Try using scipy.linalg.svd instead of numpy's function.
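Whichever SVD routine is used, the shape here (1,000,000 x 3) also allows a chunked alternative that never allocates the huge U matrix: accumulate the tiny 3x3 Gram matrix and take its eigendecomposition. This is an extra suggestion, not part of the answer above:

```python
import numpy as np

# For an (n, 3) point cloud, the singular values (and right singular
# vectors) can be recovered from the 3x3 matrix A.T @ A, which can be
# accumulated chunk by chunk without ever holding the n-by-n U matrix
# that numpy.linalg.svd allocates by default (full_matrices=True).
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))   # stand-in for the 1M points

gram = np.zeros((3, 3))
for chunk in np.array_split(pts, 10):  # process in memory-sized chunks
    gram += chunk.T @ chunk

eigvals, v = np.linalg.eigh(gram)      # ascending eigenvalues
sing_vals = np.sqrt(eigvals[::-1])     # singular values, descending

# Matches the direct computation that skips U entirely:
ref = np.linalg.svd(pts, compute_uv=False)
assert np.allclose(sing_vals, ref)
```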
1
5
1
I have 1 million 3d points I am passing to numpy.linalg.svd but it runs out of memory very quickly. Is there a way to break down this operation into smaller chunks? I don't know what it's doing but am I only supposed to pass arrays that represent a 3x3, 4x4 matrix? Because I have seen uses of it online where they were passing arrays with arbitrary number of elements.
Is there a way to prevent numpy.linalg.svd running out of memory?
0
0
0
4,417
19,745,169
2013-11-02T18:16:00.000
0
0
0
1
python,google-app-engine,google-docs-api
19,746,281
1
true
1
0
There is currently no API to create Google Docs directly, except for: 1) making a Google Apps Script service, which does have access to the Docs API, or 2) creating a .doc, then uploading it and converting it to a Google Doc. Option 1 is best, but an Apps Script service has some limitations, like quotas. If you are only creating dozens or hundreds of documents per day, you will be fine within the quotas. I've done it this way for something similar to your case.
1
0
0
Listmates: I am designing a google app engine (python) app to automate law office documents. I plan on using GAE, google docs, and google drive to create and store the finished documents. My plan is to have case information (client name, case number, etc.) entered and retrieved using GAE web forms and the google datastore. Then I will allow the user to create a motion or other document by inserting the form data into template. The completed document can be further customized by the user, email, printed, and/or stored in a google drive folder. I found information on how to create a web page that can be printed. However, I am looking for information for how to create an actual google doc and insert the form data into that document or template. Can someone point me to a GAE tutorial of any type that steps me through how to do this?
How can I automate google docs with Google App Engine?
1.2
0
0
559
19,746,350
2013-11-02T20:11:00.000
1
0
1
0
jupyter-notebook,ipython,markdown
71,530,683
10
false
0
0
This is a very simple and effective trick for Google Colab: use Markdown's empty-link syntax, [your_message](), and you'll get the blue (underlined) text.
1
155
0
I'm only looking to format a specific string within a cell. I change that cell's format to "Markdown" but I'm not sure how to change text color of a single word. I don't want to change the look of the whole notebook (via a CSS file).
How to change color in markdown cells ipython/jupyter notebook?
0.019997
0
0
244,278
19,747,371
2013-11-02T22:00:00.000
64
0
1
0
python
19,747,456
4
false
0
0
Different Means of Exiting

- os._exit(): exits the process immediately, without calling the cleanup handlers.
- exit(0): a clean exit without any errors / problems.
- exit(1): there was some issue / error / problem, and that is why the program is exiting.
- sys.exit(): raises SystemExit, letting the interpreter shut down cleanly (cleanup handlers still run, and the exception can even be caught).
- quit(): exits the interpreter; like exit(), it is meant mainly for the interactive shell.

Summary

Basically they all do the same thing; however, it also depends on what you are doing it for. I don't think you left anything out, and I would recommend getting used to quit() or exit(). You would use sys.exit() and os._exit() mainly if you are using big files or are using Python to control the terminal. Otherwise, mainly use exit() or quit().
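One behavioral difference worth demonstrating: sys.exit() (and exit()) raise SystemExit, so cleanup still happens and the exit can even be caught by a caller, whereas os._exit() terminates the process on the spot. A small sketch:

```python
import sys

# sys.exit() raises SystemExit rather than killing the interpreter
# outright, so finally blocks and cleanup handlers run, and a caller
# can catch the exit.  os._exit() skips all of that.
try:
    sys.exit(1)               # the "problem" exit status
except SystemExit as e:
    code = e.code
assert code == 1

try:
    sys.exit()                # clean exit: status defaults to None
except SystemExit as e:
    assert e.code is None
```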
1
611
0
It seems that python supports many different commands to stop script execution.The choices I've found are: quit(), exit(), sys.exit(), os._exit() Have I missed any? What's the difference between them? When would you use each?
Python exit commands - why so many and when should each be used?
1
0
0
793,349
19,747,596
2013-11-02T22:26:00.000
1
0
0
1
google-app-engine,sdk,python-2.5
19,748,534
2
true
1
0
In the 1.8.6 SDK, there's an old_dev_appserver.py that works with Python 2.5. That'll help you along as you migrate.
2
2
0
I have an existing app that uses the deprecated Python 2.5 and the deprecated master/slave datastore. According to the docs, I must migrate the datastore to HRD before I can upgrade to Python 2.7. Before I can migrate my M/S datastore to HRD, I need to do some work on the app and test it using the dev server. However, I upgraded to the most recent version of the SDK (1.8.6), and it does not support Python 2.5. Somebody else encountered this problem and learned that the latest SDK that supports Python 2.5 by default is Python SDK 1.7.5. From where can that be downloaded? Or, is there a way I can make the SDK 1.8.6 work with Python 2.5?
GAE SDK for Python 2.5
1.2
0
0
272
19,747,596
2013-11-02T22:26:00.000
0
0
0
1
google-app-engine,sdk,python-2.5
20,110,862
2
false
1
0
Dave W. Smith gave me the answer, but I didn't know how to implement it until I made a discovery that maybe most people already know. In case it might be helpful to somebody, I will share it here: I do all my GAE/Python/Flex development work in Eclipse, except that I used the Launcher for local testing and deploying. (I am command-line averse.) I discovered that with the PyDev Eclipse plugin it is easy to set up a "run configuration" (under the PyDev Run menu) whereby you can set up command-line parameters, etc. and run any Python program from within Eclipse. I now use that facility for running dev_appserver.py (and, when needed for my Python 2.5 app, old_dev_appserver.py). I no longer have a need for the Launcher. I also set up a PyDev run configuration to deploy my app and perform various appcfg.py functions (vacuum indexes, etc.).
2
2
0
I have an existing app that uses the deprecated Python 2.5 and the deprecated master/slave datastore. According to the docs, I must migrate the datastore to HRD before I can upgrade to Python 2.7. Before I can migrate my M/S datastore to HRD, I need to do some work on the app and test it using the dev server. However, I upgraded to the most recent version of the SDK (1.8.6), and it does not support Python 2.5. Somebody else encountered this problem and learned that the latest SDK that supports Python 2.5 by default is Python SDK 1.7.5. From where can that be downloaded? Or, is there a way I can make the SDK 1.8.6 work with Python 2.5?
GAE SDK for Python 2.5
0
0
0
272
19,747,751
2013-11-02T22:45:00.000
1
0
1
0
python,list
19,747,785
4
false
0
0
I'd write it as a generator. Repeat: read as many A's as possible, read as many B's as possible, if you've read exactly 1 A and 1 B, yield them; otherwise ignore and proceed. Also this needs an additional special case in case you want to allow the input to end with an A.
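A sketch of that generator; valid_pairs and the startswith("A")/startswith("B") classification are assumptions based on the question's "A - something" / "B - something" format:

```python
def valid_pairs(items):
    """Yield (a, b) pairs that keep the strict A-then-B alternation.

    As described above: read a run of A's, then a run of B's, and only
    yield when both runs have length exactly one; otherwise the whole
    run (including the lone partner) is dropped.  A trailing lone A is
    dropped too -- the extra special case mentioned in the answer.
    """
    i, n = 0, len(items)
    while i < n:
        a_run, b_run = [], []
        while i < n and items[i].startswith("A"):
            a_run.append(items[i])
            i += 1
        while i < n and items[i].startswith("B"):
            b_run.append(items[i])
            i += 1
        if len(a_run) == 1 and len(b_run) == 1:
            yield a_run[0], b_run[0]

items = ["A - 1", "B - 1",            # valid pair
         "A - 2", "A - 3", "B - 2",   # AAB: dropped (rule 2)
         "A - 4", "B - 3", "B - 4",   # ABB: dropped (rule 3)
         "A - 5", "B - 5"]            # valid pair
assert list(valid_pairs(items)) == [("A - 1", "B - 1"),
                                    ("A - 5", "B - 5")]
```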
1
4
0
Given a list of strings, where each string is in the format "A - something" or "B - somethingelse", and list items mostly alternate between pieces of "A" data and "B" data, how can irregularities be removed? Irregularities being any sequence that breaks the A B pattern. If there are multiple A's, the next B should also be removed. If there are multiple B's, the preceding A should also be removed. After removal of these invalid sequnces, list order should be kept. Example: A B A B A A B A B A B A B A B B A B A B A A B B A B A B In this case, AAB (see rule 2), ABB (see rule 3) and AABB should be removed.
How to maintain a strict alternating pattern of item "types" in a list?
0.049958
0
0
188
19,750,453
2013-11-03T06:35:00.000
1
0
1
0
python,django,heroku,tornado
19,750,550
3
false
1
0
Each client will have their own running version. Definitely. If you want to have some sort of global variables, you should use some inter-process communication tools (message passing, synchronization, shared memory, or rpc). Redis, for example.
2
0
0
I've been working with a Tornado project on my local machine by running server.py. When deployed to the server (e.g. Heroku), will a single instance of server.py be shared by all clients or will each client have their own running version? I am wondering this because I am thinking of making use of global variables in server.py and wondering if they will be shared across all clients or just a single client.
Is a single Python executable file shared by all clients when run on a server?
0.066568
0
0
269
19,750,453
2013-11-03T06:35:00.000
0
0
1
0
python,django,heroku,tornado
19,757,868
3
false
1
0
With Tornado you'll have at least one process per machine/VM (Heroku calls these "dynos"); in multicore environments you'll want to run multiple processes per machine (one per core). Each process handles many users, so in the simple case where there is only one process you can use global variables to share state between users, although as you grow to multiple dynos and processes you'll need some sort of inter-process communication.
2
0
0
I've been working with a Tornado project on my local machine by running server.py. When deployed to the server (e.g. Heroku), will a single instance of server.py be shared by all clients or will each client have their own running version? I am wondering this because I am thinking of making use of global variables in server.py and wondering if they will be shared across all clients or just a single client.
Is a single Python executable file shared by all clients when run on a server?
0
0
0
269
19,751,900
2013-11-03T10:22:00.000
3
0
1
1
python,python-3.x
19,751,942
4
false
0
0
1. Exit the Python interpreter/console.
2. Edit your program in Notepad++, creating first_program.py in the same directory where your python.exe is.
3. Start cmd.exe from within exactly the same directory.
4. Type python first_program.py.
You are done.
3
0
0
I am very very new in Python and I have a doubt. If I write a program in a text editor (such as Nodepad++), then can I execute it from the Python shell (the one that begin with >>)? What command have I to launch to execute my Python program? Tnx Andrea
How to execute a Python program from the Python shell?
0.148885
0
0
379
19,751,900
2013-11-03T10:22:00.000
0
0
1
1
python,python-3.x
19,751,976
4
false
0
0
From within the Python IDLE shell: File -> Open... -> select your Python program. When your program has opened, select Run -> Run Module or press F5.
3
0
0
I am very very new in Python and I have a doubt. If I write a program in a text editor (such as Nodepad++), then can I execute it from the Python shell (the one that begin with >>)? What command have I to launch to execute my Python program? Tnx Andrea
How to execute a Python program from the Python shell?
0
0
0
379
19,751,900
2013-11-03T10:22:00.000
0
0
1
1
python,python-3.x
19,751,998
4
false
0
0
In my view: say you wrote a program, test.py, containing print 'test file'. Then you turn to the Windows cmd, execute python, and you get the >>> prompt. From there you can simply run: os.system('python test.py')
3
0
0
I am very very new in Python and I have a doubt. If I write a program in a text editor (such as Nodepad++), then can I execute it from the Python shell (the one that begin with >>)? What command have I to launch to execute my Python program? Tnx Andrea
How to execute a Python program from the Python shell?
0
0
0
379
19,753,771
2013-11-03T14:00:00.000
0
0
0
1
python,canopy
19,759,856
1
true
0
0
Since the supplied information is insufficient, the answer has to stay general. This is about user authentication: I don't know how you open the app, but it tries to open a file or a process which could not be opened by your user. If you open your app with root privileges, there won't be any problem.
1
0
0
I'm learning python (from a very low baseline) and recently re-installed Canopy (on a MacBook) It was working fine before. Now whenever I try an launch the editor I get a Access Denied error. Can anyone help? Please bear in mind my inexperience Thanks File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/ui/tasks/tasks_application.py", line 205, in create_window window.add_task(task) File "/Applications/Canopy.app/appdata/canopy-1.1.0.1371.macosx-x86_64/Canopy.app/Contents/lib/python2.7/site-packages/pyface/tasks/task_window.py", line 187, in add_task state.dock_panes.append(dock_pane_factory(task=task)) File "build/bdist.macosx-10.5-i386/egg/canopy/plugin/editor_task.py", line 143, in _create_python_pane File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/application.py", line 371, in get_service protocol, query, minimize, maximize File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 78, in get_service services = self.get_services(protocol, query, minimize, maximize) File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 115, in get_services actual_protocol, name, obj, properties, service_id File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/envisage/service_registry.py", line 259, in _resolve_factory obj = obj(**properties) File "build/bdist.macosx-10.5-i386/egg/canopy/python_frontend/plugin.py", line 109, in _frontend_manager_service_factory File "build/bdist.macosx-10.5-i386/egg/canopy/app/running_process_manager.py", line 82, in register_proc File "build/bdist.macosx-10.5-i386/egg/canopy/app/util.py", line 53, in get_exe_or_cmdline File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/_common.py", line 80, in get ret = 
self.func(instance) File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 331, in exe return guess_it(fallback=err) File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 314, in guess_it cmdline = self.cmdline File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/init.py", line 346, in cmdline return self._platform_impl.get_process_cmdline() File "/Users/simonthompson/Library/Enthought/Canopy_64bit/System/lib/python2.7/site-packages/psutil/_psosx.py", line 153, in wrapper raise AccessDenied(self.pid, self._process_name) AccessDenied: (pid=343) DEBUG|2013-11-03 21:19:25|QtWarningMsg: QImage::scaled: Image is a null image
Canopy - get Access Denied error
1.2
0
0
703
19,756,725
2013-11-03T18:57:00.000
0
0
0
0
python,django,paypal,django-paypal
21,199,468
1
false
1
0
With respect, the question is slightly naive, in that there is typically a separation between the shopping cart, and the payment processing. A payment returns a binary result - it either worked or it didn't. It is up to your application to recall what was being paid for. The Paypal API returns the success or failure of an identified payment; plus will happily consume a list of items you give it, so that the user is presented with a breakdown of the total amount. But note that you are telling paypal what is being paid for. It is consuming that data, not providing it. So the answer depends entirely upon your chosen solution (django-paypal or django-merchant or whatever). Read their documentation. Presumably there is some way to inspect the contents of a recently approved transaction. Cycle through the cart and enable a download of each. Django-paypal, for example, has no interest of what is in the cart. It just fires a signal when a payment is successful, and passes back the transaction identifier. Your application must recall what the transaction was for. Often it's not as easy as you'd hope.
1
2
0
I have an existing django website, and I would like to sell some pdf files through it using paypal. The buyer needs to be able to select 1 or more books, get transferred to the paypal site to enter in payment info. Then after a successful payment, the buyer gets redirected back to my website and the books start downloading automatically. I have looked at the django-paypal and django-merchent apps, but I don't know how to handle the multiple downloads. As far as I know, using these apps, after a successful purchase, the app sends a success signal, but doesn't tell me which books were ordered. What is the best way to implement this either with the django-paypal app or using some other method? Again, I'm looking for the easiest/quickest solution. Thanks,
Easiest way to implement paypal shopping for e-books
0
0
0
165
19,757,331
2013-11-03T19:53:00.000
2
0
0
0
python
19,757,497
1
false
0
0
The answer is: it depends ;). To be more specific: it depends on your operating system and the rights management you have there. E.g. on UNIX, you could create a dedicated user account which is used to run your program, and allow rw-access to the files/folders only to this user. On Windows, it will be more difficult. In general, you should try to get a feeling for how possible attacks might be performed (USB stick, login via internet / intranet, bribing someone, ...), then consider the likelihood and put good countermeasures in place. Personally, I prefer: backups located at more than one place; good access logging; threatening with legal action. And, most important: strong passwords!!!
1
0
0
We're doing a school project creating student databases for teachers, and we'd like to make it more secure by making the folder containing the student files accessible only by the program. Is this possible, or is it unnecessary?
Can I make a folder accessible only to my program?
0.379949
0
0
203
19,757,384
2013-11-03T19:58:00.000
0
0
1
0
python,python-3.x,intellij-idea
59,255,903
4
false
0
0
In Intellij (community edition) in terminal (I use Linux, this might differ in your case) I type python3 and it works the same way as in pycharm. With "_" as return value etc. I find this rather useful.
1
8
0
I'm new with python 3.3. I'm using intellij IDEA 12.1.6. How can I open the interpreter window, the one with the '>>>' prompt? Thanks
Open python interpreter on intellij
0
0
0
7,115
19,757,936
2013-11-03T20:49:00.000
1
0
0
1
python,ubuntu,gstreamer
19,759,263
1
true
0
0
h264parse is part of the "gst-plugins-bad" , you will want to install them through your package manager, if your script imports Gst from gi.repository you will want the 1.0 plugins, the 0.10 otherwise. Have a nice day :)
1
0
0
I am running a Python script in Ubuntu. The script uses gstreamer. I get the following error message. error: no element "h264parse" Let me know if any other information would be helpful.
python gstreamer script error message no element "h264parse"
1.2
0
0
2,506
19,758,414
2013-11-03T21:33:00.000
1
0
1
1
python,multithreading,macos,ubuntu
19,758,695
1
true
0
0
On the hardware level only one operation on a device can be done at once. If the drive is busy, the requested operation is queued. There are a few different queues where it may be waiting, and they vary between operating systems, hardware, or even drivers. There are different queue management methods as well; the most popular on the software side is FIFO (first in, first out), but on the drive side it is probably NCQ (a special queue management scheme that selects the closest data to be written/read first). All of those queues have limited size. If the hardware-level queues are full (for example the disk cache has been filled), the system halts all operations of applications requesting disk access. So if your application is doing some disk operations it may simply be waiting for the disk drive. As SSD technology makes the whole processing much quicker - access latency is about 10-20 times lower than an HDD's - it is highly probable that your application doesn't use 100% of the CPU because of the HDD.
1
0
0
Can a HDD vs SSD setup account for lower processor utilization when there are many read and write operations? So I've written a program that spawns multiple processes. On OSX it runs great and utilizes 100% of the cpu. Overloading it with hundreds of threads works out fine. On Ubuntu, it freezes when pushing a large number of threads. When I limit the number of total threads to the max for the processors, the Ubuntu machine doesn't utilize all the computing power--only about 50%. My threads do run at nearly 100% for the first minute or so, then suddenly it becomes random with a wave-like utilization graph which doesn't always begin at the same time. Specs: OSX, SSD, Intel i7 4 cores x 2 threads each = 8 threads Ubuntu, HDD, 3930K Intel i7 6 cores x 2 threads each = 12 threads
Multiprocessing on Ubuntu vs OSX and SSD vs HDD
1.2
0
0
229
19,759,096
2013-11-03T22:42:00.000
1
0
0
0
python,wxpython,wxwidgets
19,760,286
2
true
0
1
Validate() is called only when the dialog is about to close by default, but you may also call it yourself when the control loses focus. Finally, if your control doesn't accept some characters at all, you can also intercept wxEVT_CHAR events to prevent them from being entered. I do believe wxPython demo shows how to do it.
1
0
0
I'm trying to create Validators for inputs on forms. I have already learned that in wxPython it is necessary to inherit from wx.Validator due to the lack of support for the standard wxTextValidator and others. My questions are: how to efficiently check that a string complies with simple rules (no regexp please) - acceptableChars = ['a', 'b', ...] all(char in acceptableChars for char in string) - is something like this efficient? And how to cleanly specify all alphanumeric characters or digits? Or maybe there is a ready-made class or function? Will overriding the Validate method only enforce the constraints while inputting data - I mean will it prevent the user from entering digits into an alphanumerical TextCtrl, or will it check only when closing the modal dialog?
Validating data in wxPython
1.2
0
0
215
19,759,098
2013-11-03T22:43:00.000
1
0
1
0
python,list,union,intersect
19,759,463
1
false
0
0
If you can assume that the two input lists are sorted, then this is just the merge step of mergesort. For that, you need two indices, i and j, and you move one of them forward at a time, not both. Start them out at 0. To handle the case when one of them hits the end of its list, you need two while loops - one that takes values from the first list until it is exhausted, and another that takes values from the second list until it is exhausted.
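A sketch of that merge for the union case, assuming both inputs are sorted and using only the allowed tools (append, len, while, indexing); the function name is mine:

```python
def union_sorted(a, b):
    # Merge step of mergesort: advance whichever index points at the
    # smaller value; when the values are equal, take one copy and
    # advance both so the result is a set-like union.
    result = []
    i = 0
    j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i])
            i = i + 1
        elif b[j] < a[i]:
            result.append(b[j])
            j = j + 1
        else:
            result.append(a[i])
            i = i + 1
            j = j + 1
    # One list is exhausted; drain the other with its own while loop.
    while i < len(a):
        result.append(a[i])
        i = i + 1
    while j < len(b):
        result.append(b[j])
        j = j + 1
    return result

print(union_sorted([1, 2, 5, 6], [3, 5, 6, 8, 15]))  # [1, 2, 3, 5, 6, 8, 15]
```

The intersection version is the same two-index walk, except you only append when the values are equal and you drop the two draining loops.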
1
0
0
I have to recreate the 2 Python functions "a.union(b)" and "a.intersect(b)" with only the tools append; pop; len; while; for i in range; if-else; l[i] (for a list l); and booleans. At the end I should have a function with 2 lists as arguments that returns the final ordered list. For example, with a = [1, 2, 5, 6] and b = [3, 5, 6, 8, 15], if I enter f(a,b) I get in return [1, 2, 3, 5, 6, 8, 15] (union), and if I enter g(a,b) I get something like [5, 6] (intersection). I tried to do it by comparing the list terms successively, but in that case if one list is shorter it will be emptied before the other and I will be comparing a number with nothing. I tried to use while but I can only check if the list is emptied for one and not both. Please help me :s Ps: Not being English, I hope you'll pass on the language mistakes I made.
Python: recreate union and intersect of 2 lists with basic tools
0.197375
0
0
95
19,759,594
2013-11-03T23:40:00.000
0
0
0
0
python,sql,qt,sqlite
19,764,106
1
false
0
1
SQLite has no mechanism by which another user can be notified. You have to implement some communication mechanism outside of SQLite.
1
0
0
I have a PyQt application which uses SQLite files to store data and would like to allow multiple users to read and write to the same database. It uses QSqlDatabase and QSqlTableModels with item views for reading and editing. As is, multiple users can launch the application and read/write to different tables. The issue is this: say user1's application reads table A, then user2 writes to index 0,0 in table A. Since user1's application has already read and cached that cell, it doesn't see user2's change right away. The Qt item views will update when the dataChanged signal emits, but in this case the data is being changed by another application instance. Is there some way to trigger on file changes made by another application instance? What's the best way to handle this? I'm assuming this is really best solved by using an SQL server host connection rather than SQLite for the database, but in the realm of SQLite what would be my closest workaround option? Thanks
Signaling Cell Changes across multiple QSqlDatabase to the same SQliteFile
0
1
0
74
19,759,934
2013-11-04T00:23:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore
19,760,171
1
true
1
0
Why is including the paths that onerous? Normally the remote_api shell is used interactively, but it is a good tool that you can use as the basis of achieving what you want. The simplest way will be to copy and modify the remote_api shell so that rather than presenting an interactive shell, it runs a named script. That way it will deal with all the path setup. In the past I have integrated the remote_api inside a Zope server, so that Plone could publish stuff to appengine. All sorts of things are possible with remote_api, however you need to deal with imports like anything else in python, except that the appengine libraries are not installed in site-packages.
1
0
0
I'm trying to do some local processing of data entries from the GAE datastore and I am trying to do this by using the remote_api. I just want to write some quick processing scripts that pull some data, but I am getting import errors saying that Python cannot import from google. Am I supposed to run the script from within the development environment somehow. Or perhaps I need to include all of the google stuff in my Python path? That seems excessive though.
Using the GAE remote_api to Create Local Scripts
1.2
0
0
71
19,761,351
2013-11-04T03:47:00.000
1
1
1
0
python,matlab,python-2.7,path,directory
19,761,894
2
false
0
0
Your $PATH should control where python comes from, but I don't believe it will control where your pgcode.py comes from - at least, not in the way you're using it now. You might want to either use a #!/usr/bin/env python and make your script executable, or be very mindful of what directory you're in when you try to python pgcode.py (you can prepend "pwd;" to your python command to see), or specify a full path to pgcode.py. HTH
1
0
0
I am looking to run a file I created in Python from a MATLAB script. I have checked that my Python file works if I run it from the Python interface. However I have not been able to get my Python to run from MATLAB. The following is the code situation I am in. In MATLAB, I have the following code (my file name is pgcode.py): ! python pgcode.py and interchangeably I have used this code as well: system('python pgcode.py') The error that results in MATLAB is: "python: can't open file 'pgcode.py': [Errno 2] No such file or directory" I have set my PATH directory and I really think this is an issue with setting the path so that I can find the file I have created, but I haven't been able to figure out how to do this. I am using Windows and Python 2.7.5. Any help is much appreciated. Thanks in advance!
Run Python file from matlab .m file
0.099668
0
0
746
19,761,546
2013-11-04T04:14:00.000
1
0
0
1
python,networking,monitoring
19,761,799
1
false
0
0
In python, you'd probably have to wrap things - it could be a bit of a challenge. In Linux, the netstat program will probably do something that's at least related to what you want.
1
0
0
I have a Python program that queries multiple remote services (MongoDB, MySQL, etc). Is there a way to track how much data my program is transferring over the network either within the Python program or through some Linux utility?
How can I track how much data my Python program is sending / receiving over the network?
0.197375
0
0
73
19,761,898
2013-11-04T05:01:00.000
0
0
0
0
python,django,windows
19,762,217
2
false
1
0
It is hard to tell whether the issue is related to Windows specifically, rather than compatibility issues with images/CSS/Javascript/plugins such as Flash. Are you running the latest versions of those browsers (or at least the same versions as on your desktop)? Do you have different security software/firewalls? Do other sites load inconsistently? Seems unlikely to be a Django issue (although you can try loading sites like djangoproject.com).
1
0
0
I have several django projects and they work well on my desktop. But when I run them on my laptop, they run ok for some time. Then on a random occasion, opening a page won't work. The browser keeps trying to load the page (title tab keeps spinning, URL changes to the page it's trying to open, and the page turns blank), while the development server (django on windows shell) says it has successfully served the page (200 status). This behavior is consistent among Firefox, IE and Chrome. I tried changing ports, using the machine IP instead of localhost, loading static files on an external server, but nothing works. I tried opening the site (using the laptop computer name) from desktop browsers and it behaves the same. Another interesting thing is, even if I shut down and restart the django server, I won't be able to open the page that has failed previously unless I close the loading page. My laptop is running a basic Windows 8, while the desktop is Windows 8 Pro. I think the windows version has something to do with it. Does anyone know how to solve this? I hope I made myself clear. Thanks.
Why does django stops loading a page after opening several pages?
0
0
0
908
19,762,614
2013-11-04T06:19:00.000
0
0
0
0
eclipse,python-2.7,django-1.5
20,859,383
1
false
1
0
I had the same problem and managed to fix it. Apparently, what caused the problem is that I had included the site-packages directory from a virtual environment in the PYTHONPATH of the interpreters. After removing it, the greyed-out buttons went back to normal.
1
0
0
I have installed Eclipse (Indigo), Python (2.7) and Django (1.5.5) properly on Ubuntu 12.04 64-bit. But when I start a new PyDev project and give a new project name, the 'Next' button and 'Finish' button are both grey, and no errors are shown in the create dialog box. How do I create a Django project?
Next button is grey
0
0
0
735
19,763,836
2013-11-04T08:16:00.000
0
0
0
0
python,openerp,invoice
19,797,646
1
false
1
0
A couple of things: You can't just add the tax to the invoice total. Invoice totals are stored functional fields that are calculated as the sum of the invoice lines so you would need to adjust the tax on a specific invoice line, or add a line with the taxes. Taxes are managed via fiscal positions. Create your tax object, then create a fiscal position, then add this as the default for the customer, or just apply it to an invoice on a case by case basis.
1
0
0
I am working with OpenERP 7. As a business requirement, I have to add a tax to the invoice; this tax will be calculated like this: tax1 = amount_untaxed * 0.1% + amount_tax * 0.1% A little bit of help will be appreciated.
How to add a calculated tax in OpenERP 7
0
0
0
963
19,767,569
2013-11-04T12:15:00.000
2
0
0
1
python,django,macos,postgresql
19,772,866
5
false
1
0
It seems that it's libssl.1.0.0.dylib that is missing. Mavericks comes with libssl 0.9.8. You need to install libssl via Homebrew. If @loader_path points to /usr/lib/, you also need to symlink libssl from /usr/local/Cellar/openssl/lib/ into /usr/lib.
1
2
0
I'm trying to get Django running on OS X Mavericks and I've encountered a bunch of errors along the way, the latest way being that when runpython manage.py runserver to see if everything works, I get this error, which I believe means that it misses libssl: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so Reason: image not found I have already upgraded Python to 2.7.6 with the patch that handles some of the quirks of Mavericks. Any ideas? Full traceback: Unhandled exception in thread started by > Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 93, in inner_run self.validate(display_num_errors=True) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 280, in validate num_errors = get_validation_errors(s, app) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/validation.py", line 28, in get_validation_errors from django.db import models, connection File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 40, in backend = load_backend(connection.settings_dict['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 34, in getattr return getattr(connections[DEFAULT_DB_ALIAS], item) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 93, in getitem backend = load_backend(db['ENGINE']) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 27, in 
load_backend return import_module('.base', backend_name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module import(name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 14, in from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/creation.py", line 1, in import psycopg2.extensions File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/init.py", line 50, in from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so Reason: image not found
Django can't find libssl on OS X Mavericks
0.07983
1
0
9,380
19,769,186
2013-11-04T13:53:00.000
0
0
0
0
python,django,django-rest-framework
19,815,648
4
false
1
0
It seems like you should be overriding restore_object() in your serializer, not save(). This will allow you to create your object correctly. However, it looks like you are trying to abuse the framework -- you are trying to make a single create() create two objects (the user and the profile). I am no DRF expert, but I suspect this may cause some problems. You would probably do better by using a custom user model (which would also include the profile in the same object).
1
7
0
In django, creating a User has a different and unique flow from the usual Model instance creation. You need to call create_user() which is a method of BaseUserManager. Since django REST framework's flow is to do restore_object() and then save_object(), it's not possible to simply create Users using a ModelSerializer in a generic create API endpoint, without hacking you way through. What would be a clean way to solve this? or at least get it working using django's built-in piping? Edit: Important to note that what's specifically not working is that once you try to authenticate the created user instance using django.contrib.auth.authenticate it fails if the instance was simply created using User.objects.create() and not .create_user().
How to create a django User using DRF's ModelSerializer
0
0
0
1,321
19,771,234
2013-11-04T15:35:00.000
1
0
0
0
jquery,python,selenium,selenium-webdriver
19,771,618
4
false
1
0
Selenium only works with your web browser. If you are opening something other than a web browser, such as a file browser, you cannot interact with it. Drag and drops work with items within a web browser, but not from a program such as Windows Explorer or a Linux file explorer to a web browser. Create an element in your browser with jQuery and drag and drop that.
1
0
0
How can I simulate the action of dragging and dropping a file from the filesystem to an element that has an ondrag event trigger? As for the normal "file" input, I was able to set the value of the input with jQuery. Can't I create a javascript File object or use any similar hack? Thanks
Selenium python: simulate file drag
0.049958
0
1
1,516
19,776,363
2013-11-04T20:19:00.000
12
0
0
0
python,django,webserver
19,776,830
1
true
1
0
It's exactly what it says on the tin - a simple, lightweight web server implemented in Python that ships with Django and is intended for development purposes only. It's not a free-standing web server in its own right and is intended purely for developing applications with Django - you should never use it in production because it simply doesn't offer all the functionality you need in a production web server. A web server can be implemented in virtually any programming language, and so it makes sense to ship one implemented in Python with Django so that you can get working with it immediately without having to install something like Apache as well. Most web servers that might be used in production, such as Apache and Nginx, are written in C, so it wouldn't really be practical to ship them with Django. Also, shipping your own development server cuts down on complexity. Apache and Nginx are both complex pieces of software that require a fair amount of configuration, and while there are ways to automate that during development, it's not something you really want to have to deal with when you'd rather be writing code. All you need to get started is something that will serve static and dynamic content - you don't need much of the other functionality a production server provides. It's notable that even PHP now ships with a development server. When you go live with a Django project, you should of course use a proper web server. It's generally recommended that with Django, in production you should use two web servers, one to serve static content, the other to serve dynamic content, because involving Django in serving static content will slow it down. This sounds odd at first, but it actually makes a lot of sense, because what you do is set one web server to serve all the static content, then have it reverse proxy to the other server, which is running on a non-standard port, and serves all the dynamic content.
The setup I have for my current project is Nginx for the static content, with Gunicorn for the dynamic content.
1
4
0
What type of server Django uses when "runserver" command is ran? Documentation says more or less that it's "lightweight development Web server". Is it for example Apache? Thanks in advance.
What type of server Django runserver uses?
1.2
0
0
2,017
19,776,571
2013-11-04T20:31:00.000
30
0
0
1
macos,error-handling,ipython,dlopen
22,560,206
1
true
0
0
Shared object location under OS X is sometimes tricky. When you directly call dlopen() you have the freedom of specifying an absolute path to the library, which works fine. However, if you load a library which in turn needs to load another (as appears to be your situation), you've lost control of specifying where the library lives with its direct path. There are environment variables that you could set before running your main program that tell the dynamic loader where to search for things. In general these are a bad idea (but you can read about them via the man dyld command on an OS X system). When an OS X dynamic library is created, it's given an install name; this name is embedded within the binary and can be viewed with the otool command. otool -L mach-o_binary will list the dynamic library references for the mach-o binary you provide the file name to; this can be a primary executable or a dylib, for example. When a dynamic library is statically linked into another executable (either a primary executable or another dylib), the expected location of where that dylib being linked will be found is based on the location written into it (either at the time it was built, or changes that have been applied afterwards). In your case, it seems that phys_services.so was statically linked against libphys-services.dylib. So to start, run otool -L phys_services.so to find the exact expectation of where the dylib will be. The install_name_tool command can be used to change the expected location of a library. It can be run against the dylib before it gets statically linked against (in which case you have nothing left to do), or it can be run against the executable that loads it in order to rewrite those expectations. 
The command pattern for this is install_name_tool -change <old_path> <new_path> So for example, if otool -L phys_services.so shows you /usr/lib/libphys-services.dylib and you want to move the expectation as you posed in your question, you would do that with install_name_tool -change /usr/lib/libphys-services.dylib @rpath/lib/libphys-services.dylib phys_services.so. The dyld man page (man dyld) will tell you how @rpath is used, as well as other macros @loader_path and @executable_path.
1
12
0
I am a newbie in this field. My laptop is Macbook air, Software: OS X 10.8.5 (12F45). I am running a code which gives me the following error: dlopen(/Users/ramesh/offline/build_icerec/lib/icecube/phys_services.so, 2): Library not loaded: /Users/ramesh/offline/build_icerec/lib/libphys-services.dylib Referenced from: /Users/ramesh/offline/build_icerec/lib/icecube/phys_services.so Reason: image not found I did google search and found variety of answers. I think the one that works is to use " -install_name @rpath/lib ". My question is, how to use -install_name @rpath/lib in my case?
Error: dlopen() Library not loaded Reason: image not found
1.2
0
0
27,135
19,779,371
2013-11-04T23:36:00.000
1
0
0
1
python,command-line,compiler-construction,command-line-interface,python-2.6
19,783,232
1
true
0
0
Closing the loop: bdist in the path is a sign that the package was installed with setup.py install and is running from the standard Python system path, not from wherever you have it checked out. Easy fix is to setup.py install it again. Harder fix is to uninstall it and fiddle with Apache's working directory, but that's not quite my area. :)
1
0
0
I have a script that I am running at /var/scripts/SomeAppName/source/importer/processor.py That script triggers an error that has a line that says: File "build/bdist.linux-i686/egg/something/cms/browser.py", line 43, in GetBrowser The problem I'm running into is that I'm unable to locate build/bdist.linux-i686/egg/something/cms/browser.py but I can locate /var/scripts/AnotherApp/appcommon/cms/browser.py and /var/scripts/AnotherApp/build/lib/appcommon/cms/browser.py I have modified both of those files to remove the part that is throwing the error but am still getting the same error, as if the file hadn't been modified at all. I'm guessing the problem is that I'm not modifying the correct file, or that I need to compile the script somehow, but I'm just not able to find out where/how to do this. I have tried restarting apache but with no luck. Any help or guidance as to where I should be looking, or if I need to run some sort of command to re-compile the browser.py file, would be appreciated.
I can't locate correct Python script to update
1.2
0
0
40
19,779,492
2013-11-04T23:47:00.000
0
0
1
0
debugging,visual-studio-2012,ironpython
19,797,185
2
false
0
0
IronPython doesn't generate debugging information for its generated code by default, so VS just does the best it can. If you're running ipy.exe, then you should run with the -X:Debug command-line option; if you're embedding, you'll need to pass "Debug" as true when creating the engine.
1
2
0
I'm using Visual Studio 2012 to debug my IronPython program. I've got IronPython and PyTools installed already. While debugging, when I hover over a variable, say tenants_path, the value that's shown is IronPython.Runtime.ClosureCell. Why is this happening?
Debugging IronPython in Visual Studio
0
0
0
3,583
19,780,911
2013-11-05T02:21:00.000
0
0
1
0
java,python,arrays,algorithm,big-o
19,781,014
2
false
0
0
templatetypedef's suggestion to use a hash table is a good one. I want to explain a little more about why. The key here is realizing that you are essentially searching for some value in a set. You have a set of numbers you are searching in (2 * each value in the input array), and a set of numbers you are searching for (each value in the input array). Your brute-force naive case is just looking up values directly in the search-in array. What you want to do is pre-load your "search-in" set into something with faster lookups than an array (like a hash table), then you can search from there. You can also further prune your results by not searching for A[i] where A[i] is odd, because you know that A[i] = 2 * A[j] can never be true if A[i] is odd. You can also compute the minimum and maximum values in the "search-in" array on the fly during initialization and prune all A[i] outside that range. The performance there is hard to express in big O form since it depends on the nature of the data, but you can calculate a best- and worst-case as well as an amortized case. However, a proper choice of hash table size (if your value range is small you can simply choose a capacity that is larger than your value range, where the hash function is the value itself) may actually make pruning more costly than not in some cases; you'd have to profile it to find out.
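A hedged sketch of the pre-loading and pruning ideas combined (the function name is mine; it assumes a non-empty input, and a lone 0 matches itself since 0 == 2 * 0, so add an index check if i and j must differ):

```python
def has_double(a):
    # Pre-load the "search-in" set: every value doubled (expected O(n)).
    doubled = set(2 * x for x in a)
    lo, hi = min(doubled), max(doubled)  # range bounds for pruning
    for x in a:
        if x % 2 != 0:        # an odd x can never equal 2 * a[j]
            continue
        if x < lo or x > hi:  # outside the doubled range: skip the lookup
            continue
        if x in doubled:      # expected O(1) hash lookup
            return True
    return False

print(has_double([25, 13, 16, 7, 8]))  # True (16 == 2 * 8)
print(has_double([25, 17, 44, 24]))    # False
```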
2
2
0
below I've listed a problem I'm having some trouble with. This problem is a simple nested loop away from an O(n^2) solution, but I need it to be O(n). Any ideas how this should be tackled? Would it be possible to form two equations? Given an integer array A, check if there are two indices i and j such that A[j] = 2∗A[i]. For example, on the array (25, 13, 16, 7, 8) the algorithm should output “true” (since 16 = 2 * 8), whereas on the array (25, 17, 44, 24) the algorithm should output “false”. Describe an algorithm for this problem with worst-case running time that is better than O(n^2), where n is the length of A. Thanks!
A[j] = 2∗A[i] in list with better than O(n^2) runtime
0
0
0
109
19,780,911
2013-11-05T02:21:00.000
6
0
1
0
java,python,arrays,algorithm,big-o
19,780,927
2
true
0
0
This is a great spot to use a hash table. Create a hash table and enter each number in the array into the hash table. Then, iterate across the array one more time and check whether 2*A[i] exists in the hash table for each i. If so, then you know a pair of indices exists with this property. If not, you know no such pair exists. On expectation, this takes time O(n), since n operations on a hash table take expected amortized O(1) time. Hope this helps!
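A minimal sketch of this in Python, using a built-in set as the hash table (the function name is mine; note it treats a single 0 as its own double, since 0 == 2 * 0):

```python
def has_double_pair(a):
    seen = set(a)            # one pass to load the hash table: expected O(n)
    for x in a:              # second pass: expected O(1) lookup per element
        if 2 * x in seen:
            return True
    return False

print(has_double_pair([25, 13, 16, 7, 8]))  # True, since 16 == 2 * 8
print(has_double_pair([25, 17, 44, 24]))    # False
```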
2
2
0
below I've listed a problem I'm having some trouble with. This problem is a simple nested loop away from an O(n^2) solution, but I need it to be O(n). Any ideas how this should be tackled? Would it be possible to form two equations? Given an integer array A, check if there are two indices i and j such that A[j] = 2∗A[i]. For example, on the array (25, 13, 16, 7, 8) the algorithm should output “true” (since 16 = 2 * 8), whereas on the array (25, 17, 44, 24) the algorithm should output “false”. Describe an algorithm for this problem with worst-case running time that is better than O(n^2), where n is the length of A. Thanks!
A[j] = 2∗A[i] in list with better than O(n^2) runtime
1.2
0
0
109
19,782,075
2013-11-05T04:46:00.000
4
0
1
1
python,execution,terminate,termination
33,560,303
17
false
0
0
You can also use the Activity Monitor to stop the py process.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.047024
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
5
0
1
1
python,execution,terminate,termination
51,491,746
17
false
0
0
To stop your program, just press CTRL + D or exit().
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.058756
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
2
0
1
1
python,execution,terminate,termination
52,684,880
17
false
0
0
Press Ctrl+Alt+Delete and Task Manager will pop up. Find the Python command running, right click on it and click Stop or Kill.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.023525
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
4
0
1
1
python,execution,terminate,termination
53,984,398
17
false
0
0
Control+D works for me on Windows 10. Also, putting exit() at the end also works.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.047024
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
3
0
1
1
python,execution,terminate,termination
55,071,056
17
false
0
0
If you are working in Spyder, use Ctrl+. to restart the kernel, which also stops the program.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.035279
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
3
0
1
1
python,execution,terminate,termination
61,886,193
17
false
0
0
Windows solution: Control + C. MacBook solution: Control (^) + C. Another way is to open a terminal, type top, write down the PID of the process that you would like to kill, and then type in the terminal: kill -9 <pid>
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.035279
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
8
0
1
1
python,execution,terminate,termination
51,932,807
17
false
0
0
Ctrl+Z should do it if you're caught in the Python shell. Keep in mind that instances of the script could continue running in the background, so under Linux you have to kill the corresponding process.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
2
0
1
1
python,execution,terminate,termination
67,751,184
17
false
0
0
Try using: Ctrl + Fn + S or Ctrl + Fn + B
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
0.023525
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
33
0
1
1
python,execution,terminate,termination
46,964,691
17
false
0
0
Ctrl-Break. It is more powerful than Ctrl-C.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
76
0
1
1
python,execution,terminate,termination
19,782,093
17
true
0
0
To stop your program, just press Control + C.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1.2
0
0
854,269
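The accepted Ctrl+C advice above works because the interpreter turns the key press into a KeyboardInterrupt exception, which a script can catch for a clean shutdown. A minimal sketch, with the interrupt raised manually so the snippet runs without a keyboard (the file names and `tokenize_files` helper are illustrative, not from the question):

```python
def tokenize_files(files):
    """Process files in order; stop cleanly if interrupted."""
    processed = []
    try:
        for name in files:
            if name == "bad.txt":
                # Stand-in for the user pressing Ctrl+C mid-run.
                raise KeyboardInterrupt
            processed.append(name)
    except KeyboardInterrupt:
        # Partial work is preserved; the loop simply stops here.
        print("Interrupted after %d files" % len(processed))
    return processed

result = tokenize_files(["a.txt", "b.txt", "bad.txt", "c.txt"])
print(result)  # ['a.txt', 'b.txt']
```

Note that a bare `except:` would swallow KeyboardInterrupt too, which is exactly how scripts end up refusing to die under Ctrl+C.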
19,782,075
2013-11-05T04:46:00.000
190
0
1
1
python,execution,terminate,termination
34,029,481
17
false
0
0
You can also do it with the exit() function in your code. Better still, use sys.exit(), which can terminate Python even if you are running things in parallel through the multiprocessing package. Note: in order to use sys.exit(), you must import it: import sys
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
55
0
1
1
python,execution,terminate,termination
44,786,454
17
false
0
0
To stop a Python script just press Ctrl + C. Inside a script, you can call exit(). In an interactive session you can just type exit. You can also use pkill -f name-of-the-python-script.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
7
0
1
1
python,execution,terminate,termination
59,539,599
17
false
0
0
exit() will kill the kernel if you're in a Jupyter Notebook, so it's not a good idea there. A raise statement will stop the program.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,075
2013-11-05T04:46:00.000
59
0
1
1
python,execution,terminate,termination
53,211,247
17
false
0
0
If your program is running at an interactive console, pressing CTRL + C will raise a KeyboardInterrupt exception on the main thread. If your Python program doesn't catch it, the KeyboardInterrupt will cause Python to exit. However, an except KeyboardInterrupt: block, or something like a bare except:, will prevent this mechanism from actually stopping the script from running. Sometimes if KeyboardInterrupt is not working you can send a SIGBREAK signal instead; on Windows, CTRL + Pause/Break may be handled by the interpreter without generating a catchable KeyboardInterrupt exception. However, these mechanisms mainly only work if the Python interpreter is running and responding to operating system events. If the Python interpreter is not responding for some reason, the most effective way is to terminate the entire operating system process that is running the interpreter. The mechanism for this varies by operating system. In a Unix-style shell environment, you can press CTRL + Z to suspend whatever process is currently controlling the console. Once you get the shell prompt back, you can use jobs to list suspended jobs, and you can kill the first suspended job with kill %1. (If you want to start it running again, you can continue the job in the foreground by using fg %1; read your shell's manual on job control for more information.) Alternatively, in a Unix or Unix-like environment, you can find the Python process's PID (process identifier) and kill it by PID. Use something like ps aux | grep python to find which Python processes are running, and then use kill <pid> to send a SIGTERM signal. The kill command on Unix sends SIGTERM by default, and a Python program can install a signal handler for SIGTERM using the signal module. In theory, any signal handler for SIGTERM should shut down the process gracefully. 
But sometimes if the process is stuck (for example, blocked in an uninterruptible IO sleep state), a SIGTERM signal has no effect because the process can't even wake up to handle it. To forcibly kill a process that isn't responding to signals, you need to send the SIGKILL signal, sometimes referred to as kill -9 because 9 is the numeric value of the SIGKILL constant. From the command line, you can use kill -KILL <pid> (or kill -9 <pid> for short) to send a SIGKILL and stop the process running immediately. On Windows, you don't have the Unix system of process signals, but you can forcibly terminate a running process by using the TerminateProcess function. Interactively, the easiest way to do this is to open Task Manager, find the python.exe process that corresponds to your program, and click the "End Process" button. You can also use the taskkill command for similar purposes.
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
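The long answer above mentions that a Python program can install a SIGTERM handler with the signal module so `kill <pid>` shuts it down gracefully. A minimal Unix-only sketch; the process signals itself so the example is self-contained, standing in for a `kill` issued from another shell:

```python
import os
import signal

shutting_down = False

def on_sigterm(signum, frame):
    # Called by the interpreter when SIGTERM arrives; set a flag so the
    # main loop can finish its current item and exit cleanly instead of
    # dying mid-work.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate `kill <pid>` from another terminal by signalling ourselves.
os.kill(os.getpid(), signal.SIGTERM)

print("graceful shutdown requested:", shutting_down)
```

SIGKILL, by contrast, cannot be caught: `signal.signal(signal.SIGKILL, ...)` raises an error, which is precisely why `kill -9` always works.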
19,782,075
2013-11-05T04:46:00.000
8
0
1
1
python,execution,terminate,termination
53,210,260
17
false
0
0
When I have a Python script running in a Linux terminal, Ctrl+\ works (not Ctrl+C or Ctrl+D).
15
161
0
I wrote a program in IDLE to tokenize text files and it starts to tokenize 349 text files! How can I stop it? How can I stop a running Python program?
How to stop/terminate a python script from running?
1
0
0
854,269
19,782,573
2013-11-05T05:40:00.000
0
0
0
1
python,windows
19,782,967
1
false
0
0
For Linux, see the documentation on upstart (for Ubuntu) or service (for RedHat). Then write a start-up script to start your Python script with appropriate rights. You can also configure it to be restarted if it crashes. Windows has a similar facility for start-up programs, where you can register your program to start.
1
1
0
Working on the Windows platform, I have a Python application which, once invoked, remembers its state and resumes in case of a system crash or reboot. The application actually runs some other executables, or in technological terms is of the framework type. In the typical scenario the executable needs to run with admin mode; this passes the first time but fails after resuming from a crash or reboot. What I believe is that I need to invoke the resumed application with admin mode. In what way could this be achieved? Thanks in advance!
Run python application with admin privileges
0
0
0
341
19,783,877
2013-11-05T07:36:00.000
1
0
0
1
python,linux,ubuntu,python-3.x,usb
19,872,640
2
true
0
0
I can detect it rather easily by monitoring the /dev/disk/by-label/ directory.
1
2
0
I'm building a backup program which involves detecting when media available for backup is inserted. I've looked into detecting the insertion of backup media, and I'm going to use the file system watch service inotify on the /media/username directory. The problem is that I've looked into this directory and there are folders that don't represent any currently available medium. How can I detect the list of currently available media (USBs, HDDs) and watch for any future ones? More technically, what are the characteristics of an actively available USB/HDD folder in the /media/username directory?
How can I detect using Python the insertion of only USBs and hard drives on Ubuntu/Linux?
1.2
0
0
4,356
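A hedged sketch of the approach from the accepted answer above: list the labelled block devices under /dev/disk/by-label and diff the set between polls. The `watch` helper and the polling loop are illustrative assumptions; production code would register an inotify watch on the directory instead of sleeping.

```python
import os
import time

BY_LABEL = "/dev/disk/by-label"  # labelled block devices appear here

def current_volumes():
    """Return the set of labelled volumes currently attached."""
    try:
        return set(os.listdir(BY_LABEL))
    except FileNotFoundError:
        # Directory is absent when no labelled device has ever been seen
        # (or on non-Linux systems).
        return set()

def watch(poll_seconds=2.0, rounds=1):
    """Poll for insertions/removals; returns the final known set."""
    known = current_volumes()
    for _ in range(rounds):
        time.sleep(poll_seconds)
        now = current_volumes()
        for label in now - known:
            print("inserted:", label)
        for label in known - now:
            print("removed:", label)
        known = now
    return known
```

Comparing successive snapshots sidesteps the stale-folder problem the question describes under /media/username: an entry in /dev/disk/by-label exists only while the device is actually attached.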
19,784,882
2013-11-05T08:49:00.000
2
0
1
0
python,pygame,canopy
19,785,180
3
false
0
0
You should be able to install any package into your Python distribution. Pygame might not be in the Canopy package manager, but even then you can install it directly.
1
1
0
I am planning to use canopy editor for my python coding, especially for some basic game development... Could anyone please let me know if we can install pygame package for canopy editor ?
Can we install pygame package with enthought canopy?
0.132549
0
0
4,019
19,784,948
2013-11-05T08:53:00.000
1
0
0
1
python,performance,redis,bottle
19,787,579
3
false
0
0
If you are a beginner you should not start with evented (Twisted/Tornado/gevent/eventlet...) libs. It will lead you to places you don't know! If you need to scale, add machines and balance the load with a load balancer.
1
1
0
I'm prototyping a Python/Redis based API and am serving JSON using Bottle but unfortunately out of the box Bottle performs badly under load and under high concurrency. Some initial testing on real traffic results in the python script crashing without terminating, which means the API is unresponsive and not restarting*. What is currently the best solution to scale a Python/Redis API in terms of performance as well as documentation. I find the bottle+greenlet solution poorly documented and not easy to implement for a Python beginner like me. I heard tornado is good but that its integration with Redis is slower than Bottle's. *Seems that when bottle is unable to send the body of the HTTP request to the client, the server will bug out with "[Errno 32] Broken pipe" errors, which seems like a bad reason for a server to stop working
Best high concurrency Python / Redis server
0.066568
0
0
1,710
19,787,324
2013-11-05T10:55:00.000
0
0
0
0
python,macos,user-interface,pygame
19,800,717
1
false
0
1
Well, this ended up working. When I want to hide the window, I do pygame.display.quit() and make my code properly handle not having a display. When I want to show it, I do pygame.display.set_mode(...) with the former resolution. The net effect is that of hiding & showing the window. Unfortunately the window gets created in a different spot than where it started, and although apparently you can tell SDL to create the window in a particular spot, I haven't been able to find a way to get the window's location...
1
1
0
Pressing command-H in OSX immediately hides the active window. How do I achieve the same effect, programmatically, from Python? Specifically, I'd like to find a particular window that my application creates and then be able to show & hide it programmatically. I already know how to do this with pywin32 but I'm afraid my expertise there doesn't quite cover OSX as well. If it helps, the window in question is one created by pygame. I know that pygame has pygame.display.iconify() but that doesn't satisfy my requirements - the window doesn't disappear immediately, but rather the disappearance is animated, and there's no corresponding "uniconify" function that I can find.
hide pygame window on OSX
0
0
0
819
19,789,816
2013-11-05T13:12:00.000
0
0
0
0
python,django,responsive-design
19,904,103
3
true
1
0
My solution now is as follows: I will use the Bottle microframework for generating server-side dynamic HTML pages on request. This will cause me to reload the page every time I want to see new machine information, but for now it is enough for me. Later I can add AJAX for live monitoring (I know this is JavaScript; I think I have to learn it anyway). Thanks for your solutions though.
1
1
0
I have a bigger project to handle, so this is what I want to do: I have a server with a MySQL database and Apache webserver running on it. I save some machine information data in the database and want to create a web app to see, e.g., if the machine is running. The web app should be designed responsively, i.e. changing design in accordance with the screen resolution of the currently used device. This is important because the app will be used from smartphones and tablets mainly, but should also work on a normal PC. I wrote a Python program for my machine to get the data, and another Python program on my server receiving information and saving it in the database. So my job now is to create the "responsive website" for my smartphone etc. Then I want to broadcast this with my webserver. Another point is that the web app should be built dynamically. If I add another machine to my database, it should appear on my web app to be clickable and then show the related information. First I thought about doing this in HTML5 and CSS3, with the use of jQuery Mobile. But I never used JavaScript. I'm just experienced in the "old" HTML and CSS. Is Django a better choice, since I'm quite experienced in Python? Or do I need both perhaps? I haven't worked with any web framework yet, please help me choose. Or do I need one at all?
Create a responsive web app with Django or jQueryMobile?
1.2
0
0
7,995
19,790,818
2013-11-05T14:02:00.000
1
0
0
0
python,django,settings,preferences
19,790,956
2
false
1
0
IMHO if the settings depend on user preferences, I think a database is the way to store them, not files. Settings files are inherent to a project's settings, and database storage (normally) is what should persist dynamic user preferences.
2
0
0
I want to add to my Django project the ability for a user to set his own settings, and Django should remember that when the user logs in. I also want my project to have some global settings that will be the same for every user. E.g. before a user logs in the system, there will be a language set for every user (global setting), but after the user is authenticated Django will switch to his preferred language. What is the best approach for that? My thought was a settings file for the global settings (language, locale etc.) and a settings application for user-specific preferences. The user-specific preferences might have to do with very simple but specific stuff like colors of appearance of things etc. Is creating a settings app the right approach? No code, I just need a pointer in the right direction first before implementing.
Preferences and settings in a Django Project
0.099668
0
0
196
19,790,818
2013-11-05T14:02:00.000
1
0
0
0
python,django,settings,preferences
19,790,987
2
false
1
0
Edit: I thought at first you wanted to generate the Django settings based on the admins; I didn't read your question completely. Anyway, for user-specific settings and preferences, definitely use a database. No one uses files for such tasks; you can query a database and create all your fields easily using Django's ORM.
2
0
0
I want to add to my Django project the ability for a user to set his own settings, and Django should remember that when the user logs in. I also want my project to have some global settings that will be the same for every user. E.g. before a user logs in the system, there will be a language set for every user (global setting), but after the user is authenticated Django will switch to his preferred language. What is the best approach for that? My thought was a settings file for the global settings (language, locale etc.) and a settings application for user-specific preferences. The user-specific preferences might have to do with very simple but specific stuff like colors of appearance of things etc. Is creating a settings app the right approach? No code, I just need a pointer in the right direction first before implementing.
Preferences and settings in a Django Project
0.099668
0
0
196
19,791,353
2013-11-05T14:30:00.000
2
0
1
0
python,ruby,linker,interpreted-language,compiled-language
19,791,585
2
false
0
0
Well, in Python, modules are loaded and executed or parsed when the interpreter finds some method or indication to do so. There's no linking, but there is loading, of course (when the file is requested in the code). Python does something clever to improve its performance: it compiles to bytecode (.pyc files) the first time it executes a file. This substantially improves the execution of the code the next time the module is imported or executed. So the behavior is more or less: A file is executed. Inside the file, the interpreter finds a reference to another file. It parses it and potentially executes it. This means that every class, variable or method definition will become available at runtime. And this is how the process is done (very generally). Of course, there are optimizations and caches to improve the performance. Hope this helps!
2
10
0
In compiled languages, the source code is turned into object code by the compiler, and the different object files (if there are multiple files) are linked by the linker and loaded into memory by the loader for execution. If I have an application written in an interpreted language (e.g., Ruby or Python) and the source code is split across files, when exactly are the files brought together? To put it in other words, when is the linking done? Do interpreted languages have linkers and loaders in the first place, or does the interpreter do everything? I am really confused about this and not able to get my head around it!! Can anyone shine some light on this?!
Linking and Loading in interpreted languages
0.197375
0
0
2,957
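The bytecode caching described in the answer above can be observed directly with the standard library: `py_compile` writes the same .pyc file that the interpreter would otherwise create on first import. The module name `hello.py` and its contents are made up for the demonstration.

```python
import os
import py_compile
import tempfile

# Write a tiny module, then byte-compile it the way the interpreter
# does on first import.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "hello.py")
    with open(src, "w") as f:
        f.write("GREETING = 'hi'\n")
    # py_compile.compile returns the path of the generated bytecode
    # file (under __pycache__/ in Python 3).
    cache = py_compile.compile(src)
    cache_existed = os.path.exists(cache)

print(cache_existed)           # True
print(cache.endswith(".pyc"))  # True
```

On the next import of the module, the interpreter checks this cache and skips re-parsing the source, which is the performance trick the answer refers to; there is still no link step, just load-and-execute.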