Columns (name: dtype, observed range):

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 or 1
Python Basics and Environment: int64, 0 or 1
System Administration and DevOps: int64, 0 or 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 or 1
GUI and Desktop Applications: int64, 0 or 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 or 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 or 1
Networking and APIs: int64, 0 or 1
ViewCount: int64, 8 to 6.81M
18,716,623
2013-09-10T10:34:00.000
0
0
0
0
python,openerp
18,716,823
3
false
1
0
In Python, libraries are available to export data to PDF and Excel. For Excel you can use: 1) xlwt 2) ElementTree. For PDF generation: 1) PyPDF 2) ReportLab.
1
1
0
I'm a beginner with OpenERP 7. I just want to know how to generate a report in OpenERP 7 in XLS format. The report types supported in OpenERP are: pdf, odt, raw, sxw, etc. Is there any direct feature available in OpenERP 7 for printing a report in Excel (XLS) format?
How to print report in EXCEL format (XLS)
0
1
0
2,902
18,717,844
2013-09-10T11:32:00.000
5
0
0
0
python,flask
18,719,851
1
true
1
0
Use 0.0.0.0 as the source IP. Also remember that your VM will be turned off 15 minutes after you log out.
1
1
0
I am working on a Flask app and want to deploy it on Koding so that my other team members can also view/edit it. I cloned the git repository inside a VM (on Koding.com), installed pip, installed the dependencies, but when I start the Flask server, it reports that the server has started and is running on 127.0.0.1:5000. But when I go to :5000, it says the VM is not active. NOTE: normally works and displays the files under the VM's "Web" folder.
How can I deploy a Flask application on Koding.com
1.2
0
0
626
18,721,204
2013-09-10T14:09:00.000
1
0
0
0
python,computer-vision,cluster-analysis,scikit-learn,k-means
18,735,714
2
false
0
0
K-means is not very robust to noise, and your "bad pictures" can probably be considered as such. Furthermore, k-means doesn't work well on sparse data, since the means will not be sparse. You may want to try other, more modern clustering algorithms that can handle this situation much better.
2
0
1
I'm testing some things in image retrieval and was thinking about how to sort bad pictures out of a dataset. For example, there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses. At the moment my approach looks like: compute descriptors (SIFT) of all pictures; cluster all descriptors with k-means; create histograms of the pictures by computing the Euclidean distance between the cluster centers and the descriptors of a picture; cluster the histograms again. At this point I have a first sort (which isn't really good). Now my idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again, so that the pictures which sit alone in a center are sorted out. Maybe it's enough to fit the result again to the same k-means without clustering again?! The result isn't satisfying, so maybe someone has a good idea. For clustering etc. I'm using the k-means implementation in scikit-learn.
Sort out bad pictures of a dataset (k-means, clustering, sklearn)
0.099668
0
0
550
18,721,204
2013-09-10T14:09:00.000
1
0
0
0
python,computer-vision,cluster-analysis,scikit-learn,k-means
18,735,840
2
false
0
0
I don't have the solution to your problem, but here is a sanity check to perform prior to the final clustering, to verify that the kind of features you extracted is suitable for your problem: extract the histogram features for all the pictures in your dataset; compute the pairwise distances of all the pictures using the histogram features (you can use sklearn.metrics.pairwise_distances); np.argsort the raveled distance matrix to find the indices of the 20 closest pairs of distinct pictures according to your features (you have to filter out the zero-valued diagonal elements of the distance matrix), and do the same to extract the 20 farthest pairs of pictures based on your histogram features. Visualize (for instance with plt.imshow) the pictures of the closest pairs and check that they are all pairs you would expect to be very similar. Visualize the pictures of the farthest pairs and check that they are all very dissimilar. If either of those two checks fails, it means that a histogram of bag-of-SIFT words is not suitable for your task. Maybe you need to extract other kinds of features (e.g. HoG features) or reorganize the way you extract the clusters of SIFT descriptors, perhaps using a pyramidal pooling structure to extract info on the global layout of the pictures at various scales.
2
0
1
I'm testing some things in image retrieval and was thinking about how to sort bad pictures out of a dataset. For example, there are mostly pictures of houses, and in between there is a picture of people and some of cars. In the end I want to get only the houses. At the moment my approach looks like: compute descriptors (SIFT) of all pictures; cluster all descriptors with k-means; create histograms of the pictures by computing the Euclidean distance between the cluster centers and the descriptors of a picture; cluster the histograms again. At this point I have a first sort (which isn't really good). Now my idea is to take all pictures which are clustered to a center with len(center) > 1 and cluster them again and again, so that the pictures which sit alone in a center are sorted out. Maybe it's enough to fit the result again to the same k-means without clustering again?! The result isn't satisfying, so maybe someone has a good idea. For clustering etc. I'm using the k-means implementation in scikit-learn.
Sort out bad pictures of a dataset (k-means, clustering, sklearn)
0.099668
0
0
550
18,727,192
2013-09-10T19:21:00.000
2
0
1
0
python,list,data-structures
18,727,288
2
true
0
0
"I have a requirement for a certain special case(s) of list where the list items would be aware (have an attribute/property) that tells: whether the item is the last item in the list; and their index or "enumerator" in the list." No, you don't. You have a requirement that some other part of your code deal appropriately with the last element, or work with the index. There is no clean way to do what you ask, because it should be the responsibility of the list object to keep track of these things. At most the list items should have a property which holds their parent list, but even then I wouldn't recommend that. Rewrite your code to keep a reference to the list.
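The recommended rewrite (keep a reference to the list and compute position where it is needed) can be sketched like this; iter_with_position is a hypothetical helper name, not a stdlib function:

```python
def iter_with_position(items):
    """Yield (item, index, is_last) for each element of items.

    The items themselves stay plain objects; position information
    is derived from the list at iteration time, so it can never
    go stale after an insert() or pop().
    """
    last = len(items) - 1
    for i, item in enumerate(items):
        yield item, i, i == last

for value, index, is_last in iter_with_position(["a", "b", "c"]):
    print(index, value, "last" if is_last else "")
```

This keeps the list as the single source of truth for ordering, which is the point the answer is making.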
1
1
0
I have a requirement for a certain special case(s) of list of where the list items would be aware (have an attribute / property) that tells: whether the item is the last item in the list their index or "enumerator" in the list The approaches I can think of off the top of my head are: Overriding __setitem__, __add__, insert, append, pop et. al Not store them in python lists, but have a "next" attribute pointing to the next item. Adding helper functions to sync the index attributes before evaluation (or after update) Don't store them as attributes but handle it "outside" the classes 1) seems like most pythonic approach, but would require overriding quite a few methods. 2) has the problem of having to re-implement the said methods (and doesn't really help with the indexes if I want to insert() or pop()) 3) & 4) have pitfall of "you must remember to call X before doing Y" All items in the said lists will (or should) always be instances of the same class. Is there a known design pattern for this or a pythonic approach that I'm not aware of?
How to implement a python list of position-aware items
1.2
0
0
135
18,728,883
2013-09-10T21:15:00.000
3
0
1
0
python,latex,ipython-notebook
19,784,607
2
false
0
0
You can set the author using ipython nbconvert --to latex --SphinxTransformer.author='John Doe' file.ipynb
1
3
0
Using nbconvert with the default options ("article"), I am not getting a footer with page numbers. I know nothing about LaTeX, but a brief look at the .tpl files seems to indicate that I should get footers (maybe with page numbers?). The "book" option gives nice footers, but is not a great format for other reasons... I looked at the generated .tex file and don't see anything for footers. I did find that I can replace "unknown author" with my name. :-) Any guidance on how to modify either the generated .tex file or something else?
IPython Notebook NB convert formatting Footers?
0.291313
0
0
1,224
18,729,510
2013-09-10T22:03:00.000
41
1
1
0
python,git
18,729,541
1
true
0
0
You are safe to remove the .pyc entry from your .gitignore, since .py[cod] will cover it. The square brackets are for matching any one of the characters, so it matches .pyc, .pyo and .pyd. Since they are all generated from the .py source, they should not be stored in source control.
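The bracket notation here behaves like Python's own fnmatch globbing, which makes it easy to check; this is a sketch, assuming git's ignore rules match fnmatch semantics for simple character classes (they do for this pattern):

```python
import fnmatch

# "[cod]" matches exactly one of the characters c, o or d,
# so a single "*.py[cod]" rule covers .pyc, .pyo and .pyd.
for name in ["module.pyc", "module.pyo", "module.pyd", "module.py"]:
    print(name, fnmatch.fnmatch(name, "*.py[cod]"))
```

Note that plain "module.py" does not match, so source files are never accidentally ignored.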
1
27
0
What is the difference (if any) between the ".pyc" and ".py[cod]" notation for ignoring files? I noticed I have both in my .gitignore file. Thanks.
What is the difference between "py[cod]" and "pyc" in .gitignore notation?
1.2
0
0
8,304
18,732,250
2013-09-11T03:36:00.000
8
0
1
1
python,numpy,amazon-ec2,pip,easy-install
18,743,924
4
true
0
0
I ended up just installing numpy through yum: sudo yum install numpy. I guess this is the best I can do for now. When working with virtualenv and I need numpy, I will tell it to use site packages. Thanks for the suggestion, @Robert.
1
11
0
I am having trouble installing numpy on an Amazon EC2 server. I have tried using easy_install, pip, pip inside a virtualenv, pip inside another virtualenv using Python 2.7... Every time it fails with the error gcc: internal compiler error: Killed (program cc1), and then further down the line I get a bunch of Python errors. With easy_install I get ImportError: No module named numpy.distutils, and with pip I get UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: ordinal not in range(128). The EC2 instance is running kernel 3.4.43-43.43.amzn1.x86_64. Has anybody solved this problem? Numpy has always been hard for me to install, but I can usually figure it out... At this point I don't care whether it is in its own virtualenv, I just want to get it installed.
Installing numpy on Amazon EC2
1.2
0
0
14,449
18,742,845
2013-09-11T13:39:00.000
0
0
1
1
python
18,743,249
1
false
0
0
subprocess.check_call waits for completion and checks the return value. Use subprocess.Popen instead: it returns a process object for the running process, which you can use after your time limit with proc.terminate() to end the process (kill it).
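A minimal sketch of the Popen-then-terminate loop body described above. The child here is a throwaway Python process standing in for the .exe, and the one-second sleep stands in for the real one-minute window:

```python
import subprocess
import sys
import time

# Popen returns immediately with a handle to the running process
# (unlike check_call, which blocks until the process exits).
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"]
)

time.sleep(1)      # stand-in for the "run it for a minute" window
proc.terminate()   # ask the process to stop
proc.wait()        # reap it and collect the exit status

print("exited:", proc.returncode is not None)
```

On newer Pythons, subprocess.run(..., timeout=...) plus handling TimeoutExpired is another option, but the Popen form above matches the answer's approach.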
1
0
0
I am using subprocess to execute a .exe file from a Python script. I need to do this in a loop, i.e. start the .exe, run it for a minute, kill it, and do it all over again. I am using subprocess.check_call to execute it with arguments, but I don't know how to stop it.
In Python, how do I execute a .exe file, and stop it after n seconds?
0
0
0
1,986
18,743,895
2013-09-11T14:26:00.000
0
0
0
0
python,matplotlib,interactive-mode
18,815,103
1
true
0
0
OK, I have found the necessary functions. I used the dir() function to find the methods. axvspan() returns a matplotlib.patches.Polygon result. This type has a set_visible method; by calling x.set_visible(False) I removed the lines and shapes.
1
0
1
I am doing some animated plots with the ion() function. I want to draw and delete some lines. I found the axvspan() function, and I can plot the lines and shapes with it as I want. But since I am doing an animation, I also want to delete those lines and shapes, and I couldn't find a way to delete them.
Deleting Lines of a Plot Which has Plotted with axvspan()
1.2
0
0
159
18,745,757
2013-09-11T15:46:00.000
2
0
1
0
python,python-3.x
18,745,877
3
false
0
0
One method is printing the backspace escape character (\b), which will move the text cursor one character back; however, it is your responsibility to print something afterward to replace the text. For example, if the current text in the terminal is Time left: 4 and you print "\b", the user will see that nothing has changed. However, if you print "\b5", it will replace the 4 with a 5.
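The effect can be sketched like this; the terminal is what renders the overwrite, so here the raw output is captured to show the exact characters that get sent:

```python
import io
from contextlib import redirect_stdout

# Capture stdout so we can inspect the raw character stream.
buf = io.StringIO()
with redirect_stdout(buf):
    print("Time left: 4", end="")
    print("\b3", end="")  # back up over the "4", then print "3"

# The stream still contains both the "4" and the "\b3"; a real
# terminal would display "Time left: 3" because the backspace
# moves the cursor and the "3" overwrites the "4".
print(repr(buf.getvalue()))
```

In a real terminal you would write directly to stdout (and flush) instead of capturing it; this sketch only makes the byte sequence visible.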
1
0
0
OK, I ask this question on behalf of someone else, but I would also like to know if and how it is possible. Let's say you have a given line of code producing the string Time left: 4. Is there any way, after this string has been printed, to edit the value of the "4" and change it to a "3", reprinting the new string on the same line so that Time left: 4 is replaced with Time left: 3 without causing a new line to be printed? I hope you understand the question; I did my best to explain it.
How to alter a string after it has been printed
0.132549
0
0
131
18,746,272
2013-09-11T16:13:00.000
2
0
0
0
python,redirect,cookies,cherrypy
18,746,486
1
true
1
0
To answer my own question: It would appear that if I add cherrypy.response.cookie[<tag>]['path'] = '/' after setting the cookie value, it works as desired.
1
2
0
I am trying to figure out how to set a cookie just before a redirect from CherryPy. My situation is this: when a user logs in, I would like to set a cookie with the user's username for use in client-side code (specifically, inserting the user's name into each page to show who is currently logged in). The way my login system works is that after a successful login, the user is redirected to whatever page they were trying to access before logging in, or the default page. Technically they are redirected to a different domain, since the login page is secure while the rest of the site is not, but it is all on the same site/hostname. Redirection is accomplished by raising cherrypy.HTTPRedirect(). I would like to set the cookie either just before or just after the redirect, but when I tried setting cherrypy.response.cookie[<tag>]=<value> before the redirect, it does nothing. At the moment I have resorted to setting the cookie in every index page of my site, in the hope that that will cover most of the redirect options, but I don't like this solution. Is there a better option, and if so, what?
Python/Cherrypy: set cookie on redirect
1.2
0
1
816
18,747,034
2013-09-11T16:54:00.000
2
0
1
0
ipython-notebook
31,629,560
1
false
0
0
Found a nice workaround for this, using the 'slides' option of nbconvert: In your IPython notebook, under "Cell Toolbar" select "Slideshow". Then, in the top right of the cells that you don't want to show, select Slide Type "skip". Now run ipython nbconvert your_notebook.ipynb --to slides. Instead of serving the slides, just open the resulting HTML in a browser. It doesn't contain the cells you told it to skip! Hope this helps.
1
2
0
Is there a method to exclude some cells from the nbconvert process? For instance, an embedded video is cool when converting to HTML, but when converting the HTML to PDF it creates a problem, so I want to exclude it in some instances.
Is it possible to exclude some cells from the ipython notebook when using NBConvert
0.379949
0
0
1,126
18,747,555
2013-09-11T17:24:00.000
0
0
1
0
python,python-3.x,packaging,setuptools,distribute
18,748,012
5
false
0
0
You could quite easily do this with something simple like a .zip file containing all the files; as long as all the files are extracted to the same directory, they should all work! The downfall is if there are lots of dependencies for the modules, i.e. they have extra folders you would need to find. I also think a fair number of people/companies write their own packaging systems, so that all the modules end up in one .py file that runs in the console and exports everything to its correct place. This would require a fair amount of work, though, so you may want to try to find one prebuilt. I've gone down this route and it didn't prove too taxing until I had to unzip .zips with files in them... As another solution, you could try py2exe (I think it's called that) to export everything to a single .exe file (Windows only, though). I personally haven't used setuptools, distribute or distutils, so I can't comment on those, unfortunately. One other thing to bear in mind is the licences for each module; some may not be allowed to be redistributed, so check first!
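A minimal sketch of the zip idea using only the standard library: Python's zipimport machinery can import pure-Python modules straight from an archive placed on sys.path. greet.py here is a hypothetical stand-in for the real module and its pure-Python dependencies; compiled extension modules would not work this way:

```python
import os
import sys
import tempfile
import zipfile

# Build a bundle.zip containing one module.
tmpdir = tempfile.mkdtemp()
bundle = os.path.join(tmpdir, "bundle.zip")

with zipfile.ZipFile(bundle, "w") as zf:
    # In a real bundle you would add your module plus each
    # pure-Python dependency (e.g. lockfile, python-daemon).
    zf.writestr("greet.py", "def hello():\n    return 'hello from the zip'\n")

# Putting the archive on sys.path makes its modules importable.
sys.path.insert(0, bundle)
import greet

print(greet.hello())
```

The stdlib zipapp module (Python 3.5+) automates a variant of this, producing a single runnable .pyz archive.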
1
5
0
I have a single Python 3 .py module with a few dependencies (lockfile, python-daemon). Is there a simple way to package it with its dependencies so that users do not need to download and install the other modules? An all-included install is what I am trying to achieve. I tried looking at setuptools, distribute, and distutils and ended up even more confused than when I started.
Packaging single Python module with dependencies
0
0
0
554
18,747,730
2013-09-11T17:34:00.000
0
0
0
0
python,database,django
70,710,224
4
false
1
0
I think the best approach is to store the main file in your project's media path and save the file's address (the path to the file) in your model. This way you don't need to convert anything.
1
13
0
I am using Django to create a web service, and I want that web service to return images. I am deciding on the basic architecture of my web service. The conclusion I came to after stumbling around Google is: I should store images in the DB after encoding them to Base64 format. Transferring the images would be easy if the Base64-encoded string is transmitted directly. But I have one issue: how can I store a Base64-encoded string in the DB using Django models? Also, if you see any flaw in my basic architecture, please guide me. I am new to web services and Django. Thanks!!
Storing Images In DB Using Django Models
0
0
0
27,585
18,748,060
2013-09-11T17:54:00.000
1
0
1
1
python,macos,pip
18,748,089
2
true
0
0
Honestly, one way around this is to make sure that virtualenv works with the right version, and just use pip inside the virtualenv.
1
1
0
I went through and installed pip and then added a bunch of libraries that I like to use, and only after installing everything did I realize that everything went into the 2.7.2 site-packages directory, so the Python 2.7.5 version doesn't see anything. Now, if I type python --version in the terminal, the correct version is started. However, pip is still "tied" to the default version of Python. How do I go about telling OS X to look at the new version of Python for everything?
noob, but I installed python 2.7.5 on my mac, how to I "target" that one rather than the built in 2.7.2?
1.2
0
0
257
18,751,328
2013-09-11T21:09:00.000
0
1
0
1
linux,windows,python-2.7,ide,remote-debugging
18,763,976
1
true
0
0
The BeagleBone can do whatever a Linux PC can do, but it is slower than a PC, so a better approach is to develop on the PC and run on the BeagleBone via SSH.
1
0
0
I want to program a BeagleBone Black (Linux) from a PC (Linux/Windows) using any Python IDE (a scientific IDE like Anaconda or Python(x,y) is preferred). How can I do that? How can I configure the systems? Sincerely.
Python remote programming / debugging
1.2
0
0
237
18,754,202
2013-09-12T02:08:00.000
2
0
0
1
python,google-app-engine,python-2.7,server-error
18,774,464
3
false
1
0
I'm not sure if this is just your formatting when you pasted your code here, but the place where you define app in main.py should not be part of the contacts class. If it is, the reference to main.app in your app.yaml won't work and your page won't load.
3
0
0
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5. So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard). I had no idea how to do this, but I blindly followed some instructions on how to update, and I'm not sure what I did wrong. I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell me what I'm doing wrong?
Upgrading to Python 2.7 Google App Engine 500 server error
0.132549
0
0
198
18,754,202
2013-09-12T02:08:00.000
0
0
0
1
python,google-app-engine,python-2.7,server-error
18,778,368
3
true
1
0
Thank you everyone for your respective answers and comments, but I recently stumbled upon the GAE boilerplate and decided to use that, and everything's fine. I kept having very odd problems with GAE beforehand, but the boilerplate is simple and seems to be working fine so far. Anyway, thanks again. (Note: I would delete the question, but two people have already answered and received rep from +1s, and they are in fact helpful answers, so I'll leave it be.)
3
0
0
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5. So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard). I had no idea how to do this, but I blindly followed some instructions on how to update, and I'm not sure what I did wrong. I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell me what I'm doing wrong?
Upgrading to Python 2.7 Google App Engine 500 server error
1.2
0
0
198
18,754,202
2013-09-12T02:08:00.000
2
0
0
1
python,google-app-engine,python-2.7,server-error
18,754,606
3
false
1
0
I'm submitting this as an answer because I'm relatively new to SO and don't have enough rep to comment, so sorry about that... But line 7 of your new main.py uses webapp instead of webapp2, so that may be causing some trouble, though it's likely not the reason it's not working. Could you also provide the contact.html template?
3
0
0
I just started using Google App Engine and I am very new to Python. I may have made a stupid mistake or a fatal error, I don't know, but I realized that the basic "template" I downloaded from a website was old and used Python 2.5. So, I decided to update to Python 2.7 (after receiving a warning in the site's dashboard). I had no idea how to do this, but I blindly followed some instructions on how to update, and I'm not sure what I did wrong. I know that I downloaded Python 2.7 (as the download path is C:/Python27/), so there shouldn't be a problem there. Can anybody tell me what I'm doing wrong?
Upgrading to Python 2.7 Google App Engine 500 server error
0.132549
0
0
198
18,755,024
2013-09-12T03:49:00.000
0
0
0
0
python,django,list
18,755,121
1
false
1
0
If you have at least one entry in search_fields, and therefore are showing a search box on your admin changelist page, then whenever any filters or search terms are in effect you should see information to the right of the box showing the number of rows that match your current filter and search criteria. It'll be worded something like "5 results (50 total)". The "50 total" text is a link to an unfiltered version of the list, showing the whole set. Possibly paginated, but all filters will be cleared. This doesn't appear to be automatically exposed without the search box. The filter settings are simple arguments in the URL querystring, so it should be easy to add a link similar to the one in the search box that just drops the querystring, but you'd have to learn a little about the admin templates to do so. Setting a search_fields entry is probably simpler, if you have anything reasonable to search over.
1
3
0
I have a Django admin control panel, and each list of objects has lots and lots of list filters. I want to be able to clear all the filters with the click of a button, but can't find this ability, if it already exists in Django. Routes I'm considering (but cannot figure out): make the last item in the breadcrumb a link to the full list; add a direct hyperlink as a filter-list option; find some way to access all the query options and remove them, or simply return a blank one (queryset.all() isn't working; I'm probably barking up the wrong tree). That kind of thing should already exist! Does anybody know how to accomplish this? I've been trying to figure it out all day.
Django Clear All Admin List Filters
0
0
0
1,347
18,755,196
2013-09-12T04:06:00.000
1
0
1
0
python,configparser
18,757,094
1
true
0
0
This is really a matter of opinion, but my advice is to read the values out of the config relatively quickly. The code that deals with data input and the layer that deals with the actual processing should be modular enough that you can change your data source by just feeding in data from a different source (coupling and cohesion). You'll have to use your own judgement to decide where to draw the line, but as a guide: if you're setting the config as a global variable and reading from there, or constantly throwing it around as an argument, you're doing it wrong.
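One way to sketch the advice above: parse the file once at startup and copy the values into a plain settings object, so the rest of the program never touches the parser. The Settings class and the [server] section are hypothetical examples; read_string is the Python 3 configparser API (on Python 2 you would use ConfigParser.readfp):

```python
import configparser

class Settings(object):
    """Plain object holding the values the program actually needs."""
    def __init__(self, parser):
        self.host = parser.get("server", "host")
        self.port = parser.getint("server", "port")

# Parse once at startup...
parser = configparser.ConfigParser()
parser.read_string("[server]\nhost = localhost\nport = 8080\n")

# ...then pass the settings object around, not the parser.
settings = Settings(parser)
print(settings.host, settings.port)
```

Downstream code now depends on a small, typed object rather than on the config format, which is the decoupling the answer recommends.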
1
0
0
I hope there is a 'right' answer to this: when using ConfigParser to manage default values in a .cfg or .ini file, should I copy everything I need into program variables and copy them back out on exit, or should I use config.get(section, option) directly in my program as needed?
Should I use a ConfigParser directly in my code?
1.2
0
0
91
18,755,831
2013-09-12T05:13:00.000
2
0
1
1
python,newline,configparser
18,756,706
1
true
0
0
You're fine; ConfigParser will still work. The reason is that it uses fp.readline, which reads up to and including the next LF (\n). The value is then stripped of whitespace, which removes the CR (\r). I'd say just use LF (\n) as your line separator; it will work on both systems, but using both won't cause any harm either. Edit: in fact, if you generate a file using ConfigParser.RawConfigParser, it will use \n as the line separator.
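A quick check of the claim, using the Python 3 configparser module (the Python 2 ConfigParser class behaves the same way here): the same section parses identically whether the text uses CRLF or LF line endings, because trailing whitespace, including the \r, is stripped from each line:

```python
import configparser

crlf_text = "[main]\r\nkey = value\r\n"   # Windows-style endings
lf_text = "[main]\nkey = value\n"          # Unix-style endings

for text in (crlf_text, lf_text):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    # Both variants yield exactly the same value.
    print(parser.get("main", "key"))
```

So a config file edited on Windows can be read on Linux (and vice versa) without any conversion step.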
1
1
0
Does anyone know how Python's ConfigParser deals with line endings on different OSes? It follows the Windows INI format, but what about Linux? (As you know, Windows text line endings are typically CRLF, and Unix's are LF.) I want users of my app to be able to take their config (.ini) files easily from Windows to Linux, and I'd like to know if that's going to be problematic. If it does use different line endings for Unix and Windows, what do you recommend?
Python's ConfigParser: Cross platform line endings?
1.2
0
0
1,432
18,756,436
2013-09-12T06:00:00.000
3
0
1
0
python
18,756,529
1
true
0
0
In IDLE, Ctrl+N will create a new Python file that you can edit your code in. When you press F5, it will prompt you to save the file (which you can save anywhere) and will then execute it. Each time you edit, press Ctrl+S to save and F5 to run the newly updated file.
1
1
0
I am brand new to Python and I'm using IDLE. However, as I learn, it is very tedious to retype an entire class over and over again at the prompt while I work out small syntax errors. I would love to simply write a .py script in Notepad++ and load it from the IDLE prompt. How is this done? I'm using Windows, not UNIX/Linux or Mac.
Python - Loading a source script
1.2
0
0
66
18,758,535
2013-09-12T07:51:00.000
3
1
0
0
c++,python,serialization,thrift
18,772,101
1
true
0
1
There are two options that make sense (and a bunch of others): 1) Use the next-largest signed integer with Thrift. Of course, with UINT64 this is not possible, as there is no i128, but it works up to UINT32. 2) Cast the unsigned bits into signed. Not very clean, and it requires documentation, but it works. The "bunch of others" include: converting the value to a string and back (and watching your performance go down), or using the binary type. OK, that last one is a bit far out, but it is still possible and can be done by just reinterpreting the bits as in 2. above. But again, I'd recommend 1. or 2.
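Option 2 (reinterpreting the bits) can be done on the Python side with the stdlib struct module; this sketch assumes 32-bit values and little-endian packing:

```python
import struct

def u32_to_i32(u):
    """Reinterpret an unsigned 32-bit value's bits as signed."""
    return struct.unpack("<i", struct.pack("<I", u))[0]

def i32_to_u32(i):
    """Reinterpret a signed 32-bit value's bits as unsigned."""
    return struct.unpack("<I", struct.pack("<i", i))[0]

print(u32_to_i32(0xFFFFFFFF))  # -1
print(i32_to_u32(-1))          # 4294967295
```

This is the Python counterpart of a C++ static_cast between int32_t and uint32_t: the wire value (a Thrift i32) is unchanged, only its interpretation differs on each side.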
1
2
0
Currently I need to transfer data between C++ and Python applications. Since Thrift doesn't work with unsigned ints, what's the best way to transfer an unsigned value? Is the only way something like: assign unsigned to signed; serialize -> send -> receive -> deserialize the signed value; assign signed back to unsigned? Should I do it manually every time, or are there already third-party libraries for this? And how do I do it in the C++/Python case? Between C++ applications I can just static_cast<signed/unsigned>(unsigned/signed) for the conversion, but what about Python?
apache thrift, serialize unsigned
1.2
0
0
1,934
18,758,854
2013-09-12T08:10:00.000
4
1
0
0
java,c++,python,objective-c,oop
18,759,136
3
false
0
0
Below are the criteria that make code secure:

1. Code should do only what was intended. E.g. the query "select * from tablename where id='" + txtUserInputId + "'" is vulnerable to SQL injection.
2. Code must validate all user inputs.
3. Authorization should be implemented properly, in addition to authentication.
4. User input data should be sanitized before processing.
5. Sessions should be managed properly; how sessions are managed in .NET, Java, or any other language also affects the security of the code.
6. Memory must be managed properly: one process should not be able to access the memory of another process.
7. Database constraints must be validated before any database operation.
8. Configuration must be protected from the outside world. For example, the .NET framework does not allow users to see the Web.config file, which may contain sensitive information like DB credentials.

Note: you can say that C#/.NET is secure when it comes to query execution, because it provides CommandParameter, which automatically handles user input data for you.
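Point 1 can be illustrated with the stdlib sqlite3 module: the concatenated query is rewritten by the input, while the parameterized ? placeholder binds the input as plain data. The users table and the input string are made up for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('1', 'alice')")

user_input = "1' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation lets the input rewrite the query,
# turning it into "... WHERE id = '1' OR '1'='1'" (matches everything).
unsafe = "SELECT name FROM users WHERE id = '" + user_input + "'"

# Safe: the ? placeholder treats the whole input as one literal value.
rows = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_input,)
).fetchall()

print(rows)  # [] -- no row has that literal id
```

The same placeholder mechanism is what .NET's CommandParameter provides; every major database driver has an equivalent.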
1
0
0
What does secure code actually mean? Is it that you cannot make the code do something it wasn't supposed to do? Many of my peers tell me to migrate to C++ or Java because they are more secure thanks to OOP, but when I ask why, they just say, "it just... it is". An example would be much appreciated. I am a fair noob in C and a super-noob in C++ (just in case you wonder what complexity of answer would make me understand).
How would you explain, to a layman or a beginner in programming, the selling point of the object-oriented approach: SECURITY?
0.26052
0
0
247
18,760,480
2013-09-12T09:29:00.000
1
0
0
0
python,svn,pysvn
18,771,762
1
false
0
0
Sorry, found it. Client.status(dirname) returns an array, and the last element of this array is the directory itself. So myclient.status(dirname)[-1].text_status returns the directory's status. Have a nice day. Ouille.
1
1
0
I am trying to get the SVN status (unversioned, normal, ...) of a particular directory using the pysvn Client.status(dirname) function. But as said in the docs, pysvn returns an array with the status of the files within the directory, not of the directory itself. Is there another way to obtain this information? Have a nice day. Ouille.
How to get svn directory status using pysvn
0.197375
0
0
699
18,761,985
2013-09-12T10:39:00.000
0
0
1
1
python,pyinstaller
18,767,999
1
true
0
0
Looks like os.system('sudo useradd user') solved the issue.
1
0
0
I have a Python script which adds a user using os.system('useradd user'). This works fine when the script is run as sudo python script.py. However, once I convert it to an executable with PyInstaller (python pyinstaller.py --onefile script.py) and run the executable as sudo ./script, I get the error useradd: error while loading shared libraries: libselinux.so.1: failed to map segment from shared object: Permission denied. Any idea what the issue is and how to fix it?
After converting python script to executable with pyinstaller I get: error while loading shared libraries... Permission denied
1.2
0
0
550
18,769,092
2013-09-12T15:58:00.000
1
1
0
0
python,django,eclipse,unit-testing,pydev
19,786,166
1
true
1
0
You can create a new PyDev Django debug configuration in Eclipse and set the program arguments to 'test'. In this case, the debug configuration will execute python manage.py test, and your breakpoints inside test cases will get hit.
1
2
0
I've had a problem bothering me for a long time. I either run tests from Eclipse (Python unittest) using PyDev or the nose test runner; that way it's possible to debug tests and watch them in the PyUnit view, but the test database is not created and manage.py is not used. Or I run them via manage.py test; the test DB is created, but the above features are not available. Is it possible to debug tests in Eclipse while they run against the test DB? Regards, okrutny
How to run django tests in Eclipse to make debugging possible, but on test database
1.2
0
0
1,473
18,770,504
2013-09-12T17:15:00.000
0
0
1
0
ipython-notebook,jupyter-notebook
62,800,143
10
false
0
0
I tried all the options above and none of them worked. This is how I got rid of the scrolling cell. Right-click on the cell, and click "disable scrolling for outputs" I know this doesn't resize the scrolling cell, but it does make my code more legible since the scrolling cells are very small(for me at least).
4
134
0
By default the ipython notebook ouput is limited to a small sub window at the bottom. This makes us force to use separate scroll bar that comes with the output window, when the output is big. Any configuration option to make it not limited in size, instead run as high as the actual output is? Or option to resize it once it gets created?
resize ipython notebook output window
0
0
0
129,501
18,770,504
2013-09-12T17:15:00.000
32
0
1
0
ipython-notebook,jupyter-notebook
49,837,334
10
false
0
0
I just placed my cursor in the grey box next to the output and clicked and then all of the output was displayed.
4
134
0
By default the ipython notebook ouput is limited to a small sub window at the bottom. This makes us force to use separate scroll bar that comes with the output window, when the output is big. Any configuration option to make it not limited in size, instead run as high as the actual output is? Or option to resize it once it gets created?
resize ipython notebook output window
1
0
0
129,501
18,770,504
2013-09-12T17:15:00.000
286
0
1
0
ipython-notebook,jupyter-notebook
38,704,369
10
false
0
0
You can toggle the scroll window in the main menu of the notebook Cell -> Current Outputs -> Toggle Scrolling
4
134
0
By default the ipython notebook ouput is limited to a small sub window at the bottom. This makes us force to use separate scroll bar that comes with the output window, when the output is big. Any configuration option to make it not limited in size, instead run as high as the actual output is? Or option to resize it once it gets created?
resize ipython notebook output window
1
0
0
129,501
18,770,504
2013-09-12T17:15:00.000
-2
0
1
0
ipython-notebook,jupyter-notebook
59,451,884
10
false
0
0
In JupyterLab you can right click and choose: Create New View for Output.
4
134
0
By default the ipython notebook ouput is limited to a small sub window at the bottom. This makes us force to use separate scroll bar that comes with the output window, when the output is big. Any configuration option to make it not limited in size, instead run as high as the actual output is? Or option to resize it once it gets created?
resize ipython notebook output window
-0.039979
0
0
129,501
18,773,414
2013-09-12T20:05:00.000
4
0
1
0
python-3.x,pyscripter
18,924,536
1
false
0
0
In PyScripter 2.5.3, right-click on the interpreter window and choose 'interpreter editor option' from the pop-up menu.
1
3
0
I am using PyScripter as IDE for Python. Python version 3.3.2 64 bit version. Python interpreter in scripter is very small for visualisation. Can anybody please help me how to increase the font of python interpreter in PyScripter?
About font of python interpreter in PyScripter
0.664037
0
0
776
18,776,988
2013-09-13T01:48:00.000
1
0
1
0
python,windows,opencv,installation
18,777,073
3
false
0
0
Add the Python (2.7) install directory to your Windows PATH. Do the following steps: Open System Properties (Win+Pause) or right-click My Computer and then Properties Switch to the Advanced tab Click Environment Variables Select PATH in the System variables section Click Edit Add Python's path to the end of the list (the paths are separated by semicolons). Example: C:\Windows;C:\Windows\System32;C:\Python27
1
3
1
Recently, I have been studying OpenCV to detect and recognize faces using C++. In order to execute source code demonstration from the OpenCV website I need to run Python to crop image first. Unfortunately, the message error is 'ImportError: No module named Image' when I run the Python script (this script is provided by OpenCV website). I installed "python-2.7.amd64" and downloaded "PIL-1.1.7.win32-py2.7" to install Image library. However, the message error is 'Python version 2.7 required, which was not found in the registry'. And then, I downloaded the script written by Joakim Löw for Secret Labs AB / PythonWare to register registry in my computer. But the message error is "Unable to register. You probably have the another Python installation". I spent one month to search this issue on the internet but I cannot find the answer. Please support me to resolve my issue. Thanks, Tran Dang Bao
Python 2.7 - ImportError: No module named Image
0.066568
0
0
22,097
18,777,737
2013-09-13T03:19:00.000
7
0
1
0
python
18,777,751
5
true
0
0
max(abs(i) for i in [5, -2, -6, 5])
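Spelled out as a small, self-contained sketch of the same one-liner:

```python
a = [5, -2, -6, 5]

# Take the absolute value of each element, then the maximum of those.
result = max(abs(i) for i in a)
print(result)  # -> 6

# Equivalent two-step version: build the list of absolute values first.
abs_values = [abs(i) for i in a]
assert max(abs_values) == result
```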
1
4
0
How to calculate the absolute value for an array in python? for example: a = [5,-2,-6,5] I want to know the max of abs(a), and the answer should be 6. Thank you!
How to calculate the absolute value for an array in python?
1.2
0
0
26,560
18,778,046
2013-09-13T03:57:00.000
1
0
1
0
python,with-statement
18,778,090
3
false
0
0
Think of with as creating a "supervisor" (context manager) over a code block. The supervisor can even be given a name and referenced within the block. When the code block ends, either normally or via an exception, the supervisor is notified and it can take appropriate action based on what happened.
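A minimal illustration of that "supervisor" idea, using a toy Supervisor class invented for this sketch rather than a real resource:

```python
class Supervisor:
    """Toy context manager that records what happened in its block."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("entered")
        return self  # this is what 'with ... as name' binds to

    def __exit__(self, exc_type, exc, tb):
        # Called on normal exit AND when an exception escapes the block.
        self.events.append("exception" if exc_type else "clean exit")
        return False  # do not suppress exceptions

s = Supervisor()
with s as sup:
    pass  # the supervisor is notified when this block ends
print(s.events)  # -> ['entered', 'clean exit']
```

This is the same machinery a database connection uses in the tutorial: `__exit__` gets a chance to commit or roll back based on whether the block raised.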
1
5
0
I am new to Python. In one tutorial of connecting to mysql and fetching data, I saw the with statement. I read about it and it was something related to try-finally block. But I couldn't find a simpler explanation that I could understand.
What does the 'with' statement do in python?
0.066568
0
0
14,436
18,778,266
2013-09-13T04:21:00.000
0
0
1
0
python,image,matplotlib
18,778,542
2
false
0
0
If you simply need arrows pointing up and down, use Unicode arrows like "↑" and "↓". This would be really simple if rendering in a browser.
1
0
1
I need to make a very simple image that will illustrate a cash flow diagram based on user input. Basically, I just need to make an axis and some arrows facing up and down and proportional to the value of the cash flow. I would like to know how to do this with matplotlib.
Cash flow diagram in python
0
0
0
875
18,780,590
2013-09-13T07:24:00.000
1
0
1
0
python,performance,algorithm,caching
18,780,954
3
false
0
0
Only experiments will tell you which is better. Here's another simple idea to consider: link all the keys in a linked list in order of arrival. Each time you retrieve a key, iterate from the beginning of the list and remove all expired items, from both the list and the dictionary.
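A rough sketch of that idea, using an OrderedDict (which preserves arrival order) in place of a hand-rolled linked list, with an injectable clock so expiry is deterministic rather than wall-clock dependent:

```python
from collections import OrderedDict
import time

class ExpiringCache:
    """Keys are purged lazily, oldest-first, on each access."""
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._data = OrderedDict()  # key -> (inserted_at, value), arrival order

    def put(self, key, value):
        self._data[key] = (self.clock(), value)

    def _purge(self):
        # Walk from the front (oldest arrivals), stop at the first fresh key.
        now = self.clock()
        while self._data:
            key, (t, _) = next(iter(self._data.items()))
            if now - t < self.ttl:
                break
            del self._data[key]

    def get(self, key, default=None):
        self._purge()
        item = self._data.get(key)
        return default if item is None else item[1]

# Deterministic demo with a fake clock instead of real time.
now = [0.0]
cache = ExpiringCache(ttl=60, clock=lambda: now[0])
cache.put("a", 1)
now[0] = 61.0          # 61 seconds later: "a" has expired
expired = cache.get("a")
cache.put("b", 2)
fresh = cache.get("b")
```

Because entries sit in arrival order, each purge stops as soon as it meets a fresh key, which keeps the per-access cleanup cost small.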
2
3
0
I need simple cache structure (in python, but it doesn't really matter), with some specific requirements: Up to several millions of small objects (100 bytes on average) Speed is the key (both put and get), I'd expect operation times at about few microseconds Only one thread accessing this - so it can be all just in memory (do not need persistence) Keys are MD5 hashes (if it matters) There's an expiration time, global for the cache - every key should be removed from the cache after expiration time, counting from the time of first put Now, the point is how to implement expiration - as everything other can be done using simple dictionary. The simplest solution - to iterate all data regularly and remove expired keys - could lock whole cache for too long. It could be improved by iterating parts of the data with every cleanup process - but still it will take some time (or won't clean it fast enough). Also removing keys one by one looks like the waste of CPU - as they could be removed in batches (don't have to be removed just after expiration - we can afford some extra RAM for keeping expired keys a little bit longer). Checking keys during the retrieve is not enough (although it should be done nevertheless, to not return expired keys) - as many keys can be never retrieved and then they will stay forever (or just too long). Most answers for that problem suggest using memcached, but I think this will be waste of CPU, especially as I keep objects which can be put to the dictionary by the reference, but using memcached they would have to be (de)serialized. I have some idea how to implement this: split data into time slices, having actually several dictionaries - for example, if expire time is 60 seconds, then we have (at most) 4 dictonaries and every 20 seconds we add new one - where new keys are put, and remove the 4th one - where we'll have keys added over 60 seconds ago. 
This makes cleaning very fast at the cost of retrieve time, where you need to lookup in 4 dictionaries instead of one (and RAM usage increased by 33%). So finally the question - which is: is there any better solution? Or maybe I'm wrong and some of mentioned solutions (removing keys one by one) would be better and faster? I don't want to reinvent the wheel, but didn't find any good solution in the net.
Optimal algorithm for cache with expire
0.066568
0
0
1,205
18,780,590
2013-09-13T07:24:00.000
0
0
1
0
python,performance,algorithm,caching
18,781,248
3
false
0
0
One implementation of a hashtable is to store a list of (key, value) for each hash value. You can extend this to storing a list of (key, insertion time, value) for each hash value. On both get and set, you can throw away expired items as you scan for the key you're interested in. Yes, it may leave expired items in the hashtable for arbitrarily long, but only O(N) items on average, where N is the size of your hash table. Good properties of this approach are that there's no concurrent cleanup going on, and the overhead is more or less constant. You'll have to code this in C rather than Python if you care about speed.
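In pure Python (the answer suggests C for speed; this only sketches the bookkeeping), one bucket of such a table could look like this, with expired triples dropped as a side effect of every scan:

```python
import time

class Bucket:
    """One hash-table slot: a list of (key, inserted_at, value) triples."""
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.items = []

    def _live(self):
        # Throw away expired entries while scanning, as the answer describes.
        now = self.clock()
        self.items = [(k, t, v) for (k, t, v) in self.items
                      if now - t < self.ttl]
        return self.items

    def set(self, key, value):
        self.items = [(k, t, v) for (k, t, v) in self._live() if k != key]
        self.items.append((key, self.clock(), value))

    def get(self, key):
        for k, _, v in self._live():
            if k == key:
                return v
        return None

# Deterministic demo with a fake clock.
now = [0.0]
b = Bucket(ttl=10, clock=lambda: now[0])
b.set("x", 42)
now[0] = 5.0
still_there = b.get("x")   # 42
now[0] = 11.0
gone = b.get("x")          # None
```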
2
3
0
I need simple cache structure (in python, but it doesn't really matter), with some specific requirements: Up to several millions of small objects (100 bytes on average) Speed is the key (both put and get), I'd expect operation times at about few microseconds Only one thread accessing this - so it can be all just in memory (do not need persistence) Keys are MD5 hashes (if it matters) There's an expiration time, global for the cache - every key should be removed from the cache after expiration time, counting from the time of first put Now, the point is how to implement expiration - as everything other can be done using simple dictionary. The simplest solution - to iterate all data regularly and remove expired keys - could lock whole cache for too long. It could be improved by iterating parts of the data with every cleanup process - but still it will take some time (or won't clean it fast enough). Also removing keys one by one looks like the waste of CPU - as they could be removed in batches (don't have to be removed just after expiration - we can afford some extra RAM for keeping expired keys a little bit longer). Checking keys during the retrieve is not enough (although it should be done nevertheless, to not return expired keys) - as many keys can be never retrieved and then they will stay forever (or just too long). Most answers for that problem suggest using memcached, but I think this will be waste of CPU, especially as I keep objects which can be put to the dictionary by the reference, but using memcached they would have to be (de)serialized. I have some idea how to implement this: split data into time slices, having actually several dictionaries - for example, if expire time is 60 seconds, then we have (at most) 4 dictonaries and every 20 seconds we add new one - where new keys are put, and remove the 4th one - where we'll have keys added over 60 seconds ago. 
This makes cleaning very fast at the cost of retrieve time, where you need to lookup in 4 dictionaries instead of one (and RAM usage increased by 33%). So finally the question - which is: is there any better solution? Or maybe I'm wrong and some of mentioned solutions (removing keys one by one) would be better and faster? I don't want to reinvent the wheel, but didn't find any good solution in the net.
Optimal algorithm for cache with expire
0
0
0
1,205
18,783,390
2013-09-13T09:56:00.000
-3
0
1
1
python,shared-libraries,pip,include-path,pyodbc
18,847,849
7
false
0
0
Just in case it's of help to somebody: I still could not find a way to do it through pip, so I ended up simply downloading the package and installing it through its 'setup.py'. I also switched to what seems an easier-to-install API called 'pymssql'.
1
88
0
I am using pip and trying to install a python module called pyodbc which has some dependencies on non-python libraries like unixodbc-dev, unixodbc-bin, unixodbc. I cannot install these dependencies system wide at the moment, as I am only playing, so I have installed them in a non-standard location. How do I tell pip where to look for these dependencies ? More exactly, how do I pass information through pip of include dirs (gcc -I) and library dirs (gcc -L -l) to be used when building the pyodbc extension ?
python pip specify a library directory and an include directory
-0.085505
0
0
69,488
18,785,825
2013-09-13T12:00:00.000
5
0
1
0
python,ctypes,cython,pysdl2
19,071,067
1
false
0
1
Here's a brief summary of how both tools work: ctypes is a very pythonic wrapper over a C library called libffi, which is able to load shared libraries (.so or .dll files), and call them, without first compiling any code to wrap the functions defined in those libraries. You do have to tell ctypes about the functions it'll call, so that it can convert from the python types (int, str, and so on) to the ABI expressed in the shared lib (uint32_t, char *, and so on). Cython is a 'sort of python' to C translator. The generated C code can be compiled and the result is a special sort of shared library (.so or .dll again) which has all the right functions to be a Python C extension. Cython is very smart, based on the type annotations in the input, it knows whether to emit code that directly calls C functions (when you use cdef) or calls regular python objects by way of the PyObject_Call C API. Since you can (more or less) freely mix C and python in Cython sources, you should have no difficulty using PySDL2 in your Cython library, just invoking it as though it were regular python, import it, call it, everything should "just work". That said, you might benefit from including libsdl declarations in your code, directly, if you end up calling out to SDL from tight inner loops, to avoid the overhead of converting from the low level C types to python types, just to have ctypes convert them back again. You could probably put that off until your application has grown a bit and you notice some performance bottlenecks.
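To make the ctypes half concrete, here is a tiny, hedged example (POSIX-only: `CDLL(None)` loads symbols from the running process, which includes the C library on Linux/macOS; on Windows you would name a DLL instead):

```python
import ctypes

# Load the symbols already linked into the interpreter (POSIX behaviour).
libc = ctypes.CDLL(None)

# Declare the C signature so ctypes converts Python ints correctly --
# this is the "tell ctypes about the functions" step from the answer.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

result = libc.abs(-7)
print(result)  # -> 7
```

No compilation happens anywhere here, which is exactly why a ctypes-based binding like PySDL2 coexists happily with Cython-compiled modules in one project.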
1
3
0
We are currently using Cython to make bindings to some networking and DB libraries. We want also use SDL, but PySDL2 uses ctypes for binding. While Cython is whole interpreter, ctypes is just library. But, Cython and ctypes are most often portrayed as alternatives to each other. Thus I am totally unsure if they are compatible. So, question: it is possible to use Cython and ctypes together in one project?
Cython + ctypes?
0.761594
0
0
2,695
18,790,301
2013-09-13T15:43:00.000
1
0
1
0
python,django
18,794,763
2
false
1
0
You could add a model (with a db table) that stores values for a, b and x. Then for each query, you could look for an instance with a and b and return the associated x.
2
0
0
I am writing an API which returns json according to queries. For example: localhost/api/query?a=1&b=2. To return the json, I need to do some pre-calculations to calculate a value, say, x. The pre-calculation takes long time (several hundred milliseconds). For example, The json file returns the value of x+a+b. When the user query localhost/api/query?a=3&b=4, x will be calculate again and this is a waste of time since x won't change for any query. The question is how can I do this pre-calculation of x for all queries (In the real app, x is not a value but a complex object returned by wrapped C++ code).
How to avoid repeated pre-calculation in django view
0.099668
0
0
161
18,790,301
2013-09-13T15:43:00.000
2
0
1
0
python,django
18,790,769
2
false
1
0
If you are using some sort of cache (memcached, redis) you can store it there. You can try to serialize the object with pickle, msgpack etc. Then you can retrieve and deserialize it.
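A minimal sketch of the serialize/deserialize round trip the answer mentions. The cache backend is stubbed out with a plain dict, and `Precomputed` is a hypothetical stand-in for the expensive object `x` from the question:

```python
import pickle

# Stand-in for memcached/redis: any store that holds bytes under a key.
fake_cache = {}

class Precomputed:
    """Hypothetical stand-in for the expensive object 'x'."""
    def __init__(self, payload):
        self.payload = payload

def get_x():
    blob = fake_cache.get("x")
    if blob is None:
        obj = Precomputed(payload=123)       # the slow computation, done once
        fake_cache["x"] = pickle.dumps(obj)  # serialize into the cache
        return obj
    return pickle.loads(blob)                # deserialize on later requests

first = get_x()    # computes and stores
second = get_x()   # served from the cache this time
```

With a real backend only the dict access changes; the pickle.dumps/loads calls stay the same.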
2
0
0
I am writing an API which returns json according to queries. For example: localhost/api/query?a=1&b=2. To return the json, I need to do some pre-calculations to calculate a value, say, x. The pre-calculation takes long time (several hundred milliseconds). For example, The json file returns the value of x+a+b. When the user query localhost/api/query?a=3&b=4, x will be calculate again and this is a waste of time since x won't change for any query. The question is how can I do this pre-calculation of x for all queries (In the real app, x is not a value but a complex object returned by wrapped C++ code).
How to avoid repeated pre-calculation in django view
0.197375
0
0
161
18,790,639
2013-09-13T16:01:00.000
1
0
1
1
python,ubuntu,pip
21,244,963
1
true
0
0
I do not really have a solution for the pip path lookup, but deleting /usr/local/lib/python2.7/dist-packages/_PACKAGE_NAME did the trick for me. At the very least it allowed me to install anew.
1
2
0
I installed several packages (among them patsy and statsmodels) with pip 1.3.1 in kubuntu 13.04. They were put into /usr/local/lib, instead of /usr/lib. When using pip freeze or pip list, these packages appear fine, and are usable in python. However, when I use pip uninstall I get "Can't uninstall 'statsmodels'. No files were found to uninstall." The structure of install packages in /usr/local/lib/python2.7/dist-packages seem correct, and installed-files.txt has everything listed. How do I make pip see these files and uninstall them?
PIP uninstall not looking into /usr/local
1.2
0
0
1,195
18,791,469
2013-09-13T16:55:00.000
0
0
0
0
python,matplotlib,bigdata,heatmap
36,248,111
1
true
0
0
I solved by downsampling the matrix to a smaller matrix. I decided to try two methodologies: supposing I want to down-sample a matrix of 45k rows to a matrix of 1k rows, I took a row value every 45 rows another methodology is, to down-sample 45k rows to 1k rows, to group the 45k rows into 1k groups (composed by 45 adjacent rows) and to take the average for each group as representative row Hope it helps.
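Both strategies can be sketched in a few lines of plain Python (the real data would be a 45k-row array, but the arithmetic per row is the same):

```python
def take_every(rows, step):
    """Strategy 1: keep one row every `step` rows."""
    return rows[::step]

def group_average(rows, group):
    """Strategy 2: average each block of `group` adjacent rows."""
    out = []
    for i in range(0, len(rows), group):
        block = rows[i:i + group]
        out.append(sum(block) / len(block))
    return out

rows = list(range(90))              # toy stand-in for 45k rows
sampled = take_every(rows, 45)      # -> [0, 45]
averaged = group_average(rows, 45)  # -> [22.0, 67.0]
```

The group-average variant loses less information, since every row contributes to its representative instead of 44 out of 45 being discarded.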
1
1
1
I am trying to plot a heatmap of a big microarray dataset (45K rows per 446 columns). Using pcolor from matplotlib I am unable to do it because my pc goes easily out of memory (more than 8G).. I'd prefer to use python/matplotlib instead of R for personal opinion.. Any way to plot heatmaps in an efficient way? Thanks
How to plot a heatmap of a big matrix with matplotlib (45K * 446)
1.2
0
0
1,446
18,792,228
2013-09-13T17:43:00.000
0
1
1
0
python,linux,shell,vi
18,792,440
3
false
0
0
I might be interpreting your questions incorrectly but this is my suggestion. Maybe you can open more than one terminal. On one terminal, write/edit your code and save it. I'm assuming with ':w' and leave the terminal open. Then on the other terminal, compile your code.
1
1
0
Sometimes I need to test my python code in shell, so I have to edit the code, save and quit and run the code. Then reopen the file to modify my code if anything goes wrong. Then save and quit .... I am wondering is there a handy feature in VI to easily test the code inside VI?
Python Test Code inside VI
0
0
0
250
18,792,536
2013-09-13T18:03:00.000
0
0
0
0
python,xml,pdf
18,794,296
1
true
1
0
Perhaps what you are looking for is whether Adobe LiveCycle Designer support command-line arguments to do that. You could then automate this with python by issuing the command-line, hum, commands.
1
2
0
At my office we had PDFs designed using Adobe LiveCycle Designer that allows you to import xml data into the form to populate it. I would like to know if I could automate the process of importing the xml data into the form using python. Ideally I would like it if I didn't have to re-create the form using python since the form itself is quite complex. I've looked up several different modules and they all seem to be able to read pdfs or create them from scratch, but not populate them. Is there a python module out there that would have that kind of functionality? Edit: I should mention that I don't have access to LiveCycle.
Import XML data into a PDF form using Python
1.2
0
1
935
18,792,537
2013-09-13T18:04:00.000
1
0
0
0
python,security
18,792,613
5
false
0
0
I think it would be better to set up an account for them on the database, and control access to the database using database rights.
4
4
0
I have a piece of python code, it uses MySQLdb to connect to a database. The username/password are in the script in plaintext. I want to deliver this code to client but do not want them to know the username/password. What is the best way to do that? If the MySQLdb module can be packaged as well, it would be even better. Let us assume it is on Linux and the client has standard Python interpreter. It does not need to be super safe, not disclosing the username/password in plaintext is good enough. The client need to have Write permission to our database. But the Write operation can only come from our program, we do not want the client to write in arbitrary way.
How to hide username/password in Python codes?
0.039979
0
0
15,788
18,792,537
2013-09-13T18:04:00.000
2
0
0
0
python,security
18,792,609
5
false
0
0
Strings are possibly the hardest thing to hide, no matter what you do to compile the executable with py2exe, or obfuscate it, etc. In practice, you cannot "hide" connection strings from the client. Of course, you could encrypt them, but then you'd need a means of decrypting them, and the client could easily (relatively) decrypt the username and password themselves.
4
4
0
I have a piece of python code, it uses MySQLdb to connect to a database. The username/password are in the script in plaintext. I want to deliver this code to client but do not want them to know the username/password. What is the best way to do that? If the MySQLdb module can be packaged as well, it would be even better. Let us assume it is on Linux and the client has standard Python interpreter. It does not need to be super safe, not disclosing the username/password in plaintext is good enough. The client need to have Write permission to our database. But the Write operation can only come from our program, we do not want the client to write in arbitrary way.
How to hide username/password in Python codes?
0.07983
0
0
15,788
18,792,537
2013-09-13T18:04:00.000
3
0
0
0
python,security
18,792,604
5
false
0
0
There are all sorts of things you can do to avoid this, a couple of which are: Package your data with your app, sqlite db file is easily packageable Expose your app logic through some sort of API and distribute a client, which uses the api to communicate with your app, so the client will never have to know any db credentials Create a mysql user account with correct permissions for your clients
4
4
0
I have a piece of python code, it uses MySQLdb to connect to a database. The username/password are in the script in plaintext. I want to deliver this code to client but do not want them to know the username/password. What is the best way to do that? If the MySQLdb module can be packaged as well, it would be even better. Let us assume it is on Linux and the client has standard Python interpreter. It does not need to be super safe, not disclosing the username/password in plaintext is good enough. The client need to have Write permission to our database. But the Write operation can only come from our program, we do not want the client to write in arbitrary way.
How to hide username/password in Python codes?
0.119427
0
0
15,788
18,792,537
2013-09-13T18:04:00.000
4
0
0
0
python,security
30,562,938
5
false
0
0
import base64 p = 'desired password' y = base64.b64encode(p.encode()) print(y) and if you want to decode y: z = base64.b64decode(y) print(z.decode()) (note: in Python 3, b64encode takes bytes, hence the encode/decode calls; also note base64 is an encoding, not encryption, so this only obscures the password)
4
4
0
I have a piece of python code, it uses MySQLdb to connect to a database. The username/password are in the script in plaintext. I want to deliver this code to client but do not want them to know the username/password. What is the best way to do that? If the MySQLdb module can be packaged as well, it would be even better. Let us assume it is on Linux and the client has standard Python interpreter. It does not need to be super safe, not disclosing the username/password in plaintext is good enough. The client need to have Write permission to our database. But the Write operation can only come from our program, we do not want the client to write in arbitrary way.
How to hide username/password in Python codes?
0.158649
0
0
15,788
18,795,081
2013-09-13T20:54:00.000
2
0
1
0
python,django,virtualenv,virtualenvwrapper
18,795,383
4
true
1
0
I would recommend just starting from scratch with a new virtualenv. That is the reason that they are built: one virtualenv can house a project that uses one version of Django, but another project can use a separate version of Django (perhaps an older version because an app you're using doesn't yet work with the newer version). If you are attempting to completely recreate the same environment (probably because you want to run the project in another spot), you can use the pip freeze in alexcxe's answer. This will install everything again from scratch, attempting to install the exact same version. You may or may not want to do this, for the reasons I mentioned in the first paragraph. This is the entire point of virtual environments. I have 20 different projects on my computer, each with their own virtualenv. It's fairly common to work in this manner.
1
5
0
Using virtualenvwrapper, I installed Django for one virtualenv. Now I can't reach it outside that environment. I want to be able to start new Django projects both outside any virtualenv, and inside new virtualenvs. Do I need to reinstall Django or can I somehow import the installation from my first virtualenv?
Do I need to reinstall Django for new virtualenv?
1.2
0
0
3,791
18,798,508
2013-09-14T05:06:00.000
1
0
1
0
python,python-3.x,cygwin
18,806,079
2
false
0
0
I do not know details about the Cygwin distribution of Python 3.3, but the official distribution of Python 3.3 for Windows contains Python Launcher for Windows -- in the form of py.exe and pyw.exe located in c:\Windows. During the installation, the .py and .pyw extensions are associated with the launcher. If the script does not contain #!python3 as the first line, the Python launcher starts the highest found version of Python 2.x. The chance is that either the Python launcher is also part of the Cygwin distribution, or you had the non-Cygwin version installed earlier; hence the launcher and the association are already active (and you do not know). How exactly do you execute your script? Try to add the magic line #!python3 as the very first line of the script.
2
3
0
I'm trying to run Python 3 on Cygwin (windows 8) but for some reason , It recognizes only Python 2. Can I separately download python and put it in same folder in the Cygwin folder in program files?
Why does Cygwin ignore python 3
0.099668
0
0
1,345
18,798,508
2013-09-14T05:06:00.000
1
0
1
0
python,python-3.x,cygwin
18,798,611
2
true
0
0
You must have the default version of python as a 2.x version instead of your 3.x version, but this can be fixed by a quick system hack: Go to My Computer -----> System properties -----> Advanced System settings -----> Advanced -----> Environment Variables and then check if the PATH variable contains your python2 (default : C:\Python27) installation path; if so, remove it first and then replace it with your python3 (default : C:\Python33) installation directory
2
3
0
I'm trying to run Python 3 on Cygwin (windows 8) but for some reason , It recognizes only Python 2. Can I separately download python and put it in same folder in the Cygwin folder in program files?
Why does Cygwin ignore python 3
1.2
0
0
1,345
18,799,033
2013-09-14T06:29:00.000
0
0
0
0
python,flash,selenium,webdriver
18,804,048
1
false
0
0
Use flashselenium or sikuli for flash object testing.
1
0
0
After trying to find some help on the internet related to flash testing through selenium, all I find is FlexUISelenium package available for Selenium RC. I DO NOT find any such package available for Selenium Webdriver. I am working with python and selenium webdriver and I do not see any packages available to automate flash applications. Is there any such package available at all for webdriver? If not, how do I start automating a flash application in webdriver?
How to perform flash object testing in selenium webdriver with python?
0
0
1
2,568
18,799,714
2013-09-14T08:11:00.000
0
0
1
0
python,python-idle
18,799,741
2
false
0
0
To check if something, you use a conditional clause (if/elif/else) To check what letters are used in a string, you can use a set. For example, if the input is BANANA, you can do set("BANANA") to create a set of unique values ({"B", "A", "N"}) To check if certain letters are in the set, you can use the all() function. all(letter in the_set for letter in ['B', 'A', 'N']). Or, you can just compare one set to another (e.g. {'A', 'B', 'C'} == set('ABBCACABACACBA')) Finally, if the above conditional is True, then return True Now have a go at writing some code. If you're having trouble, feel free to create another question supplying what you have tried and what errors occur/what the problem is.
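Putting those pieces together into one sketch (the function name `is_valid` is just illustrative):

```python
VALID = set("BAN")  # the only letters allowed

def is_valid(name):
    # True when every character of `name` is in the allowed set.
    return all(letter in VALID for letter in name)

print(is_valid("BANANA"))   # -> True
print(is_valid("BANDANA"))  # -> False ('D' is not allowed)
```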
1
1
0
Can somebody pls help me on how to write the code to check if few given letters are part of the string entered. the output must be true if the letters are present or false. For example: Return True if and only if the name is valid (that is, it contains no characters other than 'B' 'A' 'N') if the word entered is BANANA. Pls help me with the code.
How to check if a few letters are part of the string entered?
0
0
0
63
18,802,940
2013-09-14T14:48:00.000
0
0
0
1
python,google-app-engine,google-cloud-endpoints
18,817,455
2
false
1
0
Check if you are running out of resources.
1
0
0
I have an issue with debugging and Cloud Endpoints. I'm using tons of endpoints in my application, and one endpoint consistently returns with error code 500, message "Internal Error". This endpoint does not appear in my app's logs, and when I run its code directly in the interactive console (in production), everything works fine. There might be a bug in my code that I am failing to see, however, the real problem here is that the failing endpoints request is NOT showing up in my app's logs – which leaves me with no great way to debug the problem. Any tips? Is it possible to force some kind of "debug" mode where more information (such as a stack trace) is conveyed back to me in the 500 response from endpoints? Why isn't the failing request showing up in my app's logs?
Debugging Cloud Endpoints: Error making request, but no request appears in logs
0
0
0
177
18,803,444
2013-09-14T15:43:00.000
0
1
0
0
python,eclipse,eclipse-plugin,leap-motion
18,819,594
2
false
0
0
Hi everyone, now I can compile successfully. I found how to solve this problem: copy msvcp100.dll, msvcp100d.dll, msvcr100.dll and msvcr100d.dll into your project folder. The problem is that we call leap.py, leap.py calls LeapPython.pyd, and LeapPython.pyd needs those .dll files, so we must include the 4 .dll files in the project. Thank you everyone for your answers
1
0
0
I am developing leap motion with the help of python. I have downloaded eclipse and installed python plugin Now I need to add library of leap motion , that is LeapPython.pyd How to add this library on eclipse ? Any help would be appreciated. thank you
How to add library on eclipse (python) ?
0
0
0
18,027
18,805,203
2013-09-14T18:45:00.000
7
0
1
0
python,python-idle
45,206,452
7
false
0
0
If you are trying to track down which line caused an error, if you right-click in the Python shell where the line error is displayed it will come up with a "Go to file/line" which takes you directly to the line in question.
2
119
0
In the main shell of IDLE, errors always return a line number but the development environment doesn't even have line numbers. Is there any way to turn on line numbers?
How to turn on line numbers in IDLE?
1
0
0
187,291
18,805,203
2013-09-14T18:45:00.000
2
0
1
0
python,python-idle
57,194,672
7
false
0
0
Line numbers were added to the IDLE editor two days ago and will appear in the upcoming 3.8.0a3 and later 3.7.5. For new windows, they are off by default, but this can be reversed on the Settings dialog, General tab, Editor section. For existing windows, there is a new Show (Hide) Line Numbers entry on the Options menu. There is currently no hotkey. One can select a line or block of lines by clicking on a line or clicking and dragging. Some people may have missed Edit / Go to Line. The right-click context menu Goto File/Line works on grep (Find in Files) output as well as on tracebacks.
2
119
0
In the main shell of IDLE, errors always return a line number but the development environment doesn't even have line numbers. Is there any way to turn on line numbers?
How to turn on line numbers in IDLE?
0.057081
0
0
187,291
18,805,490
2013-09-14T19:18:00.000
1
1
0
1
python,mysql,crontab,raspberry-pi
18,811,200
1
true
0
0
Solved the problem in quite an ugly way, but it's working now. Just added: time.sleep(5) before trying to connect to the MySQL DB. I would be pleased if someone has a better solution.
1
0
0
A Raspberry Pi (Raspbian Wheezy) has a cronjob, created as user pi with "sudo crontab -e", so it should have root privileges. ps aux | grep /home/.../myscript.py ...says its owner is user "pi"!? (Is this correct?) The Python script called from crontab works fine if I call it from the terminal. It reads data from UART (serial port) and saves it into a MySQL database. My Python script has 'chmod 777' permissions. The crontab file: @reboot sudo python /home/pi/pythonprogram/myscript.py & > /home/pi/pythonprogram/myscript.log crontab log file: Error mysql: 2002 Can't connect to local MYSQL server throught socket '/var/run/mysqld/mysqld.sock' (2) Could it be that my script is called first, before the servers (MySQL and Apache) are running during the boot-up process? Is there a way to prevent this? What else could be the reason for this error?
Raspberry Pi crontab starts py script at bootup -> logging: error mysql 2002 (can't connect to local server)
1.2
0
0
1,257
18,805,720
2013-09-14T19:42:00.000
1
0
1
0
python
18,805,746
3
false
0
0
To stay in Python afterwards you could just type 'python' on the command prompt, then run your code from inside python. That way you'll be able to manipulate the objects (lists, dictionaries, etc) as you wish.
1
2
0
All I know how to do is type "python foo.py" in dos; the program runs but then exits python back to dos. Is there a way to run foo.py from within python? Or to stay in python after running? I want to do this to help debug, so that I may look at variables used in foo.py (Thanks from a newbie)
How do I run a script from within Python on Windows/DOS?
0.066568
0
0
83
18,807,022
2013-09-14T22:29:00.000
1
0
1
0
python,google-app-engine,google-cloud-datastore
18,807,184
2
false
1
0
If the logic is fixed, keep it in your code. Maybe you can procedurally generate the dicts on startup. If there is a dynamic component to the logic (something you want to update frequently), a data store might be a better bet, but it sounds like that's not applicable here. Unless the number of combinations runs over the millions, and you'd want to trade speed in favour of a lower memory footprint, stick with putting it in the application itself.
1
0
0
The context for this question is: A Google App Engine backend for a two-person multiplayer turn-based card game The game revolves around different combinations of cards giving rise to different scores in the game Obviously, one would store the state of a game in the GAE datastore, but I'm not sure on the approach for the design of the game logic itself. It seems I might have two choices: Store entries in the datastore with a key that is a sorted list of the valid combinations of cards that can be played. These will then map to the score values. When a player tries to play a combination of cards, the server-side Python will sort the combination appropriately and look up the key. If it succeeds, it can do the necessary updates for the score; if it fails, then the combination wasn't valid. Store the valid combinations as a Python dictionary written into the server-side code and perform the same lookups as above to test the validity/get the score, but without a trip to the datastore. From a cost point of view (datastore lookups aren't free), option 2 seems like it would be better. But then there is the performance of the instance itself - will the startup time, processing time, memory usage start to tip me into greater expense? There's also the code maintenance issue of constructing that Python dictionary, but I can bash together some scripts to help me write the code for that on the infrequent occasions that the logic changes. I think there will be on the order of 1000 card combinations (that can produce a score) of between 2 and 6 cards, if that helps anyone who wants to quantify the problem. I'm starting out with this design, and the summary of the above is whether it is sensible to store the static logic of this kind of game in the datastore, or simply keep it as part of the CPU-bound logic? What are the pros and cons of both approaches?
Where to hold static information for game logic?
0.099668
1
0
209
18,813,288
2013-09-15T14:07:00.000
2
0
0
1
python,macos,web2py
18,813,520
1
true
1
0
If you are using the Mac binary, I think the applications are in /web2py/web2py.app/Contents/Resources/applications/. Note, you can also run the source version of web2py, in which case, the applications will be in /web2py/applications/.
1
4
0
I am pulling my hair out trying to figure out where web2py stores the project files by default in OS X. It is not located in the same directory as the web2py.app. I can launch the web interface and see the project in the admin view but want to edit the files from Sublime Text as opposed to the admin web interface. I've looked through the web2py book and Google user group with no luck. Any suggestions? This seems like it should be fairly obvious...
Where does Web2py save project files OS X?
1.2
0
0
912
18,814,543
2013-09-15T16:12:00.000
1
0
1
1
python,c,windows,compiler-construction,windows-server-2008-r2
18,814,595
1
true
0
0
Yes, but there are other issues you need to watch-out for. Are both systems either 32-bit or 64-bit? Not just the hardware, but the Python version as well. Are both systems running the same version of Python? That's tied to both major and minor version numbers (see sys.version_info). Edit: Your edit has answered these questions, so you should be fine. Make sure you keep any Python upgrades and modules in step, and only use the 64-bit versions.
1
0
0
I installed a Python package (such as SQLAlchemy), and it compiled C into binaries on a Windows 7 machine. Can I expect to be able to reuse the compiled binaries on Windows Server 2008-R2? Edit: Both are AMD64 Python 2.7.3 and Windows 64 bit.
Porting compiled code (distutils) from Windows 7 to Windows Server
1.2
0
0
45
18,817,690
2013-09-15T21:29:00.000
4
0
1
0
ipython,jupyter-notebook,ipython-notebook,jupyter
50,744,815
9
false
0
0
If what you want is to remove the numbers themselves, so that each cell shows In [ ] (instead of something like In [247] which is leftover from some previous incarnation of the kernel), use "Cell" > "All Output" > "Clear" (in Jupyter Notebook 5.4.0) or "Edit" > "Clear All Outputs" (In Jupyter Lab 0.32.1). This will remove all the numbers, even if you're in the middle of running a notebook. It will not reset the numbering back to 1; e.g. if the last cell you executed was 18, the next will be 19. If you're using this because you want clarity about which cells you've executed during this run of the kernel and which cells you haven't executed yet, use "Cell" > "All Output" > "Clear" (or "Edit" > "Clear All Outputs") immediately after you start (or restart) the kernel. This can be useful when restarting a kernel, or when opening a saved or duplicated notebook. This will also remove all outputs from the notebook. Thanks to user2651084 in a previous comment for this.
5
90
0
I just wrote my first extensive Python tutorial using IPython notebooks. All went well, except I did a lot of testing and moving blocks around. How do I reset the In [ ]: numbering? I have tried quitting and reloading, but that doesn't seem to work.
How do I reset the Jupyter/IPython input prompt numbering?
0.088656
0
0
61,595
18,817,690
2013-09-15T21:29:00.000
19
0
1
0
ipython,jupyter-notebook,ipython-notebook,jupyter
33,576,660
9
false
0
0
Every .ipynb file can be opened in an editor. Everything written there is in plain text (JSON). For each cell which has "cell_type": "code" there'd be another key-value pair "execution_count": <number>. As you might have guessed, that is the prompt numbering. Hence, if the notebook contains code which takes time to execute (as was the case for me) this method is time-efficient. Now, either you can manually change each execution_count or write a simple script to get the numbering right. To check the results just refresh the notebook in the browser without stopping the kernel. And everything will be as per your needs; even all the variables/loaded data will remain in the environment.
5
90
0
I just wrote my first extensive Python tutorial using IPython notebooks. All went well, except I did a lot of testing and moving blocks around. How do I reset the In [ ]: numbering? I have tried quitting and reloading, but that doesn't seem to work.
How do I reset the Jupyter/IPython input prompt numbering?
1
0
0
61,595
18,817,690
2013-09-15T21:29:00.000
74
0
1
0
ipython,jupyter-notebook,ipython-notebook,jupyter
26,464,982
9
false
0
0
I think the only way to do what you want is: 'Kernel > Restart' (restart the kernel) and then 'Cell > Run All' (run the script).
5
90
0
I just wrote my first extensive Python tutorial using IPython notebooks. All went well, except I did a lot of testing and moving blocks around. How do I reset the In [ ]: numbering? I have tried quitting and reloading, but that doesn't seem to work.
How do I reset the Jupyter/IPython input prompt numbering?
1
0
0
61,595
18,817,690
2013-09-15T21:29:00.000
20
0
1
0
ipython,jupyter-notebook,ipython-notebook,jupyter
18,818,142
9
false
0
0
You can reset the kernel (shortcut: C-m .) and re-run the whole notebook. Quitting and reloading doesn't work because the code is not re-evaluated.
5
90
0
I just wrote my first extensive Python tutorial using IPython notebooks. All went well, except I did a lot of testing and moving blocks around. How do I reset the In [ ]: numbering? I have tried quitting and reloading, but that doesn't seem to work.
How do I reset the Jupyter/IPython input prompt numbering?
1
0
0
61,595
18,817,690
2013-09-15T21:29:00.000
2
0
1
0
ipython,jupyter-notebook,ipython-notebook,jupyter
55,584,413
9
false
0
0
Cell > All Output > Clear clears all the In []: numbers but does not reset them back to 1 for the next cell you run. Kernel > Restart & Clear Output restarts the kernel, clears all output, and resets the In []: numbers back to 1.
5
90
0
I just wrote my first extensive Python tutorial using IPython notebooks. All went well, except I did a lot of testing and moving blocks around. How do I reset the In [ ]: numbering? I have tried quitting and reloading, but that doesn't seem to work.
How do I reset the Jupyter/IPython input prompt numbering?
0.044415
0
0
61,595
18,818,608
2013-09-15T23:40:00.000
1
1
1
0
python,unit-testing,testing
18,818,630
3
false
0
0
It comes down to how much of a purist you want to be. I would not go crazy and mock class X if it's just another class free of dependencies on external resources like a database etc.. The important thing is that you have full test coverage for your code. IMO it's not a problem if already tested code runs as "trusted code" in other tests.
3
1
0
I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests. However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X. I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution). Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that?
Is assuming that another unit test has tested the input of the unit code breaking isolation?
0.066568
0
0
66
18,818,608
2013-09-15T23:40:00.000
2
1
1
0
python,unit-testing,testing
18,818,629
3
false
0
0
Your unit tests for class Y should only test class Y's code. You should assume that everything class Y relies on is already working (and tested). This is standard unit testing. You want to reduce external dependencies, and try to isolate your tests so that you're really only testing class Y's functionality in class Y's tests, but in the real world, everything is connected. In my opinion it's much better to use class X and assume it works than it is to mock out class X to provide purer unit isolation. Either way, you should assume that class X is a black box and that it works.
3
1
0
I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests. However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X. I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution). Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that?
Is assuming that another unit test has tested the input of the unit code breaking isolation?
0.132549
0
0
66
18,818,608
2013-09-15T23:40:00.000
1
1
1
0
python,unit-testing,testing
18,818,856
3
true
0
0
I'll play devil's advocate here and recommend that unless this is an integration test of some kind, you don't use class X in your class Y tests, but use a Mock (or even a stub) instead. My reasoning behind this is: If your test of Y relies on some side-effect or state from X being invoked by Y, then by definition it is not a unit test. Therefore all you want in your Unit Tests for class Y is something that looks and behaves like a class X whilst at the same time being fully defined by, and under the control of, the test method driving class Y. Since assumptions are antithetical to unit testing, if you want to ensure that when X.SomeMethod is invoked during a test of Y that nothing explodes, the only way to be 100% certain (and therefore have 100% confidence in your test) is to provide via a Mock or Stub an implementation of X.SomeMethod that you can guarantee won't fail because it does nothing and therefore cannot possibly fail. Since your class X is already written and doesn't contain methods that do nothing, you therefore cannot use it for a unit test of class Y. Another point to consider is how you can simulate failure when using a "real" class X. How do you provide X to Y such that X always causes an exception, in order to test how Y behaves when faced with a dodgy X dependency? The only sane solution is to use a Mock/Stub of X. (Of course you might not be going to this level of detail with your unit tests, so I mention it just as an example.) Consider what may happen 6 months down the line when a change in class X which you did not unit test properly (omission of test, genuine error in designing the test, etc) causes an exception to be thrown when X.SomeMethod is invoked during a test of class Y. How can you know immediately that the problem is class X? Or indeed class Y? You can't, and therefore have lost the primary benefit of isolated unit tests.
Of course when you move on to Integration tests you will use class X to test how class Y behaves in a production context but that's a whole different question...
3
1
0
I understand that unit tests must be as isolated as possible, i.e. are self-contained and do not have to rely on outside resources like databases, network access or even the execution of previous unit tests. However, suppose I want to test class Y. Class Y uses class X. However, I have already a number of unit tests that test class X. I think that in the unit tests of class Y, I could just assume that class X works properly and use instantiations of it to test class Y (instantiated in the class Y unit tests, so no leftovers or other pollution). Is this correct? Or do I need to mock class X when testing class Y or do something else entirely? If so or if I should mock class X, what are the reasons for that?
Is assuming that another unit test has tested the input of the unit code breaking isolation?
1.2
0
0
66
18,818,634
2013-09-15T23:45:00.000
1
0
0
0
python,sql,postgresql,sqlalchemy
18,818,835
2
false
0
0
I'm not sure about the SQLAlchemy part, but as far as the SQL queries go, I would do it in two steps: Get the times. For example, something like: SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3; Get the rows with those times, using the previous step as a subquery: SELECT * FROM MyTable WHERE valid_time IN (SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3); (Note that ORDER BY must come before LIMIT.)
1
0
0
I have a postgres DB in which most of the tables have a column 'valid_time' indicating when the data in that row is intended to represent and an 'analysis_time' column, indicating when the estimate was made (this might be the same or a later time than the valid time in the case of a measurement or an earlier time in the case of a forecast). Typically there are multiple analysis times for each valid time, corresponding to different measurements (if you wait a bit, more data is available for a given time, so the analysis is better but the measurment is less prompt) and forecasts with different lead times. I am using SQLalchemy to access this DB in Python. What I would like to do is be able to pull out all rows with the most recent N unique datetimes of a specified column. For instance I might want the 3 most recent unique valid times, but this will typically be more than 3 rows, because there will be multiple analysis times for each of those 3 valid times. I am new to relational databases. In a sense there are two parts to this question; how can this be achieved in bare SQL and then how to translate that to the SQLalchemy ORM?
Selecting the rows with the N most recent unique values of a datetime
0.099668
1
0
144
18,818,819
2013-09-16T00:14:00.000
10
0
1
0
python,memory-management,file-io
18,818,982
1
true
0
0
Search site docs.python.org for readinto to find docs appropriate for the version of Python you're using. readinto is a low-level feature. They'll look a lot like this: readinto(b) Read up to len(b) bytes into bytearray b and return the number of bytes read. Like read(), multiple reads may be issued to the underlying raw stream, unless the latter is interactive. A BlockingIOError is raised if the underlying raw stream is in non blocking-mode, and has no data available at the moment. But don't worry about it prematurely. Python allocates and deallocates dynamic memory at a ferocious rate, and it's likely that the cost of repeatedly getting & free'ing a measly megabyte will be lost in the noise. And note that CPython is primarily reference-counted, so your buffer will get reclaimed "immediately" when it goes out of scope. As to whether Python will reuse the same memory space each time, the odds are decent but it's not assured. Python does nothing to try to force that, but depending on the entire allocation/deallocation pattern and the details of the system C's malloc()/free() implementation, it's not impossible it will get reused ;-)
1
4
0
I'm writing some python code that splices together large files at various points. I've done something similar in C where I allocated a 1MB char array and used that as the read/write buffer. And it was very simple: read 1MB into the char array then write it out. But with python I'm assuming it is different, each time I call read() with size = 1M, it will allocate a 1M long character string. And hopefully when the buffer goes out of scope it will be freed in the next gc pass. Would python handle the allocation this way? If so, is the constant allocation/deallocation cycle computationally expensive? Can I tell python to use the same block of memory just like in C? Or is the python vm smart enough to do it itself? I guess what I'm essentially aiming for is kinda like an implementation of dd in python.
python read() and write() in large blocks / memory management
1.2
0
0
1,165
18,822,193
2013-09-16T07:07:00.000
0
0
0
0
python,django
18,824,626
2
false
1
0
Of course that's possible and in fact it's the only way to "go live". You don't want to develop in your live server, do you? And it's true for any platform, not just django. If I understood your question correctly, you need a system to push your development code to live. Use a version control system: git, svn, mercurial etc. Identify environment specific code like setting/config files etc. and have separate instances of them for each environment. Create a testing/staging/PP environment which has live data or live-like data and deploy your code there before pushing it to live. To avoid any downtime during deployment process, usually a symbolic link is created which points to the existing code folder. When a new release is to be pushed, a new folder is created with new code, after all other dependencies are done (like setting and database changes) and the sym link is pointed to the new folder.
2
0
0
As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django then transfer the complete file structure to a different machine where it would "go live". Thanks, ~Caitlin
Creating and transferring a site with Django
0
0
0
58
18,822,193
2013-09-16T07:07:00.000
3
0
0
0
python,django
18,822,289
2
false
1
0
You could use Git or Mercurial - or another version control system - to put the site structure on a central server. After that you could deploy the site to multiple servers, for example with Fabric. For the deployment process you should consider using virtualenv to isolate the project from global Python packages and requirements.
2
0
0
As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django then transfer the complete file structure to a different machine where it would "go live". Thanks, ~Caitlin
Creating and transferring a site with Django
0.291313
0
0
58
18,823,407
2013-09-16T08:22:00.000
0
0
0
0
python,scrapy
18,831,329
3
false
1
0
Identify the pattern first, then write the scraper for each pattern and then depending upon the link you are tracing use the relevant scraper function.
1
2
0
I am scraping a web page with Scrapy. I wrote my spider and it works just fine; it scrapes a list of Items on a page (let's call this the Main page). On the Main page every Item I consider has a link that leads to the detail Item page (let's call it that) where detailed information about every item is found. Now I want to scrape the detail pages too, but the spider would be different; there is different information to be found in different places. Is it possible to tell Scrapy to look for links in a particular place and then scrape those linked pages with another spider I am going to define? I hope my explanation was clear enough. Thanks
Scrape follow link with different scraper
0
0
1
185
18,827,379
2013-09-16T11:54:00.000
0
0
0
0
python,mongodb,nosql
18,980,345
1
false
0
0
Not sure about Python, but in Java you can use frameworks like PlayORM for this purpose, which supports Cassandra, HBase and MongoDB.
1
3
0
I'm looking into the software architecture for using a NoSQL database (MongoDB). I would ideally want to use a database independent ORM/ODM for this, but I can't find any similar library to SQLAlchemy for NoSQL. Do you know any? I do find a lot of wrappers, but nothing that seems to be database independent. If there's none, is it because all the NoSQL databases out there have different use cases that a common ORM/ODM wouldn't make sense like it does in the SQL case ?
NoSQL database independent ORM/ODM for Python
0
1
0
943
18,827,760
2013-09-16T12:13:00.000
3
0
1
0
python,.net,ironpython
18,834,374
1
true
0
1
The assemblies in the root (C:\Program Files (x86)\IronPython 2.7\) are for ipy.exe to use. Apps that embed IronPython should use the ones in the appropriate Platform directory (or NuGet, which will pick the right ones automatically). For now, the assemblies in the root are identical to the ones in Platforms\net40, but that is not in any way guaranteed and will almost certainly change in the future.
1
1
0
I want to add a reference to IronPython.dll to my project. I found this dll in C:\Program Files (x86)\IronPython 2.7\ and different dll's for different Net versions in C:\Program Files (x86)\IronPython 2.7\Platforms\ . What is the difference between these dll's and which one shall I use? The same story with Microsoft.Scripting.dll & Co.
IronPython.dll - which one to use?
1.2
0
0
86
18,827,905
2013-09-16T12:20:00.000
0
0
0
0
python,django
18,838,114
2
false
1
0
If you're not looking for single sign-on, then likely you want to either do the work in the view, and if you need the session to persist, store it in the (local django) session object; or outsource it to something like celery, and again, keep anything you need to keep track of in the session object.
1
0
0
I'm developing a backend in Django and I need to log in to another server backend with a simple POST method. So I would need to create a session object or something like that to handle that login. Any Ideas on how to do that?
create a session to another site
0
0
0
61
18,832,897
2013-09-16T16:26:00.000
1
0
1
1
python,subprocess,7zip
18,833,072
1
true
0
0
Set shell=False. Set the output directory to be '-o%s' % directory. You are prepending a space before the directory on the 7z command line.
1
1
0
I try to extract my archive using subprocess: subprocess.call(['7z', 'x', '-r', '-y', '-o %s' % os.path.normpath("C:/temp"), archivePath], shell = True) but I get an error: 7-Zip [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18 Processing archive: \172.16.0.30\TestFarm\testdata\testdata.7z Error: Can not create output directory C:\temp\ System error: The filename, directory name, or volume label syntax is incorrect. 2 How can I do it? Why does it happen? If I use the command-line console it works perfectly.
python 7z extraction with subprocess
1.2
0
0
2,735
18,835,940
2013-09-16T19:37:00.000
4
0
0
0
python,pygame,resolution
18,950,357
1
true
0
1
Blit everything to a separate surface (not directly to the screen), let's call it main_surface. Then blit the main_surface to the screen. Call pygame.transform.scale(main_surface, (width, height) with the new width and height that you get from the VIDEORESIZE event to scale the main_surface and everything in it to the new window dimensions.
1
3
0
I've been using pygame to make a game recently, and have run into a little problem... Basically, I would like to be able to stretch the whole image that is on the screen (all the things that I have blitted to it) into the resolution which the user has resized the window to. I've searched a lot in the documentation for pygame and on Stack Overflow and I can't seem to find an answer... Is it even possible? The source code for my game is pretty big atm; if anyone needs it I'll be willing to post it. Sorry if I made my question a little unclear :)
Is there a way to stretch the whole display image to fit a given resolution?
1.2
0
0
2,471
18,836,983
2013-09-16T20:44:00.000
0
0
0
0
python,skype4py
20,256,552
1
false
0
0
I do not want to give you the whole answer, so that you can improve your coding skills, but I will give you some clues: 1) Use boolean values for being activated and deactivated 2) Set a command that activates and deactivates 3) Set a flag so that if a chat message is received or sent and the flag is true/false, then reply. I gave you a lot of clues! Good luck!
1
2
0
I would like to be able to read messages from a specific user in skype using skype4py then send an automated response based upon the message back to the skype chat window. That way a user could message me and get an automated response saying that I'm currently busy or whatever. I really just need to know how to read and send skype chat using skype4py in python. Thanks for your time.
Using python and skype4py to receive and send chat
0
0
1
2,426
18,838,451
2013-09-16T22:36:00.000
6
0
0
0
python,sockets,python-2.7,python-3.x
18,838,540
1
true
0
0
Use AF_INET if you want to communicate using Internet protocols: TCP or UDP. This is by far the most common choice, and almost certainly what you want. Use PF_PACKET if you want to send and receive messages at the most basic level, below the Internet protocol layer, for example because you are implementing the protocol yourself. Your process must run as root (or with a special capability) to use PF_PACKET. This is a very advanced option. If you have to ask this question, you want AF_INET, not PF_PACKET.
1
5
0
I just want to know, in Python socket programming, when to use socket.PF_PACKET and when to use socket.AF_INET. What is the difference between them?
difference between socket.PF_PACKET and socket.AF_INET in python
1.2
0
1
8,603
18,838,922
2013-09-16T23:20:00.000
4
0
0
1
python,docker
18,839,033
1
true
0
0
Docker is indeed very suitable for this kind of usage. However, please note that docker is NOT yet ready for production usage. I would recommend to create a new container and give non-root privileges to your users to this container. One container per user. This way, you can prepare your docker image and prepare the environment and control precisely what your users are doing :)
1
3
0
I'm thinking about building a web app that would involve users writing small segments of python and the server testing that code. However, this presents a ton of security concerns. Would docker be a good isolation tool for running this potentially malicious code? From what I've read, checking system calls with ptrace is a possibility, but I would prefer to use a preexisting tool.
Arbitrary Code Execution with Docker
1.2
0
0
745
18,839,464
2013-09-17T00:29:00.000
2
0
0
0
python,tkinter,scrollbar
18,840,373
2
true
0
1
Do you want to do something like turning off automatic scrolling, or is that actually what you want to do? If you want to turn automatic scrolling on or off, just check the position of the text before inserting text. If it's at the end, add the text and autoscroll. If it's not at the end, add the text but don't scroll. This will work even if they scrolled by some other mechanism such as by using the page up / page down keys. You can check a couple of different ways. I think the way I've always done it (not at my desktop right now to check, and it's been a few years...) is to call dlineinfo on the last character. If the last character is not visible, this command will return None. You can also use the yview command to see if the viewable range extends to the bottom. It returns two numbers that are a fraction between zero and one, for the first and last visible line. While the user can't turn auto-scrolling on or off by clicking a button, this is arguably better because it will "just happen" when they scroll back to see something.
1
1
0
I want to hook into the ScrolledText widget so that when a user clicks anywhere in the scrollbar (or even scrolls it with the mouse wheel, hopefully) I can have it call a callback function where I can set a flag, and then return to let the ScrolledText widget do its thing. Is this possible? Thanks
python tkinter - want to call a function when scrollbar clicked in ScrolledText widget
1.2
0
0
1,968
18,840,292
2013-09-17T02:21:00.000
0
0
1
1
python,windows,python-2.7,pyodbc,anaconda
20,306,856
6
false
0
0
I suggest trying "conda install" + PackageName. If the conda install fails, it may automatically fall back to using pip and succeed.
1
3
0
I cannot install through binstar or anaconda. Why can't I install in Python, outside of Anaconda? Is there a way to get my computer to stop using the Anaconda install of Python when I don't launch it specifically through the Continuum launcher? I have an install of Python 2.7 on a Windows machine. I just recently installed Anaconda in addition. I just tried to install a new module for my Python install. I opened a command prompt in an unzipped folder for a python module and ran: python setup.py install However, I experienced an error at the build line: building 'pyodbc' extension The ultimate error line reads: error: command 'gcc' failed with exit status 1 It appears to have looked for and not found several files or directories. For example, I received several (7) lines of error like: gcc.exe: error: /Wall: No such file or directory I have a wild hunch that the install of Anaconda is upsetting my PATH variables (or something), but it's just a hunch. Thanks kindly.
Cannot Install Python Modules after Installing Anaconda
0
0
0
10,802
18,846,024
2013-09-17T09:25:00.000
6
0
0
0
python,logging
18,846,310
7
true
0
0
As you are only reading values, logging._levelNames looks like an appropriate solution to me. Keep using logging.addLevelName for setting new values, though.
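A small sketch of reading the named levels. Note that the internal mapping is _levelNames in Python 2 but _levelToName in Python 3, so the helper below (an illustrative name, not part of the logging API) probes for both:

```python
import logging

def named_log_levels():
    # Python 3 keeps {level_int: name}; Python 2's logging._levelNames
    # mixed both directions, so filter down to the integer keys.
    mapping = getattr(logging, "_levelToName", None)
    if mapping is None:
        mapping = {lvl: name for lvl, name in logging._levelNames.items()
                   if isinstance(lvl, int)}
    # Skip NOTSET (level 0); a combobox rarely wants it.
    return [name for lvl, name in sorted(mapping.items()) if lvl]

logging.addLevelName(25, "NOTICE")
print(named_log_levels())
```

Levels added later via addLevelName show up automatically, and fake "Level 42" entries never appear because they are not registered in the mapping.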
1
10
0
In my application, I'm using Python's logging module. Now I want to control the log level interactively, so I created a combobox that lets the user select "ERROR", "WARN", "INFO", ... What I don't really like is that currently the values in the combobox are hardcoded. Instead, I would like to have a list of all "named" log levels (e.g. both the system defaults and those added via logging.addLevelName, but not fake generated log levels like "Level 42"). The best I have come up with so far is to use the logging._levelNames dictionary. But this seems to be a private member, and I somehow have a bad feeling about accessing it directly. So my question is: what's the proper way to list all currently defined "named" log levels in Python?
get list of named loglevels
1.2
0
0
5,759
18,847,420
2013-09-17T10:30:00.000
0
0
1
0
python,debugging,user-interface
18,848,928
1
false
0
0
I don't think it can be done. However, you may consider structuring your package to cache intermediate results on disk and to allow resuming from those cached results. This has benefits outside the bug case you describe. Fixing a bug "on the fly" seems potentially dangerous to me, as it is so easy to overlook possible side effects of the bug. It breaks the link between execution flow and code and makes it extremely difficult to debug subsequent issues, etc.
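The cache-and-resume idea can be sketched like this; the cache path and the stage function are hypothetical placeholders, not from the original answer:

```python
import os
import pickle
import tempfile

# Hypothetical cache location for one analysis stage.
CACHE = os.path.join(tempfile.gettempdir(), "stage1_cache.pkl")

def expensive_stage(data):
    # Stand-in for hours of real analysis work.
    return [x * 2 for x in data]

def run_stage(data):
    # Resume from the cached result if a previous run already produced it,
    # so a crash-and-fix cycle does not repeat the expensive work.
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    result = expensive_stage(data)
    with open(CACHE, "wb") as f:
        pickle.dump(result, f)
    return result
```

After fixing a bug and restarting, any stage whose cache file exists is skipped, which recovers much of the "keep my hours of work" benefit without live code patching.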
1
1
0
I have a GUI driven image analysis package in IDL that needs some serious rewriting. Python has been suggested to be as an alternative to IDL (with benefits of cost and some nice libraries among other things). I've poked around now with PyQT4 and it looks like it should work nicely. However, one of the best things about IDL (being interpreted) is that if the code hits a bug, you can correct it on the fly, type 'retall' and then continue with your work. If you are hours into some analysis and have lots of datafiles open, etc., this is a HUGE improvement over having to exit, then change and restart the program. Not only that but we can quickly try some things on the command line, then if it looks good, code up a routine, put it in the menu structure, and then 'retall' and we are back with the new functionality, all without ever having to restart the program. My question is, is this possible with Python? A little googling makes the answer seem like no but since it is an interpreted language I don't understand why not. If the answer really is no I'd strongly urge someone to think about implementing this -- it is probably the feature that made me most happy about IDL over the past decade. Thanks in advance, Eric
Python equivalent of IDL retall?
0
0
0
209
18,857,206
2013-09-17T18:26:00.000
0
1
0
1
python,ssh-keys,pexpect,apscheduler,beagleboneblack
18,880,805
1
false
0
0
As a temporary workaround, I found I could schedule pulling from the server using APScheduler and pexpect.run along with scp. This is less than ideal, as I prefer to have the always-running processes on the BeagleBones rather than on the server, but it will suffice until I can schedule enough time to switch to Ubuntu. Still, if anyone has suggestions on how to get Dropbear working, I would much like to hear them. Bit_Pusher
1
0
0
I am working on having APScheduler upload a data file periodically using pexpect.run('scp ...'). The scp command works fine from the command line without password authentication (keys have been shared). However, when running in a Python script on the BeagleBone Black (started from a remote machine using pexpect), scp fails because Dropbear (which replaces OpenSSH on the BBB) doesn't load the private key properly. When I add -i ~/.ssh/id_rsa, I get an error from /usr/bin/dbclient: Exited: String too long; dbclient is part of Dropbear and this appears to be a bug. When trying to convert my private key using >dropbearconvert openssh dropbear id_rsa id_rsa.db, I get the error: Error: Ciphers other than DES-EDE3-CBC not supported. I tried to install OpenSSH, but this didn't work due to a conflict with Dropbear. Just before I give up on Angstrom and go to Ubuntu, are there any suggestions? I have already added a lot to Angstrom, so changing operating systems at this time is painful. Thanks. Bit_Pusher
Automating scp upload without password
0
0
0
641
18,857,355
2013-09-17T18:34:00.000
0
1
1
0
python,python-3.x,python-module
47,367,621
4
false
0
0
Modules like math, time, and gc are not written in Python; as rightly said in the answers above, they are built into the Python interpreter itself. If you import sys and then look at sys.builtin_module_names (a tuple of the module names built into this interpreter), math is one such module in the list. So we can see where math comes from: it is not separately written as Python code in the library or any other folder.
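The check above can be sketched as follows; importlib.util.find_spec additionally shows where a module lives when it is shipped as a compiled extension rather than built in:

```python
import sys
import importlib.util

# sys is always compiled into the interpreter itself.
assert "sys" in sys.builtin_module_names

# math is either built in too (e.g. on Windows official builds) or shipped
# as a compiled extension (a .so/.pyd under lib-dynload on Linux);
# either way there is no math.py source file to find.
spec = importlib.util.find_spec("math")
print(spec.origin)  # 'built-in', or a path ending in .so/.pyd
```

This explains why math.py and sys.py are absent from Python33/Lib even though both modules import fine.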
2
17
0
I found all the other modules in Python33/Lib, but I can't find these. I'm sure there are others "missing" too, but these are the only ones I've noticed. They work just fine when I import them, I just can't find them. I checked sys.path and they weren't anywhere in there. Are they built-in or something?
Where are math.py and sys.py?
0
0
0
17,367
18,857,355
2013-09-17T18:34:00.000
2
1
1
0
python,python-3.x,python-module
18,857,424
4
false
0
0
These modules are not written in Python but in C. You can find them (at least on Linux) in a subfolder of the lib folder called lib-dynload. The math module is then in a file like math.cpython-33m.so (on Windows, extension modules use .pyd instead of .so, though some such as math are compiled directly into the interpreter). The cpython-33m part reflects my Python version (3.3).
2
17
0
I found all the other modules in Python33/Lib, but I can't find these. I'm sure there are others "missing" too, but these are the only ones I've noticed. They work just fine when I import them, I just can't find them. I checked sys.path and they weren't anywhere in there. Are they built-in or something?
Where are math.py and sys.py?
0.099668
0
0
17,367
18,857,599
2013-09-17T18:48:00.000
0
1
0
1
python,youtube-api
18,857,640
1
false
0
0
Most Google API code snippets require the user to input their personal API key. Please be sure you have appropriately updated the code snippet to use your API key.
1
0
0
I have cut and pasted a Python code sample from the Google API docs to access YouTube video viewing data for my company's videos. The application will be scheduled to get usage data and then write to a database on the server (CentOS). I have tried both the Simple API and installed-application types. Is there a solid sample of this type that you know of, or is anyone else having issues with the API calls? My latest error is that the JSON file is not organized correctly (which I got from the API page unaltered).
You Tube API Calls
0
0
0
63
18,858,421
2013-09-17T19:35:00.000
1
0
1
0
python,ide
18,858,522
4
false
1
0
I've been using PyCharm for the past year and am very satisfied with the features included, coming from Visual Studio myself. It has got what you are requesting and more. It is not free, but there is a trial version. In my opinion money well spent. It has also got some Django support, but I've never used it myself.
3
1
0
Is there a python IDE that allows you to right click on any method being used and select Go to Declaration or Find References, just like in Visual Studio with all of the .net languages? I am finding it difficult to navigate another developer's django project without this ability.
Python IDE that allows for right-click -> Go to Declaration and Find References?
0.049958
0
0
192
18,858,421
2013-09-17T19:35:00.000
2
0
1
0
python,ide
18,858,551
4
false
1
0
PyDev (an Eclipse Plugin) allows you to press F3 and you'll go to the highlighted element's definition. And PyDev comes with some nice django support features like running manage.py commands, setting up apps, etc.
3
1
0
Is there a python IDE that allows you to right click on any method being used and select Go to Declaration or Find References, just like in Visual Studio with all of the .net languages? I am finding it difficult to navigate another developer's django project without this ability.
Python IDE that allows for right-click -> Go to Declaration and Find References?
0.099668
0
0
192
18,858,421
2013-09-17T19:35:00.000
0
0
1
0
python,ide
18,858,642
4
false
1
0
On Windows I've been using PyScripter and like it. It brings up the function definition on mouseover and lets you jump to it if you want. It also has 'find references' in the right-click context menu. It's free.
3
1
0
Is there a python IDE that allows you to right click on any method being used and select Go to Declaration or Find References, just like in Visual Studio with all of the .net languages? I am finding it difficult to navigate another developer's django project without this ability.
Python IDE that allows for right-click -> Go to Declaration and Find References?
0
0
0
192
18,861,896
2013-09-18T00:15:00.000
0
0
0
0
python,django
18,862,232
1
false
1
0
Generally, each of your Django projects (created with django-admin startproject) has its own server; the project is the directory containing all your apps and the manage.py file. Each app usually doesn't have its own server, because a project is composed of apps. For development, Django includes a development server you can run with: python manage.py runserver 0.0.0.0:8080
1
0
0
I am just starting out on Django. I have successfully set it up, have it talking to MySQL, and am ready to code. Inside the Eclipse IDE, should each app you are working on have its own distinct server (i.e. my site) instance? Does it matter?
Should each Django app have its own server instance?
0
0
0
60
18,863,293
2013-09-18T03:18:00.000
0
0
0
0
python,django,git,web-deployment
18,863,822
1
false
1
0
As a Django developer I can assure you that it grows on you, and it becomes easier to understand the development environment. You should remember that settings.py is probably going to be where your thoughts are for quite a while at the start; the good part is that it's only once. After you get it up and running, you'll only touch settings.py to add new modules or change some configuration, and even that is unlikely. I believe there are hosts that integrate with git, so that should not be a problem, since you will probably just git clone your project's URL onto the host (and not forget to enable/configure WSGI). To leave settings.py out of the mess, you tell git not to track the file with git rm --cached settings.py (plain git rm would also delete the file); then, when you add your files for commit, you do it with git add -u, which refers only to your tracked files. I'm not sure if I was clear enough (probably not), but I hope I could help you in some way.
1
1
0
I'm a freelance editor and tutor as well as a fiction writer and artist looking to transition to the latter on a full-time basis. Naturally, part of that transition involves constructing a website; a dynamic site to which new content in various forms can be added with ease. Now, I've always intended to learn how to program, and I simply haven't the money to hire someone else to do it. So, having had a good experience with my brief dabblings in Python, I decided I'd go with Django for building my site. I set up a Fedora Virtualbox for a development environment (as I didn't want to jump through hoops to make Windows work) and went to town on some Django tutorials. Everything went swimmingly until life intervened and I didn't touch the project for three weeks. I'm in a position to return to it now, but I've realized two things in the process. First, I'm having to do a fair bit of retracing of my steps just to find where certain files are, and second, I don't know how I'd go about deploying the site after I'm done building it. My intention is to get the cheapest Linode and host off that until some theoretical point in the future where I required more. I suspect that re: the file organization issue, that's just something I'll become more familiar with over time, though if there are any tricks I should be aware of to simplify the structure of my overall Django development space, I'm eager to know them. However, what about deployment? How viable is it to, with sufficient knowledge, automate the process of pushing the whole file structure of a site with Git? And how can I do that in such a way that it doesn't tamper with the settings of my development environment?
Simplifying development process for Django
0
0
0
121