Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
20,539,915
2013-12-12T09:44:00.000
0
0
1
0
python,numpy,spyder
54,549,756
2
false
0
0
This Spyder looks like a very nice IDE, but I'd prefer real gray dashed indent lines. I saw these were already done for a dev version; are there any issues preventing this feature from being added to the official version as well?
1
3
0
I am using Spyder for some NumPy work currently, and Python's indentation mechanic is confusing me a little. It would be really helpful if I could have some color coding for each indentation level or some dotted lines (like in Notepad++). Is there a way to turn such a feature on, or any plugins I can use?
Color-code the indent levels / visual indication in Spyder
0
0
0
800
20,543,708
2013-12-12T12:34:00.000
0
0
1
0
python,security,token,m2crypto,pkcs#11
41,831,467
1
false
0
0
Using PKCS#11, the only way to store 'home made' data is through the use of a CKO_DATA object type. Like any object, it can be persistent on the token (not lost when the token is powered off) or it can be a memory object (lost when the session to the token is closed). Creating a CKO_DATA object is similar to any other object creation: (1) open a r/w session on the slot; (2) if the object is to be protected by user authentication (CKU_USER), log in as user; (3) create the object template with mandatory attributes such as CKA_CLASS etc. (refer to the PKCS#11 specification for details); (4) set CKA_TOKEN to TRUE if the object is to be persistent, or FALSE if it is a memory object; (5) set CKA_PRIVATE to TRUE if you want this object to be read/written only upon successful user authentication, or FALSE if anybody can access it; (6) set the CKA_LABEL and CKA_APPLICATION attributes to values that will help you find the object next time; (7) set the CKA_VALUE attribute to the value you want (your integer). Calling C_CreateObject with this template will create the desired object. HTH,
1
2
0
Is it possible to save a value in a security token's memory by using PyKCS11 and M2Crypto? I need to save an integer to token memory, so that the value can be carried along with the token. I know how to create objects, but is it possible to create attributes in a token, so that whenever I read that attribute I will know the status of that token?
How to save a temporary value in a security token?
0
0
0
203
20,544,190
2013-12-12T12:55:00.000
1
0
0
0
python,user-interface,pyqt4
20,544,425
2
false
0
1
You could have two QTextEdits: one set to read-only that displays the full text, and a smaller one below it for entering text to append to the one above.
2
1
0
I am working on a personal project in Python that involves creating a user interface. For that purpose, I have chosen to go with PyQt4. As part of the GUI code, I need a widget to which we can append text, but the requirement is that the text should not be editable. A QtGui.QTextEdit would serve the purpose of appending text but would not satisfy the second requirement. What widget can I use that satisfies both requirements? Thanks
What is the ideal widget in PyQt4 in which we can append text but cannot edit it after the text gets populated?
0.099668
0
0
69
20,544,190
2013-12-12T12:55:00.000
1
0
0
0
python,user-interface,pyqt4
20,544,331
2
true
0
1
Edited by whom? If you don't want a user to be able to edit it, I think you just need to set the QTextEdit as read-only with QTextEdit.setReadOnly(True). If you don't want to edit it again from the code, I think you just need to check whether the control already has some text in it: if not, add the text; otherwise return an error (or whatever you need).
2
1
0
I am working on a personal project in Python that involves creating a user interface. For that purpose, I have chosen to go with PyQt4. As part of the GUI code, I need a widget to which we can append text, but the requirement is that the text should not be editable. A QtGui.QTextEdit would serve the purpose of appending text but would not satisfy the second requirement. What widget can I use that satisfies both requirements? Thanks
What is the ideal widget in PyQt4 in which we can append text but cannot edit it after the text gets populated?
1.2
0
0
69
20,546,307
2013-12-12T14:34:00.000
2
0
0
0
python,ios,push-notification,apple-push-notifications
20,547,161
1
false
0
0
First you should realize there's a difference between invalid tokens (tokens that were never valid in the current push environment) and tokens belonging to devices from which your app was uninstalled. If all the device tokens in your DB were received from the APN service and you didn't mix production and sandbox tokens in the same DB, then all your tokens should be valid. In that case, you can send a notification to all of the device tokens and run the feedback service to find out which of them belong to devices that uninstalled your app.
1
3
0
I have 10,000,000 device tokens for APNs. But the tokens were collected from 2011 onward, and some people have probably deleted the app since then. Therefore many tokens may have become invalid. I want to filter out these invalid tokens. How can I do that? I tried pyapns==0.4.0, but the check is slow; maybe there's a bug. I used print push.disconnections(app_id='aphid', environment='production') but only got one invalid token. So I think I should try a simple Python script for this work.
How to quickly test the validity of device token for APNs?
0.379949
0
0
2,388
20,547,049
2013-12-12T15:08:00.000
0
0
1
0
python,debugging,console,interactive
34,922,501
6
true
0
0
You can do all this in the IPython Notebook. Use the magic command %pdb to stop on error.
1
9
0
I've recently moved from Matlab to Python. Python is a much better language (from the point of view of a computer scientist), but Python IDEs all seem to lack one important thing: a proper interactive debugger. I'm looking for: the ability to set breakpoints graphically by clicking next to a line of code in the editor; the ability to run ANY CODE while stopped in the debugger, including calling functions from my code, showing new windows, playing audio, etc.; and, when an error occurs, a debugger that automatically opens an interactive console at the error line, so that once done with the interactive console, you can resume normal execution. Matlab has all these features and they work incredibly well, but I can't find them anywhere in Python tools. I've tried: PyCharm: the interactive console is clunky, often fails to appear, and crashes all the time (I've tried several different versions and OSs). IPython: can't set breakpoints. Launching a Python console programmatically: you have to stop your code, insert an extra line of code, and run again from the beginning to do this. Plus, you can't access functions already imported without re-importing them. Being able to debug and fix problems THE FIRST TIME THEY APPEAR is very important to me, as I work on programs that often take dozens of minutes to re-run (computational neuroscience). CONCLUSION: there is NO way to do all of these in Python at the moment. Let us hope that PyLab development accelerates.
In Python, how do I debug with an interactive command line (and visual breakpoints?)
1.2
0
0
4,947
20,552,495
2013-12-12T19:30:00.000
6
0
1
0
python,math,floating-point
20,552,706
4
true
0
0
A correctly implemented ceil returns the exact mathematical value of ceil(x), with no error. When IEEE 754, or any reasonable floating-point system, is in use, ceil is not subject to rounding errors. This does not prevent adverse effects from sources other than the ceil function. For example, ceil(1.000000000000000000000000000000001) will return 1 because 1.000000000000000000000000000000001 is converted to a floating-point value before ceil is called, and that conversion rounds its result. Similarly, a conversion from double to float followed by a call to ceil may yield a value that is not the ceiling of the original double value. The conversion of the result of ceil to int of course relies on the range of int. As long as the value is in range, the conversion should not change the value.
3
3
0
If I have a floating point number x, which is within the range [0, 10^6], is it guaranteed in Python that int(ceil(x)) will be rounded up correctly? It seems possible, from what little I know, that ceil may be rounded down, leading to an incorrect result. Something like: x = 7.6, ceil(x) = 7.999..., int(ceil(x)) = 7. Can that happen?
Is int(ceil(x)) well-behaved?
1.2
0
0
337
20,552,495
2013-12-12T19:30:00.000
0
0
1
0
python,math,floating-point
20,552,779
4
false
0
0
It's often the case that there are rounding errors in floating point numbers, but that's only because not all numbers that are representable in decimal are perfectly represented in binary. Those numbers that can be represented exactly won't have any rounding applied. That is the case for integer values up to 2**53, so with only 6 digits you will be safe. The lowest positive integer value that can't be represented exactly in a float is 9007199254740993.
3
3
0
If I have a floating point number x, which is within the range [0, 10^6], is it guaranteed in Python that int(ceil(x)) will be rounded up correctly? It seems possible, from what little I know, that ceil may be rounded down, leading to an incorrect result. Something like: x = 7.6, ceil(x) = 7.999..., int(ceil(x)) = 7. Can that happen?
Is int(ceil(x)) well-behaved?
0
0
0
337
20,552,495
2013-12-12T19:30:00.000
3
0
1
0
python,math,floating-point
20,552,869
4
false
0
0
Python's guarantee about the floating-point format for float isn't very strict. I think all it says is that it uses double, and in the case of CPython that's whatever the C compiler calls double. For numbers up to a million you're fine: no floating-point format in practical use loses precision for integers that small, and the C standard requires that double is OK up to 10 decimal digits. What you've probably observed is that, due to floating-point rounding, int(sum([1.1] * 10)) is 10, not 11. That's because sum([1.1] * 10) is 10.999999999999998, not 11.0. The result of ceil is always exactly an integer, so it will never be rounded down by int (or, if you like, it will be rounded down, but doing so doesn't change its value!)
3
3
0
If I have a floating point number x, which is within the range [0, 10^6], is it guaranteed in Python that int(ceil(x)) will be rounded up correctly? It seems possible, from what little I know, that ceil may be rounded down, leading to an incorrect result. Something like: x = 7.6, ceil(x) = 7.999..., int(ceil(x)) = 7. Can that happen?
Is int(ceil(x)) well-behaved?
0.148885
0
0
337
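The claims the answers above make, that integers are exact in a double up to 2**53, and that any rounding happens before ceil ever runs, can be checked directly:

```python
import math

# sum() accumulates rounding error before ceil/int are ever called
total = sum([1.1] * 10)
print(total)        # 10.999999999999998, not 11.0
print(int(total))   # 10

# ceil itself returns an exact integer, so int() never truncates it
print(int(math.ceil(7.6)))  # 8

# integers are exact in a double up to 2**53
print(float(2**53) == 2**53)          # True
print(float(2**53 + 1) == 2**53 + 1)  # False: 2**53 + 1 is not representable
```

So for x in [0, 10^6], int(ceil(x)) is safe; the only surprises come from arithmetic performed before the call.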
20,557,627
2013-12-13T01:36:00.000
-1
0
1
0
python
20,557,641
2
false
0
0
I don't recommend trying this in a batch file. Learning the basics of a scripting language such as Python or Perl is much better for this type of work.
1
0
0
I have a list of fruits, for example: "banana", "apple", "grape", "strawberry", and I want to create files with the text: "This fruit is %fruit name%, its delicious" and save them as %fruit name%.fruits. How do I do that? It can be in any language.
Create files from a template and a list of names, and save them as files
-0.099668
0
0
41
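For reference, the Python version the answer recommends is only a few lines; a minimal sketch (the template string is copied from the question, typo included, and the output directory is a scratch directory for the example):

```python
import os
import tempfile

fruits = ["banana", "apple", "grape", "strawberry"]

# write into a temporary directory so the example is self-contained
outdir = tempfile.mkdtemp()
for fruit in fruits:
    path = os.path.join(outdir, fruit + ".fruits")
    with open(path, "w") as f:
        # template text taken verbatim from the question
        f.write("This fruit is {}, its delicious".format(fruit))

print(sorted(os.listdir(outdir)))
# ['apple.fruits', 'banana.fruits', 'grape.fruits', 'strawberry.fruits']
```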
20,557,751
2013-12-13T01:51:00.000
2
0
1
0
python,amazon-web-services,amazon-ec2
20,612,680
1
false
0
0
You should treat an EC2 instance as you would any other virtual machine: multi-threaded processes are limited by the number of CPUs that the operating system can use. Autoscaling allows you to spin up more instances on AWS based on certain parameters; it is not intended to increase the capacity of an existing one.
1
2
0
I have a Python app that creates threads via a for loop, and I have a few questions regarding multithreading on EC2: Is there a limit on the number of concurrent threads on an EC2 instance? What would happen if the maximum number of threads is reached or exceeded? Is there a way to limit the number of concurrent threads in my Python app via a setting like MAX_THREADS or something like that? Does autoscaling allow unlimited threads on EC2? Thanks in advance. Note: the instance type is a general-purpose 'm1.large' 64-bit.
Python multithreading on EC2?
0.379949
0
0
532
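MAX_THREADS is not an EC2 or Python built-in; the cap the question asks about has to be imposed in your own code. A sketch using a bounded semaphore (the cap value, worker body, and thread count are all illustrative):

```python
import threading
import time

MAX_THREADS = 4                           # our own cap, not an EC2 setting
gate = threading.BoundedSemaphore(MAX_THREADS)
lock = threading.Lock()
active = 0
peak = 0

def worker(i):
    global active, peak
    with gate:                            # at most MAX_THREADS enter at once
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)                  # stand-in for the real work
        with lock:
            active -= 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                               # never exceeds MAX_THREADS
```

A concurrent.futures.ThreadPoolExecutor(max_workers=MAX_THREADS) achieves the same cap with less bookkeeping.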
20,561,820
2013-12-13T08:08:00.000
0
0
0
0
python,wxpython,wxwidgets
20,561,947
1
false
0
1
I needed to set wx.TAB_TRAVERSAL as a style on the ScrolledWindow. Apparently it's not set by default.
1
0
0
I had a page that was working fine with respect to tab order, but since I added a scrolled panel to the page, you can only tab between the first control in the ScrolledPanel and the navigation buttons outside the ScrolledPanel.
wxWidgets ScrolledPanel tab order doesn't work
0
0
0
119
20,571,568
2013-12-13T16:41:00.000
0
0
1
0
python,python-2.7,windows-server-2008-r2
20,571,773
2
true
1
0
Yes, this is possible. However, you need to decouple execution of your programme from your web interface. Probably the simplest setup is to use supervisor to run both your program and your web interface. How your program and your web interface interact is up to you. You could even have your web interface control supervisor, or see if there is a third party web interface for supervisor.
1
0
0
I have made this python (2.7) program that takes a few hours to complete, looping through stuff all the time. I also have a windows 2008 server which I want to use to run this program, I can run it just fine on its own and leave it alone for a while, but I want to use a web interface to achieve the same effect. I currently use cherrypy and made a form that submits to another page and executes the code there, which works somewhat, but when I close the browser the execution stops. What I really want to do is create a form that provides a function with some arguments and start a standalone python script from there, and possibly also include a stop button to stop the execution. Is this in any way possible?
Run python program on windows 2008 server with web gui
1.2
0
0
143
20,572,706
2013-12-13T17:41:00.000
7
0
0
0
python,statsmodels
20,573,996
1
false
0
0
There should be a variation on this in any textbook, without the weights. For OLS, Greene (5th edition, which I used) has se = s^2 (1 + x (X'X)^{-1} x'), where s^2 is the estimate of the residual variance, x is the vector of explanatory variables for which we want to predict, and X are the explanatory variables used in the estimation. This is the standard error for an observation; the second part alone is the standard error for the predicted mean y_predicted = x beta_estimated. wls_prediction_std uses the variance of the parameter estimate directly. Assuming x is fixed, y_predicted is just a linear transformation of the random variable beta_estimated, so the variance of y_predicted is just x Cov(beta_estimated) x'. To this we still need to add the estimate of the error variance. As far as I remember, there are estimates that have better small-sample properties. I added the weights, but never managed to verify them, so the function has remained in the sandbox for years. (Stata doesn't return prediction errors with weights.) Aside: using the covariance of the parameter estimate should also be correct if we use a sandwich robust covariance estimator, while Greene's formula above is only correct if we don't have any misspecified heteroscedasticity. What wls_prediction_std doesn't take into account is that, if we have a model for the heteroscedasticity, then the error variance could also depend on the explanatory variables, i.e. on x.
1
5
1
wls_prediction_std returns standard deviation and confidence interval of my fitted model data. I would need to know the way the confidence intervals are calculated from the covariance matrix. (I already tried to figure it out by looking at the source code but wasn't able to) I was hoping some of you guys could help me out by writing out the mathematical expression behind wls_prediction_std.
Mathematical background of statsmodels wls_prediction_std
1
0
0
2,096
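Greene's formula from the answer can be evaluated numerically. The sketch below fits OLS on made-up data with NumPy and computes se^2 = s^2 (1 + x (X'X)^{-1} x') for a new point; it illustrates the formula only, not statsmodels' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(scale=0.3, size=n)

# OLS fit
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)          # estimate of the residual variance

x_new = np.array([1.0, 0.5, -1.0])    # point at which to predict
var_mean = s2 * (x_new @ XtX_inv @ x_new)        # variance of the predicted mean
var_obs = s2 * (1 + x_new @ XtX_inv @ x_new)     # variance for a new observation
print(np.sqrt(var_mean) < np.sqrt(var_obs))      # True: observation se is larger
```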
20,574,073
2013-12-13T19:04:00.000
0
0
1
0
python,django,debian-packaging
20,574,224
1
false
1
0
"How to install not only the Django project, but also Python and Django with it? What and where and how should I write the script?" If you created a deb file, then, as it gets interpreted, you should declare the Python dependency in the debian/control file. "This project demands different additions, such as grappelli, tinymce, filebrowser. Should I do anything with it?" If these packages are in any repository (or your repository), then you can put them either as a recommendation or as a suggestion in the control file.
1
0
0
I have a Django project that needs to be installed in Debian. I make packages via stdeb. I do not understand two things, for which I can't find answers: How to install not only the Django project, but also Python and Django with it? What and where and how should I write the script? This project demands different additions, such as grappelli, tinymce, filebrowser. Should I do anything with it?
Distribution Python package to Debian package with installing additional things
0
0
0
98
20,576,166
2013-12-13T21:17:00.000
0
0
1
0
python,setup.py
20,603,812
3
false
0
0
Do you have default options in ~/.pydistutils.cfg?
1
1
0
I have a python project using setuptools via setup.py for installation. When I provide no arguments to python setup.py install things install into the standard --user directory in ~/.local. However, when I provide --prefix ~/opt/myproject, that gets ignored by the install command and things still get pushed into ~/.local. It seems that whatever directories I specify, the 'user' scheme is selected. I'm not doing anything special in my setup.py, but I can post whatever code is relevant to help debugging.
Why would python setup.py ignore --prefix?
0
0
0
864
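The question above points at distutils' per-user defaults file: options set in ~/.pydistutils.cfg are applied silently, which can make a command-line --prefix appear to be ignored. Hypothetical contents that would force the 'user' scheme described in the question (your file, if it exists, may differ):

```ini
# ~/.pydistutils.cfg  -- hypothetical example
[install]
user=1
```

Removing or commenting out such a setting should let --prefix take effect again.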
20,577,682
2013-12-13T23:23:00.000
2
1
0
0
php,python,curl,cross-language
20,577,730
3
false
0
0
If you need to be able to distribute the PHP code and the Python code independently, deploy them on separate servers, etc., then this is reasonable—you make the Python code a web service, make the PHP code call that web service, and you're done. But if the Python script is always going to be running locally, it's usually easier just to run it as a program—pass it command-line arguments and/or standard input, and retrieve its standard output. PHP has a few different ways to do this—system, popen, exec, passthru—which all have different upsides and downsides. For example, with exec, you just call escapeshellarg on each argument, put them together into a space-separated string with the path to the script, call exec, and you get the result back.
1
0
0
I'm writing a program in PHP which does almost everything it needs to do, but at one point I need to use a function that is written in Python using the mechanize library; both these scripts will be hosted on the same server. My initial thought is to make a cURL call containing any arguments from the PHP script to the Python script, and then return the results of the function back to the PHP script, again using cURL. I'm quite new to programming and not sure of the best conventions when doing something like this. Is my proposed workflow using cURL the usual way this would be done, or is there another way?
Calling a function in another language
0.132549
0
0
46
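The exec route from the answer can be demonstrated end-to-end from Python itself: the snippet below stands in for PHP's exec() by invoking a tiny argument-echoing script and capturing its stdout (the script body and arguments are invented for illustration):

```python
import subprocess
import sys
import textwrap

# the "Python service" script that PHP would call via exec()
script = textwrap.dedent("""
    import sys
    # arguments arrive exactly as the caller passed them (after escapeshellarg)
    print(",".join(arg.upper() for arg in sys.argv[1:]))
""")

# simulate PHP's exec($cmd, $output): run the script, capture stdout
out = subprocess.run(
    [sys.executable, "-c", script, "hello", "world"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # HELLO,WORLD
```

On the PHP side, the equivalent is building the command with escapeshellarg() and reading the lines exec() returns.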
20,578,757
2013-12-14T01:42:00.000
9
0
0
1
python,google-app-engine,localhost,google-cloud-datastore
20,578,822
4
false
1
0
This can happen if you're running multiple instances of dev_appserver without giving them distinct datastore files/directories. If you need to be running multiple instances, see dev_appserver.py --help and look at the options for specifying paths/files.
2
7
0
Right now I get a blank page when localhost runs, but the deployed app is fine. The logs show the "database is locked". How do I "unlock" the database for localhost?
How do I unlock the app engine database when localhost runs?
1
0
0
3,369
20,578,757
2013-12-14T01:42:00.000
0
0
0
1
python,google-app-engine,localhost,google-cloud-datastore
59,824,595
4
false
1
0
So, with your command to start the server (which should be start_in_shell.sh -f -p 8xxx -a 8xxx), include a -s flag after the -f, as below: start_in_shell.sh -f -s -p 8xxx -a 8xxx. Sometimes some unanticipated error somewhere causes this issue. Remember to keep the -s flag on only one of the instances; the others should be started as you usually do. This should make it work.
2
7
0
Right now I get a blank page when localhost runs, but the deployed app is fine. The logs show the "database is locked". How do I "unlock" the database for localhost?
How do I unlock the app engine database when localhost runs?
0
0
0
3,369
20,579,140
2013-12-14T02:48:00.000
0
0
1
0
python,windows
20,579,158
3
false
0
0
Right-click on the title bar of the command prompt and go to Properties. You can change all you want from the Font and Color tabs there.
1
0
0
I am a beginner from a non programming background. I am using windows OS. Whenever I write a program in Python and run it, it opens in MS Dos with a black colored screen. I just want to change the background and the font color. How can I do that?
Python Beginner - How to customize the output window
0
0
0
521
20,579,179
2013-12-14T02:55:00.000
0
0
0
0
python,neural-network,normalization
20,579,228
1
false
0
0
One way to do this is to bit-encode your input; have one neuron per bit of the maximal length input string; and feed 0 as -1, and 1 as 1. If you desire a bit-string as output, then interpret positive outputs as 1 and negative outputs as 0.
1
3
1
I am having trouble understanding the normalization concept in artificial neural networks. Could you explain how it works? For example, if I want to input a basketball score such as 58-72, or the word "cat" (as a natural-language word), how does it work if the range is [-1, 1]? Be aware that I am very new to ANNs and the normalization concept.
Normalization in the artificial neural network
0
0
0
756
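The bit-encoding scheme the answer proposes, one neuron per bit with 0 fed as -1 and 1 as 1, might be sketched like this (max_len, padding choice, and helper names are arbitrary):

```python
def encode(text, max_len=8):
    """Encode a string as one value per bit: bit 0 -> -1, bit 1 -> 1.

    One neuron per bit of the maximal-length input; shorter strings
    are padded with zero bytes (all -1).
    """
    data = text.encode("ascii").ljust(max_len, b"\x00")
    bits = []
    for byte in data:
        for shift in range(7, -1, -1):     # most significant bit first
            bits.append(1 if (byte >> shift) & 1 else -1)
    return bits

def decode(bits):
    """Interpret positive outputs as 1 and negative outputs as 0."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | (1 if b > 0 else 0)
        out.append(byte)
    return out.rstrip(b"\x00").decode("ascii")

bits = encode("cat")
print(len(bits))      # 64 neurons for max_len=8
print(decode(bits))   # cat
```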
20,582,543
2013-12-14T11:12:00.000
2
0
1
0
python,sha
20,582,584
1
true
0
0
How do I select proper data-types to do this in Python ? You could use Python strings (str) for both the input and the output. If you do, you'll be able to use hashlib.sha1() directly, without needing any datatype conversions.
1
2
0
I am working on a project related to data deduplication. I need to design a fingerprint calculation module (to calculate the fingerprint of a file chunk) which will take two inputs and give one output. Input: some_module(unsigned char *buffer, uint32 buffer_length). Output: unsigned char *fingerprint. I have been asked to design a class to implement the module above. I will use hashlib, but my question is: how do I select the proper data types to do this in Python?
Calculate SHA1 fingerprint of (unsigned char *) in python
1.2
0
0
331
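A minimal sketch of such a class (the class and method names are invented; the question's unsigned char * buffer maps to bytes in Python 3, or str in Python 2):

```python
import hashlib

class FingerprintCalculator:
    """Compute the SHA-1 fingerprint of a file chunk."""

    def fingerprint(self, buffer, buffer_length=None):
        # buffer: bytes (Python 3) or str (Python 2). The explicit length
        # from the C-style signature is optional here; Python knows it.
        if buffer_length is not None:
            buffer = buffer[:buffer_length]
        return hashlib.sha1(buffer).hexdigest()

fp = FingerprintCalculator().fingerprint(b"hello chunk")
print(len(fp))   # 40: SHA-1 hex digests are always 40 characters
```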
20,587,888
2013-12-14T20:26:00.000
0
0
0
0
python,sql,postgresql,sqlalchemy
20,589,295
1
true
0
0
You could use a schema change management tool like Liquibase. Normally this is used to keep your database schema in source control and apply patches to update it. You can also use Liquibase to load data from CSV files: you could add a startup.csv file that would be run the first time you run Liquibase against your database. You can also have it run at any time, and it will merge the data in the CSV with the database.
1
1
0
I tend to start projects that are far beyond what I am capable of doing; bad habit or a good way to force myself to learn, I don't know. Anyway, this project uses a PostgreSQL database, Python and SQLAlchemy. I am slowly learning everything from SQL to SQLAlchemy and Python. I have started to figure out models and the declarative approach, but I am wondering: what is the easiest way to populate the database with data that needs to be there from the beginning, such as an admin user for my project? How is this usually done? Edit: Perhaps this question was worded badly. What I wanted to know were the possible ways to insert initial data into my database. I tried using SQLAlchemy and checking whether every item existed or not and, if not, inserting it. This seemed tedious and can't be the way to go if there is a lot of initial data. I am a beginner at this, and what better way to learn than to ask the people who do this regularly how they do it? Perhaps not a good fit for a question on Stack Overflow; sorry.
Sqlalchemy, python, easiest way to populate database with data
1.2
1
0
995
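The idea behind both the question's approach and the Liquibase answer is an idempotent seed: insert the fixed rows only if they are not already there, and run it on every startup. A sketch using the standard library's sqlite3 for self-containedness (table and rows are made up; with SQLAlchemy the equivalent is a get-or-create check or session.merge):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, is_admin INTEGER)")

SEED_USERS = [("admin", 1)]   # the data that must exist from the beginning

def seed(conn):
    # INSERT OR IGNORE makes the seed safe to run on every startup:
    # rows whose primary key already exists are silently skipped
    conn.executemany(
        "INSERT OR IGNORE INTO users (name, is_admin) VALUES (?, ?)",
        SEED_USERS,
    )
    conn.commit()

seed(conn)
seed(conn)   # running it twice does not duplicate the admin user
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 1
```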
20,588,259
2013-12-14T21:04:00.000
0
0
0
0
python,qt,user-interface,pyside
23,973,285
2
false
0
1
My solution to nearly the same exact problem was simply that it needed to be installed with admin privileges, so that might be your issue also.
1
2
0
I have a 32-bit Windows 7 OS. Today, I tried downloading the PySide setup program. However, after I try running the downloaded file, I get the following error: "PySide Setup program invalid or damaged." Why am I getting this? I have recently started a course on building GUI applications with Python using the Qt framework, and need PySide for the same. I use Python 2.7 btw.
Pyside Installation Error
0
0
0
1,660
20,590,704
2013-12-15T02:53:00.000
10
0
1
1
python,linux,bash
20,590,779
1
true
0
0
Just add them manually to os.listdir() result. result = [os.curdir, os.pardir] + os.listdir(path). Most modern filesystems no longer create the actual hardlinks but all APIs include the names explicitly anyway.
1
9
0
I am new to Python and am working on reproducing the bash ls command in Python. I am stuck on the ls -a option, which (according to the man page): Include directory entries whose names begin with a dot (`.'). I am aware of os.listdir(), but it does not list the special entries '.' and '..'. From the docs: os.listdir(path): Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order. It does not include the special entries '.' and '..' even if they are present in the directory. I need help listing these special entries through Python; I would appreciate it if someone could help me out here a little. Thanks all for your patience.
Use python to reproduce bash command 'ls -a' output
1.2
0
0
17,199
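Putting the accepted suggestion into a function (the function name and the scratch directory are illustrative):

```python
import os
import tempfile

def ls_a(path):
    # os.listdir() never returns '.' or '..', so prepend them by hand,
    # as the answer suggests
    return [os.curdir, os.pardir] + sorted(os.listdir(path))

d = tempfile.mkdtemp()
open(os.path.join(d, ".hidden"), "w").close()
open(os.path.join(d, "visible"), "w").close()
print(ls_a(d))  # ['.', '..', '.hidden', 'visible']
```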
20,595,525
2013-12-15T14:20:00.000
0
1
0
0
python,http
20,595,734
3
false
0
0
Sending an HTTP request is pretty simple; I don't think it will be a bottleneck for most real-world applications. If you really want to send requests very fast, consider using multiple processes rather than spending your time choosing a faster library (which is unlikely to help).
2
0
0
As the title says, I'm looking for information about the best library for sending HTTP requests in Python really fast. Do you know which one is the fastest and/or consumes the least CPU time/memory: urllib2, httplib2, or requests? Thanks
Best performance HTTP library in Python
0
0
1
1,475
20,595,525
2013-12-15T14:20:00.000
2
1
0
0
python,http
20,595,601
3
false
0
0
urllib2 might be better for performance, but requests is much simpler to use.
2
0
0
As the title says, I'm looking for information about the best library for sending HTTP requests in Python really fast. Do you know which one is the fastest and/or consumes the least CPU time/memory: urllib2, httplib2, or requests? Thanks
Best performance HTTP library in Python
0.132549
0
1
1,475
20,597,590
2013-12-15T17:37:00.000
14
0
0
0
python,sql,database,nosql,rethinkdb
20,600,546
1
true
1
0
I'm working at RethinkDB, but this is my unbiased answer as a web developer (at least, as unbiased as I can be). Flexible schemas are nice from a developer's point of view (and in your case). Like you said, with something like PostgreSQL you would have to format all the data you pull from third parties (SoundCloud, Facebook etc.). And while it's not really hard to do, it's not enjoyable. Being able to join tables is, for me, the natural way of doing things (like for user/userArtist/artist). While you could have a structure where a user would contain artists, it is going to be unpleasant to use when you need to retrieve artists and, for each of them, a list of users. The first point is common to NoSQL databases, while JOIN operations are more of a SQL-database thing. You can see RethinkDB as something providing the best of each world. I believe that developing with RethinkDB is easy, fast and enjoyable, and that's what I am looking for as a web developer. There is, however, one thing that you may need and that RethinkDB does not deliver, which is transactions. If you need atomic updates on multiple tables (or documents, like if you have to transfer money between users), you are definitely better off with something like PostgreSQL. If you just need updates on multiple tables, RethinkDB can handle that. And like you said, while RethinkDB is new, the community is amazing, and we, at RethinkDB, care a lot about our users. If you have more questions, I would be happy to answer them : )
1
11
0
I am building the back-end for my web app; it would act as an API for the front-end and will be written in Python (Flask, to be precise). After taking some decisions regarding design and implementation, I got to the database part. And I started thinking about whether NoSQL data storage may be more appropriate for my project than a traditional SQL database. Following is a basic description of the functionality which should be handled by the database, then a list of pros and cons I could come up with regarding which type of storage I should opt for, and finally some words about why I have considered RethinkDB over other NoSQL data stores.

Basic functionality of the API: The API consists of only a few models: Artist, Song, Suggestion, User and UserArtists. I would like to be able to add a User with some associated data and link some Artists to it. I would like to add Songs to Artists on request, and also generate a Suggestion for a User, which will contain an Artist and a Song. Maybe one of the most important parts is that Artists will be periodically linked to Users (and also Artists can be removed from the system, hence from Users too, if they don't satisfy some criteria). Songs will also be dynamically added to Artists. All this means that Users don't have a fixed set of Artists, nor do Artists have a fixed set of Songs; they will be continuously updating.

Pros for NoSQL:
- Flexible schema, since not every Artist will have a FacebookID, nor every Song a SoundcloudID;
- Being a JSON API, I believe I would benefit from the fact that records are stored as JSON;
- I believe the number of Songs, and especially Suggestions, will grow quite a bit, hence NoSQL will do a better job here.

Pros for SQL:
- Its fixed schema may come in handy with relations between models;
- Flask has support for SQLAlchemy, which is very helpful in defining models.

Cons for NoSQL:
- Relations are harder to implement, and updating models transaction-like involves a bit of code;
- Flask doesn't have any wrapper or module to ease things, hence I will need to implement some kind of wrapper to help me make the code more readable while doing database operations;
- I don't have any certainty about how I should store my records, especially UserArtists.

Cons for SQL:
- Operations are bulky: I have to define schemas, check whether columns have defaults, assign defaults, validate data, begin/commit transactions. I believe it's too much of a hassle for something simple like an API.

Why RethinkDB? I've considered RethinkDB for a possible NoSQL implementation for my API because of the following: it looks simpler and more lightweight than other solutions; it has native Python support, which is a big plus; it implements table joins and other things which could come in handy in my API, which has some relations between models; it is rather new, and I see a lot of involvement and love from the community. There's also the will to continuously add new things that leverage database interaction. All this considered, I would be glad to hear any advice on whether NoSQL or SQL is more appropriate for my needs, as well as any other pros/cons of the two, and of course, some corrections on things I haven't stated properly.
How suitable is opting for RethinkDB instead of traditional SQL for a JSON API?
1.2
1
0
2,835
20,598,229
2013-12-15T18:43:00.000
1
0
0
0
python,doxygen
20,598,413
1
true
0
0
Found a simple fix: \htmlonly \endhtmlonly Seems to do the job of inserting a single blank line in the doxygen-generated page.
1
1
0
In the fragment @code some code @endcode How do I get a closing blank line in the output?
Insert blank line in doxygen code fragment
1.2
0
0
234
20,601,473
2013-12-15T23:57:00.000
1
0
0
0
python,line,pygame,placement
20,601,985
1
true
0
1
There is no way using pygame that I know of that will allow you to do this. You will instead have to use plain old-fashioned algebra. You are going to be using the equation of the line that the squares are on. Ax, Ay, Bx, and By will represent the coordinates of the two squares. All you have to do is get a y coordinate for an inputted x coordinate by using y=(Ay-By)/(Ax-Bx)*(x-Ax)+Ay, or get an x coordinate for an inputted y coordinate by using x=(Ax-Bx)/(Ay-By)*(y-Ay)+Ax. You will have to figure out which x or y coordinates to enter manually. To find the distance between the squares, simply use the distance formula: math.sqrt((Ax-Bx)**2+(Ay-By)**2)
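As a sketch of that algebra (a hypothetical helper, not a pygame API call -- it interpolates evenly spaced points between two square positions):

```python
import math

def points_between(a, b, spacing):
    """Return evenly spaced points on the segment from a to b.

    a and b are (x, y) tuples; spacing is the desired gap between
    consecutive points (e.g. the square size). Endpoints are included.
    """
    ax, ay = a
    bx, by = b
    # Distance formula from the answer above.
    length = math.hypot(bx - ax, by - ay)
    steps = max(1, int(length // spacing))
    return [(ax + (bx - ax) * i / steps, ay + (by - ay) * i / steps)
            for i in range(steps + 1)]
```

Each returned point can then be used as the top-left corner of a square to draw.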
1
0
0
I have a program that generates a game board in Pygame. It draws the background, then randomly assigns Start and End squares, then randomly places three squares in between them. I need to fill in the space in between the five placed squares with other squares on a line. How would I go about doing this? I need the length of the line, and a list of points on the line to correctly place the squares. Thanks, Adam
How to get a list of all points in a line in Pygame
1.2
0
0
1,185
20,601,898
2013-12-16T00:55:00.000
0
0
0
0
python,web,wsgi
20,601,999
2
false
1
0
It will be hard; between Python 2 and Python 3 there is a lot of incompatibility, and somehow the developers of most Python frameworks don't understand why they should port their software to the newer version of the language. The simplest way is to use Python 2. The best way would be to start an independent Python 3 fork of your most loved Python framework. EDIT: newer Django supports Python 3, thus it should work.
1
2
0
I am going through a tutorial on building a website with django. It suggests using mod_python but I have heard to stay away from that and use wsgi instead. Problem is I am running python 3.3 (and apache 2.4.2 for that matter, everything seems to be compatible with apache 2.2). Is there any way to get all of this working on 3.3? Or is my best bet to go back to python 2.7? Thanks. Edit: I am on Windows, so that seems to be another roadblock.
Web page building with python 3.3?
0
0
0
104
20,601,977
2013-12-16T01:09:00.000
0
0
1
0
python,class
20,614,034
3
false
0
0
I know, this is not an answer to your question, but you should really consider rethinking your object structure since you are violating the common principles of object oriented programming.
1
2
0
I remember that there is some sort of syntax that will make a function of a class only callable by that class and not by any subclasses that inherit the functions of that class, but I can't recall exactly what it is and google has been of no help to me. Can anyone remind me how I would go about doing this?
How to make a function usable by a class but not by it's subclasses in python
0
0
0
141
20,603,994
2013-12-16T05:22:00.000
1
0
1
1
python,linux,macos,tornado
20,604,080
2
false
0
0
It's safe to set it quite high. Usually the default for desktop OS is quite low. The main disadvantage is the extra memory that is allocated
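For what it's worth, on POSIX systems you can inspect (and, up to the hard limit, raise) the descriptor cap from Python itself with the stdlib resource module -- a sketch:

```python
import resource

# Current file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# An unprivileged process may raise its soft limit up to the hard limit;
# going beyond that (like an arbitrary 99999999) requires privileges.
try:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
except (ValueError, OSError):
    pass  # e.g. the hard limit may be unrepresentable on some platforms
```

Raising the limit system-wide (e.g. via /etc/security/limits.conf) is a separate, OS-level step.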
1
2
0
I use Tornado(a python framework) to develope websites, but it often pops me the OSError - too many open files error during high concurrency tests. One way to solve this is to set the FD limit to a higher number. What are the cons, or disadvantages of setting a high FD limit? Can I set it arbitrarily to, like 99999999?
What are the cons if I set the file descriptor limit to a huge number?
0.099668
0
0
1,001
20,605,161
2013-12-16T07:03:00.000
1
0
1
0
python,beanstalkd,beanstalkc
20,633,227
1
true
0
0
No, you can't search a queue (Beanstalkd calls them 'tubes'), only get a job (though you can PEEK to see what the next job would be). Since you can watch more than one tube at a time though (and get a job from any one of them, depending on priority and age), you could create a tube that would only ever contain type-b jobs. Then it turns into a simple count - stats-tube [tube-name (eg 'type-b-jobs')]. If that tube has a job in it, it's a type-b job, and so you can go on with the type-a jobs.
1
0
0
I have a use case which requires placing two different types of jobs in a beanstalk queue, say type a and type b. I put type a job whenever a new one arrives, but for type b I want that there should be a single job of type b in a queue at a time (there should never be two or more jobs of type b in queue). When I go for inserting type b job in queue, I first want to check if there is any type b job already in queue? If it is, then delay that job and don't insert a new one. If there is no type b job in queue, insert a new one. So is it possible to search a job in beanstalk queue?
Search in beanstalk queue
1.2
0
0
416
20,605,189
2013-12-16T07:05:00.000
0
0
1
1
ubuntu,desktop,ipython-notebook
20,605,837
1
false
0
0
There is no reason for a website (here in particular an IPython slideshow) to be able to have any effect on the OS. If this happens, then it is probably due to an error somewhere else (wild guess: an error in the graphics driver that both crashed the slideshow and GNOME). So more a hidden common denominator than cause/effect. Also, you shouldn't need to reinstall Linux totally if something like that happens again. The GNOME (?) desktop is just a package among others that you can reinstall. Of course you need to know a little about using the terminal and apt-get.
1
0
0
I had an ugly experience when I tried to launch the ipython-nb presentation in Ubuntu 10.04. I managed to see the presentation under Chrome, but with errors (slides were one over the other). But the worst thing was that once I restarted my PC, the gnome(?) desktop was gone. I had to reinstall the entire Linux. I would like to know if someone has experienced a similar crash under Ubuntu 12.04 LTS.
danger to crash desktop by using ipython-nb slides in ubuntu 12.04
0
0
0
42
20,605,544
2013-12-16T07:31:00.000
2
1
1
0
python,amazon-web-services,amazon-elastic-beanstalk
20,607,574
1
true
0
0
By including them in requirements.txt, you include only the packages you are calling. Pip then takes care of installing the dependencies and checking the versions. This has the additional advantage that when you are changing or upgrading your project, you can specify a new version of the library you are using and all the dependent libraries will also be updated.
1
1
0
I was wanting to use some non-standard python packages in my project and was wondering how to add them. What is the benefit of using the AWS eb config files (.ebextensions and requirements.txt) rather than just downloading and including the package in my actual project under a lib directory like you would with a java application?
Adding external packages to elastic beanstalk python app
1.2
0
0
289
20,611,656
2013-12-16T13:04:00.000
1
0
0
0
python,django
20,622,467
2
true
1
0
Errors are shown for the "data" parameter ( or instance if it's a modelform ), not "initial". You need to pass the original values as the data parameter if you want to display errors. And then just use the usual validation methods ( raising ValidationError in clean_* methods , field specific validation, etc. ). I wouldn't mess up with the error dictionary, it's an internal API.
1
0
0
When editing an already existing element in Django, I would like to show ValidationErrors for fields that are not good (because somewhere else something changed). I've tried to overload the __init__ of the form and set form_obj.errors['__all__']="something" but it doesn't display any errors. How should I approach this?
Validation when displaying already existing element
1.2
0
0
31
20,614,349
2013-12-16T15:20:00.000
2
0
1
0
c++,python,parallel-processing,scheduling,directed-acyclic-graphs
20,619,386
1
true
0
0
This is a pretty common problem. It also shows up in hardware design. There has been a lot of work on algorithms to solve it. If you are going to write something yourself, start by checking out "Hu's Algorithm". If you just want a solution, these functions are built into architectural synthesis programs. Look at the Wikipedia pages on high level synthesis and logic synthesis. There are several professional tools that can handle this, if you can get access to them through school or work. There are university programs you can often get for free that can also handle this problem. I'm not up-to-date on what is currently available. A very old one is MIS II from Berkeley. Its scripting language was Tcl, not Python.
1
4
0
I have a graph of the dependencies of all tasks, and the costs of each task. Now I want to calculate a scheduling for a given amount of CPUs. I've found many papers on scheduling algorithms, optimal schedulers seem to be too expensive for my problem size (around 100 nodes) as it's an NP-hard problem. I'd settle for a heuristic, preferably one that has a bound how close it gets to the optimum. My problem now is: do I really have to code it myself?? This should have been solved many times before, it can be easily applied to project management, maybe there something exists? If you happen to know a library in python that'd be perfect or the next best thing would be C++, otherwise i'd settle for anything else.
finding static scheduling of DAG for multiprocessors - library?
1.2
0
0
606
20,617,337
2013-12-16T17:48:00.000
1
1
0
0
python
20,617,564
1
true
0
0
Well, you're in for a learning curve here, but multiprocessing.Pool() will create a pool of any number of processes you specify. Use the initializer= argument to specify a function each process will run at the start. Then there are several methods you can use to submit work items to the processes in the pool - read the docs, play with it, and ask questions if you get stuck. One caution: "extremely lightweight processes" is impossible. By definition, processes are "heavy". "How heavy" is up to your operating system, and has approximately nothing to do with the programming language you use. If you're looking for lightweight, you're looking for threads.
1
0
0
My application polls an API repeatedly and spawns processes to parse any new data resulting from these calls, conditionally making an API request based on those data. The speed of that turnaround time is critical. A large bottleneck seems to be related to the setup of the actual spawned processes themselves -- a few module imports and normal instantiation code, which take up to 0.05 seconds on a middling Amazon setup†. It seems like what it would be helpful to have a batch of processes with those imports/init code already done††, waiting to process results. What is the best approach to create/communicate with a pool (10-20?) of warm, reusable, and extremely lightweight processes in Python? † - yes, I know throwing better hardware at the problem will help, and I'll do that too. †† - yes, I know doing less will help, and I'm working on making the code as streamlined and minimal as possible
Pool of warm, reusable Python processes
1.2
0
0
442
20,617,587
2013-12-16T17:58:00.000
8
0
0
0
python,random,numpy
20,617,702
1
false
0
0
If they were indistinguishable from true random to begin with, they will be indistinguishable from true random afterwards. The reason is that any correlation or bias that exists among the remaining numbers would also constitute a correlation or bias among the complete set. Therefore if the complete set is good then the subset is good. Of course, this would not necessarily be the case if you deleted the numbers selectively based on their value, rather than based solely on their position in the sequence. Also, if the numbers are not good to begin with then they might conceivably be worse afterwards than before. For an extreme example, consider a sequence that consists of 9 zeros followed by the result of a coin toss, 9 zeros and another coin toss, etc. This data source has some entropy (1 bit per 10 values), but if you remove every 10th element then it has none (the remaining output is known in advance).
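A quick sanity check of the positional case with Python's stdlib PRNG (which is pseudo-random, so this only illustrates the argument rather than proving it):

```python
import random
import statistics

random.seed(12345)
samples = [random.random() for _ in range(100_000)]

# Delete every 10th value -- positional, independent of the values themselves.
thinned = [x for i, x in enumerate(samples) if i % 10 != 9]

print(len(thinned))  # 90000
# Both sets should have a mean near 0.5 for uniform [0, 1) samples.
print(round(statistics.mean(samples), 3), round(statistics.mean(thinned), 3))
```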
1
1
1
I am currently using numpy.random.random_sample to compute a large set of random numbers. If I delete, say, every 10th of these numbers, is the result still going to be as random as before? Or would I introduce some sort of skew by doing this? EDIT: As pointed out this boils down to how good my RNG is. How can I find out if I can trust a RNG, or how would I spot a potential skew?
Will random numbers still be random if I systematically delete every nth one?
1
0
0
101
20,618,523
2013-12-16T18:50:00.000
0
0
0
0
python,pandas,hdf5,pytables
29,225,626
2
false
0
0
Okay, so I don't have much experience with Oracle databases, but here are some thoughts: Your access time for any particular records from Oracle is slow, because of a lack of indexing, and the fact you want data in timestamp order. Firstly, can't you enable indexing for the database? If you can't manipulate the database, you can presumably request a result set that only includes the ordered unique ids for each row? You could potentially store this data as a single array of unique ids, and you should be able to fit it into memory. If you allow 4k for every unique key (a conservative estimate that includes overhead etc.), and you don't keep the timestamps, so it's just an array of integers, it might use up about 1.1GB of RAM for 3 million records. That's not a whole heap, and presumably you only want a small window of active data, or perhaps you are processing row by row? Make a generator function to do all of this. That way, once you complete iteration it should free up the memory, without having to del anything, and it also makes your code easier to follow and avoids bloating the actual important logic of your calculation loop. If you can't store it all in memory, or for some other reason this doesn't work, then the best thing you can do is work out how much you can store in memory. You can potentially split the job into multiple requests, and use multithreading to send a request once the last one has finished, while you process the data into your new file. It shouldn't use up memory until you ask for the data to be returned. Try and work out if the delay is the request being fulfilled, or the data being downloaded. From the sounds of it, you might be abstracting the database, and letting pandas make the requests. It might be worth looking at how it's limiting the results. You should be able to make the request for all the data, but only load the results one row at a time from the database server.
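The generator suggestion might look like this (cursor stands in for any DB-API cursor from your Oracle driver; the FakeCursor is only there so the sketch is runnable without a database):

```python
def fetch_in_chunks(cursor, chunk_size=10_000):
    """Yield rows from a DB-API cursor in fixed-size batches."""
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:
            break
        # Yielding row by row keeps the consumer's code a plain loop;
        # each batch becomes garbage once the next one is fetched.
        for row in rows:
            yield row

# Tiny stand-in cursor to demonstrate the behaviour.
class FakeCursor:
    def __init__(self, rows):
        self._rows = rows
    def fetchmany(self, n):
        batch, self._rows = self._rows[:n], self._rows[n:]
        return batch

rows = list(fetch_in_chunks(FakeCursor(list(range(25))), chunk_size=10))
print(len(rows))  # 25
```

The calculation loop then reads `for row in fetch_in_chunks(cursor): ...` and never holds more than one chunk at a time.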
1
12
1
I am working with an Oracle database with millions of rows and 100+ columns. I am attempting to store this data in an HDF5 file using pytables with certain columns indexed. I will be reading subsets of these data into a pandas DataFrame and performing computations. I have attempted the following: download the table, using a utility, into a csv file, read the csv file chunk by chunk using pandas and append to an HDF5 table using pandas.HDFStore. I created a dtype definition and provided the maximum string sizes. However, now when I am trying to download data directly from the Oracle DB and post it to the HDF5 file via pandas.HDFStore, I run into some problems. pandas.io.sql.read_frame does not support chunked reading. I don't have enough RAM to be able to download the entire data to memory first. If I try to use cursor.fetchmany() with a fixed number of records, the read operation takes ages as the DB table is not indexed and I have to read records falling under a date range. I am using DataFrame(cursor.fetchmany(), columns = ['a','b','c'], dtype=my_dtype); however, the created DataFrame always infers the dtype rather than enforcing the dtype I have provided (unlike read_csv, which adheres to the dtype I provide). Hence, when I append this DataFrame to an already existing HDFDatastore, there is a type mismatch: e.g. a float64 may be interpreted as int64 in one chunk. Appreciate if you guys could offer your thoughts and point me in the right direction.
Reading a large table with millions of rows from Oracle and writing to HDF5
0
1
0
5,171
20,619,533
2013-12-16T19:44:00.000
1
0
0
0
python,html,google-app-engine,geolocation,gps
20,619,667
1
false
1
0
How precise HTML5 geolocation is depends entirely on what the user's browser supports. On a phone, it may have access to the phone's idea of the user's location (based on GPS plus cell and WiFi triangulation); on a desktop machine, there's little to go on besides IPs, so it can't do any better than you could do yourself. But either way, the user may have disabled or limited it (or it may be disabled or limited by default for him). Or may be using a browser that doesn't support HTML5 locations. Or may be using an add-on that fuzzes or flat-out lies about location. So: is that precise enough to pinpoint buildings? It can be. Or is building my own map with customized coordinates a better option? How would that help? If you don't know the coordinates the user is at, you have nothing to look up on the map.
1
0
0
I'm building a web app with google app engine and python. I've read that html5 geolocation is much more precise than IP geolocation, but is that precise enough to pinpoint buildings? Or is building my own map with customized coordinates a better option?
I want to locate the specific building the user is in. Should I use html5 geolocation or build my own custom map?
0.197375
0
1
213
20,620,109
2013-12-16T20:19:00.000
2
0
0
0
python,pygame
68,090,814
3
false
0
1
You can change the transparency of the entire surface using a per-surface alpha value. Use surf = pygame.Surface(size, pygame.SRCALPHA). Now if you change the alpha value by calling surf.set_alpha(alpha) on that surface, the area around the text won't be black anymore.
1
7
0
I am using pygame.font.Font.render() to render some text. I'd like the text to be translucent, ie have an alpha value other than 255, so I tried passing a color argument with an alpha value (eg (255, 0, 0, 150)) as the color argument for pygame.font.Font.render() but it didn't have any effect. I also tried using pygame.Surface.convert_alpha() on the resulting Surface object, but that didn't do anything either. Any ideas?
How to render transparent text with alpha channel in PyGame?
0.132549
0
0
6,139
20,623,847
2013-12-17T00:43:00.000
0
0
1
0
python,arrays,multidimensional-array,indexing,tuples
20,623,886
4
false
0
0
If you have a 1-dimensional array, that is simply an array of values like you mentioned, where the array could equal [2013, 12, 16, 1, 10]. You access individual items in the array by using array[index]. However, slicing actually takes up to 3 parameters: array[start:end:step]. array[0, 1] is invalid for a plain list, as slice syntax uses colons, not commas; 0, 1 evaluates to a tuple of 2 values, (0, 1). If you want to get the value 12, you need to say array[1].
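Concretely, for a plain Python list:

```python
array = [2013, 12, 16, 1, 10]

print(array[1])         # 12 -- single item by integer index
print(array[0:2])       # [2013, 12] -- slice with start:end
print(array.index(12))  # 1 -- position of a specific *value*

# array[0, 1] would raise TypeError ("list indices must be integers"),
# because 0, 1 is the tuple (0, 1), not two separate indices.
```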
1
2
1
I have a 1D array. Each element holds a unique value, i.e. [2013 12 16 1 10], so array[0,0] would be [2013], array[0,1] would be [12], and array[0,0:2] would be [2013 12]. When I try array.index(array[0,0:5]), it creates an error saying that list indices must be integers, not tuple. How do I find the index of a specific element if the element, [2013 12 16 1 10], is a tuple...?
Python, find the index of 1D array that is filled with arrays of tuple
0
0
0
979
20,624,211
2013-12-17T01:18:00.000
3
1
1
0
python,data-structures,import
20,624,583
3
true
0
0
If your util.py file contains functions you're using in a lot of different projects, then it's actually a library, and you should package it as such so you can install it in any Python environment with a single line (python setup.py install), and update it if required (Python's packaging ecosystem has several features to track and update library versions). An added benefit is that right now, if you're doing what the other answers suggested, you have to remember to manually have put util.py in your PYTHONPATH (the "dirty" way). If you try to run one of your programs and you haven't done that, you'll get a cryptic ImportError that doesn't explain much: is it a missing dependency? A typo in the program? Now think about what happens if someone other than you tries to run the program(s) and gets those error messages. If you have a library, on the other hand, trying to set up your program will either complain in clear, understandable language that the library is missing or out of date, or (if you've taken the appropriate steps) automatically download and install it so things are ready to roll. On a related topic, having a file/module/namespace called "util" is a sign of bad design. What are these utilities for? It's the programming equivalent of a "miscellaneous" folder: eventually, everything will end up in it and you'll have no way to know what it contains other than opening it and reading it all.
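As a sketch (assuming setuptools; the names here are illustrative, not prescribed), packaging a renamed util.py can be as small as a single setup.py next to the module:

```python
# setup.py -- minimal packaging sketch for a hypothetical "mltools" library
from setuptools import setup

setup(
    name="mltools",
    version="0.1.0",
    py_modules=["mltools"],  # your former util.py, renamed descriptively
    description="Small machine-learning helper functions",
)
```

After python setup.py install (or pip install -e . while developing), import mltools works from any directory, with no scattered copies.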
1
4
0
This might be a more broad question, and more related to understanding Python's nature and probably good programming practices in general. I have a file, called util.py. It has a lot of different small functions I've collected over the past few months that are useful when doing various machine learning tasks. My thinking is this: I'd like to continue adding important functions to this script as I go. As so, I will want to use import util.py often, now and in the future, in many unrelated projects. But Python seems to feel like I should only be able to access the code in this file if it lives in my current directly, even if the functions in this file are useful for scripts in different directories. I sense some reason behind the way that works that I don't fully grasp; to me, it seems like I'll be forced to make unnecessary copies often. If I should have to create a new copy of util.py every time I'm working from within a new directory, on a different project, it won't be long until I have many different version / iterations of this file, scattered all over my hard drive, in various states. I don't desire this degree of modularity in my programming -- for the sake of simplicity, repeatability, and clarity, I want only one file in only one location, accessible to many projects. The question in a nutshell: What is the argument for Python to seemingly frown on importing from different directories?
What is the argument for Python to seemingly frown on importing from different directories?
1.2
0
0
83
20,625,440
2013-12-17T03:34:00.000
1
1
1
0
python,unit-testing,runtime
20,625,573
1
false
0
0
I agree with all the comments. Don't do this. Your function/class/component should NOT behave differently under testing.
1
2
0
Does anyone know the least hacky way of determining if Python code is being run by a unit test? Thanks!
How to tell at runtime if you're inside of a unit test in Python?
0.197375
0
0
42
20,625,621
2013-12-16T20:26:00.000
5
0
1
0
python,eclipse,pycharm
20,626,531
1
true
0
0
Ctrl-Tab switches in most-recently-used order (like Alt-Tab for the desktop). Ctrl-E will show recent files. You should look at Help > Default Keymap Reference, under the navigation heading, for more helpful shortcuts.
1
4
0
I'm coming to PyCharm from Eclipse, and something that is annoying me is how I switch between open files in the editor. In Eclipse, I had a hotkey set up to open the previous editor. A menu would appear with the files in order of most-to-least recently viewed. If I hit the key once quickly, it would switch to the last file viewed. Whatever I had been working on recently would always be readily available. In PyCharm, the files are listed in the editor in seemingly random order. Control+left (or right) takes you to the next file in the listing, which may be near a file unrelated in any way. I can use the mouse to select a file, but I'm not used to this, and it makes me stop and think about what the file name was, what module it was in, etc. Natural, quick, efficient, minimum of thought -- this is what I'm looking for in navigating between open files in the PyCharm editor. Does anyone know of a way I can achieve this? Thanks!
Optimal switching between file editors in PyCharm
1.2
0
0
805
20,628,825
2013-12-17T07:58:00.000
0
0
0
0
python,html
20,629,430
2
false
1
0
What is a valid character in HTML depends on your definition for “HTML” and “valid”. Different HTML versions have different rules for formally valid characters, and they may have characters that are valid but not recommended. Moreover, there are general policies such as favoring Normalization Form C; though not part of HTML specifications, such policies are often regarded as relevant to HTML, too. What is rendered (and how) depends on the browser, the style sheets of the HTML document, and available fonts in the user’s computer. Moreover, not all characters are rendered as such. For example, in normal HTML content, any contiguous sequence of whitespace characters is treated as equivalent to a single space character. So the answer is really “It depends.” Consider asking a much more targeted practical question to get a more targeted answer.
1
1
0
Some characters, such as ordinal 22 or 8, don't display in html (using chrome, for example when copy-and-pasting them into this 'Ask question' editor; I am assuming utf-8). How do I determine which characters are valid html, and of the valid, which are rendered? A table/reference would be helpful (I couldn't find one by google-ing), but preferably I need a set of rules or a solution that can be implemented in python.
How do I determine whether a character is valid-for/rendered-in html?
0
0
1
120
20,629,734
2013-12-17T08:55:00.000
3
0
1
0
python,perforce
20,630,214
1
true
0
0
You can safely delete the .pyc files and then ignore them from here on. Python will automatically generate new .pyc files when the modules are imported/updated. Generating the .pyc files is quite fast too, so there's no real noticeable performance difference even when python generates new files.
1
1
0
I've started working on a Python project using Perforce for VCS that previously only had a single developer. Currently all the pyc files are in source control which is making merges a pain. I've seen I can add P4ignore files to keep pyc files out of VCS as carry on working but I need a way to remove them from perforce without removing them from disk. Or (and this has only occurred to me as I ask the question), as a new-to-python person, can I just delete all the pyc files from VCS (and so from disk) and then let my p4ignore file stop newly created pyc files from getting back into VCS?
How to remove existing files from Perforce and ignore them (without deleting them)
1.2
0
0
316
20,634,301
2013-12-17T12:35:00.000
1
0
1
0
python,blender-2.67
20,701,809
1
true
0
0
I finally got the solution to the problem. 1. Invoke Blender through the command line: blender.exe --background --python yourFile.py 2. In your Python file, you can use the modules provided by Blender such as import_ply (....Blender/2.68/scripts/addons/import_ply), etc. Just go through the module and you will be able to use the functions written inside, and manage to write code according to your needs.
1
2
0
I want to apply a modifier to large number of meshes stored in different .ply files. I wish to do this through command line so that the process can be automated. I know the basic of blender python API like how to write the modifier in python. But that required me to first import .ply file in blender using UI and then run my python script. However, I want to automate the process of loading ply file, do the required operations and save back the result in ply format so that all the files can be processed one by one with minimum manual work.
How can I load a .ply file in blender-2.68 and apply modifier to it through command line/script?
1.2
0
0
1,414
20,637,439
2013-12-17T14:59:00.000
-2
0
0
0
python,pandas,csv,readfile
55,944,660
6
false
0
0
skiprows=[1] will skip the second line (the row with index 1), not the first one.
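The int-vs-list difference is visible on a tiny frame (assuming pandas is installed):

```python
import io
import pandas as pd

csv = "a,b\n1,2\n3,4\n5,6\n"

# skiprows=1 (an int): skip the first 1 line(s) of the file -- the header row!
df_int = pd.read_csv(io.StringIO(csv), skiprows=1)
# skiprows=[1] (list-like): skip only the line with index 1 (the "1,2" row).
df_list = pd.read_csv(io.StringIO(csv), skiprows=[1])

print(list(df_int.columns))   # ['1', '2'] -- header was skipped
print(list(df_list.columns))  # ['a', 'b'] -- header kept, row "1,2" dropped
```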
1
121
1
I'm trying to import a .csv file using pandas.read_csv(), however, I don't want to import the 2nd row of the data file (the row with index = 1 for 0-indexing). I can't see how not to import it because the arguments used with the command seem ambiguous: From the pandas website: skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file." If I put skiprows=1 in the arguments, how does it know whether to skip the first row or skip the row with index 1?
Skip rows during csv import pandas
-0.066568
0
0
269,629
20,646,723
2013-12-17T23:22:00.000
1
0
1
0
python,python4delphi
20,648,897
1
true
0
1
You don't need to include the pyc files, no. Assuming, that is, you are adding your zip file to sys.path and importing from that (the question could use more details along those lines), the Python zipimporter will gladly compile the bytecode for you on the fly. You should have a little faster startup time if you include the pyc files (or for a stand-alone application you might consider including .pyo files instead, which you can make in various ways, such as running python with the -O flag). That said this bytecode compilation is pretty fast, so depending on how many modules your application imports it might not make a noticeable difference. Try it for yourself and see.
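The on-the-fly compilation is easy to see with the stdlib alone (throwaway temp paths; the module name is illustrative):

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a zip containing only a .py source file -- no .pyc included.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "libs.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("greeting.py", "def hello():\n    return 'hi'\n")

# zipimporter compiles the bytecode in memory when the module is imported.
sys.path.insert(0, archive)
greeting = importlib.import_module("greeting")
print(greeting.hello())  # hi
```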
1
0
0
I make a Python 3 + Delphi app. I packed all files + dirs from the Libs folder into python32.zip. It works. Do I need the PYC files (and __pycache__ dirs) in that zip? If I pack the PYC files, will Py3k work faster?
PythonNN.zip: are PYC files needed
1.2
0
0
284
20,646,758
2013-12-17T23:25:00.000
3
0
0
0
python,database,django,model-view-controller,analytics
20,646,874
1
true
1
0
There isn't a right answer to this question. Different programmers use different structures. We can, however, provide advice. Your project will definitely have a core application, in charge of users and account management. All common functionality should be provided by this app. Login/logout/recovery functions fall into this category, and so do high-scores, history, friends, etc. Do that first. When it's working, you'll most likely want to implement leagues as different apps. The logic driving each sport is different, so it makes sense to keep it separate, and mount it in separate URL paths. All of these apps will, and should, depend heavily on the core. Fair warning: you'll probably find yourself repeating patterns when implementing each app. Some of these patterns you can migrate safely to the core, but some others will look the same but be just different enough to warrant their own code. Implement first, find common ground, abstract last. This architecture will let you enable and disable leagues, as well as push league-specific upgrades, with little to no hassle. You can work on different leagues without the fear of breaking stuff elsewhere. In other words, you have the right initial approach. Work on it, make mistakes, learn, refactor, abstract. Trying to get it perfect on the first try will only slow you down.
1
2
0
I'm looking to make an analytics fantasy sports site and I need a little help thinking how to structure the site as a whole. It's most definitely been done before but I'm doing this for educational purposes and because this is a hobby of mine! The idea for the site is: Users can create an account It'll use the Yahoo/ESPN API to draw their individual league data (rosters, stats, etc) Each user account can be linked with multiple leagues across many sports (i.e. 2x bball leagues, 1x football, 1x baseball) The website will perform certain analytics based on the sport, and type of league they are playing in (h2h or roto). Going through the django tutorial, I realized that they not only use a mvc approach but also make a distinction between project and app. My question is: How do I structure the backend of the website? Does each sport get it's individual app? What about each type of league? And finally, does the "log-in / account creation" get an app of it's own as well? It spans across all the fantasy sports. Just a little confused, as it's my first time creating a website like this. Similarly, I understand that I should just take it one step at a time, but I just want to get a good understanding of the overall vision
How to structure backend of a fantasy sports analytics website on Django?
1.2
0
0
1,392
20,647,274
2013-12-18T00:10:00.000
2
0
1
0
python,size,queue
47,827,360
2
false
0
0
queue.qsize() doesn't return the number of bytes in the queue. It returns the number of "things" placed in the queue. If you put 5 byte-arrays of 100 bytes in the queue, the qsize() will be 5, not 500.
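A minimal sketch of this distinction, using Python 3's queue module (in Python 2 the module is named Queue):

```python
import queue

q = queue.Queue()
for _ in range(5):
    q.put(b"x" * 100)  # five 100-byte payloads

# qsize() counts queued items, not bytes
print(q.qsize())  # 5, not 500
```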
2
17
0
I've seen instances where qsize() and len() have been used to compute the size of the queue. What is the distinction between the two?
python queue get size, use qsize() or len()?
0.197375
0
0
19,413
20,647,274
2013-12-18T00:10:00.000
19
0
1
0
python,size,queue
20,647,322
2
true
0
0
For most containers, you'll want len, but Queue.Queue doesn't actually support len. This may be because it's old or because getting the length of a queue in a multithreaded environment isn't particularly useful. In any case, if you want the (approximate) size of a Queue, you want qsize.
2
17
0
I've seen instances where qsize() and len() have been used to compute the size of the queue. What is the distinction between the two?
python queue get size, use qsize() or len()?
1.2
0
0
19,413
20,648,128
2013-12-18T01:34:00.000
0
0
0
0
python,django,usergroups
20,653,824
1
false
1
0
You can make a many-to-many relationship between users, groups and classrooms: an additional model that points to all three, recording which group a user belongs to in a specified classroom.
1
2
0
A little background: I am building a discussion board app aimed at educators. It's essentially a reddit-style discussion board where responses to a prompt are 'up-voted' or 'down-voted'. The content creators are anonymous to everyone except an admin/teacher user. So far it works for a single "classroom"; i.e., there can be one or more admins, one or more students, but so far there is only one "classroom". All the students will see the same discussion threads. My Problem: I want to expand it so that there can be 'unlimited' classrooms and that any teacher who is interested in using the service can create a group and add students (manually, or by way of some token, etc). I also want students to be able to participate in one or more groups. It seems that refactoring the models slightly to have a foreign key to django.contrib.auth.models.Group should make this possible, but I am not sure if this is what Group is designed to do. At first blush Group and the related Permission look a lot more like the Unix groups metaphor. What I am looking for is a way to have a teacher-user create a group and to have a student-user join this group and view and create group-specific content. Am I headed in the right direction? Is there a much better way to accomplish what I am trying to do?
Is using django auth Group the right way to solve this?
0
0
0
144
20,651,529
2013-12-18T06:55:00.000
0
0
1
0
python,scrapy,python-2.6
22,181,034
2
false
1
0
easy_install Scrapy==0.18.4 worked for me
1
2
0
I am using CentOS and my default Python version is 2.6. I installed Scrapy using pip install Scrapy and I executed the code below from scrapy.selector import Selector And I got this message: Scrapy 0.20.2 requires Python 2.7. I can't just upgrade my current version of Python because I have lots of programs that depends on it. If possible, I would like to install the correct version of Scrapy on my Python 2.6.
Install Scrapy for Python 2.6
0
0
0
1,711
20,654,812
2013-12-18T09:55:00.000
1
0
1
1
python,python-2.7,canopy,python-venv
20,655,950
1
true
0
0
Well, I found canopy_cli.exe exactly at the specified location, so the directory is correct. So... maybe try reinstalling the software? By the way, what version are you using? How about updating at the same time?
1
0
0
I am working with Canopy for Python, which is a software bundle of Python 2.7 and the most important data analysis packages. It also includes IPython and makes working in a live console very easy. Instead of using virtualenv, Canopy makes use of a Python-2.7-backported version of "venv". To initially setup a new environment, they want me to use canopy_cli myProjectFolder. Unfortunately, I do not find a canopy_cli.exe in my C:\Users\Me\AppData\Local\Enthought\Canopy\App on Windows. Did I do anything wrong or is the file located elsewhere? Thanks!
Using Canopy as a Python distribution, where is "canopy_cli.exe" located?
1.2
0
0
147
20,656,322
2013-12-18T11:01:00.000
1
1
0
0
c++,python,profiling,swig
20,661,702
1
false
0
0
It's been a while since I've built anything on Linux, but from memory you can build your C++ lib with the profiling switches on, run the script via the profiler on python.exe, and the profile data will be captured for your lib only, not for the whole process. You can then view your profile data just as you would any other application. You might need the debug version of python, I can't remember. Sorry I can't be more specific, maybe post more info about your dev env.
1
4
0
I'm writing Python code and use a library that provides a Python interface through SWIG; the library itself is written in C++, and everything is run in Linux. I would now like to profile my code and not only get information about which if my library calls are taking the most time, but also what the situation is inside the library. (I'm suspecting a performance problem there.) The library is open-source and if necessary I could build it with profiling flags enabled. What are my options?
Profiling SWIG Python code
0.197375
0
0
257
20,660,579
2013-12-18T14:17:00.000
0
0
1
0
python,regex,pyqt4,qregexp
32,671,451
2
false
0
0
Just to post an answer on this side of things... In most cases I've played with, the DOTALL "(.*?)" approach doesn't seem to match anything when it comes to QRegExp(). However, here's something I use that works in almost all my cases for matching triple-quoted strings: single: "[^#][uUbB]?[rR]?(''')[^']*(''')" double: '[^#][uUbB]?[rR]?(""")[^"]*(""")' The only cases that don't work: ''' ' ''' and """ " """. best thing I could find for now... EDIT: if I don't have those as the stop characters, it continues to match through the rest of the document.
1
1
0
Recently I've been working on a PyQt program. In the beginning I used the Python re module to process the regexes, but the conversion between a Python string and a QString confused me, so I tried to switch to QRegExp. However, I want to use the IGNORECASE, MULTILINE and DOTALL features of Python's re. I've found QRegExp.setCaseSensitivity() to replace re.I, but I can't find the other two features. Can anybody help? Or tell me how to convert a QString to a Python string? Both the regex pattern and the data are input by the user, so their types are QString.
Can QRegExp do MULTILINE and DOTALL match?
0
0
0
607
20,661,142
2013-12-18T14:44:00.000
7
0
0
0
python,machine-learning,scipy,k-means
20,661,301
1
false
0
0
Based on the documentation, it seems kmeans2 is the standard k-means algorithm and runs until converging to a local optimum - and allows you to change the seed initialization. The kmeans function will terminate early based on a lack of change, so it may not even reach a local optimum. Further, its goal is to generate a codebook to map feature vectors to. The codebook itself is not necessarily generated from the stopping point, but will use the iteration that had the lowest "distortion" to generate the codebook. This method will also run kmeans multiple times. The documentation goes into more specifics. If you just want to run k-means as an algorithm, pick kmeans2. If you just want a codebook, pick kmeans.
1
10
1
I am new to machine learning and wondering the difference between kmeans and kmeans2 in scipy. According to the doc both of them are using the 'k-means' algorithm, but how to choose them?
What's the difference between kmeans and kmeans2 in scipy?
1
0
0
3,357
20,666,947
2013-12-18T19:40:00.000
1
0
0
0
android,python-2.7,kivy
22,365,294
1
true
0
1
There is no way to do it from build.py. However, you can manually change templates/AndroidManifest.xml.tmpl and adapt it to your needs.
1
1
0
Is there a way to establish a "supports-screens" kind of configuration to make my application available only for normal- and large-screen Android devices? Is there a way to do this with the build.py script? (My best bet is the --intent-filters option, but I'm not sure how it might be used.)
screen support in kivy set to normal and large screens
1.2
0
0
84
20,668,574
2013-12-18T21:16:00.000
1
0
0
1
python
20,669,260
1
true
0
0
The only problems I would expect would be with print, stdin, stdout, and raw_input.
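A defensive sketch of the kind of guard that avoids these problems under pythonw, where sys.stdout may be None or unwritable (the helper name is mine, not a standard API):

```python
import sys

def safe_print(message):
    """Write to stdout only when a usable one exists.

    Under pythonw.exe there is no console, so sys.stdout may be None
    or may raise when written to; under regular python this behaves
    like an ordinary print.
    """
    if sys.stdout is None:
        return
    try:
        sys.stdout.write(message + "\n")
    except (IOError, OSError, ValueError):
        pass  # stdout exists but isn't writable; drop the message

safe_print("GUI started")
```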
1
0
0
Easy question here. I have a GUI which I run using a batch file. I want it to be displayed without the terminal in the background, so I use the pythonw executable. However, I am not using the corresponding .pyw file, but a regular .py file instead. Are there any differences between python and pythonw that might cause strange behavior? The program gives strange behavior when I use the batch file, but not within cmd, so I suspect the culprit is some internal difference between python and pythonw. Could this be the case? Thanks in advance.
Can errors result from using .py files with pythonw?
1.2
0
0
121
20,669,240
2013-12-18T21:54:00.000
1
0
0
0
python,authentication
20,670,147
3
false
1
0
If you're using Unix, rely on the fact that it's a multi-user system. That is, the user has already logged in using his own credentials, so you don't need to do anything; just use his home directory to store the data, taking care to block other users from accessing it by using permissions. You can improve this to provide encryption too. For global application data, you can specify a "manager" user or group, with its own directory, where the application can write. All this might be possible on Windows systems too.
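A minimal Unix-flavoured sketch of that idea; the application directory name is a hypothetical example, and the permissions are set so only the owning user can enter it:

```python
import os
import stat
import tempfile

def private_app_dir(app_name=".myapp", base=None):
    """Create (if needed) a per-user data directory readable only by
    its owner, following the Unix-permissions approach described above.
    `app_name` is a made-up example; `base` defaults to the user's home.
    """
    if base is None:
        base = os.path.expanduser("~")
    path = os.path.join(base, app_name)
    if not os.path.isdir(path):
        os.makedirs(path)
    os.chmod(path, stat.S_IRWXU)  # mode 0o700: owner-only access
    return path

# Demonstrate against a throwaway base directory
demo = private_app_dir(base=tempfile.mkdtemp())
print(oct(stat.S_IMODE(os.stat(demo).st_mode)))  # 0o700
```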
3
1
0
I will keep it short. Can someone please point me in the right direction in: How to authenticate users in native applications written in Python? I know in web there are sessions, but I can't think of a way to implement authentication, that will 'live' for some time and on expiry I can logout the user? EDIT: I am referring to desktop type of apps, I am fairly happy with the implementation for Web based development in Twisted EDIT 2 The application I am thinking about will not authenticate against a server, but a self-contained application, an example the idea is a Cash Register/Point of Sale (my idea is kinda different, but parts of the functionality is the same), in which I need to authenticate the cashier, so I can log the transactions processed by him/her, print name on receipt and etc. All will be based in one single machine, no server communication or anything
Implementing session-like storage in python application
0.066568
0
0
375
20,669,240
2013-12-18T21:54:00.000
1
0
0
0
python,authentication
20,669,715
3
false
1
0
You seem to be very confused and fixated on "sessions" for some reason, maybe because your background is in web apps? Anyhow, you don't need "sessions", because with a desktop application you have no trouble telling who is using the software, and no elaborate tools are needed. You don't need a server, you don't need authentication tools, you don't need anything: just store the current user within your single application. That is all really: a variable within your application called "user", and maybe some interface at boot to pick one from the available users. And if you need it to last between boots, just save it in a file and read from it.
3
1
0
I will keep it short. Can someone please point me in the right direction in: How to authenticate users in native applications written in Python? I know in web there are sessions, but I can't think of a way to implement authentication, that will 'live' for some time and on expiry I can logout the user? EDIT: I am referring to desktop type of apps, I am fairly happy with the implementation for Web based development in Twisted EDIT 2 The application I am thinking about will not authenticate against a server, but a self-contained application, an example the idea is a Cash Register/Point of Sale (my idea is kinda different, but parts of the functionality is the same), in which I need to authenticate the cashier, so I can log the transactions processed by him/her, print name on receipt and etc. All will be based in one single machine, no server communication or anything
Implementing session-like storage in python application
0.066568
0
0
375
20,669,240
2013-12-18T21:54:00.000
1
0
0
0
python,authentication
20,669,948
3
true
1
0
It’s not entirely clear what kind of security you are expecting. In general, if the end user has physical access to the machine and a screwdriver, you’re pretty much screwed—they can do whatever they want on that machine. If you take hardware security as a given, but want to ensure software security, then you’re going to have to do server communication within the machine’s boundaries. You have to separate the server and the client, and run the server in a security context that is inaccessible to the user. The server will then do both the authentication and whatever operations need authentication (printing out receipts etc.). For example, under a Unix-like OS, you would run a daemon under a dedicated system user or under root; on Windows, you would have a system service running as LOCAL SERVICE or whatever that’s called. In this way, the operating system’s built-in security features will ensure (given proper maintenance, like timely application of security hotfixes) that the user cannot influence the behavior of the software that does the sensitive operations. The protocol between the client and the server can be anything, and you can do authentication in much the same way as in HTTP—indeed, you may even use HTTP itself. Finally, if you’re certain that your users will not be tampering with your system at all—e.g. because they lack the technical skills, or are being watched by CCTV cameras—you can forget all that stuff and go with Puciek’s answer.
3
1
0
I will keep it short. Can someone please point me in the right direction in: How to authenticate users in native applications written in Python? I know in web there are sessions, but I can't think of a way to implement authentication, that will 'live' for some time and on expiry I can logout the user? EDIT: I am referring to desktop type of apps, I am fairly happy with the implementation for Web based development in Twisted EDIT 2 The application I am thinking about will not authenticate against a server, but a self-contained application, an example the idea is a Cash Register/Point of Sale (my idea is kinda different, but parts of the functionality is the same), in which I need to authenticate the cashier, so I can log the transactions processed by him/her, print name on receipt and etc. All will be based in one single machine, no server communication or anything
Implementing session-like storage in python application
1.2
0
0
375
20,671,719
2013-12-19T01:07:00.000
1
0
1
1
python,ubuntu,service,setuptools,upstart
20,672,084
1
true
0
0
Should setuputils even be responsible for creating the service on the machine, or rather should this be handled by an external package manager like dpkg/apt/rpm Almost certainly the latter. distutils/setuptools is not designed to handle things like this. There's some configuration information that's sufficient for installing site-packages, shared data, executables, and maybe a few other things in ways that make sense on your platforms. But there's nowhere near enough to handle things like installing init scripts. These tools are designed to handle not just slightly different early-2010s-era Ubuntu-like linux distros, but a wide variety of different platforms. On non-Ubuntu-like distros (and pre-lucid Ubuntu) there is no Upstart, but there is SysV-style init. On some other *nixes, there isn't even SysV-style init, but there is BSD-style init. On OS X, while SysV-style init does exist, it's heavily deprecated and launchd is used instead. On Windows, there isn't anything even remotely similar, but there are completely different ways to set up "services" and "run-at-startup" programs and related concepts. On top of that, on many platforms, the package manager wants to be able to own all startup scripts, and you don't want to violate that expectation on a user/sysadmin's behalf without him specifically asking for it. So, you need a platform-specific package for each platform. If you just create a PyPI package and a .deb for Ubuntu Precise or whatever you use, if some Fedora or Mac or Ubuntu Natty user gets jealous, they'll either do it themselves, or ask you.
1
0
0
I'm attempting to create a Python package that when installed also creates an Upstart service. Currently, my options are symlinking the service from the package directory to /etc/init, or copying the file to /etc/init. Either one works fine so long as I can unlink/delete the file upon uninstallation of the package. I saw another related question where a commenter expressed that this should not be the job of setuptools in the first place. So my question is as follows: Should setuptools even be responsible for creating the service on the machine, or rather should this be handled by an external package manager like dpkg/apt/rpm; if it is prudent, is there a way to somehow run a script upon uninstallation of a package or have setuptools remove the file from /etc/init without modifying SOURCES.txt in the egg after running sdist? Thanks!
Installing a Python package with an Upstart service using setuptools
1.2
0
0
411
20,673,260
2013-12-19T04:07:00.000
0
0
0
0
python,python-2.7,web.py
20,676,165
2
false
1
0
Assuming your application is served by a long-running process (as opposed to something like plain CGI), AND your database is never updated, you always have the option to instantiate your Object_X at the module's top level (making it a global), so it will only be created once per process. Now it means that you'll have some overhead at process start, and that you'll have to restart all your processes every time your db is updated. The real question IMHO is why you want to load all your db right from the start. If that's for "optimization", then there might be better tools and strategies (a faster db, caches, etc).
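A minimal sketch of that module-level pattern with a lazy twist; load_all_data_from_db is a stand-in for the real (expensive) database load, and the call counter just demonstrates that it runs once per process:

```python
CALLS = {"load": 0}

def load_all_data_from_db():
    # Stand-in for the expensive DB load; counts its invocations
    CALLS["load"] += 1
    return {"rows": [1, 2, 3]}

_OBJECT_X = None  # module-level cache, shared by all requests in this process

def get_object_x():
    """Create the shared instance on first use; reuse it afterwards."""
    global _OBJECT_X
    if _OBJECT_X is None:
        _OBJECT_X = load_all_data_from_db()
    return _OBJECT_X
```

Every request handler then calls get_object_x() instead of reloading from the database.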
1
3
0
I have an object which loads all the data from DB Object_X. This object has a few methods defined. I pass some parameter and based on that parameter I call one of the functions in Object_X, which uses the pre-populated data in the object and the parameter to get some result. I have created a web service which invokes any method defined in Object_X and returns a result. My problem is that for every request I am loading all the data from the db again and again, which is time consuming. Is there a way that I can load the data one time when I start the server and use the same object for each subsequent request?
Prevent creating of expensive object on each request
0
0
1
56
20,682,797
2013-12-19T13:16:00.000
0
0
0
0
python-2.7,image-processing
20,689,518
1
false
0
1
You can use a segmentation technique (a seed-based approach) on the first image and then the OpenCV library to place the cut-out onto another image by creating a mask.
1
0
0
Is there any library in Python with which you can cut a certain part of an image (like a person) and paste it into another image?
Cut a certain part of an image and paste it in other image
0
0
0
266
20,684,509
2013-12-19T14:40:00.000
1
0
0
0
python,python-3.x
20,714,626
1
false
0
0
OK, after numerous hours on Google I found out that .scd files are basically .zips with a 0% compression rate. Try using the built-in zipfile module on your file as though it were a .zip.
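A sketch of that approach. Since a real .scd isn't available here, it builds a stand-in archive with ZIP_STORED (i.e. 0% compression, matching the description above) and then extracts it the same way you would the real file:

```python
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "sample.scd")

# Build a stand-in uncompressed archive (ZIP_STORED = 0% compression)
with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("readme.txt", "hello from inside the archive")

# A genuine zip-in-disguise should pass this check
if zipfile.is_zipfile(archive):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(workdir)

with open(os.path.join(workdir, "readme.txt")) as fh:
    print(fh.read())
```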
1
2
0
I am trying to make a program with Python that downloads a large .scd file, unpacks it, and then installs it. It is not at all difficult for me to download it or install it (which is pretty much just using urllib and moving a few files around), but unpacking it seems to be a problem. After a couple hours of Googling I can't seem to find any modules for Python capable of opening .scd archives. One idea is to try to convert it to a .zip file with Python, replace the .scd with that, and then just use zipfile.extractall(). I am fine with this if someone can tell me how to do the conversion. The conversion/extraction MUST be automated. EDIT: It is OK with me if I use 3rd party software, but I still would like the following things: the process must be totally automated (the user does not have to hit an extract button or anything along those lines), the 3rd party software must have a license that allows me to use it as part of my Python program (and distribute it as part of my program's package to the general public), and the software must be compatible with Windows.
Open .scd files in python?
0.197375
0
0
459
20,689,760
2013-12-19T19:08:00.000
1
0
0
0
python,user-interface,canvas,tkinter,tcl
20,692,285
1
false
0
1
It's normal, even if not entirely what you want. The scale method just changes the coordinate lists, but text items only have one of those so they just get (optionally) translated. This also applies to image and bitmap items. And features of other items like the line width; they're not scaled.
1
0
0
Is it normal that the font size of text created with Tkinter's Canvas create_text doesn't change when I change the canvas scale with canvas.scale? I thought that, as it is a high-level GUI management system, I wouldn't have to manually resize text created with create_text after zooming. Isn't this strange, or am I wrong?
Tkinter, canvas, create_text and zooming
0.197375
0
0
500
20,690,850
2013-12-19T20:16:00.000
-6
0
1
0
python,python-3.3
20,691,247
2
true
0
0
python3 is a symbolic link to python3.3, and python3.3 is a hard link to python3.3m. As @nneonneo's answer indicates, the m suffix means specifically a "pymalloc" build of Python; the links simply point the generic names at that build.
1
35
0
What's the difference between Python 3.3 and 3.3m? I'm using Ubuntu 13.04 (Raring), and on my system I have python2.7 and python3.3 (I know the differences between 2 and 3). But I also have python3.3m installed (and it's not a symlink to 3.3). So what does the m stand for?
What's the difference between Python 3.3 and 3.3m?
1.2
0
0
9,494
20,691,258
2013-12-19T20:40:00.000
1
1
0
1
python,python-2.7
20,691,487
7
false
0
0
First of all, processes don't run on ports - processes can bind to specific ports. A specific port/IP combination can only be bound to by a single process at a given point in time. As Toote says, psutil gives you the netstat functionality. You can also use os.kill to send the kill signal (or do it Toote's way).
1
13
0
Is it possible in python to kill a process that is listening on a specific port, say for example 8080? I can do netstat -ltnp | grep 8080 and kill -9 <pid> OR execute a shell command from python but I wonder if there is already some module that contains API to kill process by port or name?
Is it possible in python to kill process that is listening on specific port, for example 8080?
0.028564
0
0
19,468
20,692,146
2013-12-19T21:35:00.000
0
0
0
0
python,sphero-api
21,339,422
1
false
0
0
User3120784, in certain cases when you are using Sphero with the leveling system enabled, this can be seen. I personally just had an issue where the ball would do the brightness discrepancy that you are describing here, and the solution was to just check and then uncheck "Level up" in the advanced settings in the Sphero app. To address your question in your comment, the difference is that when the Sphero is idling, it is being controlled solely by firmware, whereas the setRGB of the API is using application level code, and telling the firmware to set a brightness, which it may or may not do depending on the "Level up" setting that I mentioned earlier.
1
1
0
I am just curious, as I notice that when the Sphero is blinking while idling, waiting for a connection, it is much brighter than when I set the colour using the setRGB functionality. Am I missing something to adjust the brightness as well? I can't seem to find anything in the documentation.
How to control LED max brightness using Sphero API
0
0
1
354
20,693,168
2013-12-19T22:45:00.000
0
0
0
0
python,google-sheets
50,629,830
3
false
1
0
Yes, it is possible, and this is how I am personally doing it. Search for "doGet" and "doPost(e)".
1
0
0
I don't even know if this is possible. But if it is, can someone give me the broad strokes on how I can use a Python script to populate a Google spreadsheet? I want to scrape data from a web site and dump it into a Google spreadsheet. I can imagine what the Python looks like (Scrapy, etc.). But does the language support writing to Google Drive? Can I kick off the script from within the spreadsheet itself, or would it have to run outside of it? The ideal scenario would be to open a Google spreadsheet, click a button, and have the Python script execute and fill in said spreadsheet.
Is this possible - Python script to fill a Google spreadsheet?
0
1
0
3,149
20,693,528
2013-12-19T23:13:00.000
0
0
1
0
c++,python,input,keyboard,background-process
20,693,595
1
true
0
0
You need to search for keyboard hooks. Most systems provide a way for you to ask to see any events that go by, either all events or a subset (keyboard/mouse/etc.). This can then be used to respond to them. It's used by macro software such as AutoHotKey, and by keyloggers that try to capture people's passwords and such. If you search for keylogger software, you will probably find some simple examples of use.
1
0
0
I want to make a program where when you press a keyboard key it plays the next sound in a list of sounds (preferably using C++ or python), but I want this to work in any program (Microsoft Word etc.) and just be running in the background. I have no idea how to do this or even where to look. Also, if anyone knows a good link for learning how to make a program read midi files, that would be nice too. Thanks a lot.
How do I run a keyboard program in the background?
1.2
0
0
231
20,694,492
2013-12-20T00:48:00.000
7
1
1
0
python,hashtable,binary-search-tree
20,694,525
3
true
0
0
A hash table takes O(1) time to look up any given entry (i.e. to check whether or not it is in the data structure), whereas a binary search tree takes O(logn) time. Therefore, a hash table will be a more efficient option in terms of response speed. Binary search trees are more useful in scenarios where you need to display things in order, or find multiple similar entries.
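A tiny illustration using Python's built-in hash-based set (the plate values are made up); membership tests are O(1) on average, versus O(log n) for a balanced BST:

```python
# In practice this would hold all 10,000 plates; three suffice to illustrate.
plates = {"ABC1234", "XYZ9876", "QWE5555"}

def plate_exists(plate):
    """Average-case O(1) hash lookup of a license plate string."""
    return plate in plates

print(plate_exists("ABC1234"))  # True
print(plate_exists("ZZZ0000"))  # False
```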
1
5
0
I am attempting to write a program that determines whether a specific license plate is one of 10,000 that I have stored. I want to write a fast response algorithm first and foremost, with memory usage as a secondary objective. Would a balanced binary search tree or a hash table be more sufficient in storing the 10,000 license plate numbers (which also contain letters)?
Best Data Structure for storing license plates and searching if a given license plate exists
1.2
0
0
1,592
20,695,601
2013-12-20T03:00:00.000
1
1
1
0
python,unit-testing,python-imaging-library,pillow
23,596,144
1
false
0
0
Does it have to be exactly 150kb, or just somewhere comfortably over 100kb? One approach would be to create a JPEG at 100% quality, and insert lots of (random) text into all the available EXIF and IPTC headers. Including a large thumbnail image will also push the size up. (And like Bo102010 suggested, you could also use random RGB values to minimise the compression.)
1
5
0
I'm trying to work out a method for dynamically generating image files with PIL/Pillow that are a certain file size in order to better exercise certain code paths in my unit tests. For example, I have some image validation code that limits the file size to 100kb. I'd like to generate an image dynamically that is 150kb to ensure that the validation works. It needs to be a valid image and within given dimensions (ie 400x600). Any thoughts on how to add sufficient "complexity" to an image canvas for testing?
Dynamically generate valid image of a certain filesize for testing
0.197375
0
0
652
20,699,380
2013-12-20T08:36:00.000
2
0
1
0
python,function,input,output,terminology
20,737,400
1
false
0
0
I remember reading back in the day that subroutines which return values were referred to as functions (from the mathematical term) and those that don't were called subroutines or procedures. Arguments or parameters are the things passed into a function. What it returns is called a return value or simply the value of the function.
1
0
0
Very simple question, but I've always wondered what the answer was. In the same way that the input to a function is called the argument, what is the terminology for the output received? Also, if a function simply performs a process without directly passing data back out, such as arranging a file in the way you wish, then what is the terminology for this? Cheers
Python Terminology Regarding Function Input and Output
0.379949
0
0
52
20,700,738
2013-12-20T09:50:00.000
0
1
0
0
python,http,webserver,cgi
20,701,061
1
true
1
0
To use it as CGI you must move your script into cgi-bin or a similar directory of the HTTP server. Then point your browser at http://127.0.0.1/cgi-bin/my_script.py and see the results. In case of problems, see the HTTP server error log. In case of strange errors, show us what HTTP server and OS you use, for example "Apache 2.2 on WinXP".
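A minimal sketch of what such a CGI script might look like once placed in cgi-bin (the file name, page content, and helper function are hypothetical). The essential point is that CGI output is HTTP headers, then a blank line, then the HTML body:

```python
#!/usr/bin/env python
def render_page():
    # CGI response: headers, a blank line, then the document body
    return ("Content-Type: text/html\r\n"
            "\r\n"
            "<html><body><h1>Login</h1>"
            "<form method='post'>"
            "<input name='user'><input name='pw' type='password'>"
            "</form></body></html>")

if __name__ == "__main__":
    print(render_page())
```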
1
0
0
I am using Python 2.7. My task is to create a web page, say a login screen, in the browser using Python. I tried CGI. My code shows the HTML file (all the HTML code from <html> to </html>) on the CMD screen, whereas I want it rendered in the browser as a web page. Any help?
CGI with Python2.7
1.2
0
0
326
20,701,971
2013-12-20T10:52:00.000
4
1
0
0
c++,python,protocol-buffers
20,713,817
1
true
0
1
Yes, I believe this is expected. The pure-Python implementation stores all the fields in a dict. To construct a new message, it essentially just creates an empty dict, which is very fast. The C++ implementation actually initializes a C++ DynamicMessage object under the hood and then wraps it. DynamicMessage actually initializes all of the fields upfront, so even though it's implemented in C++, it's "slower" -- but this upfront initialization makes later operations faster. I believe you can improve performance further by compiling C++ versions of your protobuf objects and loading them in as another extension. If I recall correctly, the C++-backed Python protobuf implementation will then automatically use the compiled versions rather than DynamicMessage.
1
2
0
I am using Google Protobuf in my Python application. Experimenting with the protobufs I found that Protobuf Message Creation is much slower in CPP based python implementation as compared to Python based Python implementation. Message creation with PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp was upto 2-3 times slower as compared to pure python based Protobuf Message Creation. Is this expected? I found that SerializeToString and ParseFromString both are faster in the cpp verison. The difference in each case widens as the size of the Message increases. I am using python's standard timeit module to time these tasks. (Using Version 2.4.1 of google protobuf)
Performance of C++ based python implementation of Google Protobuf
1.2
0
0
4,148
20,711,941
2013-12-20T21:06:00.000
1
0
1
0
python,cython,hdf5,pytables
20,714,204
1
true
0
0
If you use the h5py package, you can use numpy.asarray() on the datasets it gives you, then you have a more familiar NumPy array that you already know how to deal with. Please note that h5py had a bug related to this until a couple years ago which caused disastrously slow performance when doing asarray() but this was solved so please don't use a very old version if you're going to try this.
1
3
1
I develop a library that uses Cython at a low level to solve flow problems across 2D arrays. If these arrays are numpy arrays I can statically type them thus avoiding the Python interpreter overhead of random access into those arrays. To handle arrays of sizes so big they don't fit in memory, I plan to use hd5file Arrays from pytables in place of numpy, but I can't figure out if it's possible to statically type a CArray. Is it possible to statically type hd5file CArrays in Cython to avoid Python interpreter overhead when randomly accessing those arrays?
Can I statically type a h5file array in Cython?
1.2
0
0
100
20,712,174
2013-12-20T21:24:00.000
1
0
0
0
python,mysql,database,django,amazon-ec2
20,712,349
2
true
1
0
Your column is only 2 chars wide, but you are trying to store the strings 'HIGH', 'MEDIUM', 'LOW' from your TEMP choices (the first value of each tuple is saved in the database). Increase max_length or choose different values for choices, e.g. TEMP = ( ('H', 'High'), ('M', 'Medium'), ('L', 'Low'), ). It worked fine in SQLite because SQLite simply ignores the max_length attribute (and other things).
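The fix above can be checked without Django at all: a minimal sketch (plain Python, names taken from the question) showing why the original TEMP choices overflow max_length=2 and why the shortened keys fit.

```python
# Django stores the FIRST element of each choices tuple in the database,
# so that is the value that must fit inside the CharField's max_length.
TEMP_BROKEN = (('HIGH', 'High'), ('MEDIUM', 'Medium'), ('LOW', 'Low'))
TEMP_FIXED = (('H', 'High'), ('M', 'Medium'), ('L', 'Low'))

MAX_LENGTH = 2  # the column width declared on the model's CharField

def fits(choices, max_length):
    # True only if every stored value fits in the column.
    return all(len(stored) <= max_length for stored, label in choices)

print(fits(TEMP_BROKEN, MAX_LENGTH))  # False -> 'MEDIUM' is 6 chars wide
print(fits(TEMP_FIXED, MAX_LENGTH))   # True
```

This also explains why SQLite hid the bug: it ignores max_length, while MySQL enforces it and truncates.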
1
0
0
i have deployed a simple Django application on AWS. The database i use is MySQL. Most parts of this application runs well. But there happens to be a problem when i submitted a form and store data from the form into a model. The error page presents Data truncated for column 'temp' at row 1. temp is a ChoiceField like this: temp = forms.ChoiceField(label="temperature", choices=TEMP), in the model file the temp is a CharField like this temp = models.CharField(max_length=2, choices=TEMP). The error happens at .save(). How can i fix this problem? Any advice and help is appreciated. BTW, as what i have searched, the truncation problem happens because of data type to be stored in database. But i still cannot figure out how to modify my code.
Data truncated for column 'temp' at row 1
1.2
1
0
2,106
20,721,145
2013-12-21T16:35:00.000
1
0
0
0
python,scrapy
20,721,196
1
true
1
0
Use str.format to insert variable value into xpath expression: sel.select('//a[contains(@href, "{0}")]/@href'.format(url_type)).extract()
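A minimal sketch of the substitution itself, assuming url_type holds the substring to match (e.g. "rss"):

```python
# Build the XPath expression by substituting the variable's value
# into the template with str.format.
url_type = "rss"
xpath = '//a[contains(@href, "{0}")]/@href'.format(url_type)

print(xpath)  # //a[contains(@href, "rss")]/@href
```

Changing url_type once then updates every selector built from the template.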
1
0
0
Instead of "rss", I want to add a global variable to it. So that I don't have to change it again and again. sel.select('//a[contains(@href, "rss")]/@href').extract() to something like this: sel.select('//a[contains(@href, url_type)]/@href').extract()
Add variable in HtmlXPathSelector select function
1.2
0
1
145
20,723,009
2013-12-21T20:01:00.000
1
0
1
0
python
20,723,423
3
false
0
0
Epydoc: --parse-only, --introspect-only By default, epydoc will gather information about each Python object using two methods: parsing the object's source code; and importing the object and directly introspecting it. Epydoc combines the information obtained from these two methods to provide more complete and accurate documentation. However, if you wish, you can tell epydoc to use only one or the other of these methods. For example, if you are running epydoc on untrusted code, you should use the --parse-only option.
1
2
0
I have a pretty large project with lot of packages, modules and dependencies. I want to generate the API doc of a few modules in the project. The documentation is already added as doc-strings. I tried using sphinx but I am plagued with import errors. And the configuration required to avoid these import errors is just too much for my need. Is there a doc generator that would take a module, parse the doc strings in it and produce the output, either in markdown, rst or html in a good readable format?
Any simple python module documentation generator that does not try to run/import code?
0.066568
0
0
492
20,723,086
2013-12-21T20:10:00.000
1
0
1
0
python,ipython
20,723,262
2
false
0
0
I had to apt-get install ipython3, on Linux Mint, which is similar to Debian and Ubuntu. If you're on a Redhat-like OS, you may have something similar for yum.
1
9
0
I have Python 2.7 and Python 3 installed on my machine along with IPython. I want to use IPython with Python 3, but by default it's using Python 2.7. What's the process to use IPython with Python 3?
Switching to ipython for python3?
0.099668
0
0
3,389
20,727,189
2013-12-22T07:21:00.000
0
1
0
1
python,linux,raspberry-pi,raspbian
20,727,288
1
false
0
0
Don't try to open a terminal window from Python. Just use the os.system() command to run the three commands you show, if you insist on using Python. Even easier would be a bash script into which you can write the three commands just as you have written them above. Even better, and to get rid of the need to type the sudo password somewhere, add the three commands without sudo to /etc/rc.local just before the exit 0.
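If you do go the Python route, a minimal sketch using the subprocess module (generally preferred over os.system); the modprobe commands are taken from the question and are assumed to run as root:

```python
import subprocess

commands = [
    "sudo modprobe spi-bcm2708",
    "sudo modprobe fbtft_device name=adafruitts rotate=90",
]

def run(cmd):
    # shell=True lets us pass the command line exactly as typed
    # in the terminal; returns the command's exit status.
    return subprocess.run(cmd, shell=True).returncode

# Harmless demonstration that run() reports the exit status:
print(run("true"))   # 0
print(run("false"))  # 1
```

For startx you would also set FRAMEBUFFER=/dev/fb1 in the environment, e.g. via the env= argument of subprocess.run.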
1
1
0
The following commands (to get a small screen working) execute just fine if I type them in from the LXTerminal window while running Raspian on a raspberry Pi once my desktop is loaded: sudo modprobe spi-bcm2708 sudo modprobe fbtft_device name=adafruitts rotate=90 export FRAMEBUFFER=/dev/fb1 startx I'm new to Pi and Python, and after piecing together several forum posts, the best way I thought to do this would be to run a python script from the /etc/xdg/lxsession/LXDE/autostart config file- I just don't know what the python script should say to automaticlaly open a LXTerminal window and type in the commands? Any help would be much appreciated, thanks!
Raspbian Run 4 commands from terminal window after desktop loads Python script?
0
0
0
1,022
20,728,436
2013-12-22T10:32:00.000
0
0
0
1
google-app-engine,python-2.7,ubuntu
22,313,654
1
false
1
0
Reset your proxy environment variables (http_proxy and https_proxy) while running the app server locally. You need them only when you are deploying your app to the actual Google servers.
1
1
0
I have been trying to run a small app using google app engine (python) on 8080. I am behind my college proxy which requires a username and password to login here is what i get INFO 2013-12-22 10:16:19,516 sdk_update_checker.py:245] Checking for updates to the SDK. INFO 2013-12-22 10:16:19,518 init.py:94] Connecting through tunnel to: appengine.google.com:443 INFO 2013-12-22 10:16:19,525 sdk_update_checker.py:261] Update check failed: WARNING 2013-12-22 10:16:19,527 api_server.py:331] Could not initialize images API; you are likely missing the Python "PIL" module. INFO 2013-12-22 10:16:19,529 api_server.py:138] Starting API server at: >localhost:35152 INFO 2013-12-22 10:16:19,545 dispatcher.py:171] Starting module "default" running at: >localhost:8080 INFO 2013-12-22 10:16:19,552 admin_server.py:117] Starting admin server at: >localhost:8000 but when i go to my browser to go to 8080...i get: HTTPError() HTTPError() Traceback (most recent call last): File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate req.respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond self.server.gateway(self).respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond response = self.req.server.wsgi_app(self.env, self.start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call return app(environ, start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware response_body = iter(application(environ, wrapped_start_response)) INFO 2013-12-22 10:22:05,095 module.py:617] default: "GET / HTTP/1.1" 500 - File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call self._flush_logs(response.get('logs', [])) File 
"/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall encoded_response = self._server.Send(self._path, encoded_request) File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send f = self.opener.open(req) File "/usr/local/lib/python2.7/urllib2.py", line 410, in open response = meth(req, response) File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "/usr/local/lib/python2.7/urllib2.py", line 448, in error return self._call_chain(*args) File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain result = func(*args) File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden Traceback (most recent call last): File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate req.respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond self.server.gateway(self).respond() File 
"/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond response = self.req.server.wsgi_app(self.env, self.start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call return app(environ, start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware response_body = iter(application(environ, wrapped_start_response)) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call self._flush_logs(response.get('logs', [])) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall encoded_response = self._server.Send(self._path, encoded_request) File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send f = self.opener.open(req) File "/usr/local/lib/python2.7/urllib2.py", line 410, in open response = meth(req, response) File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "/usr/local/lib/python2.7/urllib2.py", 
line 448, in error return self._call_chain(*args) File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain result = func(*args) File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden HTTPError() HTTPError() Traceback (most recent call last): File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate req.respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond self.server.gateway(self).respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond response = self.req.server.wsgi_app(self.env, self.start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call return app(environ, start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware response_body = iter(application(environ, wrapped_start_response)) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call self._flush_logs(response.get('logs', [])) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", 
line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall encoded_response = self._server.Send(self._path, encoded_request) File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send f = self.opener.open(req) File "/usr/local/lib/python2.7/urllib2.py", line 410, in open response = meth(req, response) File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "/usr/local/lib/python2.7/urllib2.py", line 448, in error return self._call_chain(*args) File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain result = func(*args) File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden Traceback (most recent call last): File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate req.respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond self.server.gateway(self).respond() File "/home/yash/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond response = self.req.server.wsgi_app(self.env, self.start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in call return app(environ, start_response) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware response_body = iter(application(environ, wrapped_start_response)) File "/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in call self._flush_logs(response.get('logs', [])) File 
"/home/yash/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/home/yash/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/home/yash/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall encoded_response = self._server.Send(self._path, encoded_request) File "/home/yash/google_appengine/google/appengine/tools/appengine_rpc.py", line 409, in Send f = self.opener.open(req) File "/usr/local/lib/python2.7/urllib2.py", line 410, in open response = meth(req, response) File "/usr/local/lib/python2.7/urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "/usr/local/lib/python2.7/urllib2.py", line 448, in error return self._call_chain(*args) File "/usr/local/lib/python2.7/urllib2.py", line 382, in _call_chain result = func(*args) File "/usr/local/lib/python2.7/urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden INFO 2013-12-22 10:22:05,141 module.py:617] default: "GET /favicon.ico HTTP/1.1" 500 - I have set my proxy connections (with username and password) as environment variables in apt.conf files and my terminal works fine with it... i use ubuntu 12.04
Running Google App Engine locally behind a proxy
0
0
0
426
20,728,942
2013-12-22T11:34:00.000
1
0
0
0
python,imshow
20,729,020
1
true
0
0
Try setting those values to np.nan (or float('nan')); you may also want to pass interpolation='nearest' to imshow as an argument.
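A minimal sketch of the masking step, shown in plain Python (with NumPy you would write data[data == 0] = np.nan instead); 0.0 marking "no data" is an assumption from the question:

```python
import math

# One row of the plot; 0.0 marks the "no data" strips (assumption).
row = [0.3, 0.0, 0.7, 0.0, 0.9]

# Replace the no-data markers with NaN before calling imshow, so
# matplotlib leaves those cells uncoloured instead of painting them
# with the colormap's zero colour.
masked = [float('nan') if v == 0 else v for v in row]

print([math.isnan(v) for v in masked])  # [False, True, False, True, False]
```

Real zeros in the data are then untouched, only the explicit no-data markers become blank.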
1
1
1
I am producing an imshow plot in Python. In the final plot, I have strips/columns of data, between which there is no data at all - kind of like a barcode. At the moment, where I have no data, I have just set all values to zero. The color of these regions of no data is therefore whatever colour represents zero in my colorbar - green in my case. What I really want is for these columns/strips just to be white, and to make to really clear that these are regions of NO data. I realise that I could change the colorbar so that the zero is white, but I really want to distinguish the regions of no data from any zeros that might be in the data. Thank you.
highlight regions of no data in a Python imshow plot
1.2
0
0
622
20,740,602
2013-12-23T09:32:00.000
2
0
1
0
python,ipython,ipython-notebook
20,749,381
1
true
0
0
Start another server if you are using 1.0/1.1. This will be solved in 2.0, which will allow you to browse your hard drive.
1
0
0
I know that the IPython Notebook expects to have a notebook directory and that this can be set at startup with --notebook-dir. But is it possible to run a notebook stored at any location without first importing/copying it to this directory?
How to run IPython Notebooks stored at any location
1.2
0
0
142
20,744,160
2013-12-23T13:01:00.000
0
0
1
1
python,windows,file,concurrency
20,744,641
1
false
0
0
You can open the file in 'r+b' mode. You would then have a single file object which could be accessed by the two different processes. Doing so requires some communication between the processes (or careful handling of the processes) about the current state of the file. Overall, this seems a better approach than overriding OS/file-system locking to create duplicate file objects, which seems like the sort of thing that can't possibly end well. You could also simply have the writer process open/close the file every time it accesses it, and do the same in the reader process, assuming this is feasible for your program.
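A minimal sketch of the open/close-per-access approach: the writer appends and closes immediately, and the reader opens its own handle only when it needs to read, so the two never hold the file at the same time.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.bin")

def append_record(data):
    # Writer side: open in append mode, write, close right away.
    with open(path, "ab") as f:
        f.write(data)

def read_all():
    # Reader side: a fresh handle for each occasional read.
    with open(path, "rb") as f:
        return f.read()

append_record(b"hello ")
append_record(b"world")
print(read_all())  # b'hello world'
```

On Windows this sidesteps the sharing-mode problem entirely, at the cost of reopening the file a few times a second.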
1
3
0
I have a Python script, which appends content to a large file a few times a second. I also need a second process, which occasionally opens that large file, and reads from it. How do I do that in Windows? In C++ I could simply open a file with _SH_DENYNO, but what is the equivalent in Python?
Concurrent file access in Python
0
0
0
686
20,748,526
2013-12-23T17:43:00.000
1
0
1
0
python,square-bracket
20,748,576
2
false
0
0
Square brackets are just a convention used to indicate optional arguments.
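Concretely: the brackets in round(x[, n]) mean n is optional, so you call the function either with or without the second argument, never with literal brackets.

```python
print(round(10.6987))      # 11  -- n omitted, rounds to the nearest int
print(round(10.6987, 2))   # 10.7
```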
1
1
0
I have not understood why they write round(x[,n]) in the syntax, but in code they write round(10.6987,12) without square brackets before the comma, i.e. not round(10.6987[,12]).
In Python, why round(x[,n]) is written instead of round(x,n)?
0.099668
0
0
151
20,748,906
2013-12-23T18:10:00.000
1
0
1
0
image,matplotlib,inline,ipython
20,749,407
1
false
0
0
No, it is not possible without saving the image. Using slide mode you can exclude some cells. It is also possible to use slide mode and not show the code if you are using custom templates.
1
2
0
Is it possible to create an inline png image, e.g. using matplotlib, and re-using it in a markdown cell (via html) without saving it on hard disk first? IPython notebook saves inline images in the ipynb file, so the data is available, I wonder if it is also accessible? One idea is to generate images for pretty slides (cell mode) and to to suppress the slides for image creation.
Reuse inline image in IPython notebook
0.197375
0
0
156
20,749,102
2013-12-23T18:25:00.000
1
0
0
0
python,selenium,automation,web-scraping,capybara
20,749,267
3
true
1
0
You may want to check out CasperJS. I use Python to fire CasperJS scripts to do web scraping and return data to Python to parse further or store to a database etc... Python itself has BeautifulSoup and Mechanize but the combination is not great with Ajax based sites. Python and CasperJS is perfect.
1
2
0
I play on chess.com and I'd like to download a history of my games. Unfortunately, they don't make it easy: I can access 100 pages of 50 games one at a time, click "Select All" and "Download" and then they e-mail it to me. Is there a way to write a script, in python or another language, that helps me automate any part of the process? Something that simulates clicking a link? Is Capybara useful for things like this outside of unit testing? Selenium? I don't have much experience with web development yet. Thanks for your help!
Automating web tasks?
1.2
0
1
406
20,749,341
2013-12-23T18:41:00.000
0
0
1
0
python,pycharm
20,749,817
1
false
0
0
I'm using PyCharm 3.0.2 and Ctrl+Click works for links that start with http://. But for links that don't start with http:// (i.e. www.example.com), I had to keep the cursor on the link (you don't have to highlight it), and the context menu had an option called Open in Browser with which you can navigate to that link in the default browser. Now, when in comments, Ctrl+Click is not working even for links with http://. But the second method (Open in Browser from the right-click context menu on the link) still works, so you can still get the job done.
1
2
0
I'm working with project, where previous developer leave many comments with the links on documentation. It's would be very useful, if PyCharm may to follow the links directly from source code. I know that in Visual Studio this feature exist, the links open by CTRL+Click over them. What about PyCharm? I'm using PyCharm Community Edition 3.0.2.
Can PyCharm follow the link from source editor?
0
0
0
655
20,749,840
2013-12-23T19:21:00.000
0
1
0
0
python,imap
20,751,370
1
false
0
0
Use SEARCH SINCE. It only works at day resolution, though.
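A minimal sketch of building the search criterion: SEARCH SINCE takes an RFC 3501 date (DD-Mon-YYYY), so format the datetime accordingly. The imaplib call is shown commented out since it needs a live connection (conn is a hypothetical IMAP4 instance).

```python
from datetime import datetime

since = datetime(2013, 12, 23)

# RFC 3501 date format: day, abbreviated month name, year.
criterion = since.strftime('SINCE %d-%b-%Y')
print(criterion)  # SINCE 23-Dec-2013

# typ, msg_ids = conn.search(None, '(%s)' % criterion)
```

Because SINCE compares only the date part, you still have to filter the returned messages by time-of-day yourself if you need finer resolution on INTERNALDATE.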
1
2
0
How do I write an IMAP query to get all messages with INTERNALDATE higher than a given datetime?
IMAP query to get messages with `INTERNALDATE` higher than given datetime
0
0
0
620
20,750,119
2013-12-23T19:42:00.000
1
0
0
0
jquery,python,ajax,django
20,750,187
2
false
1
0
This is a frequent mistake when writing JavaScript. You haven't disabled the default actions on click or submit. This means that the JS executes, calling the AJAX, but then immediately the normal browser submit is also executed, causing a refresh. voteBehavior should accept an event parameter, and you should call event.preventDefault() at the start of the function.
1
0
0
I'm a newbie here, and much of what I have learned about django and python have come from this website. Thank you all for being so helpful! This is my first question post. I've got 2 problems as I try to extend what I've learned from the Django tutorial (1.6) and try to get the Polls app to load via AJAX. I want to use the main mysite app as a home page, and pull in content from other apps in the mysite project using ajax. The tutorial doesn't really cover integrating content from different apps on a single page. I have 2 ajax elements already working on the main mysite page (a "trick or treat" button that retrieves some silly text, and a small dns lookup form/button) but those are part of the mysite app, so all of the logic is handled using the mysite app urlconf, views and templates. There is another div on the page which is for a "Featured App" that will get pulled in, also via ajax. Basically, mysite.views builds a list of apps that have a 'ajaxFeaturedAppView', and then chooses one at random to display in the "Featured App" section on the mysite page. This is my novice attempt at decoupling the mysite app from the other apps as much as possible. Problem 1) The initial poll question and choices and vote button all appear correctly on page load, but the vote button just loads another poll question. It should display poll results. Problem 2) The other ajax elements on the page get triggered when I hit the Vote button, also. I think this is because the Vote button action triggers the document ready() event, which initializes the ajax elements. But the other ajax elements don't do that; they do not trigger the document ready() event. I think that it may be one problem with two symptoms, actually. So, how do I get the vote button to not trigger a document ready event, and will that allow me to see the poll results? Or am I doing something else wrong? EDIT: Okay, there were a few problems with that pieced-together code. Thanks for the help.
AJAX and Django using the polls app from the tutorial: 2 problems
0.099668
0
0
356
20,753,215
2013-12-24T00:24:00.000
0
0
1
0
python,32bit-64bit,py2exe
53,809,976
2
false
0
0
You can install both 32-bit and 64-bit Python on your 64-bit system. You need to install the needed packages and py2exe separately for the 32-bit and 64-bit versions of Python. After you install the 32-bit version, you must change the Windows path for Python to the 32-bit install to use it; after a successful 32-bit compile you can change the Windows path back to 64-bit. A note about 32 and 64 bit: if your app requires more than 2GB of RAM (like scientific computing, etc.), use the 64-bit version; otherwise use the 32-bit version. You also may not need 64-bit at all, since Windows will run 32-bit programs using WoW64, but a 64-bit build on a 64-bit OS will give you more performance. Also note that some py2exe options, like bundle_files, are not supported on 64-bit.
2
5
0
I've been researching a bit, but need a bit of advice specifically for Py2exe apps. I'm on a 64 bit machine, but from what I understand I can compile in 32 bit format, and run on both 32 and 64 bit. Now my question. To make an exe, I'm using py2exe. So from what I understand, you don't need a 32 bit environment, just a 32 bit compiler which means 32 bit Py2exe? So if I delete Py2exe and reinstall the 32 bit version of py2exe, it will run on both? Are there any other precautions I need to take? Thanks so much.
Py2exe - Compile exe for 32 bit and 64 bit
0
0
0
6,865
20,753,215
2013-12-24T00:24:00.000
7
0
1
0
python,32bit-64bit,py2exe
20,756,528
2
true
0
0
Technically, 32-bit/x86 executables can be used on 64-bit Windows, but not the other way around; when I used to fiddle around with running 64-bit programs on a 32-bit/x86 machine, it crashed the computer. So compatibility is one-way: 64-bit program -> 64-bit computer; 32-bit/x86 program -> 32-bit/x86 computer; 32-bit/x86 program -> 64-bit computer. Long story short: just build with the 32-bit/x86 Py2exe.
2
5
0
I've been researching a bit, but need a bit of advice specifically for Py2exe apps. I'm on a 64 bit machine, but from what I understand I can compile in 32 bit format, and run on both 32 and 64 bit. Now my question. To make an exe, I'm using py2exe. So from what I understand, you don't need a 32 bit environment, just a 32 bit compiler which means 32 bit Py2exe? So if I delete Py2exe and reinstall the 32 bit version of py2exe, it will run on both? Are there any other precautions I need to take? Thanks so much.
Py2exe - Compile exe for 32 bit and 64 bit
1.2
0
0
6,865
20,756,069
2013-12-24T06:34:00.000
3
0
1
0
python,import
20,756,104
1
true
0
0
It is only bad to import module M from modules N1, N2, N3, etc. if M holds some global state that is then modified by some of the N* modules. Then side effects occur depending on import order. Usually in clean code this is not the case, and if M only holds class definitions, functions, and global vars that are not modified/modifiable, then you should be completely fine.
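A minimal sketch demonstrating why repeated imports are harmless: Python caches each module in sys.modules, so every importer gets the same single instance. The three files written here (util.py, modulex.py, moduley.py) are hypothetical stand-ins for the question's layout.

```python
import os
import sys
import tempfile

# Create a throwaway directory with a shared utility module and two
# modules that both import it.
pkg = tempfile.mkdtemp()
sources = {
    "util.py":    "counter = 0\n",
    "modulex.py": "import util\n",
    "moduley.py": "import util\n",
}
for name, src in sources.items():
    with open(os.path.join(pkg, name), "w") as f:
        f.write(src)

sys.path.insert(0, pkg)
import modulex
import moduley

# Both importers see the exact same cached module object.
print(modulex.util is moduley.util)  # True
```

This is also why mutating util's globals from modulex would be visible in moduley: there is only one util.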
1
1
0
I know this question has been asked/answered a lot, but I'm curious about my specific case -- and I haven't seen an answer that I fully understand. Is it bad to have several modules that import the same module? So, say I have some generic utilities module that I reuse a lot, and modulex.py and moduley.py both import it. But then say modulex needs to import moduley. At this point, am I introducing bad juju?
Python 2 or 3 circular import clarification
1.2
0
0
72