Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
32,412,936 | 2015-09-05T12:15:00.000 | 1 | 0 | 1 | 0 | python,python-sphinx,docstring,pydoc | 32,412,982 | 2 | false | 0 | 0 | None of them mean anything by themselves. Various programs will scan a docstring and interpret certain pieces (or tags) specially for formatting, linking, etc. By convention (starting with javadoc?), such tags often begin with :. Beyond that, the specific meaning depends on the program parsing the docstring, and there is no defined standard for what tags should be used. Some programs use :return to document the return value of a function, others use :rtype.
The only real answer to your question is, consult the documentation for the program you expect to process your docstrings. | 1 | 2 | 0 | Can someone tell me the differences between the following docstring parameters?
:type and :param
I've seen both being used to specify the type of method arguments, but I don't think they do exactly the same. Is one of them for the programmer and the other for the IDE or something like that?
:rtype, :return and :returns
Especially :return and :returns seem very similar, so which should be used in which situation? | Python Docstring: What do these docstring parameters mean exactly? | 0.099668 | 0 | 0 | 128 |
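For illustration, a minimal sketch of how these tags commonly appear in a Sphinx-style (reST) docstring -- the function itself is hypothetical, and other tools (e.g. epydoc) use different conventions:

```python
def scale(value, factor):
    """Multiply a value by a factor.

    :param value: the number to scale
    :type value: float
    :param factor: the multiplier
    :type factor: float
    :returns: the scaled value
    :rtype: float
    """
    return value * factor
```

Here :param/:type describe each argument and :returns/:rtype describe the return value; Sphinx accepts :return: as a synonym for :returns:.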
32,413,486 | 2015-09-05T13:15:00.000 | 6 | 0 | 1 | 0 | python,macos,sketch-3 | 32,423,858 | 3 | true | 0 | 0 | I don't have any experience with Sketch, but I downloaded a sample .sketch file and it turned out to be an "SQLite 3.x database". This means you can open it with python's sqlite3 module.
As it happens I had some code lying around that I wrote to inspect another sqlite database, so I took a look: it contains a metadata table and a "payload" table (called metadata and payload, respectively), both of which have just name and value columns. However, the payload table has just one row, and in both tables the values seem to be containers in some other format I don't recognize. So although sqlite3 is the file format, it appears that it is just the outer layer of the onion (a sketch of the inspection follows this record). | 1 | 0 | 0 | How can I parse .sketch files generated by sketch - mac application?
I know that psd_tools can be used to parse .psd files generated in Adobe Photoshop. | How to parse .sketch files in Python | 1.2 | 0 | 0 | 2,417 |
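A minimal sketch of the inspection described above, using only the standard library (the file name is an assumption):

```python
import sqlite3

conn = sqlite3.connect("design.sketch")  # hypothetical .sketch file
cur = conn.cursor()

# List the tables; the answer above found `metadata` and `payload`
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cur.fetchall())

# Dump the name/value pairs from the metadata table
for name, value in cur.execute("SELECT name, value FROM metadata"):
    print(name, repr(value)[:60])

conn.close()
```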
32,416,955 | 2015-09-05T19:32:00.000 | 2 | 0 | 1 | 0 | python,google-app-engine,datetime,google-cloud-datastore,app-engine-ndb | 32,419,523 | 2 | true | 1 | 0 | The only way would be to store the time of day as a separate property. An int will be fine; you can store it as seconds. You could do this explicitly (i.e. set the time property at the same time you set the datetime), or use a ComputedProperty to automatically set the value. | 1 | 1 | 0 | I need to query items that were added at some time of day, ignoring which day. I only save the DateTime of when the item was added.
Comparing datetime.Time to DateTimeProperty gives an error, and DateTimeProperty does not have a time() method. | NDB query by time part of DateTimeProperty | 1.2 | 0 | 0 | 519 |
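A sketch of the ComputedProperty approach from the answer above (the model and property names are hypothetical):

```python
from google.appengine.ext import ndb

class Item(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)
    # Seconds since midnight, derived automatically from `created`
    created_time = ndb.ComputedProperty(
        lambda self: self.created.hour * 3600
        + self.created.minute * 60
        + self.created.second)

# Query items added between 09:00 and 10:00, regardless of the day
start, end = 9 * 3600, 10 * 3600
items = Item.query(Item.created_time >= start,
                   Item.created_time < end).fetch()
```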
32,417,694 | 2015-09-05T21:09:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 32,417,794 | 2 | true | 0 | 0 | If you are running on Linux or some other Unix-like OS, try ulimit (man ulimit) and the corresponding Python module resource. | 1 | 1 | 0 | I am working on creating some super basic virtual machine software. I have just started working on the project. Essentially what I have in mind is creating a folder with a / file system inside, and a certain amount of RAM allocated to the process (set by the user). Is there a way to set how much RAM a python script (or, for that matter, a thread specifically) is allowed to use?
I have done some research but no other questions answer how to specifically set the value before hand...they mostly deal with MemoryError errors.
Thanks! | Set Maximum Memory Usage? | 1.2 | 0 | 0 | 712 |
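On Unix-like systems, a sketch of the resource-module approach suggested above (the 256 MiB cap is an arbitrary assumption):

```python
import resource

limit = 256 * 1024 * 1024  # 256 MiB address-space cap
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (limit, hard))

# Allocations beyond the limit now fail with MemoryError:
try:
    data = bytearray(512 * 1024 * 1024)
except MemoryError:
    print("allocation refused by RLIMIT_AS")
```

Note that the limit applies to the whole process, not to a single thread; per-thread memory caps are not something the OS offers.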
32,418,404 | 2015-09-05T22:50:00.000 | 0 | 0 | 1 | 0 | python,string | 32,418,473 | 1 | true | 0 | 0 | lower is a string method, that is, a function built in to the string object itself. It only applies to string objects.
len is a built in function, that is, a function available in the top namespace. It can be called on many different objects (strings, lists, dicts) and isn't unique to strings. | 1 | 0 | 0 | I'm new to Python and I have a question about the string operations. Is there an over-arching reason that I should understand as to why the lower operation is written as 'variable.lower()' while another one, say length, would be written as 'len(variable)'? | lower versus length syntax in python? | 1.2 | 0 | 0 | 248 |
32,419,015 | 2015-09-06T00:53:00.000 | 0 | 1 | 0 | 1 | c#,python,windows,serial-port | 32,430,151 | 1 | false | 0 | 0 | This question has been asked numerous times on SO and many other forums for the last 10 years or so. The generally accepted answer is to use sysinternals to find the process using the particular file handle. Remember, a serial port is really just a file as far as the win32 api is concerned.
So, three answers for you:
1. Use sysinternals to find the offending application. I don't think this approach will work via Python, but you might hack something together with .NET.
2. Use NtQuerySystemInformation in a getHandles function. Take a look at the structures and figure out which fields are useful for identifying the offending process.
3. Run os.system("taskkill blah blah blah") against known serial-port-using apps. More on this idea at the end.
The 2nd idea sounds fun, however I just don't think the juice is worth the squeeze in this case. A relatively small number of processes actually use serial ports these days and if you are working in a specific problem domain, you are well aware of what the applications are called.
I would just run taskkill (via os.system) against any applications that I know 1) can be safely closed and 2) might actually have a port open. With this approach you'll save the headache of enumerating file handles and get back to focusing on what your application should really be doing. | 1 | 0 | 0 | How to go about to get the process id of a process blocking a certain COM Port on Windows 7 and/or later?
I would like to get the PID programmatically. If possible using Python or C# but the language is not really important, I just want to understand the procedure. | Get PID of process blocking a COM PORT | 0 | 0 | 0 | 367 |
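A tiny sketch of idea 3 from the answer above -- the process names are assumptions and should be replaced with the serial-port-using apps from your own environment:

```python
import os

# Hypothetical list of programs known to hold COM ports open
PORT_HOGS = ["putty.exe", "hypertrm.exe", "teraterm.exe"]

for exe in PORT_HOGS:
    # /F forces termination, /IM selects the process by image name
    os.system("taskkill /F /IM %s" % exe)
```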
32,420,853 | 2015-09-06T06:50:00.000 | 7 | 0 | 0 | 1 | python,macos,opencv,homebrew,opencv3.0 | 37,454,424 | 2 | false | 0 | 0 | It's weird that there is no concise instruction for installing OpenCV 3 with Python3. So, here I make it clear step-by-step:
Install Homebrew Python 3.5: brew install python3
Tap homebrew/science: brew tap homebrew/science
Install any Python3 packages using pip3. This will create the site-packages folder for Python3
For example:
pip3 install numpy
Then install OpenCV3 brew install opencv3 --with-python3
Now you can find the site-packages folder created in Step 2. Just run the following command to link Opencv3 to Python3:
echo /usr/local/opt/opencv3/lib/python3.5/site-packages >> /usr/local/lib/python3.5/site-packages/opencv3.pth
You may have to change the above command correspondingly for your installed Homebrew Python version (e.g. 3.4). | 1 | 4 | 0 | When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7:
If you need Python to find bindings for this keg-only formula, run:
echo /usr/local/opt/opencv3/lib/python2.7/site-packages >>
/usr/local/lib/python2.7/site-packages/opencv3.pth
While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3? | Homebrew installation of OpenCV 3.0 not linking to Python | 1 | 0 | 0 | 7,461 |
32,420,853 | 2015-09-06T06:50:00.000 | 4 | 0 | 0 | 1 | python,macos,opencv,homebrew,opencv3.0 | 32,510,430 | 2 | false | 0 | 0 | You need to install opencv like brew install opencv3 --with-python3. You can see a list of options for a package by running brew info opencv3. | 2 | 4 | 0 | When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7:
If you need Python to find bindings for this keg-only formula, run:
echo /usr/local/opt/opencv3/lib/python2.7/site-packages >>
/usr/local/lib/python2.7/site-packages/opencv3.pth
While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3? | Homebrew installation of OpenCV 3.0 not linking to Python | 0.379949 | 0 | 0 | 7,461 |
32,421,064 | 2015-09-06T07:25:00.000 | 0 | 0 | 1 | 0 | java,python,jython | 32,422,544 | 1 | false | 0 | 0 | That error indicates that jythonc is not on the PATH. You will need to set the PATH environment variable so that it contains the Jython bin folder. | 1 | 0 | 0 | I've been trying to compile Jython .py files into Java .class files, yet whenever and wherever I try to run "jythonc", it is not recognized as an internal or external command. Here's how my environment variables are set:
JYTHON_HOME is set to where jython.jar is (the install directory)
JYTHONPATH is the install directory's bin folder
I'm using Jython 2.7.0 and Python 3.4.3. | jythonc is not recognized as an internal or external command | 0 | 0 | 0 | 763 |
32,422,484 | 2015-09-06T10:22:00.000 | 2 | 0 | 1 | 0 | python-2.7,sorting,sublist | 32,437,929 | 1 | true | 0 | 0 | I got this by doing the following: first a.sort(key=lambda tup: tup[0]),
then a.sort(key=lambda tup: tup[1]). Because Python's sort is stable, the second sort orders the list by the second element, while ties keep the order from the first sort (see the sketch below). | 1 | 1 | 0 | I have a=[[1,2],[2,1],[3,2],[5,1],[4,1]]
I want to get the result sorted list as
[[2,1],[4,1],[5,1],[1,2],[3,2]] | How to sort list of list on the basis of two elements, in Python? | 1.2 | 0 | 0 | 141 |
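Spelled out, this relies on the stability of Python's list.sort: the second sort becomes the primary key and the first sort survives as the tie-breaker.

```python
a = [[1, 2], [2, 1], [3, 2], [5, 1], [4, 1]]

a.sort(key=lambda tup: tup[0])  # secondary key: first element
a.sort(key=lambda tup: tup[1])  # primary key: second element

print(a)  # [[2, 1], [4, 1], [5, 1], [1, 2], [3, 2]]

# Equivalently, in a single pass with a tuple key:
# a.sort(key=lambda tup: (tup[1], tup[0]))
```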
32,423,519 | 2015-09-06T12:23:00.000 | 1 | 1 | 0 | 0 | python-3.x,gunicorn,aiohttp | 32,440,342 | 1 | false | 1 | 0 | At least for now aiohttp is a library without reading configuration from .ini or .yaml file.
But you can write code for reading config and setting up aiohttp server by hands easy. | 1 | 1 | 0 | I implemented my first aiohttp based RESTlike service, which works quite fine as a toy example. Now I want to run it using gunicorn. All examples I found, specify some prepared application in some module, which is then hosted by gunicorn. This requires me to setup the application at import time, which I don't like. I would like to specify some config file (development.ini, production.ini) as I'm used from Pyramid and setup the application based on that ini file.
This is common to more or less all python web frameworks, but I don't get how to do it with aiohttp + gunicorn. What is the smartest way to switch between development and production settings using those tools? | Configuring an aiohttp app hosted by gunicorn | 0.197375 | 0 | 0 | 273 |
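A hand-rolled sketch of that idea -- read an ini file chosen on the command line, then build and run the app. It assumes a recent aiohttp (for web.run_app) and an ini file with a [server] section; all names are hypothetical:

```python
import configparser
import sys

from aiohttp import web

def build_app(config):
    app = web.Application()
    # register routes/handlers here, driven by `config`
    return app

if __name__ == "__main__":
    # e.g. python app.py development.ini   or   python app.py production.ini
    config = configparser.ConfigParser()
    config.read(sys.argv[1])
    web.run_app(build_app(config),
                host=config.get("server", "host"),
                port=config.getint("server", "port"))
```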
32,425,245 | 2015-09-06T15:39:00.000 | -1 | 0 | 0 | 0 | python,python-2.7,scrapy,scrapy-spider | 32,700,014 | 1 | true | 1 | 0 | As @alecxe recommended in the comments, removing the .pyc files followed by re-running the scrapy crawl crawler-name recompiles the python code and creates the new, update .pyc files. | 1 | 1 | 0 | I'm using Scrapy 1.0.3 with Python 2.7.6. I've placed print statements in a file under the /spiders directory for debugging purposes. However, I've more recently added new print statements but scrapy isn't throwing it onto the console. Finding this suspicious, I removed the previous print statements to see if scrapy would update the output accordingly. However, the output from the previous working code still remains the same.
I suspect that Scrapy caches the working code; I found .Python to be a suspect file, which I removed, but the issue remains.
Some google-fu didn't help either and I was wondering if anyone could enlighten me if the issue lies with python or scrapy? | Updating Scrapy Spider does not reflect changes | 1.2 | 0 | 0 | 610 |
32,427,064 | 2015-09-06T18:49:00.000 | 0 | 0 | 0 | 1 | python,blogs,pelican | 32,427,155 | 1 | true | 0 | 0 | The problem was with the PAGE_PATHS value in the settings file.
It turned out that it cannot be set to [""].
I changed it to pages. | 1 | 0 | 0 | Everything was working fine.
But now when I do pelican content, nothing happens. Literally. Command Line is just stuck.
What could be the reason? | Pelican stopped generating the site | 1.2 | 0 | 0 | 55 |
32,428,052 | 2015-09-06T20:35:00.000 | 1 | 0 | 0 | 0 | rethinkdb,rethinkdb-python,rethinkdb-javascript | 32,465,192 | 1 | false | 0 | 0 | As mentioned elsewhere, you can avoid the port conflict by passing in the -o 1 argument. This shifts the ports that the proxy uses by an offset of 1. | 1 | 1 | 0 | we have two client machines, how do we connect both of them using proxy server? As you said earlier:
"To start a RethinkDB proxy on the client:
rethinkdb proxy -j -j ..."
only of the clients can connect in this way, since the ports will already be in use. | How to setup rethinkdb proxy server | 0.197375 | 0 | 0 | 282 |
32,428,313 | 2015-09-06T21:06:00.000 | 0 | 0 | 1 | 0 | python,numpy,ptvs | 32,428,486 | 1 | false | 0 | 0 | You can add your own exception types in the Exceptions dialog (in Debug -> Exceptions), and then check them to have it break on them. | 1 | 1 | 1 | I'm using PTVS to debug some code I've written. I'd like to get it to break into the debugger whenever a numpy exception is raised. Currently, it only breaks into the debugger when a standard Python exception is raised; whenever a numpy exception is raised, all I get is a traceback print out. | Break into debugger on numpy exception in PTVS | 0 | 0 | 0 | 92 |
32,433,005 | 2015-09-07T07:09:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 32,433,837 | 4 | false | 0 | 0 | Also, to prevent None from appearing at the end, you must have a return statement at the end of your function definition; None shows up when you assign the result (with =) of a function that returns nothing to another variable. This was shown in the other posts, but I'm restating it just to highlight it (see the sketch below). | 1 | 2 | 0 | I am trying to write a programme in Python 3 which calculates the mean of the absolute differences between successive values. | Mean of Absolute differences in Python | 0 | 0 | 0 | 236 |
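For reference, a minimal sketch of such a function -- note the explicit return, per the answer above:

```python
def mean_abs_diff(values):
    """Mean of the absolute differences between successive values."""
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return sum(diffs) / len(diffs)

print(mean_abs_diff([3, 1, 4, 1, 5]))  # (2 + 3 + 3 + 4) / 4 = 3.0
```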
32,434,810 | 2015-09-07T09:02:00.000 | 0 | 0 | 0 | 0 | python,nltk | 32,554,802 | 2 | false | 0 | 0 | Your question confuses the nltk itself with nltk_data. You can't really download just part of the nltk (though you could manually trim it down, carefully, if you need to save space). But I think you're trying to avoid downloading all of the nltk data. As @barny wrote, you can see the IDs of different resources when you open the interactive nltk.download() window.
To use the treebank pos tagger, you need its pickled training tables (not the treebank corpus); you'll find them in the "Models" tab under the ID maxent_treebank_pos_tagger. (Hence: nltk.download("maxent_treebank_pos_tagger").)
The FreqDist class doesn't have or need any trained model.
Neither does word_tokenize, which takes a sentence as a single string and breaks it up into words. However, you'll probably need the model for sent_tokenize, which breaks up a longer text into sentences. That's handled by the "Punkt" sentence tokenizer, and you can download its model with nltk.download("punkt").
PS. For general-purpose use, I recommend downloading everything in the "book" collection, i.e. nltk.download("book"). It's only a fraction of the total, and it lets you do most things without scrambling every so often to figure out what's missing. | 1 | 0 | 1 | I want to use word_tokenize, pos_tag, FreqDist. I don't want to download all nltk as default. I want to use nltk.download(info_or_id=''). What options I should put in info_or_id to get the POS tagging and its frequency. POS tagging - Penn Treebank POS. | POS tagging - NLTK- Python | 0 | 0 | 0 | 283 |
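Putting the pieces together with the resource IDs named above (newer NLTK releases may default to a different tagger model, so treat the IDs as era-specific):

```python
import nltk

nltk.download("punkt")                       # model for word/sent_tokenize
nltk.download("maxent_treebank_pos_tagger")  # trained tables for pos_tag

text = "The quick brown fox jumps over the lazy dog."
tokens = nltk.word_tokenize(text)
tagged = nltk.pos_tag(tokens)                # Penn Treebank tags
freq = nltk.FreqDist(tag for word, tag in tagged)

print(tagged)
print(freq.most_common())
```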
32,438,646 | 2015-09-07T12:20:00.000 | 0 | 1 | 0 | 0 | python,email,parsing | 32,442,153 | 1 | false | 0 | 0 | So you have 1500 .eml files and want to identify mails from mailer-daemons and which address caused the mailer-daemon message?
Just iterate over the files, check the From: line to see if it is a mailer-daemon message, and then pull the address that caused the error out of the text (a sketch follows this record).
There is no other way than iterating over them line by line. | 1 | 0 | 0 | I have something like 1500 mail messages in eml format and I want to parse them and get the e-mail addresses that caused the error, along with the error message (or code).
I would like to try to do it in python.
Does anyone have an idea how to do that other than parsing line by line and searching for the relevant line and error code (or know of software that does this)?
I see nothing about errors in mail headers which is sad. | Parse mailer daemons, failure notices | 0 | 0 | 1 | 165 |
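A rough sketch of that iteration using the standard email module -- the daemon check and the address regex are deliberately simplistic assumptions:

```python
import email
import glob
import re

for path in glob.glob("mail/*.eml"):  # hypothetical directory
    with open(path) as f:
        msg = email.message_from_file(f)
    if "mailer-daemon" not in msg.get("From", "").lower():
        continue
    body = msg.get_payload()
    if isinstance(body, list):        # multipart message: flatten crudely
        body = "".join(str(part) for part in body)
    failed = re.findall(r"[\w.+-]+@[\w.-]+", str(body))
    print(path, failed)
```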
32,440,656 | 2015-09-07T14:09:00.000 | 2 | 0 | 1 | 0 | python,list,search,dictionary,set | 32,440,930 | 2 | true | 0 | 0 | Why not use a dictionary of dicts?
The top-level dictionary could be addressed by the username and the other dicts can be the same you have now.
A dictionary can also be scanned through (d.values()) -- the only disadvantage is that you can't depend on the ordering.
Of course that is not DB behavior, but in most cases it is good enough and very fast -- access via dict is O(1).
Of course you could use sqlite -- but when you just want to access records by username and scan the entries, you are much faster this way (in both development and runtime speed). | 1 | 1 | 0 | I would like to store a lot of instances of some data in python. Each record has the following fields:
username, address, salary etc...
The username should be unique. I do a lot of searches.
Currently I am using a list of dictionaries, but when I insert a new item I iterate the list and check the username of each dictionary, which is O(n). Searching is O(n) too. How could I achieve that there is an index on usernames, and make the search time O(logn)? | Python data structure to mimic relational databases | 1.2 | 0 | 0 | 1,181 |
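A sketch of the dict-of-dicts index -- lookups and uniqueness checks by username become O(1) on average (even better than the O(log n) asked for):

```python
users = {}

def insert(username, record):
    if username in users:              # O(1) average uniqueness check
        raise KeyError("username already exists: %s" % username)
    users[username] = record

insert("alice", {"address": "1 Main St", "salary": 50000})
insert("bob",   {"address": "2 Elm St",  "salary": 60000})

print(users["alice"]["salary"])        # O(1) average lookup
for name, rec in users.items():        # full scan; order not guaranteed
    print(name, rec["address"])
```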
32,442,067 | 2015-09-07T15:33:00.000 | 1 | 0 | 0 | 0 | python,bitnami,osqa | 32,460,787 | 1 | true | 1 | 0 | When you install a Bitnami Stack, the files of the application (OSQA in this case) are in /installdir/apps/osqa/htdocs, where you replace installdir with the directory in which you installed the stack. For instance, on Windows it is installed by default at C:\Bitnami\osqa\apps\osqa\htdocs.
In the installdir\osqa directory, htdocs holds the application files (.css, .py, ...) and conf holds the Apache configuration files, so if you want to add more subdirectories or change any directives you should look there. If you want to edit any feature of the application, go to the htdocs directory and edit the Python files to implement your changes. | 1 | 0 | 0 | I've been trying to reach the OSQA pages to modify them. I've installed it on my PC with Bitnami and I cannot find the files of the pages. I couldn't find anything in the wiki and readme files.
Is there a way to edit the pages? Not just css but also I'm going to add more stuff to it.
Thank you very much. | OSQA Reaching to Page Files (Bitnami) | 1.2 | 0 | 0 | 36 |
32,442,196 | 2015-09-07T15:42:00.000 | 0 | 0 | 1 | 0 | python,matlab,visual-studio,debugging | 32,442,514 | 2 | false | 0 | 0 | I really should have just spent 5 minutes looking on my own. In VS it's called Quick Watch; it can be accessed with Shift+F9. | 1 | 0 | 0 | Is there an equivalent to the F9 key in Matlab for Visual Studio when debugging Python? It allows one to evaluate any expression one comes across, as long as the values have already been calculated, of course.
I.e. if I'm debugging something in Matlab, and I come across the statement x = a+b+c, and I'm not sure what a+b is, I can highlight it, press F9, and get the answer.
This is really nice for complicated formulas and checking piece by piece whether it all works out, instead of splitting everything up and creating unnecessary variables to assign the results to. | Does the Visual Studio debugger for Python have equivalent to F9 key matlab | 0 | 0 | 0 | 236 |
32,444,402 | 2015-09-07T18:38:00.000 | 4 | 1 | 1 | 0 | python,pytest | 49,825,909 | 3 | false | 0 | 0 | You can also create an empty pytest.ini file in the current directory.
The config file discovery function will try to find a file matching this name first before looking for a setup.cfg. | 1 | 15 | 0 | I have a setup.cfg file that specifies default parameters to use for pytest. While this is great for running tests on my whole package, I'd like to be able to ignore the setup.cfg options when running tests on individual modules. Is there a way to easily do this? | Is there an option for pytest to ignore the setup.cfg file? | 0.26052 | 0 | 0 | 4,936 |
32,444,840 | 2015-09-07T19:16:00.000 | 2 | 0 | 1 | 0 | ipython,jupyter | 65,136,451 | 4 | false | 0 | 0 | You can switch the cell from 'Code' to 'Raw NBConvert'.
I need it for this case:
I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them)
Thanks | jupyter - how to comment out cells? | 0.099668 | 0 | 0 | 42,040 |
32,444,840 | 2015-09-07T19:16:00.000 | 39 | 0 | 1 | 0 | ipython,jupyter | 52,076,374 | 4 | false | 0 | 0 | Mark the content of the cell and press Ctrl+/. It will comment out all lines in that cell. Repeat the same steps to uncomment the lines of your cell. | 4 | 34 | 0 | Is it possible to comment out whole cells in jupyter?
I need it for this case:
I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them)
Thanks | jupyter - how to comment out cells? | 1 | 0 | 0 | 42,040 |
32,444,840 | 2015-09-07T19:16:00.000 | 16 | 0 | 1 | 0 | ipython,jupyter | 38,432,841 | 4 | false | 0 | 0 | If you switch the cell to 'raw NBConvert' the code retains its formatting, while all text remains in a single font (important if you have any commented sections), so it remains readable. 'Markdown' will interpret the commented sections as headers and change the size and colour accordingly, making the cell rather messy.
On a side note I use this to interrupt the process if I want to stop it - it seems much more effective than 'Kernel --> Interrupt'. | 4 | 34 | 0 | Is it possible to comment out whole cells in jupyter?
I need it for this case:
I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them)
Thanks | jupyter - how to comment out cells? | 1 | 0 | 0 | 42,040 |
32,444,840 | 2015-09-07T19:16:00.000 | 23 | 0 | 1 | 0 | ipython,jupyter | 32,458,871 | 4 | false | 0 | 0 | I think the easiest thing will be to change the cell type to 'Markdown' with M when you don't want to run it and change back to 'Code' with Y when you do. In a short test I did, I did not lose my formatting when switching back and forth.
I don't think you can select multiple cells at once. | 4 | 34 | 0 | Is it possible to comment out whole cells in jupyter?
I need it for this case:
I have a lot of cells, and I want to run all of them, except for a few of them. I like it that my code is organized in different cells, but I don't want to go to each cell and comment out its lines. I prefer to somehow choose the cells I want to comment out, then comment them out in one go (so I could later easily uncomment them)
Thanks | jupyter - how to comment out cells? | 1 | 0 | 0 | 42,040 |
32,445,682 | 2015-09-07T20:32:00.000 | 3 | 0 | 0 | 0 | python,ios,linux,kivy | 32,484,689 | 1 | false | 0 | 1 | I think it's technically possible (though against apple's TOS) to use a virtual machine, though there are many problems you can encounter in setting this up.
It may also be possible to use some online provider, but I don't think I've seen an example of this with kivy in particular.
There's no way to do it natively on linux, due to apple's toolchain requirements. | 1 | 5 | 0 | I have created a .py and .kv file for my game, now I must package it. I, however, do not own a mac. I have a linux and a windows computer, I prefer linux. Is it possible for me to make an Iphone app without using a Mac? | How can I package a Kivy IOS app while on linux? | 0.53705 | 0 | 0 | 1,543 |
32,446,240 | 2015-09-07T21:27:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,oop,python-3.x | 32,446,309 | 5 | false | 0 | 0 | according to Object Oriented Programming, every object created in Python must be an instance of a common parent class
This is not true. It happens that, in Objective-C, Java (and maybe C# too?), things tend to derive from a single superclass, but this is an implementation detail - not a fundamental of OO design.
OO design just needs a common-enough method to find the implementation of a method you wish to call on the object on which you wish to call it. This is usually fundamental to how the language works (C++, C#, Java, Objective-C, Python, etc all do it their own way that makes sense for their language).
In C++, this is done for static types by the linker and for dynamic types (through virtual functions) by a virtual table (vtable) -- no need for a common base class.
In Objective-C, this is done by looking up something in a hash-map on the object's class's structure, then calling a specific method to get the signature of the desired method. This code is nuanced, so everything generally derives from a single, common base-class.
Python technically shouldn't require this, but I think they've made an implementation choice to make everything be a class and every class derive from a common base class. | 1 | 3 | 0 | Everything in Python is an object, and almost everything has attributes and methods. Now, according to Object Oriented Programming, every object created in Python must be an instance of a common parent class. However, this logic just doesn't make sense to me.
Can someone clear this up for me? | Do Python objects originate from a common parent class? | 0.158649 | 0 | 0 | 266 |
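You can observe Python's implementation choice directly: every new-style class (and everything in Python 3) ultimately derives from object.

```python
class Foo(object):
    pass

print(isinstance(42, object))      # True
print(isinstance("text", object))  # True
print(isinstance(len, object))     # True -- even built-in functions
print(issubclass(int, object))     # True
print(Foo.__mro__)                 # (<class 'Foo'>, <class 'object'>)
```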
32,450,413 | 2015-09-08T06:11:00.000 | 1 | 0 | 0 | 0 | python,ruby-on-rails,django,postgresql,ruby-on-rails-4 | 32,450,497 | 1 | false | 1 | 0 | I think you need to maintain migrations in one system (in this case, Rails), because it will be difficult to reconcile migrations between two different apps. What will you do if you don't have access to the other app?
But you can keep something like db/schema.rb for Django tracked in git. | 1 | 0 | 0 | Is it bad practice to have Django perform migrations on a predominantly Rails web app?
We have a RoR app and have moved a few of the requirements out to Python. One of the devs here has suggested creating some of the latest database migration using Django and my gut says this is a bad idea.
I haven't found any solid statements one way or the other after scouring the web and am hoping someone can provide some facts of why this is crazy (or why I should keep calm).
database: Postgres
hosting: heroku
skills level: junior | Rails and Django migrations on a shared database | 0.197375 | 1 | 0 | 254 |
32,455,821 | 2015-09-08T10:50:00.000 | 2 | 0 | 1 | 0 | python,django,heroku,celery | 32,456,170 | 2 | true | 1 | 0 | I think Celery is a good approach. I'm not sure whether you need Redis/RabbitMQ as a broker or could just use MySQL - it depends on your tasks. Celery workers can run on different servers, so Celery supports distributed queues.
Another approach: implement your own queue engine in Python, with the database as a broker and a cron job for executions. But that could be a dirty way, with a lot of pain and bugs.
So I think that Celery is the nicer way to do it (a sketch follows this record). | 1 | 1 | 0 | I'm creating a Django web app which features potentially very long running calculations of up to an hour. The calculations are simulation models built in Python. The web app sends inputs to the simulation model and after some time receives the answer. Also, the user should be able to close his browser after starting the simulation and if he logs in the next day the results should be there.
From my research it seems like I can use Celery together with Redis/RabbitMQ as broker to run the calculation in the background. Ideally I would want to display progress updates using ajax, so that the page updates without a user refresh when the calculation is complete.
I want to host the app on Heroku, so the calculation will also be running on the Heroku server. How hard will it be if I want to move the calculation engine to another server? It might be useful if the calculation engine is on a different server.
So my question is: is my approach above a good one, or what other options can I look at? | Django app with long running calculations | 1.2 | 0 | 0 | 359 |
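A minimal sketch of the Celery approach with Redis as the broker; the broker URL and the model function are assumptions:

```python
from celery import Celery

app = Celery("sims",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def run_simulation(inputs):
    # The potentially hour-long model run executes on a worker process,
    # not in the web request; `expensive_model` is hypothetical.
    return expensive_model(inputs)

# From a Django view: enqueue and return immediately, keeping the task id
# async_result = run_simulation.delay(user_inputs)
# later (e.g. from an ajax poll): async_result.ready(), async_result.get()
```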
32,458,158 | 2015-09-08T12:42:00.000 | 1 | 0 | 1 | 0 | autocomplete,ipython,pycharm | 51,340,397 | 2 | false | 0 | 0 | Ctrl+Space conflicts with Windows' input-language switching shortcut, so you need to change the Keymap setting:
File -> Settings -> Keymap -> Main menu -> Code -> Complete -> Basic | 1 | 5 | 0 | If I start IPython in a terminal and type 'im' and press TAB, the terminal will auto-complete it to 'import'. But when I click the Python console button at the bottom of the PyCharm IDE and the IPython environment shows, typing 'im' and pressing TAB gives no autocompletion.
In PyCharm, it uses pydevconsole.py to create the IPython environment, but I do not know how to change it to enable the autocompletion. | pycharm python console autocompletion | 0.099668 | 0 | 0 | 5,218 |
32,460,120 | 2015-09-08T14:13:00.000 | 0 | 0 | 0 | 0 | python,xpath,sqlite,scrapy | 32,471,731 | 1 | true | 1 | 0 | The problem that you're experiencing is that SQLite3 wants a datatype of "String", and you're passing in a list with a unicode string in it.
change:
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()
to
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()[0].
You'll be left with a string to be inserted, and your SQLite3 errors should go away. Be warned, though: if you ever want to deal with more than one title, this will limit you to the first. You can use whatever method you want to coerce those values into strings, though.
InterfaceError:(sqlite3.InterfaceError)Error binding parameter 0
-probably unsupported type.[SQL:u'INSERT INTO myblog(title) VALUES (?)'] [PARAMETERS:([u'\r\n Accelerated c++\u5b66\u4e60
chapter3 -----\u4f7f\u7528\u6279\u636e \r\n '],)]
My xpath expression is:
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()
Which gives me the following value for item['title']:
[u'\r\n Accelerated c++ \u5b66 \u4e60 chapter3 -----\u4f7f\u7528\u6279\u636e \r\n ']
It's unicode -- why doesn't sqlite3 support it? This blog's title information contains some Chinese. I am tired of wrestling with SQLAlchemy; I've consulted its documentation but found nothing, and I'm out of ideas. | InterfaceError:(sqlte3.InterfaceError)Error binding parameter 0 | 1.2 | 1 | 0 | 454 |
32,460,273 | 2015-09-08T21:06:00.000 | 0 | 0 | 1 | 0 | python,regex,xml,xml-parsing | 32,904,156 | 1 | false | 0 | 0 | No, in Python you cannot change strings in place, as Python strings are immutable (see the sketch below). | 1 | 3 | 0 | I'm trying to directly edit an XML file's text. I'd prefer to find and remove a certain phrase, potentially by using the "sub" function. For particular reasons I'd prefer not to return the edited strings and then find a way to replace the existing XML file text. Is there an easy way to do this? Thanks for any help. | Is it possible to use Regular Expression to alter a string directly instead of returning altered version of the string? | 0 | 0 | 1 | 75 |
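So the idiomatic pattern is to rebind the name to the new string that re.sub returns, e.g.:

```python
import re

with open("data.xml") as f:          # hypothetical file
    text = f.read()

text = re.sub(r"certain phrase", "", text)  # returns a NEW string

with open("data.xml", "w") as f:     # write the edited copy back
    f.write(text)
```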
32,464,697 | 2015-09-08T18:21:00.000 | 0 | 0 | 1 | 0 | python | 35,781,578 | 1 | true | 0 | 0 | Use unidecode before passing non-ASCII text to CoreNLP (see the sketch below).
I put the word 'Víctor' into corenlp.parse. 'Víctor' contains non-ascii character. I would like to get the lemma of 'Víctor'. But when I put corenlp.parse('Víctor'). It gives error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128).
How can I change corenlp setting, so corenlp can handle non-ascii string? | how can corenlp(python wrapper) handle non ascii string | 1.2 | 0 | 0 | 63 |
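A sketch of that transliteration (requires pip install unidecode):

```python
# -*- coding: utf-8 -*-
from unidecode import unidecode

word = u"Víctor"
ascii_word = unidecode(word)   # -> "Victor"
print(ascii_word)

# then pass ascii_word to corenlp.parse(...) instead of the raw unicode
```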
32,467,535 | 2015-09-08T21:28:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pip | 45,997,254 | 2 | false | 0 | 0 | Rename your python.exe for Python 3 to python3. Don't forget to put it on your PATH. Then use python for Python 2 and python3 for Python 3.
Their pips are separate as well: pip for Python 2, pip3 for Python 3. | 1 | 5 | 0 | I have two versions of python installed on my machine (Ubuntu 14.xx LTS) as well as two versions of pip (one for python 2 and one for python 3). When I run pip --version on the command line I get the following output: pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7). I looked into this directory and it has many other things in it. However I couldn't find pip.py in it. How do I run pip for python 3? Any help is appreciated.
32,470,939 | 2015-09-09T04:31:00.000 | 0 | 0 | 1 | 0 | python-3.4 | 32,482,818 | 3 | false | 0 | 0 | You can use len([c for c in address if c.isalpha()]). Here I'm assuming that your string is named address. Here is the definition of isalpha from the Python 3.4 docs:
Return true if all characters in the string are alphabetic and there
is at least one character, false otherwise. Alphabetic characters are
those characters defined in the Unicode character database as
“Letter”, i.e., those with general category property being one of
“Lm”, “Lt”, “Lu”, “Ll”, or “Lo”. Note that this is different from the
“Alphabetic” property defined in the Unicode Standard
We perform this test for each one-character string in the address. Since Python 3 strings are Unicode, this test would also catch letters from other alphabets like Greek, Arabic, or Hebrew. I don't know if that's what you want, but if you only have letters from the English alphabet, it will work fine. | 1 | 0 | 0 | In Python, the len() function does give the exact number of letters that make up a word in a string.
But when I have a string with multiple words, it doesn't give the correct number of letters, because it counts the spaces between the words.
What would be the correct command for the len() function to count the letters correctly in a string with multiple words? | len function - counting the # of letters in words | 0 | 0 | 0 | 1,638 |
32,470,939 | 2015-09-09T04:31:00.000 | 0 | 0 | 1 | 0 | python-3.4 | 32,470,989 | 3 | false | 0 | 0 | Remove all spaces before counting length:
string = string.replace(' ', '') | 2 | 0 | 0 | In Python, the len() function does provide the exact # amount of letters that make up a word in a string.
But when i have a string with multiple words, it doesn't display the correct # amount of letters because it is counting the spaces between the words.
what would be the correct command for the len() function to calculate the number of letters correctly for a string with multiple words ? | len function - counting the # of letters in words | 0 | 0 | 0 | 1,638 |
32,473,596 | 2015-09-09T07:37:00.000 | 2 | 0 | 0 | 0 | design-patterns,python-2.x | 32,482,552 | 1 | false | 0 | 0 | You can create a class that implements the database repeating code and exposes query, delete and update methods like Java Spring JDBCTemplate.
Use this class in your methods to avoid database code duplication. | 1 | 0 | 0 | I am refactoring Python code that have 20 methods. Each of the method updates different database fields. All of these methods repeat some of the code that relates to opening database connection and committing changes to it. The parameters being passed to all of these methods have some common parameters and some not. The number of parameters being passed to these methods varies in numbers.
Each of the method builds appropriate SQL command (search/query) for itself, formats its database name, thatcan vary, and opens a database connection.
Some of the methods prototypes are listed below.
updatelogintime(app, session, request, ...)
disable_user_login(app, request)
getPgdbTableStruct(app, tablename, session)
getPgDbTables(app, session, userAdmin)
getPgData(app, tablename, session, thisOffset, filterResponse, limitToMax)
pg_delete(app, tablename, session, sqlCommand, theRow)
pg_insert(app, tablename, session, theValues)
...
My goal is to simplify the code and avoid repeating some of the code in each method. I wonder if command pattern is OK to use or not.
I am not clear on what the Invoker and Receiver objects would be. Where does the common code go (in the base command or the concrete command)? Do I place the building of each SQL command string, which is unique per method, in a Receiver class?
I am also wondering if there is another solution that is elegant and simple.
Thanks. | Refactoring Python code using command pattern | 0.379949 | 0 | 0 | 215 |
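A rough sketch of such a template class; psycopg2 and the Pg naming are assumptions based on the Pg-prefixed methods in the question:

```python
import psycopg2

class PgTemplate(object):
    """Centralizes connection handling and commits, so each of the
    20 methods only has to build its own SQL string."""

    def __init__(self, dsn):
        self.conn = psycopg2.connect(dsn)

    def query(self, sql, params=None):
        with self.conn:                      # commits or rolls back
            with self.conn.cursor() as cur:
                cur.execute(sql, params or ())
                return cur.fetchall()

    def execute(self, sql, params=None):
        """For INSERT/UPDATE/DELETE statements."""
        with self.conn:
            with self.conn.cursor() as cur:
                cur.execute(sql, params or ())
                return cur.rowcount

    def close(self):
        self.conn.close()
```

Each of the 20 methods then shrinks to building its SQL string and calling query or execute.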
32,474,399 | 2015-09-09T08:16:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-2.x,anaconda,conda | 32,488,976 | 3 | true | 0 | 0 | The Anaconda Python is independent of the other Python. They shouldn't affect each other. You may want to remove the other one from your PATH to make sure you don't accidentally open it. | 2 | 3 | 0 | I have installed Python 2.7.10 version initially. However, I could not install Scipy and PyMc using pip install. Hence, I resorted to installing Anaconda. Now, I find Anaconda environment perfect for my use and I use Spyder IDE and Anaconda command prompt to run my programs.
Now, I want to know, shall I uninstall the Python 2.7.10 which I installed initially [which consumes around 55.7 Mb of my resources]? Or both these are inter-related? | Should I uninstall Python 2.7.10? | 1.2 | 0 | 0 | 2,249 |
32,474,399 | 2015-09-09T08:16:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-2.x,anaconda,conda | 32,474,526 | 3 | false | 0 | 0 | It seems that you can use some libraries correctly with Python 2.7.10, but not others.
My advice is to use one environment, or at most two Python environments (one for Python 2 and one for Python 3). That will reduce your trouble in the future.
Now, I want to know, shall I uninstall the Python 2.7.10 which I installed initially [which consumes around 55.7 Mb of my resources]? Or both these are inter-related? | Should I uninstall Python 2.7.10? | 0 | 0 | 0 | 2,249 |
32,476,460 | 2015-09-09T09:57:00.000 | 2 | 0 | 1 | 0 | python,notepad++ | 32,476,507 | 1 | false | 0 | 0 | There's such an option.
Settings > Preferences > Tab Settings > Replace by space. | 1 | 1 | 0 | So I'm beginning to learn Python and I'm using powershell & Notepad++. The issue I'm having is I noticed when I'm making programs, I get a lot of indentation errors and then I have to go and convert all tabs to spaces, which solves the issue.
My question is, is there anyway to get around this? For instance when you hit tab, it just inserts 4 spaces instead? Because it's really annoying having to consistently do this. I have errors all the time when I have tabs instead of spaces.
Any help is appreciated, thanks guys! | Anyway to get around converting tabs to spaces in Notepad++ for Python? | 0.379949 | 0 | 0 | 479 |
32,478,432 | 2015-09-09T11:33:00.000 | 3 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto,boto3 | 32,478,697 | 1 | false | 0 | 0 | No. EBS volumes are accessible only on the EC2 instance they're mounted on. If you want to download a file directly from S3 to an EBS volume, you need to run your script on the EC2 instance. | 1 | 0 | 0 | I have to write a python script which will copy a file in s3 to my EBS directory, here the problem is I'm running this python script from my local machine. is there any boto function in which I can copy from s3 to EBS without storing in my local? | Copy file from S3 to EBS | 0.53705 | 1 | 1 | 830 |
32,480,423 | 2015-09-09T13:07:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,range | 32,480,515 | 4 | false | 0 | 0 | set((range(0,1))).issubset(range(0,4)) will do it. | 1 | 10 | 0 | How can I simply check if a range is a subrange of another ?
range1 in range2 will not work as expected. | How to check if a range is a part of another range in Python 3.x | 0.099668 | 0 | 0 | 5,088 |
32,488,301 | 2015-09-09T20:08:00.000 | 0 | 0 | 1 | 0 | python,operators | 32,488,330 | 3 | false | 0 | 0 | It assigns the value of the boolean comparison [True or False] to the LHS variable group_index. | 1 | 0 | 0 | I've got the following line:
group_index = apps["special_groups"] == group
From my understanding, group_index is being assigned the value in apps["special_groups"]. Then I see the == operator, but what does it do with the result? Or is it comparing apps["special_groups"] to group first? | How does Python process a statement with both assignment and a comparison? | 0 | 0 | 0 | 108 |
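In other words, the right-hand side is evaluated completely before the assignment happens, because == binds tighter than =:

```python
apps = {"special_groups": "admins"}
group = "admins"

group_index = apps["special_groups"] == group
print(group_index)  # True

# Equivalent to:
tmp = (apps["special_groups"] == group)  # comparison first
group_index = tmp                        # assignment second
```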
32,488,808 | 2015-09-09T20:41:00.000 | 1 | 0 | 0 | 0 | python,statistics,scipy | 32,510,602 | 3 | false | 0 | 0 | You can also use the histogram, piecewise uniform distribution directly, then you get exactly the corresponding random numbers instead of an approximation.
The inverse cdf, ppf, is piecewise linear and linear interpolation can be used to transform uniform random numbers appropriately. | 1 | 3 | 1 | Suppose I have a process where I push a button, and after a certain amount of time (from 1 to 30 minutes), an event occurs. I then run a very large number of trials, and record how long it takes the event to occur for each trial. This raw data is then reduced to a set of 30 data points where the x value is the number of minutes it took for the event to occur, and the y value is the percentage of trials which fell into that bucket. I do not have access to the original data.
How can I use this set of 30 points to identify an appropriate probability distribution which I can then use to generate representative random samples?
I feel like scipy.stats has all the tools I need built in, but for the life of me I can't figure out how to go about it. Any tips? | Use Histogram data to generate random samples in scipy | 0.066568 | 0 | 0 | 2,122 |
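A sketch of that inversion with plain numpy: build the empirical CDF from the 30 bucket percentages, then map uniform draws through the piecewise-linear inverse. The percentages below are random stand-ins for your 30 y values:

```python
import numpy as np

minutes = np.arange(1, 31)                    # bucket centers: 1..30 minutes
percents = np.random.dirichlet(np.ones(30))   # stand-in for your data

edges = np.arange(0.5, 31.5)                  # bucket i covers [i-0.5, i+0.5]
cdf = np.concatenate(([0.0], np.cumsum(percents)))

# Inverse-CDF sampling: uniforms mapped through the piecewise-linear ppf
u = np.random.uniform(0, 1, size=10000)
samples = np.interp(u, cdf, edges)
```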
32,490,281 | 2015-09-09T22:40:00.000 | 1 | 0 | 0 | 0 | python-2.7,sockets,network-programming,ipv6 | 32,490,605 | 2 | false | 0 | 0 | or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node?
Yes | 2 | 0 | 0 | I'm currently able to connect a client computer to two servers on my local network (using Python sockets), but because I'm trying to emulate an external networking set-up, I'd like the client to access the machines externally, i.e. for the data to be routed over the internet as opposed to locally and directly. (This is for research purposes, so it's intentionally inefficient.)
Would using a machine's IPv6 address as the host be sufficient, or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node? | How to connect two computers on the same local network externally with Python sockets? | 0.099668 | 0 | 1 | 147 |
32,490,281 | 2015-09-09T22:40:00.000 | 1 | 0 | 0 | 0 | python-2.7,sockets,network-programming,ipv6 | 32,491,099 | 2 | false | 0 | 0 | If the client has at least two interfaces, you can assign one interface for local networking and the other one for Internet connection.
In addition, you can also try to use virtual interfaces + IP tunnel for the Internet connection. | 2 | 0 | 0 | I'm currently able to connect a client computer to two servers on my local network (using Python sockets), but because I'm trying to emulate an external networking set-up, I'd like the client to access the machines externally, i.e. for the data to be routed over the internet as opposed to locally and directly. (This is for research purposes, so it's intentionally inefficient.)
Would using a machine's IPv6 address as the host be sufficient, or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node? | How to connect two computers on the same local network externally with Python sockets? | 0.099668 | 0 | 1 | 147 |
32,490,561 | 2015-09-09T23:12:00.000 | 4 | 0 | 1 | 0 | python,excel,date,csv,import-from-csv | 32,490,655 | 2 | true | 0 | 0 | Option 3. Import it properly
Use Data, Get External Data, From Text, and when the wizard prompts you, choose the appropriate DMY combination (Step 3 of 3, under Column data format, Date). | 1 | 0 | 0 | I have a CSV file where the date is formatted as yy/mm/dd, but Excel is reading it wrongly as dd/mm/yyyy (e.g. 8th September 2015 is read as 15th of September 2008).
I know how to change the format that Excel outputs, but how can I change the format it uses to interpret the CSV data?
I'd like to keep it to Excel if possible, but I could work with a Python program. | Change date format when importing from CSV | 1.2 | 0 | 0 | 1,588 |
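If you go the Python route instead, a sketch that rewrites the date column into an unambiguous ISO format (the column index and file names are assumptions):

```python
import csv
from datetime import datetime

with open("in.csv", newline="") as src, \
     open("out.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    for row in reader:
        # column 0 holds dates like 15/09/08, meaning 2015-09-08 (yy/mm/dd)
        d = datetime.strptime(row[0], "%y/%m/%d")
        row[0] = d.strftime("%Y-%m-%d")  # Excel reads ISO dates unambiguously
        writer.writerow(row)
```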
32,492,550 | 2015-09-10T03:17:00.000 | 8 | 0 | 0 | 0 | python,machine-learning,scikit-learn,classification | 32,502,985 | 2 | false | 0 | 0 | sample_weight and class_weight have a similar function, that is to make your estimator pay more attention to some samples.
Actual sample weights will be sample_weight * weights from class_weight.
This serves the same purpose as under/oversampling but the behavior is likely to be different: say you have an algorithm that randomly picks samples (like in random forests), it matters whether you oversampled or not.
To sum it up:
class_weight and sample_weight both do 2); option 2) is one way to handle class imbalance. I don't know of a universally recommended way -- I would try 1), 2), and 1) + 2) on your specific problem to see what works best. | 1 | 22 | 1 | I have a class imbalance problem and want to solve it using cost-sensitive learning.
under sample and over sample
give weights to class to use a modified loss function
Question
Scikit-learn has 2 options called class weights and sample weights. Is sample weight actually doing option 2) and class weight option 1)? Is option 2) the recommended way of handling class imbalance? | What is the difference between sample weight and class weight options in scikit learn? | 1 | 0 | 0 | 12,531 |
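A sketch of both knobs on a scikit-learn estimator (the data is synthetic, and SVC is just one estimator that supports both):

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.randn(100, 3)
y = np.array([0] * 90 + [1] * 10)            # 9:1 class imbalance

# Option A: class_weight -- one weight per class
clf = SVC(class_weight={0: 1.0, 1: 9.0}).fit(X, y)

# Option B: sample_weight -- one weight per training sample
w = np.where(y == 1, 9.0, 1.0)
clf2 = SVC().fit(X, y, sample_weight=w)
```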
32,495,332 | 2015-09-10T07:07:00.000 | 1 | 0 | 0 | 0 | python,treeview,ttk | 32,788,603 | 1 | false | 0 | 1 | The Treeview widget doesn't allow you to change the color of individual values. You can change the color of an entire row, but not just a single value within a row. | 1 | 2 | 0 | I'm using a ttk treeview like a table full of testdata and I'd like to highlight now the values which are out of range.
Therefore I'd like to color single values (or the background color of a value) in an item, and not the whole item (row).
Is this possible?
I've found one example where this problem is solved with one treeview per column, but that is not possible here, because I don't want to color the whole column, just one value... | python ttk treeview: Different background or font color for values in item | 0.197375 | 0 | 0 | 1,417 |
32,495,377 | 2015-09-10T07:10:00.000 | 1 | 0 | 0 | 0 | python,django,django-testing,django-migrations | 32,499,038 | 1 | false | 1 | 0 | When you use Django's TestCase, it has an explicit requirement that the database must be set up, which means all migrations must be applied. If you want to test things without the migrations happening, you cannot use TestCase.
Use a testing toolkit that doesn't depend on Django, like pytest, and write your own test code. You can always import Django models and settings explicitly.
Your tests would first run the explicit tests where the database is not created, after which the TestCase-based tests can run.
I'm not sure whether such a setup is possible with manage.py, but you can certainly create your own script (maybe using fabric or plain python) to run tests in your choice of order. | 1 | 1 | 0 | I am trying to create test cases for my migration functions (called with migrations.RunPython). My idea was to create a test case that doesn’t run migrations before starting, neither syncdb to create the database in one step. After this, I’m planning to run the first step, run associated tests, run the second step then its associated tests, etc. Is this possible somehow, or if not, is it possible to test migration functions in any way? | Django testcase without database migrations and syncdb | 0.197375 | 0 | 0 | 642 |
32,497,329 | 2015-09-10T08:56:00.000 | 0 | 1 | 0 | 0 | java,python-2.7,python-3.4 | 41,688,142 | 2 | false | 1 | 0 | You can create a new variable name, for example MY_PYTHON=C:\Python34. Then you need to add that variable to the system PATH variable, such as:
PATH = ...;%MY_PYTHON%
PATH is a default Windows system variable. | 1 | 0 | 0 | I have installed Java and use it from the command line with variable name: PATH and variable value: C:\Program Files\Java\jdk1.8.0_60\bin. Now I want to add Python to the command line as well. What variable name do I give so that it works? I tried Name: PTH and Value: C:\Python34; it's not working. | Python and Java in internal command | 0 | 0 | 0 | 29 |
32,497,691 | 2015-09-10T09:12:00.000 | 4 | 0 | 1 | 0 | python,shell,python-idle | 32,498,911 | 4 | true | 0 | 0 | They are both the same thing, but IDLE is made for writing Python code, so it's better if you write in IDLE. You can also try Notepad++; it's a pretty good program to write code in. | 3 | 5 | 0 | What are the key differences between Python's IDLE and its command line environment? IDLE looks nicer, of course, and has some kind of GUI...
Moreover, is IDLE treated the same as the shell? I mean, the shell is the middle layer between the user and Python's interpreter? | Difference Between Python's IDLE and its command line | 1.2 | 0 | 0 | 37,394 |
32,497,691 | 2015-09-10T09:12:00.000 | 5 | 0 | 1 | 0 | python,shell,python-idle | 32,518,688 | 4 | false | 0 | 0 | I am not sure what your question is, but here is a Windows-7 oriented answer of similarity and difference. In the start menu for Python x.y, you can select 'Python x.y (x bits)' to run python interactive in a text-line-oriented console window provided by Microsoft. The console handles key presses and mouse movements and clicks. When you hit Enter, the console sends the line of text to python, which is waiting for input on sys.stdin. When Python processes the line, it sends output to sys.stdout or sys.stderr. This includes '>>> ' and '... ' prompts. The console displays the text for you to see.
In the start menu, you can instead select 'Idle ...'. Unless you have previously selected a different startup option, Python runs the Idle code, which uses the tkinter module (and hence tcl/tk) to run a graphical user interface that somewhat imitates the console. The tkinter/tk gui handles key and mouse input and displays output. In both cases, some software besides the Python interpreter itself handles interaction between you and Python.
Some important differences:
Cut, copy, and paste work normally. The Windows console is crippled in this respect.
Idle colors input and output. The Windows console does not.
Idle can display all unicode BMP (the first 64K) chars. The Windows console is limited by code pages.
For 1, 2, and 3, the console of other OSes may do as well or better than Idle.
Idle lets you enter, edit, send, and retrieve complete statements. Interactive python with Windows console only works with physical lines.
Update, 2017/11:
Item 1 above: At least on current Win10, cut, copy, and paste work normally.
Item 3 above: At least on Win10, unicode works better in Command Prompt with 3.6+.
New item 5: The IDLE doc section, also available as Help => IDLE Help now has section '3.3. IDLE-console differences'. | 3 | 5 | 0 | What are the key differences between Python's IDLE and its command line environment? IDLE looks nicer, of course, and has some kind of GUI...
Moreover, is IDLE treated the same as the shell? I mean, the shell is the middle layer between the user and Python's interpreter? | Difference Between Python's IDLE and its command line | 0.244919 | 0 | 0 | 37,394 |
32,497,691 | 2015-09-10T09:12:00.000 | 0 | 0 | 1 | 0 | python,shell,python-idle | 34,975,192 | 4 | false | 0 | 0 | Python IDLE is where you write your program/s and Python Shell is where you run your program/s. | 3 | 5 | 0 | What are the key differences between Python's IDLE and its command line environment? IDLE looks nicer, of course, and has some kind of GUI...
Moreover, is IDLE treated the same as the shell? I mean, the shell is the middle layer between the user and Python's interpreter? | Difference Between Python's IDLE and its command line | 0 | 0 | 0 | 37,394 |
32,503,362 | 2015-09-10T13:32:00.000 | 0 | 0 | 1 | 0 | python,python-3.4,python-idle | 32,517,850 | 3 | false | 0 | 0 | You can select and copy a single statement in an Idle editor (or anywhere else, for that matter), switch to the Idle Shell, and paste on the line with the >>> prompt at the bottom. (You can hit Enter to get a clean prompt.) Then hit Return just as if you had entered it into the shell directly. This works for multiline statements. Being able to do this with a menu selection, hotkey, or right-click selection is on my todo list, as you are not the first to ask about this. | 1 | 3 | 0 | In Python, inside IDLE, in the File Editor Window,
How do you run just a selected single line of code in the script, without having the rest of the lines of the program being run? | Python - Running just a single line of code, and not the rest of the Multiple lines in the Script | 0 | 0 | 0 | 19,212 |
32,504,020 | 2015-09-10T14:02:00.000 | 0 | 1 | 0 | 0 | dronekit-python | 33,382,039 | 1 | false | 0 | 0 | And from DKPY2 (just released) there is no MAVProxy dependency, so this should no longer be an issue. | 1 | 0 | 0 | I am using Ardupilot in the plane and a Raspberry Pi running dronekit-python at the ground end across 3DR radio - with the Pi not controlling anything, just providing feedback to the pilot when they breach certain things like the boundary of a rectangular geofence (with increasing alarms the further they get out). So I am downloading only a few variables as frequently as I can (or as new data is available). Can anyone guide me on how to ask mavproxy not to automatically start downloading the whole tlog from the time it is started as I don't need it (other than for occasional debugging - but I can write my own specific log as needed)?
Edit: On digging further it appears to be invoked from lines 985 and 1031 of the mavproxy.py code (call functions set log directories, and write telemetry logs). Will comment them out and see what happens.
Further Edit: That works, once I worked out which version of Mavproxy was being loaded.
Gibbo | Dronekit-Python - Stop Mavproxy Downloading Logs | 0 | 0 | 0 | 175 |
32,508,107 | 2015-09-10T17:22:00.000 | 2 | 0 | 0 | 0 | python,django,django-templates | 32,508,275 | 3 | true | 1 | 0 | The issue isn't that the variables are static copies. It's just that the template language itself doesn't allow you to call methods which take arguments. It's still the same object under the hood you're accessing, you just have no way to express certain programmatic concepts (assignment, passing arguments, etc.) in the language.
To answer your update:
Yes the template layer could update models if the model had a method which modified the object and that method didn't take any arguments. But just because you can do a thing doesn't mean you should do a thing. Don't assume that because the developers of Django haven't absolutely prevented something means it's totally acceptable, but if that's what you really want to do, there's nothing to stop you. | 1 | 1 | 0 | I was asked this on an interview presumably to see if I understood the separation between the Template layer and the Model layer.
My understanding is that template variables are essentially:
A static copy of the instance (such that all the properties can be accessed)
An instance that has all of the methods with arguments "hidden" (such that they can't be called, but methods without arguments can be called)
Therefore, if you had a model with only methods that take no arguments and passed an instance into a template, could you say that it was a static copy of the instance? Is this even a correct way to think about template variables?
UPDATE:
Is the template (view) layer able to update models (e.g. from a custom context processor)? If no, then how is this prevented by the Django framework if it's not making copies of the model instance? If yes, then wouldn't this be a major deviation from typical web framework MVC design where data only flows in one direction from Model to View? | Can Django template variables be model instances? | 1.2 | 0 | 0 | 826 |
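For illustration, a minimal sketch of that restriction (model and field names are hypothetical): a method taking no arguments is callable from a template, one taking arguments is not.

    from django.db import models

    class Article(models.Model):            # hypothetical model
        body = models.TextField()

        def short_body(self):               # no arguments: {{ article.short_body }} works in a template
            return self.body[:100]

        def truncated(self, n):             # takes an argument: templates cannot call this
            return self.body[:n]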
32,508,415 | 2015-09-10T17:40:00.000 | 19 | 0 | 0 | 1 | python,apache-kafka,kafka-consumer-api,kafka-python | 32,510,798 | 1 | false | 0 | 0 | Kafka is a distributed immutable commit log. That said, there is no possibility to update a message in a topic. Once it is there, all you can do is consume it, update it, and produce it to another (or the same) topic again. | 1 | 11 | 0 | I am using a Kafka topic from Python.
Is there any provision for a producer to update a message in a queue in Kafka and append it to the top of the queue again?
According to the Kafka spec, it doesn't seem feasible. | Update message in Kafka topic | 1 | 0 | 1 | 7,086
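As a sketch of the consume-update-reproduce pattern the answer describes (assumes the kafka-python package; the topic name, broker address, and transform() are placeholders):

    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer('my-topic', bootstrap_servers='localhost:9092')
    producer = KafkaProducer(bootstrap_servers='localhost:9092')

    for msg in consumer:
        updated = transform(msg.value)      # "update" the payload; transform() is yours to define
        producer.send('my-topic', updated)  # the new message is appended to the end of the log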
32,509,598 | 2015-09-10T18:52:00.000 | 2 | 0 | 0 | 0 | python,angularjs,django,project,project-structure | 32,510,043 | 2 | false | 1 | 0 | If you're truly dealing with a scaling issue, you want to decouple every single component. That way you can pour resources into the part of your system under the heaviest load. That would involve things like spinning up multiple front-end web/cache servers, compute nodes, etc, etc.
That said, very few companies need to handle that kind of scale, and by the time you do, you'll have a team of developers to do all that for you. (As someone once said, "Scalability is a problem every developer wishes they had").
Until then, have a front-end site and an API. If you write the API well, you'll be able to plug in desktop/mobile clients very easily at a later date. You may also consider making the API public (at least partially) in the future to allow other developers to interact with your product. | 1 | 0 | 0 | I'm currently creating a new application which will be a kind of startup. Users can register and use many tools inside. I hope there will be at least thousands of hits per day.
I'm sure that it will use Python & Django because that's the technology I work with. I'm not sure about project structure and communication in a project like this.
I thought I'd use Django with Tastypie as a backend to serve endpoints, and another app based on Node.js (using Gulp, for example) to host the frontend only (the frontend will use AngularJS & UI Router; it will be a SPA).
Is it a better option to separate the backend and frontend applications, or should I keep all the frontend files (js, css, html) inside Django as static files?
Which solution is better for a potentially huge web application? Maybe both are bad ideas?
Thanks a lot for help! | Is it smart to use django as backend only? | 0.197375 | 0 | 0 | 903 |
32,509,768 | 2015-09-10T19:01:00.000 | 1 | 0 | 0 | 0 | ftp,pythonanywhere | 32,530,008 | 1 | true | 0 | 0 | PythonAnywhere dev here: we don't support regular FTP, unfortunately. If there was a way to tell BO to send the data via an HTTP POST to a website, then you could set up a simple Flask app to handle that -- but I'm guessing from what you say that it doesn't :-( | 1 | 1 | 0 | My organisation uses Business Objects as a layer over its Oracle database so that people like me (i.e. not in the IT dept) can access the data without the risk of breaking something.
I have a PythonAnywhere account where I have a few dashboards built using Flask.
Each morning, BO sends me an email with the csv files of the data that I want. I then upload these to a MYSQL server, and go from there. There is also an option to send it to an FTP recipient...but that's pretty much it.
Is it possible to set up an FTP server on my (paid for) PythonAnywhere account? If I could have those files go to a dir like /data, I could then have a scheduled job to insert them into my DB.
The data is already in the public domain and not sensitive.
Or is there in fact a better way? | Sending csv file via FTP to PythonAnywhere | 1.2 | 1 | 0 | 875
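A minimal sketch of the Flask endpoint idea from the answer, assuming BO can be pointed at an HTTP POST target (the form field name and target directory are placeholders):

    from flask import Flask, request
    from werkzeug.utils import secure_filename

    app = Flask(__name__)

    @app.route('/upload', methods=['POST'])
    def upload():
        f = request.files['report']                               # field name is an assumption
        f.save('/home/youruser/data/' + secure_filename(f.filename))
        return 'OK', 200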
32,511,232 | 2015-09-10T20:40:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,bayesian-networks | 46,094,876 | 1 | false | 0 | 0 | You can use Weka to classify the data using BayesNet from Python. You can train your data using Weka and save your model as XML; then you can write prediction APIs in Python for that saved model. | 1 | 6 | 1 | I need to classify the data using BayesNet in Python. I have used scikit-learn for other classifiers like Random Forests, SVM etc. I know it has Naive Bayes, but I am looking for Bayesian Networks alone. If anyone could help me with it, it would be very helpful. Also, if there is an implementation of it for reference, that would be even more helpful.
Thanks | Does scikit-learn have Bayes Net ? If yes is there an implementation for reference | 0.197375 | 0 | 0 | 1,288 |
32,511,575 | 2015-09-10T21:05:00.000 | 4 | 0 | 0 | 0 | python,django,mkdocs | 32,511,724 | 3 | false | 1 | 0 | Django is just a framework; you need to host your static files and serve them with something like Nginx or Apache, etc. | 1 | 2 | 0 | I tend to write my API documentation in Markdown and generate a static site with MkDocs. However, the site I'd like to host the documentation on is a Django site. So my question (I can't seem to find an answer by Googling around) is: how would I go about hosting the MkDocs static generated site files at a location like /api/v1/docs and have the static site viewable at that URL?
UPDATE:
I should point out I do serve all static files under Apache and do NOT use runserver or Debug mode to serve static files, that's just crazy. The site is completely built and working along with a REST API.
My question was simply how do I get Django (or should I say Apache) to serve the static site (for my API docs) generated by MkDocs under a certain URL. I understand how to serve media and static files for the site, but not necessarily how to serve what MkDocs generates. It generates a completely static 'site' directory with HTML and assets files in it. | Hosting API docs generated with mkdocs at a URL within a Django project | 0.26052 | 0 | 0 | 1,356 |
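One way to wire this up, as a sketch: let Apache serve the generated 'site' directory with an Alias in production, and use Django's static serve view only as a development convenience (the path is a placeholder; the url() list style assumes Django 1.8+):

    # urls.py -- development convenience only; use an Apache Alias in production
    from django.conf.urls import url
    from django.views.static import serve

    urlpatterns = [
        url(r'^api/v1/docs/(?P<path>.*)$', serve,
            {'document_root': '/path/to/mkdocs/site'}),
    ]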
32,513,828 | 2015-09-11T00:52:00.000 | 3 | 0 | 0 | 0 | python,django,django-templates | 32,513,994 | 1 | true | 1 | 0 | You cannot extend from multiple django templates.. it's a single line inheritance.
If you want /templates/index.html to be your base index template, and /templates/hello/index.html to be your index template for the hello part of your application.. then you should have /templates/hello/index.html start with {% extends 'index.html' %}.
The thing to understand with Django templates is that the base template.. the one that is 'extended', is THE template.. and everything in that template will be displayed whether it is within a block tag or outside one.
When you 'extend' a template, any blocks declared which match blocks in the template that was extended, will override the content of those blocks.
Most web sites/applications have a more or less consistent layout from page to page. So, a typical template setup would be to have a master template that contains blocks for all the various parts of the page, with divs and css to arrange the layout the way you want. Put as much as the common html.. the stuff that does not change often from one page to the next, in that base template, and make sure the base template contains blocks for anything you need to fill in when you extend the template. These blocks can contain default html which will be shown if the extending template does not override that block. Or they can be empty.
Then, for each new template variation that you need, extend the master template and override only those blocks that need to be filled in or overridden.
Don't think of the extend as bringing the code of your base template into the template that is extending it.. Django templates do not work like that. Think of the base template as THE template which has all the basic building blocks of your page, and then the extension MODIFIES the blocks of the template that it extends.
If you have a different situation where the pieces of your page need to be defined in different templates and you wish to piece them together, then what you are looking for is the {% include 'templatename' %} tag. | 1 | 1 | 0 | I have my index.html file in /templates directory and I have another index.html located in /templates/hello.
I've created a file named template.html in /templates/hello and it should extend index.html.
Can I make template.html extend both index.html files (from both directories) using the {% extends 'index.html' %} tag in it?
Thanks. | How does Django's extends work? | 1.2 | 0 | 0 | 716 |
32,514,000 | 2015-09-11T01:14:00.000 | 0 | 1 | 0 | 0 | python | 32,514,035 | 1 | false | 0 | 0 | You need to activate the Gmail API in your project in the Google Developer Console to get the API key, which will have a separate billing cost involved. | 1 | 0 | 0 | My flow is such that I already have the access token available in my backend server. So basically I was using the REST APIs until now for getting all user messages. However, I would like to use the Gmail API batch requests to improve performance. I see that it is non-trivial to use python requests to do so. The Gmail API client for Python, on the other hand, does not seem to have an option where I can use the access token to get the results. Rather I need to use the authorization code, which is unavailable to me. Can someone help me solve this?
Thanks,
Azeem | Gmail Python API: Build service using access token | 0 | 0 | 1 | 100 |
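If you already hold a valid access token, a sketch of building the service with it using the oauth2client/apiclient libraries of that era (the access_token variable and user agent string are placeholders):

    import httplib2
    from oauth2client.client import AccessTokenCredentials
    from apiclient.discovery import build

    creds = AccessTokenCredentials(access_token, 'my-user-agent/1.0')
    service = build('gmail', 'v1', http=creds.authorize(httplib2.Http()))
    messages = service.users().messages().list(userId='me').execute()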
32,519,166 | 2015-09-11T08:40:00.000 | 3 | 0 | 0 | 0 | python-2.7,scipy,cygwin,windows-10 | 32,912,189 | 2 | false | 0 | 0 | I suffered for days with the same issue. My final solution was to install scipy0.15.1: pip install scipy==0.15.1. Hope it works for you too. | 1 | 3 | 1 | Installed cygwin64, including Python 2.7, on my new computer running Windows10.
Python runs fine, and adding modules like matplotlib or bitstream goes fine, but when trying to add scipy the build eventually fails, after about an hour and having successfully compiled lots of Fortran and C/C++ files, with:
error: Setup script exited with error: Command "g++ -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.x86_64/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include -Iscipy/spatial/ckdtree/src -I/usr/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c scipy/spatial/ckdtree/src/ckdtree_query.cxx -o build/temp.cygwin-2.2.1-x86_64-2.7/scipy/spatial/ckdtree/src/ckdtree_query.o" failed with exit status 1
I've tried both pip install and easy_install, both result in the same error.
Grateful for any hints on what to try next. | Scipy installation cygwin64 Windows10 fails at late stage | 0.291313 | 0 | 0 | 620
32,524,226 | 2015-09-11T13:04:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,gmail,google-oauth,service-accounts | 32,524,341 | 2 | false | 1 | 0 | A service account isn't you; it's its own user. Even if you could access Gmail with a service account (which I doubt), you would only be accessing the service account's Gmail account (which I don't think it has) and not your own.
To my knowledge, the only way to access the Gmail API is with OAuth2.
Service accounts can be used to access some of the Google APIs, for example Google Drive. The service account has its own Google Drive account; files will be uploaded to its drive account. I can give it permission to upload to my Google Drive account by adding it as a user on a folder in Google Drive.
You can't give another user permission to read your Gmail account, so again the only way to access the Gmail API will be to use OAuth2. | 1 | 0 | 0 | I can't find a solution to authorize server-to-server authentication using Google SDK + Python + Mac OS X + Gmail API.
I would like to test Gmail API integration in my local environment before publishing my application to GAE, but until now I have had no results using the samples I have found in the Gmail API or OAuth API documentation. During all tests I received the same error, "403-Insufficient Permission", when my application was using a GCP service account, but if I converted the application to use a user account everything was fine. | Google App Engine Server to Server OAuth Python | 0 | 0 | 1 | 163
32,525,307 | 2015-09-11T14:00:00.000 | 1 | 0 | 1 | 0 | python,scrapy | 40,071,747 | 6 | false | 1 | 0 | Don't delete __init__.py from any place in your project directory. Just because it's empty doesn't mean you don't need it. Create a new empty file called __init__.py in your spiders directory, and you should be good to go. | 3 | 1 | 0 | from scrapy.spiders import CrawlSpider
and Rule is giving an error.
I am using ubuntu
I have Scrapy 0.24.5 and Python 2.7.6
I tried with tutorial project of scrapy
I am working in PyCharm | ImportError: No module named spiders | 0.033321 | 0 | 1 | 4,316
32,525,307 | 2015-09-11T14:00:00.000 | 0 | 0 | 1 | 0 | python,scrapy | 32,531,370 | 6 | false | 1 | 0 | Make sure scrapy is installed. Try running scrapy when your terminal directory is python, or you can try to update scrapy.. | 3 | 1 | 0 | from scrapy.spiders import CrawlSpider
and Rule is giving an error.
I am using ubuntu
I have Scrapy 0.24.5 and Python 2.7.6
I tried with tutorial project of scrapy
I am working in PyCharm | ImportError: No module named spiders | 0 | 0 | 1 | 4,316
32,525,307 | 2015-09-11T14:00:00.000 | 0 | 0 | 1 | 0 | python,scrapy | 46,567,274 | 6 | false | 1 | 0 | Most likely the tutorial you are following and your version are mismatched.
Simply replace (scrapy.Spider) with (scrapy.spiders.Spider).
The Spider class is located in the spiders module.
and Rule is giving an error.
I am using ubuntu
I have Scrapy 0.24.5 and Python 2.7.6
I tried with tutorial project of scrapy
I am working in PyCharm | ImportError: No module named spiders | 0 | 0 | 1 | 4,316
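A version-tolerant import, as a sketch: in Scrapy 0.24.x these classes live in scrapy.contrib.spiders, while scrapy.spiders only exists from Scrapy 1.0 onward.

    try:
        from scrapy.spiders import CrawlSpider, Rule          # Scrapy >= 1.0
    except ImportError:
        from scrapy.contrib.spiders import CrawlSpider, Rule  # Scrapy 0.24.x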
32,526,329 | 2015-09-11T14:51:00.000 | 0 | 0 | 0 | 0 | python,windows,tkinter,exe,cx-freeze | 32,526,330 | 1 | false | 0 | 1 | Copying the .exe file from the directory and pasting it to the desktop will create a shortcut that has a default reference directory of its new location (the desktop), and this cannot be changed. A way around this is:
-Right click on the desktop
-Select "New > Shortcut"
-Browse for the .exe file or copy the directory into the field and add \PROGRAM.exe after it
-Name the shortcut
This shortcut will direct the execution to the parent file which remains in its necessary directory (C:\...\exe.win32-3.4) rather than trying to reference the desktop as the directory. | 1 | 0 | 0 | I am looking to take a .exe file I've built using cx_Freeze, move it to my desktop, and have the ability to execute it while allowing it to reference the necessary directory. When I copy and paste the application, it tries to find its necessary files on the desktop rather than in the original directory.
Currently, all my files (including the .exe file) for this program are in the directory C:\Users\my_name\PycharmProjects\PROGRAM_DIRECTORY\build\exe.win32-3.4. I would like to take the file PROGRAM.exe, move it to my desktop (for more accessible execution) while still permitting it to reference all of the necessary files in the C:\...\exe.win32-3.4 directory. Is this possible? | Python/ Tkinter/ cx_Freeze: Creating standalone .exe application outside directory | 0 | 0 | 0 | 146 |
32,529,082 | 2015-09-11T17:31:00.000 | 1 | 0 | 1 | 0 | python,pyinstaller,statsmodels | 33,741,486 | 1 | false | 0 | 0 | UPDATE: Ran into this once again and my fix did not work. To solve it, I changed the line
from .tools.sm_exceptions import (ConvergenceWarning, CacheWriteWarning,
IterationLimitWarning, InvalidTestWarning)
In \statsmodels\__init__.py at line 8 to:
from statsmodels.tools.sm_exceptions import (ConvergenceWarning, CacheWriteWarning,
IterationLimitWarning, InvalidTestWarning)
I think I ran in the same issue last week. In my case, I fixed it by adding
import statsmodels.api
in my main script.
The import was done within another module previously.
Hoping this helps. | 1 | 0 | 0 | When I run the .exe, it generates the following in the console output:
C:\Python27\Scripts\dist>SNAPpy279.exe
Traceback (most recent call last):
  File "<string>", line 26, in <module>
  File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "C:\Python27\Scripts\build\SNAPpy279\out00-PYZ.pyz\statsmodels.api", line 19, in <module>
  File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module
    exec(bytecode, module.__dict__)
  File "C:\Python27\Scripts\build\SNAPpy279\out00-PYZ.pyz\statsmodels.__init__", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Any potentially easy solutions for this? Suggestions? | Using PyInstaller (console, onefile) on a .py file that includes statsmodels | 0.197375 | 0 | 0 | 496 |
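An alternative to patching statsmodels itself is to tell PyInstaller about the module it failed to trace. A sketch of the relevant line in a .spec file (the script name comes from the question; the hiddenimports entry is an assumption based on the traceback):

    # SNAPpy279.spec -- only the relevant argument shown
    a = Analysis(['SNAPpy279.py'],
                 hiddenimports=['statsmodels.tools.sm_exceptions'])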
32,529,454 | 2015-09-11T17:57:00.000 | 0 | 1 | 1 | 0 | python,character-encoding,hex,python-3.4 | 32,529,584 | 1 | false | 0 | 0 | Python doesn't print nicely printable characters as escape sequences, but rather as their ASCII counterpart. When you look into an ASCII table, you will see that \x7a or 0x7a corresponds to a lowercase z.
Not all bytes can be printed this way. For example all byte values below 0x20 are unprintable control bytes. | 1 | 1 | 0 | I have an AES256 encrypted message as a string. The message consists of the IV (16 bytes HEX numbers so total 32 characters in the string) and 64 bytes HEX payload (128 characters). Therefore its a single 160 character string consisting of HEX numbers 00, e0, f2 etc. Why is it string? Its received from an another device as a string.
Now I break up the encrypted message to the IV and payload using the code 'iv = encrypted[:16]'. The IV is only zeroes (for testing purposes). If I use iv = bytes.fromhex(iv) I can print the iv as b'\x00\x00\x00... which is what I expect.
But when I do the same for the payload message starting with 9ed57a..., I would expect to get b'\x9e\xd5\x7a... etc, instead I get b'\x9e\xd5z_\xe3.... What do those extra characters (z_) mean and why does the next byte seem to be totally different than what I have in my original string?
The print would not be a problem of course, but when I use AES.decrypt I get garbage, even when I'm sure that I have the same password in both the sending and the receiving end of my setup. If my code is totally wrong, I would very much appreciate some help to correctly implement what I'm trying to do here.
Edit:
I have been trying something else now, I'm trying to turn the string of HEXes into an array of bytes using a loop. It seems to work right until passing it to the decrypting function. I get the message "ValueError: Input strings must be a multiple of 16 in length" which I don't understand since my input string is exactly 64 characters long (when printing len(msg)). The message is all weird characters, but since it's parsed from standard hexadecimal values ranging from 0x00 to 0xff, why doesn't it work? | Converting string of Hex numbers to hex for pycrypto (Python 3.4) | 0 | 0 | 0 | 607 |
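A sketch of the slicing with one likely culprit fixed: the IV is 16 bytes but 32 hex characters, so encrypted[:16] grabs only half of it (the encrypted and key variables and the CBC mode are assumptions):

    from Crypto.Cipher import AES   # PyCrypto

    iv = bytes.fromhex(encrypted[:32])        # 16 bytes == 32 hex characters
    payload = bytes.fromhex(encrypted[32:])   # 64 bytes == 128 hex characters

    cipher = AES.new(key, AES.MODE_CBC, iv)   # key must be 32 raw bytes for AES-256
    plaintext = cipher.decrypt(payload)       # payload is now a multiple of 16 bytes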
32,530,506 | 2015-09-11T19:06:00.000 | 4 | 0 | 1 | 1 | python,macos,pip,homebrew | 32,530,618 | 3 | false | 0 | 0 | Homebrew is a package manager, similar to apt on ubuntu or yum on some other linux distros. Pip is also a package manager, but is specific to python packages. Homebrew can be used to install a variety of things such as databases like MySQL and mongodb or webservers like apache or nginx. | 1 | 48 | 0 | I want to install pillow on my Mac. I have python 2.7 and python 3.4, both installed with Homebrew. I tried brew install pillow and it worked fine, but only for python 2.7. I haven't been able to find a way to install it for python 3. I tried brew install pillow3 but no luck. I've found a post on SO that says to first install pip3 with Homebrew and then use pip3 install pillow. As it happens, I have already installed pip3.
I've never understood the difference, if any, between installing a python package with pip and installing it with Homebrew. Can you explain it to me? Also, is it preferable to install with Homebrew if a formula is available? If installing with Homebrew is indeed preferable, do you know how to install pillow for python 3 with Homebrew?
The first answers indicate that I haven't made myself plain. If I had installed pillow with pip install pillow instead of brew install pillow would the installation on my system be any different? Why would Homebrew make a formula that does something that pip already does? Would it check for additional prerequisites or something? Why is there a formula for pillow with python2, but not as far as I can tell for pillow with python3? | Is there a difference between "brew install" and "pip install"? | 0.26052 | 0 | 0 | 38,886 |
32,531,308 | 2015-09-11T20:03:00.000 | 0 | 0 | 0 | 0 | python,django,django-class-based-views,django-imagekit | 32,639,949 | 1 | true | 1 | 1 | Based on the answer provided by kicker86. I plan to retain single image. | 1 | 1 | 0 | I have just started using django imagekit. I have a list view page where the images are of dimensions 270 x 203 (30 KB approx.) and same images have a size of 570 x 427 (90 KB approx.) in the details view page.
I wanted to know:
Should I create 2 different images for each image, with different sizes and dimensions?
If the answer to the 1st query is yes, how do I do it with Django ImageKit?
PS: I am planning to use django Imagekit on the form level. | django imagekit - list view and details view image | 1.2 | 0 | 0 | 95 |
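One ImageField plus two ImageSpecFields is a common way to get both sizes from a single upload; a sketch with django-imagekit (model and field names are hypothetical):

    from django.db import models
    from imagekit.models import ImageSpecField
    from imagekit.processors import ResizeToFill

    class Item(models.Model):
        image = models.ImageField(upload_to='items')
        list_thumb = ImageSpecField(source='image',
                                    processors=[ResizeToFill(270, 203)],
                                    format='JPEG', options={'quality': 80})
        detail_image = ImageSpecField(source='image',
                                      processors=[ResizeToFill(570, 427)],
                                      format='JPEG', options={'quality': 80})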
32,531,409 | 2015-09-10T21:05:00.000 | 1 | 0 | 1 | 0 | python-3.x,py2exe,cx-freeze,pythonw | 32,531,528 | 1 | true | 0 | 0 | I've used cx-freeze to deploy python apps compiled to Windows .exe files for use by computer-novice users for several years, and it has worked well. You will occasionally run into issues with dependencies you will have to take extra steps for (datetime, for example), but nothing that isn't surmountable. The easiest way to handle it is to install the folder on the computer yourself and create a desktop shortcut to it for the user. That keeps it simple for them. If you are not close to them, you can always use a program like TeamViewer to gain access to their computer, like remote desktop. | 1 | 0 | 0 | I would like to send my Python3 script to my father-in-law and grandmother. Each has their own Windows machine, one is running Windows 7 and the other is running XP.
Not sure how to package it up for them to run on their respective machines. Is there such a method?
My script prompts, while in the IDE environment, for Keyword, path, filename. So there are some inputs, the user has to type in. Not sure if that will affect the portable script creation.
After reading through some responses here on StackOverFlow, I found py2exe does not work with Python 3.
Also Pythonw, suggested here as well, looks very complicated. I don't think either of my relatives could carry out those steps.
Lastly, on the cx_Freeze site I get a uBlock 'Badware risks' filter warning and a big warning window when I visit their website. | Package Python 3 executable that does not require programming knowledge | 1.2 | 0 | 0 | 106
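For reference, a minimal cx_Freeze build script of the kind the answer relies on (all names are placeholders):

    # setup.py
    from cx_Freeze import setup, Executable

    setup(name='mytool',
          version='1.0',
          description='Standalone build for non-programmers',
          executables=[Executable('mytool.py', base='Console')])
    # then run:  python setup.py build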
32,531,468 | 2015-09-11T20:15:00.000 | 0 | 0 | 1 | 0 | python,regex | 32,531,657 | 2 | false | 0 | 0 | I'd recommend cleaning the input up a bit first. Get rid of all the white space, or at least the spaces. Check for an equal sign and see if there's a 0 on one side or the other. If there is, you can remove it. If not, you have to decide how clever you want to be.
Get it close to a format you want to deal with, in other words. Then you can check if they've entered a valid re.
You also need to decide if you can handle shuffling the order of the terms around and letters other than x. Also whether you want the ^ or ** or just x2.
You probably want to grab each term individually (all the terms between the + or -) and decide what kind of term it is.
In other words there's a lot to do before the re expression.
Incidentally, have you seen SymPy? | 1 | 0 | 0 | I am currently trying to write a quadratic equation solver. I searched on the web for how people did their versions, but all of them implied that the user entered the coefficients, which, although the easiest way, I really hate.
I want the user to enter the whole equation, let the program work out which are the coefficients, and calculate the solution. I discovered the concept of regex, and automatically, the re module.
I understand how to implement the quadratic formula to solve the problem, but the problem is that I don't know which function I should use, and how to get the coefficients from the input.
I want the regex to be like:
\d(\sx\s\(\^)\s2/x\^2)(\s\+\s)\dbx(\s\+\s)\d = 0
To find the coefficients in this:
ax^2 + bx + c = 0
I am aware that the regex sucks, because I only started to understand it yesterday, so you can also tell me how to improve that.
EDIT:
Let's clarify what I exactly want.
How to improve the regex that I tried doing above?
What Python function should I use so that I can only have the coefficients?
How can I take the groups and turn them into usable integers, assuming that it doesn't store those groups? | Python: Regular expression for quadratic equations | 0 | 0 | 0 | 2,333 |
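A sketch of one possible coefficient extraction, assuming integer coefficients in the strict form ax^2 + bx + c = 0 once spaces are stripped; re.match pulls out a, b, c as groups and cmath handles complex roots:

    import re, cmath

    def solve(equation):
        eq = equation.replace(' ', '')
        m = re.match(r'([+-]?\d*)x\^?2([+-]\d*)x([+-]\d+)=0$', eq)
        a, b, c = (int(g) if g not in ('', '+', '-') else int(g + '1')
                   for g in m.groups())
        d = cmath.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    print(solve('x^2 + 5x + 6 = 0'))   # ((-2+0j), (-3+0j))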
32,533,228 | 2015-09-11T22:53:00.000 | 2 | 0 | 0 | 0 | python,django,pickle | 32,554,191 | 2 | true | 1 | 0 | While it's not really the "answer" to my question, the best solution I found was to implement a dehydrate() method on the model, allowing me to alter the model's __dict__ and store that instead.
On recovery from the cache, it's as simple as using the ** syntax and you'll have your original model back. | 1 | 1 | 0 | I'm currently trying to pickle certain Django models. I created a __getstate__ and __setstate__ method for the model, but it looks like pickle.dumps() is using the default __reduce__ instead.
Is there a way to force use of __getstate__ and __setstate__ ? If not, what is the best way to overwrite __reduce__ ?
I am currently using Django 1.6 and Python 2.7.6, if that helps.
In essence, I am using get and set state to remove two fields before pickling in order to save space. | How to Pickle Django Model | 1.2 | 0 | 0 | 1,634 |
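A sketch of the dehydrate pattern from the accepted answer (the model, its fields, and the cache key are all hypothetical; which __dict__ entries you strip depends on your model):

    import pickle
    from django.core.cache import cache
    from django.db import models

    class Report(models.Model):              # hypothetical model
        title = models.CharField(max_length=100)

        def dehydrate(self):
            state = self.__dict__.copy()
            state.pop('_state', None)        # drop Django bookkeeping before pickling
            return state

    # report is an existing instance
    cache.set('report-1', pickle.dumps(report.dehydrate()))
    restored = Report(**pickle.loads(cache.get('report-1')))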
32,533,279 | 2015-09-11T23:00:00.000 | -1 | 0 | 1 | 0 | python,reference,namespaces,maya | 32,544,758 | 2 | false | 0 | 0 | Just set the namespace and then rename the object; as long as the namespace is set, the newly renamed object will be in the active namespace. You can also explicitly specify the namespace in the rename command. If you want to rename something into the root namespace, use a leading colon. | 1 | 0 | 0 | I have an animation file which has a reference namespace "rig:". I need to remove the namespace before I export it, so I use the following code to remove the namespace, and it works:
cmds.namespace(removeNamespace=ns[0], mergeNamespaceWithRoot=True)
Now, the problem is I need to add the namespace back, but I couldn't find out how. I tried to use add or set namespace, but it only adds the namespace if I create a new object; it won't add it back to my existing nodes. Anyone have ideas how it works? Million thanks!! | Maya: Add namespace back to my node after remove | -0.099668 | 0 | 0 | 2,205
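A sketch of adding a namespace back to existing nodes with maya.cmds: recreate the namespace if needed, then give each node a namespace-qualified name (node names are hypothetical):

    import maya.cmds as cmds

    if not cmds.namespace(exists='rig'):
        cmds.namespace(add='rig')
    for node in ['pCube1', 'pSphere1']:       # hypothetical node names
        cmds.rename(node, 'rig:' + node)      # a qualified name moves the node into 'rig'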
32,535,082 | 2015-09-12T04:11:00.000 | 2 | 0 | 0 | 0 | python,mayavi,mayavi.mlab | 32,535,083 | 1 | true | 0 | 0 | Mlab can auto-generate code for anything that can be changed via the GUI interface - this is an extremely efficient way to get the syntax you need.
From the figure window, click the Mayavi icon in the upper left corner, then click the red button in the "Mayavi Pipeline" window. This will open a window that prints all the commands corresponding to changes you make with the GUI, which can be copied into your script. This is much faster than googling/asking Stack Overflow. | 1 | 1 | 0 | I have plotted something with Mayavi/mlab, and I can't remember what the proper syntax is
to set the camera position,
change the view angle, or
turn parallel projection on/off
etc
Is there a faster way to get this syntax than wading through the mlab documentation, or asking another question on Stack Overflow? | How can I quickly find the syntax for manipulating my mlab/mayavi plot? | 1.2 | 0 | 0 | 94 |
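For quick reference, the three operations from the question as a sketch (the numbers are arbitrary):

    from mayavi import mlab

    mlab.view(azimuth=45, elevation=60, distance=10,
              focalpoint=(0, 0, 0))                   # camera position and view angle
    mlab.gcf().scene.parallel_projection = True       # parallel projection on/off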
32,536,548 | 2015-09-12T07:56:00.000 | 1 | 0 | 1 | 1 | python,astronomy,pyephem | 32,557,689 | 1 | false | 0 | 0 | Alas — I am not aware of any settings in the libastro library, the PyEphem is based on, that would allow the use of alternative time scales. | 1 | 1 | 0 | Is there a way to make PyEphem give times in Dynamical Time (Terrestrial Time), without using delta_t() every time?
According to the documentation, PyEphem uses Ephemeris Time. So isn't there a way to just 'switch off' the conversion to UTC? | Using Terrestrial Time in PyEphem | 0.197375 | 0 | 0 | 110
32,538,824 | 2015-09-12T12:32:00.000 | 1 | 0 | 1 | 0 | python,idl-programming-language | 32,551,381 | 1 | true | 0 | 0 | There is a Python bridge in IDL 8.5 which would allow you import your visualization routines into Python. Or you could port your visualization routines to a Python visualization library; the most common and general vis library in Python is matplotlib. | 1 | 1 | 0 | I have a number of IDL visualization routines which I would prefer to be using in Python. These IDL visualization facilities provided a point-and-click interface with the cursor to see values and positions of pixels.
Is there a Python equivalent to this? If I were to rewrite these routines, how could I provide the same type of visualization facilities? | How to use IDL visualization routines in Python | 1.2 | 0 | 0 | 79 |
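If the routines are ported to matplotlib, the point-and-click pixel inspection can be reproduced with an event handler; a sketch with placeholder data:

    import numpy as np
    import matplotlib.pyplot as plt

    img = np.random.rand(100, 100)                    # stand-in for your image data
    fig, ax = plt.subplots()
    ax.imshow(img, interpolation='nearest')

    def on_click(event):
        if event.inaxes is ax:
            x, y = int(event.xdata), int(event.ydata)
            print('pixel (%d, %d) = %.3f' % (x, y, img[y, x]))

    fig.canvas.mpl_connect('button_press_event', on_click)
    plt.show()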
32,539,832 | 2015-09-12T14:17:00.000 | 0 | 0 | 1 | 0 | python-2.7,ipython-notebook,jupyter | 68,754,045 | 6 | false | 0 | 0 | I constructed this a while ago using jupyter nbconvert, essentially running a notebook in the background without any UI:
nohup jupyter nbconvert --ExecutePreprocessor.timeout=-1 --CodeFoldingPreprocessor.remove_folded_code=False --ExecutePreprocessor.allow_errors=True --ExecutePreprocessor.kernel_name=python3 --execute --to notebook --inplace ~/mynotebook.ipynb > ~/stdout.log 2> ~/stderr.log &
timeout=-1 no time out
remove_folded_code=False if you have Codefolding extension enabled
allow_errors=True ignore errored cells and continue running the notebook to the end
kernel_name if you have multiple kernels, check with jupyter kernelspec list | 1 | 58 | 0 | I use Jupyter Notebook to run a series of experiments that take some time.
Certain cells take way too much time to execute, so it's normal that I'd like to close the browser tab and come back later. But when I do, the kernel stops running.
I guess there is a workaround for this but I can't find it | Keep Jupyter notebook running after closing browser tab | 0 | 0 | 0 | 36,665 |
32,540,787 | 2015-09-12T15:55:00.000 | 0 | 0 | 1 | 0 | python-3.x,tkinter,virtualenv,yum,fedora-21 | 32,723,786 | 1 | false | 0 | 1 | I was using the python 3.3.2 interpreter. It turns out the default packages installed when running the command yum install python3-tkinter are set to work with the python 3.4.1 interpreter. Configuring my virtualenv to use the python 3.4.1 interpreter proved to be the solution, as the python interpreter was then able to find the required libraries in its path. | 1 | 0 | 0 | I'm using Fedora 21. Installed the python3-tkinter package using yum install python3-tkinter. The package gets stored in the /usr/lib64/python3.4 directory. Is there a way to use pip to install tkinter?
I have a virtualenv set up with python3. When I try to run my program within that virtualenv I get:
ImportError: No module named 'tkinter'.
Does it make sense to copy the package directories from /usr/lib64/python3.4 to the site_packages folder associated with the virtualenv? | Installation of tkinter on python3 | 0 | 0 | 0 | 1,536 |
32,543,419 | 2015-09-12T20:32:00.000 | 2 | 0 | 0 | 0 | python,django,django-settings | 32,543,440 | 2 | false | 1 | 0 | That is not a Django setting.
It's perfectly good practice to define your own project-specific settings inside settings.py, and that is presumably what the original developer did here. | 1 | 0 | 0 | Despite googling I can't find any documentation for the Django HOST_DOMAIN setting in the settings.py.
I am going through a settings.py file I have been given and this is the only part of the file I am not 100% clear on. | What is the Django HOST_DOMAIN setting for? | 0.197375 | 0 | 0 | 44 |
32,543,608 | 2015-09-12T20:54:00.000 | 21 | 0 | 1 | 0 | python,performance,list,deque,cpython | 32,543,863 | 3 | false | 0 | 0 | Is there performance difference?
Yes. deque.popleft() is O(1) -- a constant time operation. While list.pop(0) is O(n) -- linear time operation: the larger the list the longer it takes.
Why?
CPython list implementation is array-based. pop(0) removes the first item from the list and it requires to shift left len(lst) - 1 items to fill the gap.
deque() implementation uses a doubly linked list. No matter how large the deque, deque.popleft() requires a constant (limited above) number of operations. | 1 | 47 | 0 | deque.popleft() and list.pop(0) seem to return the same result. Is there any performance difference between them and why? | deque.popleft() and list.pop(0). Is there performance difference? | 1 | 0 | 0 | 53,977 |
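The difference is easy to measure; a sketch (exact numbers depend on the machine, but list.pop(0) slows down with list size while deque.popleft() does not):

    import timeit

    setup = ('from collections import deque; '
             'd = deque(range(100000)); l = list(range(100000))')
    print(timeit.timeit('d.popleft()', setup=setup, number=50000))  # constant time per pop
    print(timeit.timeit('l.pop(0)', setup=setup, number=50000))    # far slower: each pop shifts the list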
32,544,925 | 2015-09-12T23:49:00.000 | 1 | 0 | 1 | 0 | python,numpy,matplotlib,scikit-learn,anaconda | 32,574,361 | 1 | false | 0 | 0 | The MacHD/Library/Frameworks/python.framework/versions/3.4/site-packages/sklearn is for Python 3.4 (note the 3.4 in the path) and the MacHD/Library/Python/2.7/ is for Python 2.7. The packages for each are independent of each other. | 1 | 0 | 1 | Beginner here, please be gentle! I’m receiving an error that reads
ImportError: No module named sklearn when using pycharm.
I’m trying to import matplotlib, numpy, and sklearn. I’ve downloaded scikit_learn. I’ve also downloaded anaconda.
I have “two” pythons. Looks like this…
MacHD/Library/Frameworks/python.framework/versions/3.4/site-packages/sklearn
MacHD/Library/Python/2.7/ ... in here is pip and scikit_learn
The strange thing is that matplotlib and numpy work but not sklearn. How can I figure out what's wrong? | Cannot import scikit-learn | 0.197375 | 0 | 0 | 809 |
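One quick way to see which interpreter PyCharm is actually running, and therefore where pip needs to put scikit-learn; a sketch:

    import sys
    print(sys.executable)   # the interpreter PyCharm uses
    print(sys.path)         # the directories it searches on import
    # then install into that exact interpreter, e.g.:
    #   /path/to/that/python -m pip install scikit-learn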
32,545,277 | 2015-09-13T00:53:00.000 | 0 | 0 | 0 | 1 | python-2.7,cassandra,centos6 | 32,554,637 | 1 | false | 0 | 0 | Changing the Python installation to SCL fixed the problem. I uninstalled python2.7 by cleaning out /usr/local, removing all Python-2.7-related things in bin and lib. I then reinstalled python27 using the following sequence:
yum install centos-release-SCL
yum install python27
scl enable python27 bash
Installed pip using "easy_install-2.7 pip"
Now I can install the cassandra driver... | 1 | 0 | 0 | I am trying to install using pip2.7 install cassandra-driver and it fails with a long stack trace. The error is RuntimeError: maximum recursion depth exceeded while calling a Python object. I can install a number of things like scikit etc. just fine. Is there something special needed? Here is the tail of the stack trace.
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 837, in obtain
return installer(requirement)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 272, in fetch_build_egg
dist = self.__class__({'script_args':['easy_install']})
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 225, in __init__
_Distribution.__init__(self,attrs)
File "/usr/local/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 257, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2029, in require
working_set.resolve(self.dist.requires(self.extras),env,installer))
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 579, in resolve
env = Environment(self.entries)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 748, in __init__
self.scan(search_path)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 777, in scan
for dist in find_distributions(item):
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 1757, in find_on_path
path_item,entry,metadata,precedence=DEVELOP_DIST
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2151, in from_location
py_version=py_version, platform=platform, **kw
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2128, in __init__
self.project_name = safe_name(project_name or 'Unknown')
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 1139, in safe_name
return re.sub('[^A-Za-z0-9.]+', '-', name)
File "/usr/local/lib/python2.7/re.py", line 155, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/usr/local/lib/python2.7/re.py", line 235, in _compile
cachekey = (type(key[0]),) + key
RuntimeError: maximum recursion depth exceeded while calling a Python object | pip2.7 cassandra-driver installation on centos 6.6 fails with recursion depth issue | 0 | 0 | 0 | 480 |
32,546,564 | 2015-09-13T05:17:00.000 | 1 | 0 | 1 | 0 | python,list,indexing | 32,546,605 | 2 | false | 0 | 0 | Assume your list is [A, B, C, D, E] and you are given 3, 1 as your indices.
You should swap the item at index 3 (D) and index 1 (B), such that the list is now:
[A, D, C, B, E]
I am assuming that he wants you to do this swap 'in place'. By that, I mean he does not want you to create a new list.
I would confirm with your professor to make sure you do not lose points. | 1 | 0 | 0 | I'm super new to python, in a class, and the current assignment is:
"Write a function that takes a list, index to list, and another index to list and swaps the data in the list at the two indexes of the single list."
I have no idea what my teacher is trying to get me to do.
Can anyone explain this in layman's terms? | What does "index to list" mean? | 0.099668 | 0 | 0 | 1,349 |
32,546,992 | 2015-09-13T06:35:00.000 | 4 | 0 | 0 | 0 | python,scikit-learn | 32,551,334 | 1 | true | 0 | 0 | RandomForests in scikit-learn don't handle missing values at the moment [as of 0.16 and the upcoming 0.17], and you do need to impute the values beforehand. | 1 | 2 | 1 | I have data with missing values and I would like to build a classifier for it. I know that scikit-learn will help you impute values for the missing data. However, in my case it is not clear this is the right thing to do or even easy. The problem is that the features in the data are correlated, so it's not obvious how to do this imputation in a sensible way.
I know that in R some of the classifiers (decision trees, random forests) can directly handle missing values without your having to do any imputation.
Can any of the classifiers in scikit learn 0.16.1 do likewise and if so, how should I represent the missing values to help it?
I have read discussions on the scikit-learn GitHub about this topic but I can't work out what has actually been implemented and what hasn't. | Which classifiers handle missing values in scikit learn 0.16.1 | 1.2 | 0 | 0 | 2,290
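In scikit-learn 0.16.x the usual workaround is to impute before the forest; a sketch with a pipeline (X and y are assumed to be your arrays, with missing entries as NaN):

    from sklearn.preprocessing import Imputer
    from sklearn.pipeline import make_pipeline
    from sklearn.ensemble import RandomForestClassifier

    clf = make_pipeline(Imputer(missing_values='NaN', strategy='mean'),
                        RandomForestClassifier(n_estimators=100))
    clf.fit(X, y)   # the Imputer fills the NaNs before the forest sees them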
32,551,277 | 2015-09-13T15:28:00.000 | 3 | 0 | 0 | 0 | python,google-chrome,selenium | 33,557,990 | 1 | true | 0 | 0 | It turns out that I had to unzip the archive, and instead of typing the path to the folder as an argument, I had to supply the .exe file in the path as well. Maybe it was an intermittent thing, or something that only didn't work when I posted the question. | 1 | 4 | 0 | I have always used Firefox in webdriver. I want to try using Chrome. I have downloaded chromedriver and included it in the Path variable. However, this code returns an error:
>>> webdriver.Chrome()
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
I have also tried including the path:
>>> webdriver.Chrome('C:\Python34\chromedriver_win32.zip')
OSError: [WinError 193] %1 is not a valid Win32 application
What is the problem here? I am sorry if I am doing something completely wrong or my problem seems hard to solve. Any help will be appreciated. I have also searched all over the internet, but I have not found anything yet.
Seriously, can't anybody solve this problem? | Using webdriver to run in Chrome with Python | 1.2 | 0 | 1 | 7,011 |
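Spelling out what eventually worked: unzip the download and pass the full path to chromedriver.exe itself, not the .zip or its folder (the path is a placeholder):

    from selenium import webdriver

    driver = webdriver.Chrome(executable_path=r'C:\Python34\chromedriver.exe')
    driver.get('http://example.com')
    driver.quit()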
32,551,405 | 2015-09-13T15:40:00.000 | 0 | 0 | 0 | 0 | python,shell,python-idle | 38,809,051 | 1 | false | 0 | 1 | IDLE normally runs in two processes: one to run the graphical user interface, one to run your code. The processes currently communicate through a socket. Each process polls the socket for input 20 times a second. The user process also calls tk update in case the user is using tkinter but is not running mainloop.
I have a 6-core Pentium with gigabytes of memory and SSD main drive running 64-bit Win10. I currently have 5 IDLE shells running and 5 corresponding user processes: installed 2.7, 3.4, 3.5, 3.6, and development build of 3.6. Task Manager mostly shows each at 0%. One occasionally bumps up to .7%. This would be a much higher % on a single core machine. This seems to happen more often with the 3.4 IDLE.
I can only speculate that some combination of less memory, slower memory, less CPU cache space, much slower swap to disk, old single-core laptop chip with fewer instructions, and older OS with fewer system calls results in the difference.
Does your laptop have its maximum memory?
How often when using IDLE is the 12% an actual problem? (I guess that partly depend on time on battery versus power cord.) | 1 | 1 | 0 | I have Python 3.4.3 installed on an older laptop (Pentium 4) with Windows XP.
It seems that half the time when the Python shell is open, the CPU usage goes up to 12-13%.
It is the Python shell itself, not any Python script it may have launched, and not the IDLE text editor.
I have yet to figure out the pattern when it goes up and when it does not. CPU usage actually goes to zero when I begin to debug a tkinter based script.
The shell window is opened by clicking on IDLE shortcut, if it makes any difference.
I have resorted to closing the shell until I need it, while I am working in IDLE editor.
Any idea why this happens and can this be remedied? | Python Shell High CPU Usage | 0 | 0 | 0 | 2,211 |
32,551,690 | 2015-09-13T16:08:00.000 | 3 | 0 | 1 | 0 | python,module,installation,package | 32,551,952 | 2 | false | 0 | 0 | So why do we need programs such as pip to 'install' Python modules? Why not just download the files, put them in our project's folder and import them?
It's just meant to facilitate the installation of software without having to bundle all the dependencies or ask the user to download the files.
You can type pip install mysoftware and that will also install the required dependencies. You can also upgrade a software easily.
What exactly does it mean to 'install' a module or a package? And what exactly does pip do?
It will copy the files into a directory that is in your Python path. This way you will be able to import the package without having to copy the directory into your project.
So why do we need programs such as pip to 'install' Python modules? Why not just download the files, put them in our project's folder and import them?
What exactly does it mean to 'install' a module or a package? And what exactly does pip do?
Are things different on Windows and on Linux? | What is installing Python modules or packages? | 0.291313 | 0 | 0 | 667 |
32,551,690 | 2015-09-13T16:08:00.000 | 0 | 0 | 1 | 0 | python,module,installation,package | 32,552,298 | 2 | false | 0 | 0 | With your proposal, for each and every project you would have to download the required modules as dependencies. You would have to download them again and again and ship them with your project, which is not very practical, though some platforms like Node.js do it.
What pip does is keep the modules you installed in /usr/lib/python*/site-packages/, which is included in your Python path. So, when you try to import a module or package, Python checks site-packages to see if it exists. If it exists, then this code will be used with your project. If not, you will get an error.
So why do we need programs such as pip to 'install' Python modules? Why not just download the files, put them in our project's folder and import them?
What exactly does it mean to 'install' a module or a package? And what exactly does pip do?
Are things different on Windows and on Linux? | What is installing Python modules or packages? | 0 | 0 | 0 | 667 |
32,553,773 | 2015-09-13T19:36:00.000 | 0 | 0 | 0 | 0 | python,sql,postgresql,csv,psycopg2 | 32,553,942 | 1 | false | 0 | 0 | Would you know how to do it if there were only those two columns in the CSV file?
If yes, then the simplest solution is to transform the CSV prior to importing it into Postgres. | 1 | 0 | 0 | I am trying to read data from a csv file into a postgres table. I have two columns in the table, but there are four fields in the csv data file. I want to read only two specific columns from the csv into the table. | how to copy specific columns from CSV file to postgres table using psycopg2? | 0 | 1 | 0 | 509
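A sketch of that transformation done in-process with copy_expert, assuming Python 3 and that the two wanted fields are the first and third CSV columns (the DSN, table, and column names are placeholders):

    import csv, io, psycopg2

    conn = psycopg2.connect('dbname=mydb user=me')
    buf = io.StringIO()
    writer = csv.writer(buf)
    with open('data.csv') as f:
        for row in csv.reader(f):
            writer.writerow([row[0], row[2]])     # keep only the two wanted fields
    buf.seek(0)
    conn.cursor().copy_expert(
        "COPY mytable (col_a, col_b) FROM STDIN WITH CSV", buf)
    conn.commit()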
32,553,806 | 2015-09-13T19:39:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,tf-idf,text-classification | 32,555,599 | 2 | true | 0 | 0 | What I think you have is an unsupervised learning application. Clustering. Using the combined X & Y dataset, generate clusters. Then overlay the X boundary; the boundary that contains all X samples. All items from Y in the X boundary can be considered X. And the X-ness of a given sample from Y is the distance from the X cluster centroid. Something like that. | 2 | 1 | 1 | I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A.
In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X.
This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class.
A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class?
FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple. | If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A? | 1.2 | 0 | 0 | 264 |
32,553,806 | 2015-09-13T19:39:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,tf-idf,text-classification | 32,604,626 | 2 | false | 0 | 0 | The easiest thing to do is what was already proposed - clustering. More specifically, you extract a single feature vector from set X and then apply K-means clustering to the whole X & Y set.
ps: Be careful not to confuse k-means with kNN (k-nearest neighbors). You are able to apply only unsupervised learning methods. | 2 | 1 | 1 | I have a collection X of documents, all of which are of class A (the only class in which I'm interested or know anything about). I also have a much larger collection Y of documents that I know nothing about. The documents in X and Y come from the same source and have similar formats and somewhat similar subject matters. I'd like to use the TF-IDF feature vectors of the documents in X to find the documents in Y that are most likely to be of class A.
In the past, I've used TF-IDF feature vectors to build naive Bayes classifiers, but in these situations, my training set X consisted of documents of many classes, and my objective was to classify each document in Y as one of the classes seen in X.
This seems like a different situation. Here, my entire training set has the same class (I have no documents that I know are not of class A), and I'm only interested in determining if documents in Y are or are not of that class.
A classifier seems like the wrong route, but I'm not sure what the best next step is. Is there a different algorithm that can use that TF-IDF matrix to determine the likelihood that a document is of the same class?
FYI, I'm using scikit-learn in Python 2.7, which obviously made computing the TF-IDF matrix of X (and Y) simple. | If my entire training set of documents is class A, how can I use TF-IDF to find other documents of class A? | 0.099668 | 0 | 0 | 264 |
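A sketch of the clustering route both answers suggest, using scikit-learn (docs_x and docs_y are assumed lists of raw document strings; n_clusters is a guess to tune):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    tfidf = TfidfVectorizer().fit_transform(docs_x + docs_y)
    labels = KMeans(n_clusters=10).fit_predict(tfidf)

    a_clusters = set(labels[:len(docs_x)])        # clusters that contain known class-A docs
    likely_a = [doc for doc, lab in zip(docs_y, labels[len(docs_x):])
                if lab in a_clusters]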
32,556,233 | 2015-09-14T01:39:00.000 | 0 | 1 | 0 | 0 | dronekit-python | 32,556,732 | 1 | false | 0 | 0 | Requesting individual packets should work, but that was never meant to be requested lots of times per second.
In order to get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second, and will then send whichever packet is associated with it, automatically. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1. This defines the number of Attitude packets sent per second. The default is 4. Try increasing that parameter to 10. | 1 | 0 | 0 | I have Ardupilot on plane, using 3DR Radio back to Raspberry Pi on the ground doing some advanced geo and attitude based maths, and providing audio feedback to pilot (rather than looking to screen).
I am using Dronekit-python, which in turn uses Mavproxy and Mavlink. What I am finding is that I am only getting new attitude data to the Pi at about 3hz - and I am not sure where the bottleneck is:
3DR radio is running at 57.6 kbps and all happy
I have turned off the automatic push of logs from Ardupilot down to Pi (part of Mavproxy)
The Pi can ask for Attitude data (roll, yaw etc.) through the DroneKit Python API as often as it likes, but only gets new data (ie, a change in value) about every 1/3 second.
I am not deep enough inside the underlying architecture to understand what the bottleneck may be -- can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from Mavlink from what I have read)? Or latency across the combination of Mavproxy, Mavlink and DroneKit? Or is there some setting inside Ardupilot or Telemetry that could be driving this.
I am aware this isn't necessarily a DroneKit issue, but not really sure where it goes as it spans quite a few components. | ArduPilot, Dronekit-Python, Mavproxy and Mavlink - Hunt for the Bottleneck | 0 | 0 | 0 | 1,082 |
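A sketch of raising the stream rate from the ground side with the DroneKit 2 API (the port, baud rate, and the SR0 vs SR1 prefix depend on which telemetry link the radio is on):

    from dronekit import connect

    vehicle = connect('/dev/ttyUSB0', baud=57600, wait_ready=True)
    vehicle.parameters['SR0_EXTRA1'] = 10   # ATTITUDE is in the EXTRA1 group; request 10 Hz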
32,557,853 | 2015-09-14T05:31:00.000 | 0 | 1 | 1 | 0 | python | 32,558,377 | 1 | false | 0 | 0 | Using the classmethod decorator, it acts like an alternate constructor.
Although it'll still need to supply the arguments needed for your __init__, you can do other stuff in it. For simplicity's sake, it's like another __init__ method for your class. | 1 | 0 | 0 | I know there are alternatives to call the functions without the classmethod decorator. But what is the advantage, apart from calling it with the class argument? | Where do we use python class method decorator? | 0 | 0 | 0 | 53
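The classic use is an alternate constructor; a sketch:

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

        @classmethod
        def from_string(cls, text):
            # parses its own input, then still calls __init__ via cls()
            x, y = (float(v) for v in text.split(','))
            return cls(x, y)

    p = Point.from_string('1.5,2.0')   # also works on subclasses, since cls is passed in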
32,561,547 | 2015-09-14T09:37:00.000 | 1 | 0 | 0 | 0 | oracle,python-2.7,cx-oracle | 32,578,626 | 1 | false | 0 | 0 | I was able to sort it out. I had installed the incorrect version of cx_Oracle previously: it was for the 12c Oracle client. I installed the 11g version later and it started working for me.
Note: There is no need to set ORACLE_HOME environment variable.
The Oracle client, Python, and the Windows OS must all be of the same architecture, either 32- or 64-bit. | 1 | 3 | 0 | It's been two days that I've been trying to work with cx_Oracle. I want to connect to Oracle from Python, but I am getting an "ImportError: DLL load failed: The specified procedure could not be found." error. I have already gone through many posts and tried the things suggested in them, but nothing helped me.
I checked the versions of Windows, Python, and the Oracle client as suggested in many posts, but all of them look good to me.
Python version 2.7: 64 bit
Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win
32
Windows 7: 64 bit
Oracle client is 11.2.0: 64 bit
I ran SQL*Plus and checked Task Manager to confirm that. I have both the 32- and 64-bit clients installed on my system, but the 64-bit one is set in the PATH variable.
Please help me to sort out this problem. Do let me know if any other information is needed. | Not able to import cx_Oracle in python "ImportError: DLL load failed: The specified procedure could not be found." | 0.197375 | 1 | 0 | 1,699 |
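A quick way to verify the interpreter's architecture before matching it against the Oracle client; a sketch:

    import platform, struct
    print(platform.architecture())    # e.g. ('64bit', 'WindowsPE')
    print(struct.calcsize('P') * 8)   # prints 64 for a 64-bit interpreter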