Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
40,139,245 | 2016-10-19T18:43:00.000 | 0 | 0 | 1 | 0 | python,package,atom-editor,flake8 | 45,655,919 | 3 | false | 0 | 0 | It could be your Environment Variable paths, especially if you have two concurrent versions of Python installed. Check to see which one is at the top of the list and ahead of the one you're not currently developing with. | 2 | 4 | 0 | I have followed these steps, but
"apm install linter
Next, we’re going to install a Python Linter package, to help us detect errors in our Python code.
This package is called linter-flake8 and it’s an interface to flake8. To install it, you need to run:
pip install flake8
pip install flake8-docstrings
apm install linter-flake8
You must restart Atom to see the changes"
I have followed those steps and every package with PIP and APM was installed; however, corrections are not made on my Python code in Atom. Is there something else I need to configure or do apart from the steps I mentioned?
2
3 | how to install flake8 in atom on windows | 0 | 0 | 0 | 6,354 |
40,139,245 | 2016-10-19T18:43:00.000 | 0 | 0 | 1 | 0 | python,package,atom-editor,flake8 | 64,344,462 | 3 | false | 0 | 0 | Maybe flake8 got installed in C:\Users\myuser\AppData\Roaming\Python\Python3X\Scripts, just like it did for me. If that is the case you only need to add that location to the PATH. | 2 | 4 | 0 | I have followed these steps, but
"apm install linter
Next, we’re going to install a Python Linter package, to help us detect errors in our Python code.
This package is called linter-flake8 and it’s an interface to flake8. To install it, you need to run:
pip install flake8
pip install flake8-docstrings
apm install linter-flake8
You must restart Atom to see the changes"
I have followed those steps and every package with PIP and APM was installed; however, corrections are not made on my Python code in Atom. Is there something else I need to configure or do apart from the steps I mentioned?
2
3 | how to install flake8 in atom on windows | 0 | 0 | 0 | 6,354 |
40,141,260 | 2016-10-19T20:48:00.000 | 5 | 0 | 0 | 0 | python,selenium,raspberry-pi | 40,141,261 | 1 | false | 1 | 0 | I have concluded, after hours and a whole night of debugging, that you can't install it, because there is no chromedriver build compatible with the Raspberry Pi's ARM processor, even if you download the Linux 32-bit one. You can confirm it by running path/to/chromedriver in a terminal window; it will give you this error
cannot execute binary file: Exec format error
Hope this helps anyone who wanted to do this :) | 1 | 2 | 0 | If you're seeing this, I guess you are looking to run Chromium on a Raspberry Pi with Selenium.
like this Driver = webdriver.Chrome("path/to/chromedriver") or like this webdriver.Chrome() | selenium run chrome on raspberry pi | 0.761594 | 0 | 1 | 729 |
40,141,313 | 2016-10-19T20:51:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,aws-lambda | 56,790,848 | 1 | true | 0 | 0 | In AWS IoT Core, create a rule that invokes a Lambda function with the incoming JSON data. Then you can do anything with that data. | 1 | 0 | 0 | I am publishing data from a Raspberry Pi to AWS IoT and I can see the updates there.
Now, I need to get that data into AWS Lambda and connect it to AWS SNS to send a message above a threshold. I know about working with SNS and IoT.
I just want to know how I can get the data from AWS IoT into AWS Lambda.
Please Help !!
Thanks :) | Stream data from AWS IoT to AWS Lambda using Python? | 1.2 | 0 | 0 | 471 |
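A minimal sketch of the approach from the accepted answer above: an AWS IoT rule forwards the published JSON payload to a Lambda handler, which can publish to SNS when a threshold is crossed. The topic ARN, the "temperature" field and the threshold value are placeholders for illustration, not details taken from the question.
import json
import boto3
sns = boto3.client('sns')
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:alerts'   # placeholder SNS topic
THRESHOLD = 50                                            # placeholder threshold
def lambda_handler(event, context):
    # the IoT rule action delivers the published JSON document as the event
    value = event.get('temperature', 0)
    if value > THRESHOLD:
        sns.publish(TopicArn=TOPIC_ARN,
                    Message=json.dumps({'alert': 'threshold exceeded', 'value': value}))
    return {'status': 'ok'}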
40,142,959 | 2016-10-19T23:10:00.000 | 5 | 0 | 0 | 0 | python | 41,708,567 | 1 | false | 0 | 0 | Just looking for the answer to this myself. gmplot was updated to June 2016 to include a hovertext functionality for the marker method, but unfortunately this isn't available for the scatter method. The enthusiastic user will find that the scatter method simply calls the marker method over and over, and could modify the scatter method itself to accept a title or range of titles.
If like myself you are using an older version, make sure to run
pip install --upgrade gmplot
and to place a marker with hovertext (mouse hovering over pin without clicking)
import gmplot
gmap = gmplot.GoogleMapPlotter(47.6103, -122.3415, 13)  # centre latitude, longitude, zoom level
gmap.marker(47.61028142523736, -122.34147349538826, title="A street corner in Seattle")
gmap.draw("testmap.html") | 1 | 3 | 1 | I plotted some points on google maps using gmplot's scatter method (python). I want to add some text to the points so when someone clicks on those points they can see the text.
I am unable to find any documentation or example that shows how to do this.
Any pointers are appreciated. | Add text to scatter point using python gmplot | 0.761594 | 0 | 0 | 8,147 |
40,143,675 | 2016-10-20T00:36:00.000 | 4 | 0 | 0 | 0 | python,python-3.x,pandas,fuzzywuzzy | 40,163,636 | 1 | false | 0 | 0 | Thanks everyone for your inputs. I have solved my problem! The link that "agg3l" provided was helpful. The "TypeError" I saw was because either the "url_entrance" or "company_name" has some floating types in certain rows. I converted both columns to string using the following scripts, re-ran the fuzz.ratio script and got it to work!
df_combo['url_entrance']=df_combo['url_entrance'].astype(str)
df_combo['company_name']=df_combo['company_name'].astype(str) | 1 | 2 | 1 | I have a pandas dataframe called "df_combo" which contains columns "worker_id", "url_entrance", "company_name". I am trying to produce an output column that would tell me if the URLs in "url_entrance" column contains any word in "company_name" column. Even a close match like fuzzywuzzy would work.
For example, if the URL is "www.grandhotelseattle.com" and the "company_name" is "Hotel Prestige Seattle", then the fuzz ratio might be somewhere 70-80.
I have tried the following script:
>>>fuzz.ratio(df_combo['url_entrance'],df_combo['company_name'])
but it returns only 1 number which is the overall fuzz ratio for the whole column. I would like to have fuzz ratio for every row and store those ratios in a new column. | fuzzy match between 2 columns (Python) | 0.664037 | 0 | 0 | 3,169 |
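A row-wise sketch of the fix described in the answer above, assuming df_combo is the pandas DataFrame from the question; the name of the new column is arbitrary.
from fuzzywuzzy import fuzz
df_combo['url_entrance'] = df_combo['url_entrance'].astype(str)
df_combo['company_name'] = df_combo['company_name'].astype(str)
# compute the ratio per row instead of once for the whole columns
df_combo['name_url_ratio'] = df_combo.apply(
    lambda row: fuzz.ratio(row['url_entrance'], row['company_name']), axis=1)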
40,143,742 | 2016-10-20T00:46:00.000 | 1 | 0 | 1 | 0 | python | 40,143,871 | 1 | true | 0 | 0 | No. The byte code changes between versions, and is tagged with a magic number to make it clear which interpreters it will work with. And that's just within CPython, between CPython and PyPy they don't even have to agree on where you'd look for the magic number, let alone what it means. .pyc files are largely an optimization, not portable in the way the source files are; they're only distributable if they're distributed with the interpreter that understands them.
Basically, the Python language standards covers source syntax and libraries, not byte code formats | 1 | 2 | 0 | I have been reading about Python interpreters, especially CPython and PyPy, I know that there are 2 steps to run a Python code:
Convert to Bytecode
Interpret to MachineCode
My question is, if bytecode is generated by CPython, lets say Python version 2.7.0, will the bytecode run on PyPy for Python 2.7.0? | Can a specific Python version bytecode run on different interpreters? | 1.2 | 0 | 0 | 48 |
40,144,869 | 2016-10-20T03:21:00.000 | 2 | 0 | 1 | 0 | python | 40,144,945 | 3 | false | 0 | 0 | You can just use readlines() and dump the file into a list. Then you can simply generate 1 million random numbers. Of course they have to be within the range of the size of the list (the number of lines in the file), and each time a random number is generated you access the line at that index in the list and write it to the file you want to move it to. | 1 | 0 | 0 | we are trying to get random lines of about 1M from a very big file which may have around 3M records in it. The selected random lines need to be written into a third file.
Do you have any suggestions for us? | Python - Read random lines from a very big file and append to another file | 0.132549 | 0 | 0 | 1,175 |
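A short sketch of the approach suggested in the answer above; random.sample picks distinct indices, and the file names are placeholders.
import random
with open('big_input.txt') as src:            # placeholder input path (~3M lines)
    lines = src.readlines()
sample = random.sample(lines, 1000000)        # 1M distinct random lines
with open('random_lines.txt', 'w') as out:    # placeholder output path
    out.writelines(sample)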
40,146,167 | 2016-10-20T05:31:00.000 | 0 | 0 | 1 | 0 | python-3.x,module,qpython3 | 43,598,999 | 3 | false | 0 | 0 | create your game library, such as myGameLib.py in your script's local directory.
Then from your main python code:
from myGameLib import *
Note that this will work. However, if you want to put your Python lib in a subdirectory local to your script, such as /mysubdir, use
from mysubdir.myGameLib import *
appears to be broken in QPython3. | 3 | 1 | 0 | Is there any way to create my own module in qpython3 ? If there is, it would be great to fix my code properly, without going all the way down to fix just onde line. | Create a module in qpython 3 | 0 | 0 | 0 | 368 |
40,146,167 | 2016-10-20T05:31:00.000 | 0 | 0 | 1 | 0 | python-3.x,module,qpython3 | 40,146,369 | 3 | false | 0 | 0 | well, i'm creating a game with 3 games inside it and i would like to put the game functions inside individual modules, like:
from tictactoe import structureTictactoe
from Chess import structureChess
then these functions when called simply print the specific game structure like the tictactoe grade and the chess table. For instance, it's simpler to edit the game functions inside individual modules | 3 | 1 | 0 | Is there any way to create my own module in qpython3 ? If there is, it would be great to fix my code properly, without going all the way down to fix just onde line. | Create a module in qpython 3 | 0 | 0 | 0 | 368 |
40,146,167 | 2016-10-20T05:31:00.000 | 0 | 0 | 1 | 0 | python-3.x,module,qpython3 | 41,535,782 | 3 | false | 0 | 0 | You have to write the functions that you want into a .py script that has the same name as you want the module to be called. You then have to add that into the site packages directory and then you should be able to access them from anywhere.
Just make sure that it is in the "qpython\lib\python3.2\packages\" directory | 3 | 1 | 0 | Is there any way to create my own module in qpython3 ? If there is, it would be great to fix my code properly, without going all the way down to fix just onde line. | Create a module in qpython 3 | 0 | 0 | 0 | 368 |
40,148,265 | 2016-10-20T07:39:00.000 | 0 | 0 | 1 | 0 | python,jupyter,jupyter-notebook | 42,150,227 | 2 | false | 0 | 0 | Did you install python by Anaconda?
Try to install under Anaconda2/envs when choosing destination folder,
like this:
D:/Anaconda2/envs/py3
then"activate py3" by cmd, py3 must be then same name of installation folder | 2 | 0 | 0 | I have anaconda2 and anaconda3 installed on windows machine, have no access to internet and administrator rights. How can I switch between python 2 and 3 when starting jupyter? Basic "jupyter notebook" command starts python 2. With internet I would just add environment for python 3 and select it in jupyter notebook after start but how can I do this in this situation? | jupyter notebook select python | 0 | 0 | 0 | 1,469 |
40,148,265 | 2016-10-20T07:39:00.000 | 1 | 0 | 1 | 0 | python,jupyter,jupyter-notebook | 42,550,420 | 2 | false | 0 | 0 | There's important points to consider:
you have to have jupyter notebook installed in each environment you want to run it from
if jupyter is only installed in one environment, your notebook will default to that environment no matter which environment you start it from, and you will have no option to change the notebook kernel (i.e. the conda package, and therefore which python version to use for your notebook)
You can list the packages in your environment with conda list. You can also check what environments exist with conda info --envs to make sure there's indeed one with python 3 (and use conda list to check it has jupyter installed).
From what you write, since your notebook defaults to python2, conda list should show you python 2 related packages.
So, as has been pointed, first activate the anaconda environment for python3 with the command activate your_python3_environment then restart your Notebook.
You don't need internet for this but you do need to be able to swap between anaconda2 and 3 (which you say are both installed) and both should have jupyter installed. | 2 | 0 | 0 | I have anaconda2 and anaconda3 installed on windows machine, have no access to internet and administrator rights. How can I switch between python 2 and 3 when starting jupyter? Basic "jupyter notebook" command starts python 2. With internet I would just add environment for python 3 and select it in jupyter notebook after start but how can I do this in this situation? | jupyter notebook select python | 0.099668 | 0 | 0 | 1,469 |
40,154,798 | 2016-10-20T12:41:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,logging,module | 40,155,626 | 2 | false | 0 | 0 | If you call only a single python script at once (main.py)
then you can simply define your logging config once; for example:
logging.basicConfig(filename=logfilepath, format='%(levelname)s:%(message)s')
and call it wherever you want in the modules, for example
logging.<level>("log_string") | 1 | 3 | 0 | My project is composed by a main.py main script and a module aaa.py . I've succesfully setup a log procedure by using the logging package in main.py, also specifying a log filename.
Which is the cleanest/"correct" way to let the functions contained in aaa.py to write in the same log file? | logging package - share log file between main and modules - python 3 | 0 | 0 | 0 | 411 |
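A two-file sketch of the usual pattern for this question: main.py configures the root logger once with basicConfig, and aaa.py only asks for a module-level logger whose records propagate to the same handlers. The log file name is a placeholder.
# main.py
import logging
import aaa
logging.basicConfig(filename='app.log',
                    format='%(levelname)s:%(name)s:%(message)s',
                    level=logging.INFO)
logging.info('starting up')
aaa.do_work()
# aaa.py
import logging
logger = logging.getLogger(__name__)   # no handlers needed in the module
def do_work():
    logger.info('written to the same log file as main.py')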
40,160,138 | 2016-10-20T16:49:00.000 | 0 | 0 | 1 | 0 | python,module | 40,162,659 | 1 | true | 0 | 0 | Could you achieve a similar effect by having your module read and write from a separate configuration file? Perhaps using something simple like pickle or json, or something more standard like docs.python.org/2/library/configparser.html | 1 | 0 | 0 | I am importing a module in python and initializing some extra attributes for that module object. Is it possible for those extra attributes to be reflected in the corresponding files without editing those files directly? | How to reflect changes in the module object to module file in python | 1.2 | 0 | 0 | 41 |
40,165,529 | 2016-10-20T22:28:00.000 | 0 | 0 | 1 | 0 | python,debugging,jupyter | 40,775,869 | 1 | false | 0 | 0 | Not tested, but in q exit with: exit 0 | 1 | 0 | 0 | I've just tried to use the q library in a Python Jupyter notebook to debug a function. That put me right in interactive console mode to debug.
But now I can't exit the interactive console mode. I tried the obvious ones: q, exit(), quit() but none of them works.
What is the right command? | Exit interactive console in Jupyter notebook using the q library to debug | 0 | 0 | 0 | 174 |
40,165,588 | 2016-10-20T22:34:00.000 | 2 | 0 | 1 | 0 | python,pip | 52,500,156 | 3 | false | 0 | 0 | If you wish to check out which pip you are using by using the --version option, that is typing in
pip --version in the command line.
In Windows 10, this would return the version of the pip installed along with the path from which pip.exe is called.
You could then follow the suggestion by @sytech to rename the pip.exe as needed. | 2 | 2 | 0 | I have different Python environments installed in different directories in Windows. How can I ensure I am using the pip for one particular version of Python?
Unfortunately, due to the groups I work with using a variety of Python flavors, I need all of the Python installations I have. I'm finding it very difficult, however, to use a version of pip that's not in my PATH. | Python on Windows: Which pip | 0.132549 | 0 | 0 | 6,628 |
40,165,588 | 2016-10-20T22:34:00.000 | 0 | 0 | 1 | 0 | python,pip | 40,165,629 | 3 | false | 0 | 0 | If you have several Python interpreters in PATH you can use these to access the correct pip, for example python -m pip or python3 -m pip.
If you are talking about virtual environments, I believe that the only way to achieve that is to activate the relevant environment before using pip. | 2 | 2 | 0 | I have different Python environments installed in different directories in Windows. How can I ensure I am using the pip for one particular version of Python?
Unfortunately, due to the groups I work with using a variety of Python flavors, I need all of the Python installations I have. I'm finding it very difficult, however, to use a version of pip that's not in my PATH. | Python on Windows: Which pip | 0 | 0 | 0 | 6,628 |
40,166,386 | 2016-10-21T00:03:00.000 | 2 | 0 | 1 | 0 | python-2.7,ubuntu,tensorflow,spyder | 40,695,844 | 1 | false | 0 | 0 | Enter the enviornment
source activate tensorflow
install spyder
conda install spyder
Run spyder
spyder
` | 1 | 0 | 1 | I build and installed tensorflow in my ubuntu 16.04 with gpu. In command line I can easily activate tensorflow environment but while I try to run the code through spyder it show this : "No module named tensorflow.examples.tutorials.mnist"
how can I run my python code from spyder with tensorflow? | spyder cant load tensorflow | 0.379949 | 0 | 0 | 771 |
40,169,593 | 2016-10-21T06:16:00.000 | 0 | 0 | 0 | 0 | python,hadoop,cgi | 40,175,474 | 1 | false | 0 | 0 | Aditya - Make sure that your Hadoop ports are listening in the machine. In default hdfs run under the port 9000, Similarly you can fix a default ports for hadoop and filter it out as per your requirement.
Make a unique identification over the HDFS metadata directory, or list the Hadoop conf files (core-site, hdfs-site, ...) in each directory. These are mandatory for Hadoop.
You can able to get it in Environmental variables/Classpath, or use this command hadoop version. It will works when path variables(%HADOOP_HOME%) properly configured.It doesn't requires any hadoop services are in running state rather proper environmental variable are configured in that machine. | 1 | 0 | 0 | I'm building a Python CGI based website that lets me setup a Hadoop cluster.
I was wondering if there was any way to check if there's already a premade cluster present? Because I want to limit the website to creating only one cluster at a time. I want to know if an offline cluster exists.
The only idea I have so far, is to save a flag into a file when the cluster is made and change its value when the cluster is deleted. | Check if Hadoop cluster exists | 0 | 0 | 0 | 97 |
40,173,481 | 2016-10-21T09:50:00.000 | 24 | 1 | 0 | 1 | python-2.7,amazon-web-services,nlp,celery,aws-lambda | 48,643,501 | 1 | true | 1 | 0 | I would like to share a personal experience. I moved my heavy-lifting tasks to AWS Lambda and I must admit that the ROI has been pretty good. For instance, one of my tasks was to generate monthly statements for the customers and then mail them to the customers as well. The data for each statement was fed into a Jinja template which gave me an HTML of the statement. Using Weasyprint, I converted the HTML to a Pdf file. And then mailing those pdf statements was the last step. I researched through various options for creating pdf files directly, but they didn't looked feasible for me.
That said, when the scale was low, i.e. when the number of customers was small, celery was wonderful. However, I should mention that during this task I observed that CPU usage went high. I would add this task to the celery queue for each of the customers, and the celery workers would pick up the tasks and execute them.
But when the scale went high, celery didn't turn out to be a robust option. CPU usages were pretty high(I don't blame celery for it, but that is what I observed). Celery is still good though. But do understand this, that with celery, you can face scaling issues. Vertical scaling may not help you. So you need horizontally scale as your backend grows to get get a good performance from celery. When there are a lot of tasks waiting in the queue, and the number of workers is limited, naturally a lot of tasks would have to wait.
So in my case, I moved this CPU-intensive task to AWS Lambda. So, I deployed a function that would generate the statement Pdf from the customer's statement data, and mail it afterward. Immediately, AWS Lambda solved our scaling issues. Secondly, since this was more of a period task, not a daily task - so we didn't need to run celery everyday. The Lambda would launch whenever needed - but won't run when not in use. Besides, this function was in NodeJS, since the npm package I found turned out to be more efficient the solution I had in Python. So Lambda is also advantageous because you can take advantages of various programming languages, yet your core may be unchanged. Also, I personally think that Lambda is quite cheap - since the free tier offers a lot of compute time per month(GB-seconds). Also, underlying servers on which your Lambdas are taken care to be updated to the latest security patches as and when available. As you can see, my maintenance cost has drastically dropped.
AWS Lambdas scale as per need. Plus, they can serve a good use case for tasks like real-time stream processing, or for heavy data-processing tasks, or for running tasks which could be very CPU intensive. | 1 | 12 | 0 | Currently I'm developing a system to analyse and visualise textual data based on NLP.
The backend (Python+Flask+AWS EC2) handles the analysis, and uses an API to feed the result back to a frontend (FLASK+D3+Heroku) app that solely handles interactive visualisations.
Right now the analysis in the prototype is a basic python function, which means that on large files the analysis takes longer, thus resulting in a request timeout during the API data bridging to the frontend. Also, the analysis of many files is done in a linear blocking queue.
So to scale this prototype, I need to modify the Analysis(text) function to be a background task so it does not block further execution and can do a callback once the function is done. The input text is fetched from AWS S3 and the output is a relatively large JSON format aiming to be stored in AWS S3 as well, so the API bridge will simply fetch this JSON that contains data for all the graphs in the frontend app. (I find S3 slightly easier to handle than creating a large relational database structure to store persistent data..)
I'm doing simple examples with Celery and find it fitting as a solution, however i just did some reading in AWS Lambda which on paper seems like a better solution in terms of scaling...
The Analysis(text) function uses a pre-built model and functions from relatively common NLP python packages. As my lack of experience in scaling a prototype I'd like to ask for your experiences and judgement of which solution would be most fitting for this scenario.
Thank you :) | Celery message queue vs AWS Lambda task processing | 1.2 | 0 | 0 | 7,335 |
40,174,747 | 2016-10-21T10:52:00.000 | 1 | 0 | 0 | 0 | python,qt | 40,174,748 | 1 | false | 0 | 1 | This might be the problem:
Maybe you call isVisible() from an onClose() function. That means your widget was visible, but is not anymore by the time you finally call the isVisible() function
Solution
Call isVisibleTo([ParentWidget]). This will give you the visibility value relative to e.g. your QMainWindow. | 1 | 1 | 0 | I'd like to save the visibility state of a QDockWidget, when the dialog is closed. Even though, the widget is visible, isVisible will return false.
What to do?
Using Python (2.7 in my case) | Python Qt: Check visibility of widget / isVisible always return false | 0.197375 | 0 | 0 | 725 |
40,177,368 | 2016-10-21T13:01:00.000 | -7 | 0 | 1 | 1 | python,windows,macos,docker,pyinstaller | 40,177,468 | 2 | true | 0 | 0 | I don't think so. Your docker container container will be a Linux system. If you run it, when ever you are on windows/mac/linux, it is still running on a linux environement so you it will not be a windows or mac compatible binary. I don't know well of python. But if you can't make windows binary from linux, you will not be able to do so in a container. | 1 | 5 | 0 | I am supposed to create an executable for windows, mac and linux. However, I don't have a windows machine for time being and also I don't have a mac at all. I do have a Linux machine but I don't want to change the partition or even create a dual boot with windows.
I have created an application using python and am making my executable using pyinstaller. If I make use of Docker (install images of windows and mac on linux), will I be able to create executable for windows and mac with all dependencies (like all .dll for windows and if any similar for mac)? | Can i use Docker for creating exe using pyinstaller | 1.2 | 0 | 0 | 6,314 |
40,180,601 | 2016-10-21T15:38:00.000 | 1 | 0 | 0 | 0 | google-api-python-client,double-click-advertising | 40,370,299 | 2 | true | 1 | 0 | I found the way to solve this problem. The actual API v1 has this capabilities but the documentation is not very clear about it.
You need to download your Line Items file as CSV or any other supported format, then edit that downloaded file with any script you want; it is the Status column that you must edit to perform this operation. Also, if you want to create a new campaign, you will need to do the same for new Line Items. After editing the CSV (or creating a new one), you must upload it back to Google with the corresponding endpoint: uploadlineitems.
Google will report back to the owner of the Bid Manager account which changes were accepted from the file that you sent.
I have confirmed that this is the same behaviour that Google uses for other products where they consume their own API:
Download or Create Line Items file as CSV or any other supported format.
Edit Line Items.
Upload Line Items.
So basically you only need to create a script that edits CSV files and another to authenticate with the API. | 1 | 3 | 0 | Is it possible with the Google DoubleClick Bid Manager API to create campaigns, set bids and buy adds?, I have checked the documentation and it seems that there are limited endpoints.
These are all the available endpoints according to the documentation:
doubleclickbidmanager.lineitems.downloadlineitems Retrieves line items in CSV format.
doubleclickbidmanager.lineitems.uploadlineitems Uploads line items in
CSV format.
doubleclickbidmanager.queries.createquery Creates a query.
doubleclickbidmanager.queries.deletequery Deletes a stored query as
well as the associated stored reports.
doubleclickbidmanager.queries.getquery Retrieves a stored query.
doubleclickbidmanager.queries.listqueries Retrieves stored queries.
doubleclickbidmanager.queries.runquery Runs a stored query to
generate a report.
doubleclickbidmanager.reports.listreports Retrieves stored reports.
doubleclickbidmanager.sdf.download Retrieves entities in SDF format.
None of these endpoints can do tasks as buy ads, set bids or create campaigns, so I think those tasks can only be done through the UI and not with the API.
Thanks in advance for your help. | Create Campaigns, set bids and buy adds from DoubleClick Bid Manager API | 1.2 | 0 | 1 | 1,015 |
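A sketch of the "edit the downloaded CSV" step described in the answer above, using only the standard csv module; the file names and the exact Status column values are assumptions about the downloaded format, not something confirmed by the API documentation.
import csv
with open('lineitems_download.csv', newline='') as src, \
     open('lineitems_upload.csv', 'w', newline='') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get('Status') == 'Paused':   # assumed column name and value
            row['Status'] = 'Active'
        writer.writerow(row)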
40,182,944 | 2016-10-21T18:07:00.000 | 18 | 0 | 1 | 0 | python,assert,raise | 40,183,165 | 10 | false | 0 | 0 | try/except blocks let you catch and manage exceptions. Exceptions can be triggered by raise, assert, and a large number of errors such as trying to index an empty list. raise is typically used when you have detected an error condition. assert is similar but the exception is only raised if a condition is met.
raise and assert have a different philosophy. There are many "normal" errors in code that you detect and raise errors on. Perhaps a web site doesn't exist or a parameter value is out of range.
Assertions are generally reserved for "I swear this cannot happen" issues that seem to happen anyway. It's more like runtime debugging than normal runtime error detection. Assertions can be disabled if you use the -O flag or run from .pyo files instead of .pyc files, so they should not be part of regular error detection.
If production quality code raises an exception, then figure out what you did wrong. If it raises an AssertionError, you've got a bigger problem. | 3 | 76 | 0 | I have been learning Python for a while and the raise function and assert are (what I realised is that both of them crash the app, unlike try - except) really similar and I can't see a situation where you would use raise or assert over try.
So, what is the difference between raise, try, and assert? | What's the difference between raise, try, and assert? | 1 | 0 | 0 | 76,394 |
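A short example tying the three together, along the lines of the distinctions drawn in the answers above.
def withdraw(balance, amount):
    if amount > balance:
        raise ValueError('insufficient funds')            # expected error condition: raise it
    new_balance = balance - amount
    assert new_balance >= 0, 'balance went negative'      # "cannot happen" sanity check
    return new_balance
try:
    withdraw(100, 250)
except ValueError as exc:                                 # try/except catches and handles it
    print('handled:', exc)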
40,182,944 | 2016-10-21T18:07:00.000 | 34 | 0 | 1 | 0 | python,assert,raise | 40,183,056 | 10 | false | 0 | 0 | raise - raise an exception.
assert - raise an exception if a given condition is (or isn't) true.
try - execute some code that might raise an exception, and if so, catch it. | 3 | 76 | 0 | I have been learning Python for a while and the raise function and assert are (what I realised is that both of them crash the app, unlike try - except) really similar and I can't see a situation where you would use raise or assert over try.
So, what is the difference between raise, try, and assert? | What's the difference between raise, try, and assert? | 1 | 0 | 0 | 76,394 |
40,182,944 | 2016-10-21T18:07:00.000 | -1 | 0 | 1 | 0 | python,assert,raise | 63,772,752 | 10 | false | 0 | 0 | raise is used for raising an exception;
assert is used for raising an exception if the given condition is False. | 3 | 76 | 0 | I have been learning Python for a while and the raise function and assert are (what I realised is that both of them crash the app, unlike try - except) really similar and I can't see a situation where you would use raise or assert over try.
So, what is the difference between raise, try, and assert? | What's the difference between raise, try, and assert? | -0.019997 | 0 | 0 | 76,394 |
40,184,760 | 2016-10-21T20:13:00.000 | 0 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-s3,django-staticfiles | 40,185,281 | 1 | false | 1 | 0 | Clean solution would be to read the source for collectstatic and write your own management command that would do the same thing, but would write a file list into the database. A quick and dirty way would be to pipe the output of collectstatic into a script of some sort that would reformat it as SQL and pipe it through a database client. | 1 | 0 | 0 | I am storing all the static files in AWS S3 Bucket and I am using Docker containers to run my application. This way, whenever I want to deploy the changes, I create a new container using a new image.
I am running ./manage.py collectstatic on every deployment because sometimes I add libraries to the project that have static files; and it takes forever to reupload them to S3 on every deployment. Is there a way I can keep a list of static files uploaded to S3 in my database, so that collectstatic only uploads to the added files. | How can I make django to write static files list to database when using collectstatic | 0 | 1 | 0 | 228 |
40,188,251 | 2016-10-22T04:16:00.000 | 3 | 0 | 0 | 0 | python,numpy | 40,188,461 | 1 | false | 0 | 0 | The first argument indicates the shape of the array. A scalar argument implies a "flat" array (vector), whereas a tuple argument is interpreted as the dimensions of a tensor. So if the argument is the tuple (m,n), numpy.zeros will return a matrix with m rows and n columns. In your case, it is returning a matrix with n rows and 1 column.
Although your two cases are equivalent in some sense, linear algebra routines that require a vector as input will likely expect something like the first form. | 1 | 4 | 1 | What is the difference between
numpy.zeros(n)
and
numpy.zeros(n,1)?
The output for the first statement is
[0 0 ..... n times]
whereas the second one is
([0]
[0]
.... n rows) | Difference between numpy.zeros(n) and numpy.zeros(n,1) | 0.53705 | 0 | 0 | 1,550 |
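A quick check of the shapes discussed in the answer above; note that the tuple form needs its own parentheses, since numpy.zeros(n, 1) with two bare arguments treats 1 as a dtype and raises a TypeError.
import numpy as np
a = np.zeros(4)          # flat vector
b = np.zeros((4, 1))     # 4 rows, 1 column
print(a.shape)           # (4,)
print(b.shape)           # (4, 1)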
40,193,388 | 2016-10-22T14:38:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,csv | 65,651,852 | 8 | false | 0 | 0 | I think the best way to check this is -> simply reading 1st line from file and then match your string instead of any library. | 1 | 11 | 1 | I have a CSV file and I want to check if the first row has only strings in it (ie a header). I'm trying to avoid using any extras like pandas etc. I'm thinking I'll use an if statement like if row[0] is a string print this is a CSV but I don't really know how to do that :-S any suggestions? | How to check if a CSV has a header using Python? | 0 | 0 | 0 | 24,553 |
40,194,021 | 2016-10-22T15:47:00.000 | 0 | 0 | 1 | 0 | python-2.7,spyder | 41,088,731 | 2 | false | 0 | 0 | You Could try to uninstall that version of spyder and download Anaconda, a free package manager that comes pre-installed with spyder and should work fine, as it did for me as I have windows 7 x64 | 1 | 0 | 0 | After installing Winpython on windows 7 64 bits, when I launch Spyder I face this:
ImportError: No module named encodings
Python 2.7.12 Shell works well but Spyder doesn't.
Do you know how to solve this problem?
I really appreciate any help you can provide | Can't launch Spyder on windows 7 | 0 | 0 | 0 | 609 |
40,194,860 | 2016-10-22T17:12:00.000 | 2 | 1 | 0 | 0 | python,optimization,pyomo | 40,197,955 | 2 | false | 0 | 0 | Source-level documentation is not currently supported for Pyomo. The Pyomo developers have discussed whether/how to make this a priority, but this is not yet the focus of the team. | 1 | 1 | 0 | I am new pyomo user. I want to learn that is there any document or way to see all attributes, methods and functions of pyomo? Same issue for CBC. | Pyomo attributes, methods, functions | 0.197375 | 0 | 0 | 315 |
40,195,188 | 2016-10-22T17:45:00.000 | 0 | 1 | 0 | 0 | php,python,asp.net,raspberry-pi,sms | 40,195,564 | 1 | true | 0 | 0 | What are the (logical) pitfalls of this scenario?
My opinion would be to pass the data and the two fields (phoneNumber and SmsType) through a POST request rather than a GET request, because you can send more data in a POST request and encapsulate it with JSON, making it easier to handle the data.
What would be a simpler approach?
Maybe not simple but more elegant, extend the python script with something like flask and build the webserver right into the python script, saves you running a webserver with php! | 1 | 0 | 0 | Just discovered the amazing Raspberry Pi 3 and I am trying to learn how to use it in one of my projects.
Setup:
ASP.NET app on Azure.
RPi:
software: Raspbian, PHP, Apache 2, and MariaDB.
has internet access and a web server a configured.
3G dongle for SMS sending, connected to the RPi.
Desired scenario:
when a specific button within the ASP app is clicked:
through jQuery $.ajax() the RPi's ip is called with the parameters phoneNumber and smsType.
then the RPi:
fetches the SMS text from a MariaDB database based on the smsType parameter.
invokes a Python script using the PHP exec("python sendSms.py -p phoneNumber -m fetchedText", $output) (i.e. with the phone number and the fetched text):
script will send the AT command(s) to the dongle.
script will return true or false based on the action of the dongle.
echo the $output to tell the ASP what is the status.
finally, the ASP will launch a JavaScript alert() saying if it worked or not.
This is what I need to accomplish. For most of the parts I found resources and explanations. However, before starting on this path I want to understand few things:
General questions (if you think they are not appropriate, please ignore this category):
What are the (logical) pitfalls of this scenario?
What would be a simpler way to approach this?
Specific questions:
Is there a size limit to consider when passing parameters through the url? | Calling Raspberry Pi from ASP.NET to send a SMS remotely | 1.2 | 0 | 0 | 98 |
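A sketch of the "build the web server into the Python script with Flask" suggestion from the answer above. The phoneNumber and smsType field names come from the question; the send_sms helper is a stand-in for the existing sendSms.py logic (database lookup plus AT commands).
from flask import Flask, jsonify, request
app = Flask(__name__)
def send_sms(phone_number, sms_type):
    # placeholder for the MariaDB lookup and the AT commands to the dongle
    return True
@app.route('/sms', methods=['POST'])
def sms():
    data = request.get_json(force=True)
    ok = send_sms(data['phoneNumber'], data['smsType'])
    return jsonify({'success': ok})
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)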
40,195,987 | 2016-10-22T19:09:00.000 | 3 | 0 | 1 | 0 | python,spss | 40,203,463 | 2 | false | 0 | 0 | You can simplify this further in two ways. First, the spssaux.VariableDict object has a built-in filter mechanism using a regular expression. So you could write
vars = spssaux.VariableDict(pattern="(.*jkl)|(.*def)").variables
to get the list.
The second way would be to use the SPSSINC SELECT VARIABLES extension command, which is included in the Python Essentials to generate a macro according to selection criteria that include name patterns, variable type, and other properties. This could then be used in regular syntax. The command appears on the Utilities menu as Define Variable Macro. | 1 | 1 | 0 | Starting to use SPSS/Python, I need to average variables whose names contain two different strings. I found many examples for individual strings (or numbers, etc.), but my strings are not adjacent.
var1_blabla_def_blabla_jkl
var2_blabla_blabla_def_jkl
var3_blabla_jkl_blabla_blabla
How do I get the mean over var1 and var2, containing "def" AND "jkl", and not var3 that contains only jkl? I am not sure what the regular expression would be for this pattern and how then to feed this into something like spss.Submit('compute %s=mean(%s))
Many thanks for any help and hints, I appreciate it. | Python/SPSS: pattern in names not consecutive | 0.291313 | 0 | 0 | 221 |
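Independent of the SPSS-specific helpers in the answer above, the "contains def AND jkl" selection itself is easy to express in plain Python; a sketch over the variable names from the question:
names = ['var1_blabla_def_blabla_jkl',
         'var2_blabla_blabla_def_jkl',
         'var3_blabla_jkl_blabla_blabla']
selected = [n for n in names if 'def' in n and 'jkl' in n]
print(selected)   # var1 and var2 only; var3 lacks 'def'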
40,200,840 | 2016-10-23T08:04:00.000 | 1 | 0 | 1 | 0 | python | 40,201,438 | 3 | false | 0 | 0 | the else suite is executed after the for terminates normally (not by a break).
so it will definitely execute the else statement in your code, because you don't break in the for loop. | 1 | 1 | 0 | I am writing a program to search a txt file for a certain line based only on part of the string. If the string isn't found, it should print not found once, but it is printing it multiple times. Even after indenting and using a correct code it still prints: | how do i stop the invalid code message repeating while also occuring at the right time | 0.066568 | 0 | 0 | 41 |
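A minimal illustration of the for/else behaviour described in the answer above: the else suite runs exactly once, and only when the loop finishes without hitting break.
target = 'needle'
lines = ['first line', 'second line', 'needle in here']
for line in lines:
    if target in line:
        print('found:', line)
        break
else:
    print('not found')   # printed once, and only if no line matched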
40,204,380 | 2016-10-23T15:03:00.000 | 2 | 0 | 0 | 1 | python-3.x,python-3.4,python-3.5 | 49,835,752 | 1 | true | 0 | 0 | Variant A:
Run this script as python3.4 /path/to/script
Variant B:
Change the schebang to #!/usr/bin/python3.4 | 1 | 0 | 0 | I have a script that I found on the internet that worked in Python 3.4, but not Python 3.5. I'm not too familiar in python, but it has the
#!/usr/bin/env python3
shebang at the top of the file. And it also throws this exception when I try to run it:
Traceback (most recent call last):
File "/home/username/folder/script.py", line 18, in
doc = opener.open(url)
File "/usr/lib/python3.5/urllib/request.py", line 472, in open
response = meth(req, response)
File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.5/urllib/request.py", line 504, in error
result = self._call_chain(*args)
File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
result = func(*args)
File "/usr/lib/python3.5/urllib/request.py", line 968, in http_error_401
url, req, headers)
File "/usr/lib/python3.5/urllib/request.py", line 921, in http_error_auth_reqed
return self.retry_http_basic_auth(host, req, realm)
File "/usr/lib/python3.5/urllib/request.py", line 931, in retry_http_basic_auth
return self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.5/urllib/request.py", line 472, in open
response = meth(req, response)
File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.5/urllib/request.py", line 510, in error
return self._call_chain(*args)
File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
result = func(*args)
File "/usr/lib/python3.5/urllib/request.py", line 590, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
Python isn't really my preferred language, so I don't know what to do. This is a script that's supposed to access my Gmail account and pull new mails from it. Do you guys have any suggestions? I'm using Arch Linux, if that helps. | use python 3.4 instead of python 3.5 | 1.2 | 0 | 1 | 585 |
40,205,197 | 2016-10-23T16:36:00.000 | 1 | 0 | 0 | 0 | python,django,sqlite | 40,205,233 | 5 | false | 1 | 0 | Just typing quit does the work. | 3 | 3 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | How do I exit dbshell (SQLite 3) on the command line when using Django? | 0.039979 | 1 | 0 | 5,764 |
40,205,197 | 2016-10-23T16:36:00.000 | 1 | 0 | 0 | 0 | python,django,sqlite | 47,724,071 | 5 | false | 1 | 0 | You can just hit the key combination Ctrl + C. | 3 | 3 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | How do I exit dbshell (SQLite 3) on the command line when using Django? | 0.039979 | 1 | 0 | 5,764 |
40,205,197 | 2016-10-23T16:36:00.000 | 3 | 0 | 0 | 0 | python,django,sqlite | 51,351,884 | 5 | false | 1 | 0 | You can type .exit in thew shell to exit. For more information about commands, type .help.
It raises an error and exits ... it was helpful :) | 3 | 3 | 0 | How do I exit dbshell (SQLite 3) on the command line when using Django?
It's my first time to use the command. I watch a book and am practicing Django at the same time. After I run this command, I have no idea how to leave the environment since I have never learned SQL before. | How do I exit dbshell (SQLite 3) on the command line when using Django? | 0.119427 | 1 | 0 | 5,764 |
40,207,279 | 2016-10-23T20:05:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,deployment,autoscaling | 40,207,943 | 1 | false | 1 | 0 | After everything has been setup on your instance bake an AMI from the fully ready instance. Use the AMI ID in the autoscaling configuration. That way any instance that is spun-up by the autoscaling group will be ready with all the required software. | 1 | 0 | 0 | I currently have auto-scaling setup so that once my existing instances reach high usage, new nodes are created on which my flask application is deployed and run.
The issue I am having is that deployment takes a while (7minsish) because I have many dependencies in my requirements.txt and it takes a while to stand up a node and install all of them.
How can I quicken this process? | Installing flask dependencies takes long on aws | 0 | 0 | 0 | 43 |
40,207,785 | 2016-10-23T21:04:00.000 | 0 | 1 | 0 | 1 | python,cmd,console,sublimetext3 | 40,207,826 | 1 | false | 0 | 0 | Tools > Build System > New Build System
You can set it up there and choose your python binary | 1 | 0 | 0 | I have multiple python version installed in Ubuntu OS. I've set up sublime text 3 (ST3) to run python2 code successfully. But I would like to have the option to run python code using command console during debugging as well. But I found that when I open the cmd console, the python version that it used is not the same of the python that I'm building my python code. To be exact, cmd console called python3, while I would like it to use python2. Any way to set which default python that the cmd console call? Thanks. | sublime text 3: How to open command console with correct python version | 0 | 0 | 0 | 230 |
40,208,006 | 2016-10-23T21:34:00.000 | 2 | 0 | 1 | 0 | python | 40,213,427 | 3 | true | 0 | 0 | Transliteration is not going to help (it will turn Cyrillic P into Latin R). At first glance, Unicode compatibility form (NFKD or NFKC) look hopeful, but that turns U+041C (CYRILLIC CAPITAL LETTER EM) into U+041C (and not U+004D (LATIN CAPITAL LETTER EM)) - so that won't work.
The only solution is to build your own table of allomorphs, and translate all strings into a canonical form before comparing.
Note: When I said "Cyrillic P", I cheated and used the Latin allomorph - I don't have an easy way to enter Cyrillic. | 1 | 3 | 0 | Python treats words МАМА and MAMA differently because one of them is written using latin and another using cyrillian.
How to make python treat them as one same string?
I only care about allomorphs. | Detect same words using different alphabets? | 1.2 | 0 | 0 | 90 |
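A sketch of the "own table of allomorphs" idea from the answer above, using str.translate; the table covers only two look-alike Cyrillic capitals and would have to be extended for real use.
# partial table mapping look-alike Cyrillic capitals to their Latin counterparts
HOMOGLYPHS = str.maketrans({'\u0410': 'A',    # CYRILLIC CAPITAL LETTER A
                            '\u041c': 'M'})   # CYRILLIC CAPITAL LETTER EM
def canonical(word):
    return word.translate(HOMOGLYPHS)
print(canonical('\u041c\u0410\u041c\u0410') == 'MAMA')   # МАМА -> MAMA -> True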
40,208,629 | 2016-10-23T22:57:00.000 | 1 | 0 | 0 | 0 | python,tkinter | 40,209,914 | 1 | false | 0 | 1 | OK, i have great idea. In my app for shutdown computer im using os.system('shutdown /p /f') but i could use different switches. shutdown /s /t xxx will do the same but in set time, and computer will shut because it will add scheduled task, and in task manager wont be name of my application which solves my problem :) And my application can be even closed, there is no need to make borderless window and using Quit button because 'shutdown' command works in background.
I hope it is not a problem that I found the answer to my question myself. At least I think I've found the correct answer. I will check this when I'm back from work.
I found 2 external programs that may help me, but i dont know how use them? Can anybody tell me how use nircmd or psexec? I need only shutdown function from them | 1 | 0 | 0 | I'm writing an app in Python 3.4 using tkinter. It is a timer but with two more functions.
When time reaches zero app will shutdown computer (its for windows os generally)
The timer can be stopped when user inputs correct password.
The main window is borderless with blocked Quit button when time is running. But I encountered new problem. Applications can be killed in task manager. So my question is: Is there some way to hide or block from killing process in task manager? | Is there some way to hide or block from kill app wrote in Python3.4? | 0.197375 | 0 | 0 | 306 |
40,208,695 | 2016-10-23T23:04:00.000 | 1 | 0 | 1 | 1 | python,windows,python-3.x,conda,pydot | 56,009,752 | 4 | false | 0 | 0 | Try running anaconda prompt as 'administrator', then use:
conda install -c conda-forge pydotplus | 1 | 2 | 0 | What is a proven method for installing pydotplus for Python 3.5 on a 64-bit Windows(10) system? So far I haven't had any luck using conda or a number of other approaches.
It appears there are several viable options for both Linux Ubuntu and Windows for Python 2.7. Unfortunately it's necessary for me to use this particular configuration, so any suggestions would be greatly appreciated! | How to install pydotplus for Python 3.5 on Windows64 | 0.049958 | 0 | 0 | 6,170 |
40,209,687 | 2016-10-24T01:53:00.000 | 0 | 0 | 1 | 0 | python,ide,spyder | 40,285,522 | 1 | true | 0 | 0 | FryninoS,
If you put your mouse over the information box it will stay open until you move the mouse off the box.
Austin. | 1 | 0 | 0 | So I'm using Spyder as my Python IDE. It has a great feature which are hints, f.e when I type numpy.arange( it shows me, that I need to insert stop, start, step etc. But it appears on screen, and disappears after like 2-3s, and most of the times I don't manage to read the whole thing, but anyways I would still like to see it, just to think about what should I type. So is there a way to extend the timeout of those hints, or make them stay there until f.e I close the parentheses?
P.S Am I having delusions, or is IPython interpreter much faster than simple Python command line interpreter?
P.S2 Is there a way, to make Spyder do auto-indentation (f.e after going to a new line inside of a function?) | Spyder - hints disappear too fast | 1.2 | 0 | 0 | 148 |
40,214,784 | 2016-10-24T09:19:00.000 | 0 | 0 | 0 | 0 | python-2.7,tkinter | 50,161,426 | 2 | false | 0 | 1 | I had the same exact issue with Python-3.4.3. I followed Brice's solution and got halfway there. Not only did I require the -l flags after the -L flag as he suggested, but I discovered my LD_LIBRARY_PATH was inadequate when performing the 'make altinstall'. Be sure to include the same directory in LD_LIBRARY_PATH as used in your -L flag entry. | 1 | 0 | 0 | python2.7
when I import Tkinter, it prompt no module named _tkinter, I don't have the limits of administrator, so I install tcl and tk, then recompile python with --with-tcltk-includes and --with-tcltk-libs parameter, but when running 'make', the error """*** WARNING: renaming "_tkinter" since importing it failed: build/lib.linux-x86_64-2.7/_tkinter.so: undefined symbol: Tk_Init""" occurred, I really don't know how to deal with it
can somebody help me?
thanks! | undefined symbol: Tk_Init | 0 | 0 | 0 | 648 |
40,223,807 | 2016-10-24T17:07:00.000 | 2 | 0 | 1 | 1 | python,pycharm,pickle | 40,224,304 | 1 | true | 0 | 0 | As suggested in the comments, this is most likely because Python is not added to your environment variables. If you do not want to touch your environment variables, and assuming your Python is installed in C:\Python35\,
Navigate to C:\Python35\ in Windows Explorer
Go to the address bar and type cmd to shoot up a command prompt in that directory
Alternatively to steps 1 and 2, directly shoot up a command prompt, and cd to your Python installation Directory (default: C:\Python35)
Type python -m pip install pip --upgrade there | 1 | 0 | 0 | When trying to install cPickle using pycharm I get this:
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Edwin\AppData\Local\Temp\pycharm-packaging\cpickle
You are using pip version 7.1.2, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
So then when I go command prompt to type in:
python -m pip install --upgrade pip
I get this:
'python' is not recognized as an internal or external command, operable program or batch file.
So how do I install cPickle?
BTW: I am using windows & python 3.5.1 | Why can't I install cPickle | Pip needs upgrading? | 1.2 | 0 | 0 | 4,792 |
40,224,973 | 2016-10-24T18:22:00.000 | 0 | 0 | 0 | 0 | python,machine-learning | 40,236,052 | 1 | false | 0 | 0 | I recommend you use as much as possible the functions given by sklearn or another ML library (I like TensorFlow). That's because it's very difficult to get the performance of any library. They are calculating in a low level of the operating system, meanwhile the common users don't implement the computational actions outside the python environment.
Moreover, python by itself is not very efficient in relation to the data structures. For example,a simple array is implemented as a LinkedList. The ML libraries use Numpy to their computations to get a better performance. | 1 | 0 | 1 | I am new to machine learning and Python. I am trying to understand when to use the functions in sklearn.linear_model (linearregression and logisticregression) and when to implement my own code for the same. Any suggestions or references will be highly appreciated.
regards
Souvik | Python machine learning sklearn.linear_model vs custom code | 0 | 0 | 0 | 72 |
40,227,047 | 2016-10-24T20:32:00.000 | 4 | 0 | 0 | 0 | python,message,messagebox,pyqt5 | 60,393,444 | 6 | false | 0 | 1 | Assuming you are in a QWidget from which you want to display an error message, you can simply use QMessageBox.critical(self, "Title", "Message"), replace self by another (main widget for example) if you are not is a QWidget class. | 1 | 20 | 0 | In normal Python (3.x) we always use showerror() from the tkinter module to display an error message but what should I do in PyQt5 to display exactly the same message type as well? | Python PyQt5: How to show an error message with PyQt5 | 0.132549 | 0 | 0 | 56,321 |
40,227,797 | 2016-10-24T21:23:00.000 | 0 | 0 | 0 | 0 | python,django,django-models | 40,227,954 | 1 | false | 1 | 0 | There are two options:
Along with app, also add app.cog to INSTALLED_APPS
Or, include app/cog/models.py in app/models.py (i.e. from .cog.models import * or from .cog.models import model1, model2) | 1 | 2 | 0 | I essentially have the following issue:
Say, the model classes I define are in /app/cog/models.py, but Django only checks for them in /app/models.py . Is there any way to let Django dynamically read all the model classes in all models.py files in all subpackages of app?
It might be noteworthy that I really want to follow Django's philosophy concerning apps, which includes "all apps are independent from each other". So, I don't want to give those subpackages their own apps, or otherwise people who use my app would possibly end up with 50 apps after some time (as these subpackages simply extend the functionality of the app, and there will probably be a lot of them). | How can I let Django load models from subpackages of apps? | 0 | 0 | 0 | 370 |
40,230,578 | 2016-10-25T02:47:00.000 | 0 | 0 | 1 | 1 | python-3.x,python-imaging-library,wing-ide | 40,230,900 | 1 | false | 0 | 0 | Most likely Wing is using a different Python than the one you installed Pillow into. Try this on the command line:
import sys; sys.executable
Then set the Python Executable in Wing's Project Properties to the full path of that executable (or in Wing 101 this is set in the Configure Python dialog from the Edit menu). You'll need to restart any debug process and/or the Python Shell from its Options menus. | 1 | 1 | 0 | Using my terminal, the code "from PIL import Image" works perfectly and is recognized by my computer. This allows me to get images using the path address.
Here is my issue, when I open wingIDE and try the same code...this module isn't recognized.
Is anyone familiar with wingIDE that can help me?
I would assume PyCharm people might have the same issue with possibly a similar fix, any advice??
Thanks,
Adam | Using wingIDE with a new module (not recognized) | 0 | 0 | 0 | 313 |
40,230,749 | 2016-10-25T03:10:00.000 | 1 | 0 | 1 | 0 | python,string,algorithm | 40,230,846 | 8 | false | 0 | 0 | First thing that pops to my mind is that you could easily take the individual character count of the original string and then take the character count of each sub-string. Then cross check with the character count of the original string to see if the number of characters from each type needed to create your sub-string is present or not. This is a very easy method to find if a given string is an anagram of another string. The same could be applied to your scenario. | 1 | 15 | 0 | What's the most efficient way to find whether a group of characters, arranged in a string, exists in a string in Python?
For example, if I have string="hello world", and sub-string "roll", the function would return true because all 4 letters in "roll" exist in "hello world".
There's the obvious brute-force methodology, but I was wondering if there's an efficient Python specific way to achieve this.
EDIT: letters count is important. So for example rollll isn't included in hello world (only three l's). | Efficiently find whether a string contains a group of characters (like substring but ignoring order)? | 0.024995 | 0 | 0 | 3,254 |
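A sketch of the character-count idea from the answer above, using collections.Counter so that letter multiplicities are respected ('roll' matches 'hello world', 'rollll' does not).
from collections import Counter
def contains_letters(text, chars):
    # every character of chars must be available often enough in text
    return not (Counter(chars) - Counter(text))
print(contains_letters('hello world', 'roll'))     # True
print(contains_letters('hello world', 'rollll'))   # False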
40,230,865 | 2016-10-25T03:25:00.000 | 6 | 0 | 0 | 0 | python,scikit-learn | 48,947,334 | 1 | false | 0 | 0 | There is no inbuilt way in scikit-learn to do this, you need to write some additional code to be able to do this. However you could use the vocabulary_ attribute of CountVectorizer to achieve this.
Cache the current vocabulary
Call fit_transform
Compute the diff with the new vocabulary and the cached vocabulary | 1 | 7 | 1 | Right now I'm using CountVectorizer to extract features. However, I need to count words not seen during fitting.
During transforming, the default behavior of CountVectorizer is to ignore words that were not observed during fitting. But I need to keep a count of how many times this happens!
How can I do this?
Thanks! | CountVectorizer and Out-Of-Vocabulary (OOV) tokens? | 1 | 0 | 0 | 1,618 |
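One way to get the count at transform time, slightly different from the refit-and-diff steps listed above: reuse the fitted vectorizer's own analyzer and count tokens that are missing from vocabulary_. A sketch with toy data:
from sklearn.feature_extraction.text import CountVectorizer
train = ['the cat sat', 'the dog sat']
new_docs = ['the cat barked loudly']
vec = CountVectorizer().fit(train)
analyze = vec.build_analyzer()               # same tokenisation the vectorizer uses
oov_counts = [sum(tok not in vec.vocabulary_ for tok in analyze(doc))
              for doc in new_docs]
print(oov_counts)   # [2] -> 'barked' and 'loudly' were never seen during fitting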
40,231,128 | 2016-10-25T03:56:00.000 | 0 | 0 | 0 | 0 | python,numpy,scipy | 40,231,181 | 5 | false | 0 | 0 | Sure, no problem. Use 'reshape'. Assuming A1 is a numpy array
A1 = A1.reshape([1,255,255,3])
This will reshape your matrix.
If A1 isn't a numpy array then use
A1 = numpy.array(A1).reshape([1,255,255,3]) | 1 | 1 | 1 | There has a nd array A with shape [100,255,255,3], which correspond to 100 255*255 images. I would like to iterate this multi-dimensional array, and each iteration I get one image. This is what I do, A1 = A[i,:,:,:] The resulting A1 has shape [255,255,3]. However, i would like to enforce it have the shape [1,255,255,3]. How can I do it? | add a dummy dimension for a multi-dimensional array | 0 | 0 | 0 | 1,297 |
40,231,354 | 2016-10-25T04:24:00.000 | 0 | 0 | 0 | 0 | python,flask,import,werkzeug | 52,548,666 | 4 | false | 1 | 0 | I faced same issue.
I got this error when working in a Python virtual environment.
I had to deactivate the virtual environment, then go to the root user and install Werkzeug using pip. After that it works in the virtual environment.
ImportError: No module named werkzeug.exceptions
However, when I do pip freeze, I can see that Werkzeug==0.11.11 is indeed installed. How can I fix this? | Can't import flask because werkzeug | 0 | 0 | 0 | 11,418 |
40,231,354 | 2016-10-25T04:24:00.000 | 1 | 0 | 0 | 0 | python,flask,import,werkzeug | 44,644,053 | 4 | false | 1 | 0 | I am asumming, that the wrong version of Werkzeug was installed in the fist place. This usually happens, when you have 2 versions of python installed, and you use 'pip' for installing dependancies rather than using 'pip3'. Hope this helped! | 2 | 5 | 0 | When I use from flask import *, I get the error
ImportError: No module named werkzeug.exceptions
However, when I do pip freeze, I can see that Werkzeug==0.11.11 is indeed installed. How can I fix this? | Can't import flask because werkzeug | 0.049958 | 0 | 0 | 11,418 |
40,235,643 | 2016-10-25T08:58:00.000 | 2 | 0 | 0 | 0 | python,opencv,image-processing,scikit-image | 40,236,282 | 1 | true | 0 | 0 | What was the result in the first case? It sounds like a good approach. What did you expect and what you get?
You can also try something like that:
Either create a copy of a whole image or just slightly bigger ROI (to include samples that will be used for blurring)
Apply blur on the created image
Apply masks on two images (from original image take everything except ROI and from blurred image take ROI)
Add two masked images
If you want a smoother transition, make sure the masks aren't binary. You can smooth them using another blur (blur one mask and create the second one by calculating mask2 = 1 - mask1; by doing so you will be sure that the weights always add up to one). | 1 | 0 | 1 | I'm trying to blur around specific regions in a 2D image (the data is an array of size m x n).
The points are specified by an m x n mask. cv2 and scikit are available.
I tried:
Simply applying blur filters to the masked image. But that isn't not working.
Extracting the points to blur by np.nan the rest, blurring and reassembling. Also not working, because the blur obviously needs the surrounding points to work correctly.
Any ideas?
Cheers | Python: Blur specific region in an image | 1.2 | 0 | 0 | 1,655 |
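A rough sketch of the blur-then-blend recipe from the answer above; the input file, the ROI coordinates and the kernel sizes are assumptions, not values from the question.

import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder input
mask = np.zeros_like(img)
mask[100:200, 150:300] = 1.0                        # hypothetical region to blur

blurred = cv2.GaussianBlur(img, (15, 15), 0)        # blur the whole image once
soft_mask = cv2.GaussianBlur(mask, (31, 31), 0)     # soften the mask for a smooth transition

out = soft_mask * blurred + (1.0 - soft_mask) * img  # weights add up to one everywhere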
40,236,281 | 2016-10-25T09:29:00.000 | 0 | 1 | 0 | 0 | php,python,python-3.x,pip,composer-php | 40,252,615 | 1 | true | 0 | 0 | I've decided to create a separate PHP package for my PHP library and upload it to packagist.org, so users can get it using PHP Composer but aren't forced to, as they would be if library.php were included in the Python package. | 1 | 0 | 0 | I've created a tool that runs as a server and allows clients to connect to it through TCP and run some commands. It's written in Python 3.
Now I'm going to build a package and upload it to PyPI, and I have a conceptual problem.
This tool has a Python client library inside, so after installation of the package it'll be possible to just import the library into a Python script and use it for connecting to the daemon without dealing with raw TCP/IP.
Also, I have a PHP library for connecting to my server, and the problem is that I don't know how to include it in my Python package the right way.
Variants, that I found and can't choose the right one:
Just include the library.php file in the package, and after running "pip install my_package" I would write "require('/usr/lib/python3/dist-packages/my_package/library.php')" into my PHP file. This way allows distributing the library with the server and updating it synchronously, but adds a long ugly path to the PHP require instruction.
As library.php is placed in a GitHub repository, I could just publish its URL in the docs, and it'll be possible to just clone the repository. That makes it possible to clone the repo and update the library with git pull.
Create a separate package with my library.php, upload it to Packagist, and use Composer to download it when it's needed. Good for all Composer users, and allows manual updates, but doesn't update together with the server's package.
Maybe I've missed some other variants.
I want to know what would be true python'ic and php'ic way to do this. Thanks. | How to include PHP library in Python package (on not do it) | 1.2 | 0 | 1 | 605 |
40,238,834 | 2016-10-25T11:31:00.000 | 1 | 0 | 0 | 0 | python,virtual-machine,fedora | 40,239,274 | 1 | false | 1 | 0 | You failed to provide enough context - like what exactly is "your python server", but anyway, you mention a browser cache so I assume it's a web server process. The point is: Python modules are imported only once per process, and once imported changes to the source files are totally irrelevant. So if you have a long running process, it is expected that you restart the process every time you deploy a new version of your modules. | 1 | 1 | 0 | I have a VM in Oracle virtual box which has Fedora24. I have my python server running (Django). There is no web server like Apache.However, when I make changes to the code the files are getting saved, but the changes are not reflected on the server.
I have to do kill -15 processid of python
OR
Restart my VM many times to see the changes.
Any idea why this is happening? Have tried clearing the browser caches also. | Changes made to the Python code not reflected on server in Fedora in Virtual Box(Not duplicate) | 0.197375 | 0 | 0 | 84 |
40,241,133 | 2016-10-25T13:21:00.000 | 1 | 0 | 1 | 0 | python,r,anaconda,conda | 56,415,961 | 1 | false | 0 | 0 | Anaconda uses hard links to reduce the consumed disk space. But if a limit is imposed on the number of files, each hard link counts.
As discussed in the comments, using Miniconda instead of Anaconda, and installing only the packages you actually need, might help.
If this isn't enough, I'd recommend to merge several of your environments into one. Then you'll have fewer hardlinks for the packages that overlap. Of course that is the opposite of what environments are there for, but such is the nature of workarounds. | 1 | 17 | 0 | I'm running conda environments on a compute cluster where the total number of files per "project" is restricted (200k files max). I've only created a couple of conda environments (anaconda for Python 2.7; ~200 python & R packages installed in each environment; high package overlap between environments) and already hit that file number limit. Even when using conda clean -a only a small fraction of the files are removed. Some python packages in my conda environments (e.g., boost) contain >10k files, and clean does not reduce this.
Is there any way to greatly reduce the number of files stored as part of a conda environment? | How to reduce the number of files in the anaconda directory? | 0.197375 | 0 | 0 | 3,047 |
40,244,069 | 2016-10-25T15:33:00.000 | 0 | 0 | 1 | 0 | python,deployment,bundle,packages | 40,977,197 | 1 | true | 0 | 0 | You can use the following command: pip install package_name --target .
This will download the package and its dependencies into your local directory without installing them into your site-packages (pass --no-deps if you don't want the dependencies)
My next step would be to zip packages from a private PyPi source, so pyinstaller will not be good enough.
Is there any way to bundle these packages together before packing?
Thanks! | Bundle my Python code along with all dependencies | 1.2 | 0 | 0 | 53 |
40,246,207 | 2016-10-25T17:33:00.000 | 1 | 0 | 0 | 0 | python,django,api | 40,246,643 | 2 | false | 1 | 0 | An API does not care if the client that sends the requests is a mobile app or browser (unless of course you send and use the information on purpose). For example if your API exposes the "www.myapp.com/registeruser/" URL and requires a POST with username and password, you can call this URL with those parameters from any client that is able to send that.
If what you want is to use the same client-side code for both desktop and mobile (trying to understand what you need!), you can look at responsive websites. A package like django-bootstrap3 works very well with Django and is easy to use. | 1 | 0 | 0 | I'm new to the API concept. I have a doubt about the API. I created a web app in the Python Django framework. I need to create an API for this web application. I need to use this same app on mobile as a mobile app. How can I make this possible?
Can I create a separate API for the mobile app as well? I searched for this on Google, but I can't find a correct answer. Please help me... | How can I create an API for my web app and the mobile version of that web app in python? | 0.099668 | 0 | 0 | 60
40,251,259 | 2016-10-25T23:19:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,tkinter,console | 40,251,792 | 2 | true | 0 | 1 | You can apply modifiers to the text widget indices, such as linestart and lineend, as well as adding and subtracting characters. The index after the last character is "end".
Putting that all together, you can get the start of the last line with "end-1c linestart". | 2 | 3 | 0 | I am working on a virtual console, which would use the systems builtin commands and then do the action and display output results on next line in console. This is all working, but how do I get the contents of the last line, and only the last line in the tkinter text widget? Thanks in advance. I am working in python 3.
I have tried using text.get(text.linestart, text.lineend), to no avail. Have these been deprecated? It spits out an error saying AttributeError: 'Text' object has no attribute 'linestart' | How to get the contents of last line in tkinter text widget (Python 3) | 1.2 | 0 | 0 | 2,994
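A tiny runnable sketch of the index arithmetic from the accepted answer above; the inserted sample text is made up.

import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.insert("end", "first line\nsecond line\nlast line")

# "end-1c" sits just before the automatic trailing newline, so this grabs the final line of content
last_line = text.get("end-1c linestart", "end-1c")
print(last_line)        # -> "last line"
root.destroy()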
40,251,259 | 2016-10-25T23:19:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter,console | 56,634,803 | 2 | false | 0 | 1 | The Text widget has a see(index) method.
text.see(END) will scroll the text to the last line. | 2 | 3 | 0 | I am working on a virtual console, which would use the systems builtin commands and then do the action and display output results on next line in console. This is all working, but how do I get the contents of the last line, and only the last line in the tkinter text widget? Thanks in advance. I am working in python 3.
I have tried using text.get(text.linestart, text.lineend), to no avail. Have these been deprecated? It spits out an error saying AttributeError: 'Text' object has no attribute 'linestart' | How to get the contents of last line in tkinter text widget (Python 3) | 0 | 0 | 0 | 2,994
40,254,884 | 2016-10-26T06:08:00.000 | 0 | 0 | 1 | 0 | python,ipython,jupyter-notebook,jupyter | 42,394,467 | 6 | false | 0 | 0 | I didn't find such an option in Jupyter Notebook, but you can create an empty *.py file and then open it with Jupyter. It is better than plain text, because you get colored text. | 2 | 9 | 0 | I am new to Python. I started learning it with Jupyter Notebook. It is very useful for testing Python code, and at the same time I can document what I've learned with markdown supported by Jupyter.
Until I started with modules/packages, I noticed that every file ends with the notebook extension .ipynb. I understand that in order for Jupyter to have this good-looking visualization it has to store the file in some kind of format.
Is there any solution to create a raw python file using Jupyter?
I am ok if I have to install other plugins to accomplish this. | create a raw python file in jupyter notebook | 0 | 0 | 0 | 37,059 |
40,254,884 | 2016-10-26T06:08:00.000 | 11 | 0 | 1 | 0 | python,ipython,jupyter-notebook,jupyter | 40,255,078 | 6 | false | 0 | 0 | In order to create a python file from an existing notebook (somenotebook.ipynb), please run
jupyter nbconvert somenotebook.ipynb --to script
This will create somenotebook.py. | 2 | 9 | 0 | I am new to Python. I started learning it with Jupyter Notebook. It is very useful for testing Python code, and at the same time I can document what I've learned with markdown supported by Jupyter.
Until I started with modules/packages, I noticed that every file ends with the notebook extension .ipynb. I understand that in order for Jupyter to have this good-looking visualization it has to store the file in some kind of format.
Is there any solution to create a raw python file using Jupyter?
I am ok if I have to install other plugins to accomplish this. | create a raw python file in jupyter notebook | 1 | 0 | 0 | 37,059 |
40,257,045 | 2016-10-26T08:15:00.000 | 0 | 0 | 1 | 1 | python,pyinstaller | 40,260,017 | 1 | false | 0 | 0 | Try copletely uninstalling Python and then re-installing it. Programming can be crazy complicated, but sometimes it's as simple as that. | 1 | 3 | 0 | I want to turn myProgram.py into an executable program. When i run:
pyinstaller --onefile --windowed myProgram.py I have this error:
OSError: Python library not found: .Python, libpython3.5.dylib, Python
This would mean your Python installation doesn't come with proper library files.
This usually happens by missing development package, or unsuitable build parameters of Python installation.
How can I fix the problem? | PyInstaller Not Work Python 3.5 | 0 | 0 | 0 | 571 |
40,259,264 | 2016-10-26T10:03:00.000 | 0 | 0 | 1 | 0 | python,windows,macos,vpython | 40,259,789 | 1 | false | 0 | 0 | Have you tried unistalling and then re-installing Python? I know it's not the most professional answer, but... | 1 | 0 | 0 | I just started learning vPython for my physics project. I was trying out some simple code from the visual library and everything ran great for the first half an our but after I close vPython completely and restarted it my test application doesn't "run" anymore. It shows a blank window and whenever I try to interact with it it shows "not responding". I'm running Windows 10 Pro 64bit with 64-bit Python-2.7.9 and VPython-Win-64-Py2.7-6.11 from vPython.org. I've tried rebooting my computer and reinstalling both Python and vPython but nothing helps. I then transfered my projects to my MacBook Pro and everything runs perfectly. Is there any solution for this? Thank you. | blank window after clicking run module in VIDLE | 0 | 0 | 0 | 58 |
40,264,741 | 2016-10-26T14:17:00.000 | 1 | 0 | 0 | 0 | python-3.x,scipy,sparse-matrix | 40,268,763 | 1 | false | 0 | 0 | sparse.rand calls sparse.random. random adds a optional data_rvs.
I haven't used data_rvs. It can probably emulate the dense randint, but definition is more complicated.
Another option is to generate the random floats and then convert them with a bit of math to the desired integers. You have to be a little careful since some operations will produce a Sparse Efficiency warning. You want operations that will change the nonzero values without touching the zeros.
(I suspect the data_rvs parameter was added in newer Scipy version, but I don't see an indication in the docs). | 1 | 0 | 1 | The method available in python scipy sps.rand() generates sparse matrix of random values in the range (0,1). How can we generate discrete random values greater than 1 like 2, 3,etc. ? Any method in scipy, numpy ? | Generate random sparse matrix filled with values greater than 1 python | 0.197375 | 0 | 0 | 258 |
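A possible use of the data_rvs hook mentioned in the answer above, drawing integer values from [2, 10); the shape, density, and value range are arbitrary, and depending on the SciPy version you may want to cast the result afterwards with M.astype(int).

import numpy as np
import scipy.sparse as sparse

rng = np.random.RandomState(0)
M = sparse.random(
    100, 100, density=0.05, format="csr",
    data_rvs=lambda n: rng.randint(2, 10, n),   # n nonzero values, each an integer >= 2
)
print(M.data[:10])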
40,266,219 | 2016-10-26T15:24:00.000 | 1 | 0 | 0 | 0 | python,proxy,web-scraping,scrapy,web-crawler | 40,296,417 | 1 | true | 1 | 0 | Thanks.. I figure out here.. the problem is that some proxy location doesn't work with https.. so I just changed it and now it is working. | 1 | 0 | 0 | I am using a proxy (from proxymesh) to run a spider written in scrapy python, the script is running normally when I don't use the proxy, but when I use it, I am having the following error message:
Could not open CONNECT tunnel with proxy fr.proxymesh.com:31280 [{'status': 408, 'reason': 'request timeout'}]
Any clue about how to figure this out?
Thanks in advance. | Proxy Error 408 when running a script written in Scrapy Python | 1.2 | 0 | 1 | 497 |
40,266,372 | 2016-10-26T15:30:00.000 | 1 | 0 | 0 | 0 | java,python,apache-spark,pyspark | 44,125,243 | 1 | true | 0 | 0 | This is because you're setting the maximum available heap size (128M) to a value smaller than the initial heap size, which causes the error. Check the _JAVA_OPTIONS parameter that you're passing or setting in the configuration file. Also, note that the changes in spark.driver.memory won't have any effect because the Worker actually lies within the driver JVM process that is started on starting spark-shell, and the default memory used for that is 512M.
This creates a conflict as spark tries to initialize a heap size equal to 512M, but the maximum allowed limit set by you is only 128M.
You can set the minimum heap size through the --driver-java-options command line option or in your default properties file | 1 | 1 | 1 | I'm trying to run a Python script with the pyspark library.
I create a SparkConf() object using the following commands:
conf = SparkConf().setAppName('test').setMaster(<spark-URL>)
When I run the script, that line runs into an error:
Picked up _JAVA_OPTIONS: -Xmx128m
Picked up _JAVA_OPTIONS: -Xmx128m
Error occurred during initialization of VM Initial heap size set to a larger value than the maximum heap size.
I tried to fix the problem by setting the configuration property spark.driver.memory to various values, but nothing changed.
What is the problem and how can I fix it?
Thanks | Java heap size error when running Spark from Python | 1.2 | 0 | 0 | 1,509 |
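A hedged sketch of working around the inherited -Xmx128m: drop the _JAVA_OPTIONS variable and request driver memory explicitly before the JVM is started. The local[*] master URL is a placeholder for the real spark URL.

import os
from pyspark import SparkConf, SparkContext

os.environ.pop("_JAVA_OPTIONS", None)                  # remove the inherited -Xmx128m
conf = (SparkConf()
        .setAppName("test")
        .setMaster("local[*]")                         # placeholder for your <spark-URL>
        .set("spark.driver.memory", "1g"))
sc = SparkContext(conf=conf)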
40,266,970 | 2016-10-26T15:58:00.000 | 0 | 0 | 1 | 0 | python,excel | 57,348,868 | 1 | false | 0 | 0 | You could try:
df['column_name'] = df['column_name'].astype(str) | 1 | 0 | 0 | I'm trying to figure out how to convert the entire column of a spreadsheet from an int to a string. The problem I'm having is that I have a bunch of excell spreadsheets whose values I want to upload to our database. Our numbers are 10 digits long and being converted to scientific notation though, so I want to convert all of our numbers from ints into strings before our upload.
I've been trying to do some research, but I can't find any libraries that would convert an entire column -- do I need to iterate row by row converting the numbers to strings?
Thank you. | Converting Excel Column Type From Int to String in Python | 0 | 1 | 0 | 2,713 |
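A short pandas sketch of the astype(str) suggestion above; the file and column names are hypothetical. Reading the column as text from the start also avoids the scientific-notation problem.

import pandas as pd

# Option 1: convert after reading
df = pd.read_excel("accounts.xlsx")
df["account_id"] = df["account_id"].astype("int64").astype(str)

# Option 2: keep the column as text from the start
df = pd.read_excel("accounts.xlsx", converters={"account_id": str})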
40,269,957 | 2016-10-26T18:46:00.000 | 0 | 0 | 0 | 0 | python,opencv,tkinter | 40,271,580 | 1 | true | 0 | 1 | An OpenCV window inside a Tkinter window is not a good idea. Both use their own mainloop (event loop), which can't run at the same time (unless you use threading), and they have no contact with each other.
It is probably easier to grab the video frame and display it in the Tkinter window on a Label or Canvas. You can use tk.after(milliseconds, function_name) to periodically run a function that updates the video frame in the Tkinter window. | 1 | 0 | 0 | I am making a hand-controlled media player application in Python with OpenCV. I want to embed the gesture window of OpenCV in a Tkinter frame so I can add further attributes to it.
Can someone tell how to embed OpenCV camera window into Tkinter frame? | opencv window in tkinter frame | 1.2 | 0 | 0 | 747 |
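A rough sketch of the after()-based loop suggested in the answer above, drawing webcam frames on a Tkinter Label; it assumes Pillow (PIL.ImageTk) is installed and that camera index 0 is the gesture camera.

import cv2
import tkinter as tk
from PIL import Image, ImageTk

root = tk.Tk()
label = tk.Label(root)
label.pack()
cap = cv2.VideoCapture(0)

def update_frame():
    ok, frame = cap.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # OpenCV delivers BGR, PIL expects RGB
        photo = ImageTk.PhotoImage(Image.fromarray(rgb))
        label.configure(image=photo)
        label.image = photo                               # keep a reference so it isn't garbage collected
    root.after(30, update_frame)                          # poll again in ~30 ms

update_frame()
root.mainloop()
cap.release()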
40,273,053 | 2016-10-26T22:21:00.000 | 1 | 0 | 1 | 0 | python,progress-bar,stdout | 40,273,114 | 2 | false | 0 | 0 | The idea is to use stderr (import sys; sys.stderr.write('the bar'), or print('barstuff', file=sys.stderr) if you are using Python 3). This works fine if you want to save stdout to a file while having the bar on the screen. Keeping the bar always at the bottom of the screen looks quite complicated: you would need to know the height of the screen and, I think, this could be almost impossible from Python.
Probably, with some magic, you could print the bar at the beginning of the screen and a given number of lines below it, using \r to overwrite the old strings.
This script can take a while depending on the input, and I'd like to add a simple progress bar or number to the bottom of the console as it's working. However, I think this will also be written to the log file if I append like normal and I do not want that.
What is the normal way to approach printing something to the console that will not be appended to stdout or >>? Also, if I'm just simply printing the results to the screen instead of logging them to a file, I need the status bar to not make a mess in the screen as well (or rather, to only ever show at the bottom of the console screen). | How can I output a status bar to the console without interfering with append (>>)? | 0.099668 | 0 | 0 | 688 |
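A small sketch of the stderr suggestion in the answer above: the JSON lines go to stdout (so they survive >> redirection), while the carriage-return progress counter goes to stderr and stays on the terminal.

import sys
import time

items = range(200)                        # placeholder work items
total = len(items)
for i, item in enumerate(items, 1):
    print('{"event": %d}' % i)            # the real JSON log line goes to stdout
    sys.stderr.write("\rprocessed %d/%d" % (i, total))
    sys.stderr.flush()
    time.sleep(0.01)
sys.stderr.write("\n")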
40,273,427 | 2016-10-26T22:55:00.000 | 5 | 0 | 1 | 0 | python,list,stack,queue | 40,273,547 | 3 | false | 0 | 0 | I think that, strictly speaking, neither end of a list has to be the top of a stack/front of a queue. The implementation of your data structure is separate from the expected behavior of the data structure.
For instance, a stack exhibits a last in, first out (LIFO) behavior. In other words, the last element that was stored in the stack is the "top" element. If you decide to implement your stack as a list where every new element is added at index 0, and all existing elements are shifted over by 1, then index 0 would be your top. On the other hand, if you implement your stack as a list where every new element is appended to the end of the list, then index -1 would be your top.
With that said, the former implementation is quite inefficient because every time you push/pop values on/off the stack, you have to shift your entire list, whereas the latter implementation is more efficient because you can simply append/remove elements to/from the end of the list.
Also, just to point out something mentioned in other answers/comments that I didn't make explicitly clear, your implementation doesn't have to be a list either. When I said that implementation and behavior are separate, that also goes for underlying data structure. | 1 | 7 | 0 | Admittedly this seems like a silly question, but bear with me.
In a question I'm given relating to a Stack, we are to define a function that returns the item "on top" of the stack. To me, I don't know which side is the "top" because really, either side could be.
Also, I'm given a question relating to a Queue which asks us to define a function that returns the item "on the front" of the queue. Again, either side can be interpreted as the "front"
If the questions were reworded to ask "return the last item on the list" or the "first item on the list" this makes perfect sense, but unfortunately that is not the case.
So I would like to know: is there a definition for both "front" and "top" in terms of stacks/queues which are basically just lists, or are these terms ambiguous? | Which end of a list is the top? | 0.321513 | 0 | 0 | 1,637 |
40,273,524 | 2016-10-26T23:06:00.000 | 0 | 0 | 0 | 0 | python-3.x,wsgi,restful-architecture | 40,368,532 | 1 | false | 1 | 0 | I implemented a solution by creating a new Python thread and attaching the second transaction to it. To ensure it kicks of after the first transaction, i put a small delay in the thread before it starts the second transaction. Hoping there are no issues introduced with threading. | 1 | 1 | 0 | We have a WSGI application with Python3 running under Apache Linux.
We want to interact with an external API after acknowledging a request / notification received via the Web server
Sample WSGI python code:
def application(environ, start_response):
path= environ.get('PATH_INFO', '')
if path == "/ProcessTransact":
import sys
sys.stderr.write("Entering /ProcessTransact, Checking validity ...\n" )
# Get the context of the notification/request from Post parameters etc, assume it is a valid ...
status = '200 OK'
body = b"Acknowledge the valid submit with '200 OK'"
response_headers = [
('Content-Type', 'text/html'),
('Content-Length', str(len(body)))
]
start_response(status, response_headers)
return [body]
# we have acknowledged the context of the above request
# we want to do an HTTP POST based on the context
# When we return [body], we lost the processing thread
import requests #or maybe something else
sys.stderr.write("POST RESTful transactions here after acknowledging the request (we never get here).\n")
Our code is slightly different to the sample code (using Werkzeug).
What is the best way to solve this?
We are purposefully not using any frameworks (except Werkzeug) and we want to avoid large changes in architecture (thousands of lines of code)
Thank you,
Kris | WSGI with RESTful post-processing | 0 | 0 | 0 | 253 |
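A minimal sketch of the accepted thread-based approach: acknowledge first, then hand the follow-up POST to a daemon thread. The follow-up URL, the payload, and the half-second delay are placeholders, not part of the original question.

import threading
import time
import requests

def post_followup(context):
    time.sleep(0.5)                                   # small delay so the acknowledgement goes out first
    requests.post("https://example.invalid/notify", json=context, timeout=10)

def application(environ, start_response):
    body = b"Acknowledge the valid submit with '200 OK'"
    headers = [("Content-Type", "text/html"), ("Content-Length", str(len(body)))]
    context = {"path": environ.get("PATH_INFO", "")}
    threading.Thread(target=post_followup, args=(context,), daemon=True).start()
    start_response("200 OK", headers)
    return [body]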
40,274,205 | 2016-10-27T00:26:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,python-2.x,integer-division,floor-division | 40,274,263 | 3 | false | 0 | 0 | Divides the variable with floor division by two and assigns the new amount to the variable. | 1 | 11 | 0 | I came across with the code syntax d //= 2 where d is a variable. This is not a part of any loop, I don't quite get the expression.
Can anybody enlighten me please? | What does the "variable //= a value" syntax mean in Python? | 0.197375 | 0 | 0 | 1,687 |
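A two-line illustration of the augmented floor division described in the answer above.

d = 7
d //= 2        # same as d = d // 2; d is now 3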
40,275,060 | 2016-10-27T02:17:00.000 | 2 | 0 | 1 | 0 | python,if-statement | 40,275,079 | 2 | false | 0 | 0 | is not is a single operator, equal to the negation of is. Since '' is None is false, '' is not None is true.
But since is tests identity, not equality, '' is (not None) still won't do what you want. | 1 | 0 | 0 | >>> if '' is not None:
... print'23333'
...
23333
I think (not None) is True and ('') is False, so why does it run the print? | Why this if return True in Python | 0.197375 | 0 | 0 | 62
40,277,199 | 2016-10-27T06:01:00.000 | 0 | 0 | 0 | 0 | python,flask | 40,277,255 | 1 | false | 1 | 0 | If you want the user to stay in place, you should send the form using JavaScript asynchronously. That way, the browser won't try to fetch and render a new page.
You won't be able to get this behavior from the Flask end only. You can return effectively nothing but the browser will still try to get it and render that nothing for the client. | 1 | 0 | 0 | I have written a python function using flask framework to process some data submitted via a web form. However I don't want to re-render the template, I really just want to process the data and the leave the web form, it the state it was in, when the POST request was created. Not sure how to do this ... any suggestions ? | Flask return nothing, instead of having the re-render template | 0 | 0 | 0 | 795 |
40,282,480 | 2016-10-27T10:38:00.000 | 0 | 0 | 0 | 0 | python,google-cloud-dataflow | 47,528,448 | 1 | true | 0 | 0 | As jkff pointed out in the above comment, the code is indeed correct and the procedure is the recommended way of programming a tensorflow algorithm. The DoFn applied to each element was the bottleneck. | 1 | 1 | 1 | I'm using the python apache_beam version of dataflow. I have about 300 files with an array of 4 million entries each. The whole thing is about 5Gb, stored on a gs bucket.
I can easily produce a PCollection of the arrays {x_1, ... x_n} by reading each file, but the operation I now need to perform is like the python zip function: I want a PCollection ranging from 0 to n-1, where each element i contains the array of all the x_i across the files. I tried yielding (i, element) for each element and then running GroupByKey, but that is much too slow and inefficient (it won't run at all locally because of memory constraints, and it took 24 hours on the cloud, whereas I'm sure I can at least load all of the dataset if I want).
How do I restructure the pipeline to do this cleanly? | what's the equivalent of the python zip function in dataflow? | 1.2 | 0 | 0 | 283 |
40,282,779 | 2016-10-27T10:53:00.000 | 0 | 0 | 1 | 1 | python,pip | 40,283,551 | 1 | true | 0 | 0 | I had the same problem before and the solution is quite simple.
First try updating pip via command:
pip install --upgrade pip
If that doesn't work try uninstalling current version of python and reinstalling the newest version.
Note1: Do not just delete install files and files in your C drive ,uninstall everything packages, everything that might cause problems, especially delete old python packages and extensions they might not work with the newest python version and that might be the problem. You can see in python website which packages and extensions are supported.
Note2: Do not and I repeat DO NOT install .msi or .exe extensions they don't work anymore always use .whl (wheel) files. If you have one .msi or .exe uninstall them form your system completely; that also means that you have to uninstall them from command prompt.
Note3: Always check if the .whl is compatible with your Python version.
Note4: Also don't forget to save your projects before doing anything.
Hope that works :D
Happy Coding. | 1 | 0 | 0 | I know this question has been asked a few times before, but none of the answers I've read have managed to solve my problem.
When I try to run any of the following, I get an error saying "pip.exe has stopped working:
easy_install
pip
pip3
It was working for me previously (the last time I used it was probably a month ago), but not anymore. I'm using Python 3.4.4, I checked the PATH and it's configured correctly. Does anyone know what else might be causing the issue? | pip.exe has stopped working | 1.2 | 0 | 0 | 2,538 |
40,282,987 | 2016-10-27T11:03:00.000 | 0 | 0 | 1 | 0 | python-3.x,postgresql-9.1 | 40,284,042 | 1 | false | 0 | 0 | You don't say what updates each "iteration" performs, but clearly you are reading and writing 7 million rows. Would it be possible to use the database to perform the updates? | 1 | 0 | 0 | I am using python for reading Unicode data and then Preprocessing it and storing it in a database (Postgres)
Now the database has 3 tables with 4 attributes and 700,000 tuples each. I read the data and map it to python dictionary and list according to the way I need to use it.
Now I have to iterate through all these tuples, do some calculations and write again in the database.
I have to do 1000 iterations like this. The problem is that one iteration takes about 50 minutes, which makes it impossible to do that many iterations.
Is there any way I can make these iterations faster?
Any new idea is welcome. Not necessary in python. | Data Preprocessing with python | 0 | 1 | 0 | 334 |
40,284,296 | 2016-10-27T12:12:00.000 | 4 | 0 | 1 | 0 | python,numpy,matrix | 40,284,356 | 1 | true | 0 | 0 | The Hermitian part is (A + A.T.conj())/2, the anti-hermitian part is (A - A.T.conj())/2 (it is quite easy to prove).
If A = B + C with B Hermitian and C anti-Hermitian, you can take the conjugate transpose (I'll denote it *) on both sides, use its linearity, and obtain A* = B - C, from which the values of B and C follow easily.
I would like to know if it is possible to split any matrix of this kind into a hermitian and anti-hermitian part? My intuition says that this is possible, similar to the fact that any function can be split into an even and an uneven part.
If this is indeed possible, how would you do this in python? So, I'm looking for a function that takes as input any matrix with complex elements and gives a hermitian and non-hermitian matrix as output such that the sum of the two outputs is the input.
(I'm working with python 3 in Jupyter Notebook). | python: split matrix in hermitian and anti-hermitian part | 1.2 | 0 | 0 | 309 |
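A short numpy check of the decomposition from the answer above; the random complex matrix is just a stand-in for real data.

import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(4, 4) + 1j * rng.randn(4, 4)

H = (A + A.conj().T) / 2        # Hermitian part
K = (A - A.conj().T) / 2        # anti-Hermitian part

assert np.allclose(A, H + K)
assert np.allclose(H, H.conj().T)       # H is Hermitian
assert np.allclose(K, -K.conj().T)      # K is anti-Hermitian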
40,284,713 | 2016-10-27T12:34:00.000 | 0 | 1 | 1 | 0 | python,c++ | 40,971,453 | 1 | true | 0 | 0 | My further research shows that it is not possible to have packages structure in extension modules. Actually, you can create the structure using a simple trick, add a module to existing module as an object. For example, you can create the structure like this: mod1.mod2.mod3. But, it's not same like packages. You cannot use import mod1.mod2 or import mod1.mod2.mod3. You have to use import mod1, and that will import all modules. No way to import just one module. | 1 | 0 | 0 | I am writing Python extension modules for my C++ application. Also, I embedded Python interpreter in the same application and use these modules. So, I am not building separately these extension modules, because modules are created and used in the same application (just add PyImport_AppendInittab("modulename", &PyInit_modulename) before the Py_Initialize()).
If I do it like this is it possible to create Python package structure?
Currently, I have import module, but I need to have the possibility to use import package.module in my embedded Python interpreter.
Is there anything for creating packages like there is a function for modules PyModule_Create()? | Python extension modules package structure (namespaces) | 1.2 | 0 | 0 | 159 |
40,287,113 | 2016-10-27T14:20:00.000 | 4 | 0 | 0 | 0 | python,python-3.x,numpy,linear-regression | 40,293,068 | 1 | false | 0 | 0 | some brief answers
1) Calling statsmodels repeatedly is not the fastest way. If we just need parameters, prediction and residual and we have identical explanatory variables, then I usually just use params = pinv(x).dot(y) where y is 2 dimensional and calculate the rest from there. The problem is that inference, confidence intervals and similar require work, so unless speed is crucial and only a restricted set of results is required, statsmodels OLS is still more convenient.
This only works if all y and x have the same observations indices, no missing values and no gaps.
Aside: The setup is a Multivariate Linear Model which will be supported by statsmodels in, hopefully not very far, future.
2) and 3) The fast simple linear algebra of case 1) does not work if there are missing cells or no complete overlap of observation (indices). In the analog to panel data, the first case requires "balanced" panels, the other cases imply "unbalanced" data. The standard way is to stack the data with the explanatory variables in a block-diagonal form. Since this increases the memory by a large amount, using sparse matrices and sparse linear algebra is better. It depends on the specific cases whether building and solving the sparse problem is faster than looping over individual OLS regressions.
Specialized code: (Just a thought):
In case 2) with not fully overlapping or cellwise missing values, we would still need to calculate all x'x, and x'y matrices for all y, i.e. 500 of those. Given that you only have two regressors 500 x 2 x 2 would still not require a large memory. So it might be possible to calculate params, prediction and residuals by using the non-missing mask as weights in the cross-product calculations.
numpy has vectorized linalg.inv, as far as I know. So, I think, this could be done with a few vectorized calculations. | 1 | 4 | 1 | I think I have a pretty reasonable idea on how to do go about accomplishing this, but I'm not 100% sure on all of the steps. This question is mostly intended as a sanity check to ensure that I'm doing this in the most efficient way, and that my math is actually sound (since my statistics knowledge is not completely perfect).
Anyways, some explanation about what I'm trying to do:
I have a lot of time series data that I would like to perform some linear regressions on. In particular, I have roughly 2000 observations on 500 different variables. For each variable, I need to perform a regression using two explanatory variables (two additional vectors of roughly 2000 observations). So for each of 500 different Y's, I would need to find a and b in the following regression Y = aX_1 + bX_2 + e.
Up until this point, I have been using the OLS function in the statsmodels package to perform my regressions. However, as far as I can tell, if I wanted to use the statsmodels package to accomplish my problem, I would have to call it hundreds of times, which just seems generally inefficient.
So instead, I decided to revisit some statistics that I haven't really touched in a long time. If my knowledge is still correct, I can put all of my observations into one large Y matrix that is roughly 2000 x 500. I can then stick my explanatory variables into an X matrix that is roughly 2000 x 2, and get the results of all 500 of my regressions by calculating (X'Y)/(X'X). If I do this using basic numpy stuff (matrix multiplication using * and inverses using matrix.I), I'm guessing it will be much faster than doing hundreds of statsmodel OLS calls.
Here are the questions that I have:
Is the numpy stuff that I am doing faster than the earlier method of calling statsmodels many times? If so, is it the fastest/most efficient way to accomplish what I want? I'm assuming that it is, but if you know of a better way then I would be happy to hear it. (Surely I'm not the first person to need to calculate many regressions in this way.)
How do I deal with missing data in my matrices? My time series data is not going to be nice and complete, and will be missing values occasionally. If I just try to do regular matrix multiplication in numpy, the NA values will propagate and I'll end up with a matrix of mostly NAs as my end result. If I do each regression independently, I can just drop the rows containing NAs before I perform my regression, but if I do this on the large 2000 x 500 matrix I will end up dropping actual, non-NA data from some of my other variables, and I obviously don't want that to happen.
What is the most efficient way to ensure that my time series data actually lines up correctly before I put it into the matrices in the first place? The start and end dates for my observations are not necessarily the same, and some series might have days that others do not. If I were to pick a method for doing this, I would put all the observations into a pandas data frame indexed by the date. Then pandas will end up doing all of the work aligning everything for me and I can extract the underlying ndarray after it is done. Is this the best method, or does pandas have some sort of overhead that I can avoid by doing the matrix construction in a different way? | Fastest way to calculate many regressions in python? | 0.664037 | 0 | 0 | 3,105 |
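A compact numpy sketch of the "one big Y matrix" idea for the complete-data case; random arrays stand in for the real series, and np.linalg.pinv handles the (X'X)^-1 X'Y algebra in one call.

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(2000, 2)                # the two explanatory variables
Y = rng.randn(2000, 500)              # 500 dependent series, one per column

params = np.linalg.pinv(X).dot(Y)     # shape (2, 500): column i holds (a, b) for series i
fitted = X.dot(params)
resid = Y - fitted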
40,289,327 | 2016-10-27T16:01:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,django-south,django-1.4 | 40,289,636 | 1 | true | 1 | 0 | Answering this question:
So the only solution I can think of is exporting the database from the old machine, where it is working, to the new one. Would that work?
Yes, this can work if you are sure that your database is in sync with your models. It is actually the way to go, if you want to be best prepared of updating your production environment.
get a dump from the current production machine
create a new database and load the dump
check whether there are differences between the models and the migration history (this is more reliable with the new Django migrations, South was an external tool and had not all of the possibilities) (e.g. ./manage.py showmigrations (1.10), ./manage.py migrate --list (1.7-1.9 and South)
If you are confident that no migrations have to be run but the listing shows differences then do: ./manage.py migrate --fake
Note, in newer versions you can do ./manage.py migrate and it will report that everything is in order if the models and the migrations are in sync. This can be a sanity check before you deploy onto production. | 1 | 0 | 0 | I am trying to setup a Django app locally in a new machine but migrations seem to be totally broken. They need to be performed in a particular order, which worked in the first machine I set the environment in a couple months ago, but now there are inconsistencies (although I am pretty sure no new migrations were generated).
So the only solution I can think of is exporting the database from the old machine, where it is working, to the new one. Would that work?
This would not solve the broken migrations issue, but at least I can work on the code till there's a proper soltuion. | Django: broken migrations | 1.2 | 0 | 0 | 1,968 |
40,289,943 | 2016-10-27T16:35:00.000 | 1 | 0 | 1 | 0 | python,list,python-3.x,numpy | 40,290,127 | 5 | false | 0 | 0 | Looping and adding is likely better, since you want to preserve the structure of the original. Plus, the error you mentioned indicates that you would need to flatten the numpy array and then add to each element. Although numpy operations tend to be faster than list operations, converting, flattening, and reverting is cumbersome and will probably offset any gains. | 1 | 5 | 1 | Currently, I have a 3D Python list in jagged array format.
A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
Is there any way I could convert this list to a NumPy array, in order to use certain NumPy array operators such as adding a number to each element.
A + 4 would give [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]].
Assigning B = numpy.array(A) then attempting to B + 4 throws a type error.
TypeError: can only concatenate list (not "float") to list
Is a conversion from a jagged Python list to a NumPy array possible while retaining the structure (I will need to convert it back later), or is looping through the array and adding the required value the better solution in this case? | Converting a 3D List to a 3D NumPy array | 0.039979 | 0 | 0 | 7,011
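Since a jagged list can't become a rectangular ndarray, a nested comprehension (the "loop and add" from the answer above) keeps the original structure; the constant 4 matches the question's example.

A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
B = [[[x + 4 for x in row] for row in block] for block in A]
print(B)   # [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]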
40,293,466 | 2016-10-27T20:14:00.000 | 1 | 1 | 0 | 0 | python,curl,ibm-watson,retrieve-and-rank | 40,302,783 | 1 | false | 1 | 0 | Sorry, no - there isn't a public supported API for submitting questions for use in the tool.
(That wouldn't stop you looking to see how the web tool does it and copying that, but I wouldn't encourage that as the auth step alone would make that fairly messy). | 1 | 0 | 0 | Is there a way to upload questions to "Retrieve and Rank" (R&R) using cURL and have them be visible in the web tool?
I started testing R&R using web tool (which I find very intuitive). Now, I have started testing the command line interface (CLI) for more efficient uploading of question-and-answer pairs using train.py. However, I would still like to have the questions visible in web tool so that other people can enter the collection and perform training there as well. Is it possible in the present status of R&R? | Upload questions to Retrieve and Rank using cURL, visible in webtool | 0.197375 | 0 | 1 | 85 |
40,295,356 | 2016-10-27T22:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.4,zlib | 40,295,652 | 1 | false | 0 | 0 | You might need to install the zlib-devel package as well: yum install zlib-devel.
Otherwise you need to post the full error message when you try to run import zlib. | 1 | 0 | 0 | I am trying to add zlib package to my python. However, after I install it using "yum install zlib", only the default python (which is python 2.4.3) can import it. While the other python (3.4.4) still cannot use zlib.
When I try to import zlib in python 3.4.4 by
import zlib
It displays
Traceback (most recent call last):
File "< stdin >", line 1, in
ImportError: No module named 'zlib'
My question is, how can I install package to python which is not default?
PS. I installed both zlib and zlib-devel
Thanks | How to install a package to python which is not default | 0 | 0 | 0 | 124 |
40,295,665 | 2016-10-27T23:18:00.000 | 1 | 0 | 1 | 0 | python | 40,295,827 | 1 | false | 0 | 0 | The difference between a tuple and a list is that lists are mutable and tuples are not. So its not about data type safety, its about whether or not you want the elements to be able to be changed | 1 | 0 | 0 | Is there a reason that Python allows lists to contain multiple types? If I had a mixed collection of objects I would think the safe data type to use would be a tuple. Also, I find it strange that list methods (like sort) can be called on mixed lists so I assume there must be a good reason for allowing this. It appears at first glance that this would make writing type safe functions much more difficult. | Why does Python allow mixed type lists? | 0.197375 | 0 | 0 | 392 |
40,296,765 | 2016-10-28T01:49:00.000 | 1 | 0 | 0 | 0 | python,nlp,gensim,doc2vec | 41,733,461 | 1 | false | 0 | 0 | Gensim's Word2Vec/Doc2Vec models don't store the corpus data – they only examine it, in multiple passes, to train up the model. If you need to retrieve the original texts, you should populate your own lookup-by-key data structure, such as a Python dict (if all your examples fit in memory).
Separately, in recent versions of gensim, your code will actually be doing 1,005 training passes over your taggeddocs, including many with a nonsensically/destructively negative alpha value.
By passing it into the constructor, you're telling the model to train itself, using your parameters and defaults, which include a default number of iter=5 passes.
You then do 200 more loops. Each call to train() will do the default 5 passes. And by decrementing alpha from 0.025 by 0.002 199 times, the last loop will use an effective alpha of 0.025-(200*0.002)=-0.375 - a negative value essentially telling the model to make a large correction in the opposite direction of improvement each training-example.
Just use the iter parameter to choose the desired number of passes. Let the class manage the alpha changes itself. If supplying the corpus when instantiating the model, no further steps are necessary. But if you don't supply the corpus at instantiation, you'll need to do model.build_vocab(tagged_docs) once, then model.train(tagged_docs) once. | 1 | 1 | 1 | I am preparing a Doc2Vec model using tweets. Each tweet's word array is considered as a separate document and is labeled as "SENT_1", SENT_2" etc.
taggeddocs = []
for index,i in enumerate(cleaned_tweets):
if len(i) > 2: # Non empty tweets
sentence = TaggedDocument(words=gensim.utils.to_unicode(i).split(), tags=[u'SENT_{:d}'.format(index)])
taggeddocs.append(sentence)
# build the model
model = gensim.models.Doc2Vec(taggeddocs, dm=0, alpha=0.025, size=20, min_alpha=0.025, min_count=0)
for epoch in range(200):
if epoch % 20 == 0:
print('Now training epoch %s' % epoch)
model.train(taggeddocs)
model.alpha -= 0.002 # decrease the learning rate
model.min_alpha = model.alpha # fix the learning rate, no decay
I wish to find tweets similar to a given tweet, say "SENT_2". How?
I get labels for similar tweets as:
sims = model.docvecs.most_similar('SENT_2')
for label, score in sims:
print(label)
It prints as:
SENT_4372
SENT_1143
SENT_4024
SENT_4759
SENT_3497
SENT_5749
SENT_3189
SENT_1581
SENT_5127
SENT_3798
But given a label, how do I get original tweet words/sentence? E.g. what are the tweet words of, say, "SENT_3497". Can I query this to Doc2Vec model? | How to extract words used for Doc2Vec | 0.197375 | 0 | 0 | 1,260 |
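A minimal sketch of the lookup-by-key suggestion in the answer above, built alongside the TaggedDocuments so each SENT_n tag maps back to its tweet's words. It assumes the cleaned_tweets list and the trained model from the question (and the older gensim API used there).

tag_to_words = {}
for index, tweet in enumerate(cleaned_tweets):
    if len(tweet) > 2:                                  # same filter used when building taggeddocs
        tag_to_words[u'SENT_{:d}'.format(index)] = tweet

for label, score in model.docvecs.most_similar('SENT_2'):
    print(label, score, tag_to_words[label])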
40,297,196 | 2016-10-28T02:38:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x,atom-editor | 40,298,315 | 1 | false | 1 | 0 | Django has a debugger environment:
$ mkdir myvenv
$ cd myvenv
$ python3 -m venv myvenv
$ source myvenv/bin/activate
Now your prompt is: (myvenv)diego@AspireM1640 ~/www/myvenv $
Go to your project folder, run the server debug: python manage.py runserver or for intranet with IP:python manage.py runserver 192.168.1.33:8000 | 1 | 0 | 0 | I'm using Atom for development python. I created a simple project with python and Django. I have already installed
python-debugger
But how can I run/debug it in a Django view? | How to run debug in view django with Atom python-debugger? | 0 | 0 | 0 | 1,402
40,298,966 | 2016-10-28T06:04:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,data-mining,categorical-data | 40,315,040 | 2 | true | 0 | 0 | It depends on what type of model you're using: make_pipeline(LabelEncoder, OneHotEncoder) or pd.get_dummies is the usual choice, and can work well with classifiers from either linear_model or tree. LabelEncoder on its own would be another choice, although this won't work well unless there's a natural ordering on your labels (like level of education or something) or unless you're using very deep trees, which are able to separate individual labels. | 1 | 0 | 1 | I am new to data mining. I have a data set which includes directors' names. What is the right way to convert them to something that scikit-learn estimators can use without problems?
From what I found on the internet I thought that sklearn.preprocessing.LabelEncoder is the right choice. | How to handle nominal data in scikit learn, python? | 1.2 | 0 | 0 | 861 |
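A tiny illustration of the pd.get_dummies option mentioned in the answer above; the director names are made up.

import pandas as pd

df = pd.DataFrame({"director": ["Nolan", "Bigelow", "Nolan", "Villeneuve"]})
encoded = pd.get_dummies(df, columns=["director"])
print(encoded.head())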
40,299,538 | 2016-10-28T06:46:00.000 | 0 | 0 | 1 | 0 | python,group-by,itertools | 40,299,648 | 2 | false | 0 | 0 | Bucket them. You'd need to manually work out the breaks in advance. Can you sort them in advance? That would make it easier.
Actually, if you use log, then a multiplicative threshold turns into a constant threshold, e.g. 0.98..1.02 in log-land ~= (-0.02, +0.02).
So, use the log of all your numbers.
You'll still need to bucket them before doing groupby.
If you want code, give us a better (random-seeded) reproducible example that has more numbers testing the corner cases. | 1 | 0 | 1 | I have a list of numbers and I need to group it. itertools.groupby works perfectly for sequences of the same numbers, but I need the same behavior for numbers within a threshold (2-3%).
E.g.: lst = [1, 500, 19885, 19886, 19895, 90000000]
and I expect [[1], [500], [19885, 19886, 19895], [90000000]]
Can you suggest me something? | Python groupby threshold | 0 | 0 | 0 | 415 |
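One way to do the sort-then-bucket idea from the answer above: sort, then start a new group whenever the next value is more than ~3% above the last value already in the current group (the 3% figure comes from the question).

lst = [1, 500, 19885, 19886, 19895, 90000000]
threshold = 0.03

groups = []
for x in sorted(lst):
    if groups and x <= groups[-1][-1] * (1 + threshold):
        groups[-1].append(x)
    else:
        groups.append([x])

print(groups)   # [[1], [500], [19885, 19886, 19895], [90000000]]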
40,306,581 | 2016-10-28T13:52:00.000 | 0 | 0 | 1 | 0 | python | 40,306,767 | 3 | false | 0 | 0 | The os lib can also do the trick for you; it returns True or False
import os
os.path.exists(dir_or_file_to_check) | 1 | 0 | 0 | I want to parse some program's log, and I want to check the existence of a core file in the log directory.
Assume the path is the log directory, and core file names always begin with the string core (e.g., the name is core.20161027.183805.28273.0001.dmp). Then is there any direct API I can use to check for a core file in the path directory?
Thanks | how to check existence of a core files in directory using Python | 0.066568 | 0 | 0 | 826 |
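A hedged sketch that checks for any file starting with "core" in the log directory, combining glob with the os.path idea from the answers above; the directory path is a placeholder.

import glob
import os

log_dir = "/path/to/log"                       # placeholder
core_files = glob.glob(os.path.join(log_dir, "core*"))
if core_files:
    print("core file(s) found:", core_files)
else:
    print("no core files in", log_dir)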
40,306,972 | 2016-10-28T14:11:00.000 | 1 | 0 | 0 | 0 | linux,python-3.x | 40,307,020 | 3 | false | 0 | 0 | Why don't you use the Udev to force the location by your self, simply you can create a UDEV script that keep listening on the drives insertion and map the inserted USB drive to specific location on the machine | 1 | 0 | 0 | How do I find (in Python 3.x) the default location where flash drives automatically mount when plugged in on a computer that I happen to be using at the time? (It could be any of various non-specific Linux distributions and older/new versions. Depending on which one it is, it may mount at such locations as /media/driveLabel, /media/userName/driveLabel, /mnt/driveLabel, etc.)
I was content just assuming /media/driveLabel until Ubuntu updated its default mount location to include the username (so, now I can't use a static location for bookmarked file settings of a portable app I made across my computers, since I use multiple usernames). So, the paths for the bookmarked files need to be updated every time I use a new computer or user. Note that files on the hard drives are also bookmarked (so, those don't need to be changed; they're set not to load if you're not on the right computer for them).
Anyway, now I'm not content just going with /media mounts, if there's a solution here. I would prefer to be able to find this location without having to mount something and find a path's mount location first, if possible (even though that may help me with the problem that sparked the question). It seems like there should be some provision for this, whether in Python, or otherwise.
In other words, I want to be able to know where my flash drive is going to mount (sans the drive label part)—not where it's already mounted.
EDIT: If /media/username/drivelabel is pretty standard for the automatic mounts across all the major distributions that support automatic mounting (the latest versions, at least, as I seem to recall that Ubuntu didn't always include the username), feel free to let me know, as that pretty much answers the question. Or, you could just tell me a list of automatic flash drive mount locations specific to which major distributions. I guess that could work (though I'd have to update it if they changed things).
FYI EDIT: For my problem I'll probably just save the mount location with the bookmark (so my program knows what part of the bookmark path it was when I open it), and replace that in the bookmark path with the new current mount location when a user loads the bookmark. | How do I find the location where flash drives automatically mount in Python? | 0.066568 | 0 | 1 | 607 |
40,309,098 | 2016-10-28T16:12:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 72,368,192 | 7 | false | 0 | 0 | In JupyterLab, you can view two notebooks arranged as panes side-by-side. (Or even two views of the same notebook.)
Then you can select a cell or continuous range of them. When they are highlighted go to the top cell and click and drag over to the other notebook to copy them. | 5 | 100 | 0 | I am trying to copy cells from one jupyter notebook to another. How this is possible? | Is it possible to copy a cell from one jupyter notebook to another? | 0 | 0 | 0 | 66,536 |
40,309,098 | 2016-10-28T16:12:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 72,355,254 | 7 | false | 0 | 0 | VSCode can open and execute jupyter notebooks.
In the same software it is also possible to cut/copy and paste from one notebook to another (something that I didn't manage to do with jupyter notebook or lab).
It saved me a lot of time. | 5 | 100 | 0 | I am trying to copy cells from one jupyter notebook to another. How this is possible? | Is it possible to copy a cell from one jupyter notebook to another? | 0 | 0 | 0 | 66,536 |
40,309,098 | 2016-10-28T16:12:00.000 | 2 | 0 | 1 | 0 | python,jupyter-notebook | 60,544,868 | 7 | false | 0 | 0 | For windows-
Use Ctrl + Shift + C to copy cells after selecting them using shift + arrow keys.
Then, switch to the notebook to which you want to copy the selected cells and go to command mode in it by pressing Esc key.
Then, use Ctrl + Shift + V to paste the cells in that notebook.
Note- I have not tested this on Linux but should work just as the procedure above. | 5 | 100 | 0 | I am trying to copy cells from one jupyter notebook to another. How this is possible? | Is it possible to copy a cell from one jupyter notebook to another? | 0.057081 | 0 | 0 | 66,536 |
40,309,098 | 2016-10-28T16:12:00.000 | 3 | 0 | 1 | 0 | python,jupyter-notebook | 40,309,208 | 7 | false | 0 | 0 | I have not done it myself though, but general practice is to avoid doing it as it can disturb the Cell JSON. It was not even possible until a few versions before. Recent Github posts has made it possible to do so though. Copy paste the cell in question to a code editor such as Atom or Sublime Text, make the changes you want to do and then paste it into the new Jupyter notebook. It should work. | 5 | 100 | 0 | I am trying to copy cells from one jupyter notebook to another. How this is possible? | Is it possible to copy a cell from one jupyter notebook to another? | 0.085505 | 0 | 0 | 66,536 |