Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,402,076 | 2017-02-22T20:53:00.000 | 0 | 0 | 1 | 0 | python,pip,jython,robotframework | 42,432,425 | 2 | true | 1 | 0 | In Jython, pip is not a module you install separately, but an executable already included at c:\Jython\bin\pip.exe. | 1 | 2 | 0 | Could anyone please tell me which Jython package you are using where you get the pip package too.
Or could anyone please share the folder location?
I ask because, with whatever version of Jython I use, I get an error while running the command: "jython -m pip install robotframework".
Error: Jython.exe: No module named pip.
N.B.: I have both Jython 2.7.0 and jython-installer-2.7.1b3. | not able to install pip install robotframework on jython 2.7.0 and 2.7.1b3. Getting error as "jython.exe: No module named pip." | 1.2 | 0 | 0 | 644 |
42,405,357 | 2017-02-23T01:19:00.000 | 0 | 0 | 1 | 1 | python | 42,405,571 | 1 | false | 0 | 0 | It looks like you are using the system Python. I am on macOS myself and I have gone crazy several times over Apple's tricks. I strongly advise installing Python with Anaconda: it is very simple, and then you can try as many environments as you want, with different versions of Python and of the modules. You also get much better control.
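As a side note, whichever interpreter you end up using, you can often sidestep the immediate UnicodeDecodeError by being explicit about how undecodable bytes are handled when opening the file. A minimal sketch (the filename is an assumption):
with open("sequences.fasta", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        print(line.rstrip())  # undecodable bytes appear as the U+FFFD replacement character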
Sorry if this is not a fully documented answer; it is more like a comment, but I do not have permission to post comments anymore (reputation loss due to a bounty). I hope this helps. | 1 | 0 | 0 | Please can anyone help me: I am reading a FASTA file with Python 3.6 or 3.5 on my macOS Sierra and getting this error, but the code works properly when run on a Windows machine with Python 3.5.2.
Please can anyone tell me what the actual problem is?
I installed Python twice on my Mac but nothing works.
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 647: invalid continuation byte | Getting this error when running python 3 version on macOS sierra 10.12 | 0 | 0 | 0 | 141 |
42,405,493 | 2017-02-23T01:36:00.000 | 0 | 0 | 0 | 0 | python,mysql,sql,python-3.x,pandas | 42,406,043 | 1 | false | 0 | 0 | I am not familiar with pandas, but speaking strictly from a database point of view, you could just have your pandas values inserted into a PANDA_VALUES table and then join that PANDA_VALUES table with the table(s) you want to grab your data from.
Assuming you have indexes in place on both the PANDA_VALUES table and the table with your column, the JOIN would be quite fast.
Of course, you will have to have a process in place to keep the PANDA_VALUES table updated as the business needs change.
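For illustration, a minimal sketch of this approach with pandas and SQLAlchemy (the connection string, table and column names are all assumptions):
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

# Push the wanted keys into the scratch table, replacing any old contents.
wanted = pd.Series([101, 205, 333], name="wanted_key")
wanted.to_frame().to_sql("panda_values", engine, if_exists="replace", index=False)

# Let the database do the filtering with an indexed join.
df = pd.read_sql(
    "SELECT t.* FROM big_table t JOIN panda_values p ON t.key_col = p.wanted_key",
    engine,
)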
Hope it helps. | 1 | 0 | 1 | I generally use Pandas to extract data from MySQL into a dataframe. This works well and allows me to manipulate the data before analysis. This workflow works well for me.
I'm in a situation where I have a large MySQL database (multiple tables that will yield several million rows). I want to extract the data where one of the columns matches a value in a Pandas series. This series could be of variable length and may change frequently. How can I extract data from the MySQL database where one of the columns of data is found in the Pandas series? The two options I've explored are:
Extract all the data from MySQL into a Pandas dataframe (using pymysql, for example) and then keep only the rows I need (using df.isin()).
or
Query the MySQL database using a query with multiple WHERE ... OR ... OR statements (and load this into Pandas dataframe). This query could be generated using Python to join items of a list with ORs.
I guess both these methods would work but they both seem to have high overheads. Method 1 downloads a lot of unnecessary data (which could be slow and is, perhaps, a higher security risk) whilst method 2 downloads only the desired records but it requires an unwieldy query that contains potentially thousands of OR statements.
Is there a better alternative? If not, which of the two above would be preferred? | Selecting data from large MySQL database where value of one column is found in a large list of values | 0 | 1 | 0 | 352 |
42,405,551 | 2017-02-23T01:41:00.000 | 2 | 0 | 0 | 0 | python,django,ubuntu,virtualenv,pythonpath | 42,462,927 | 2 | false | 1 | 0 | OK, I found out what the problem was. It turns out that when I created my virtualenv I used the sudo command, but when I pip-installed my packages I didn't use sudo, which caused a permission problem of some sort when installing the packages. That made Django not show up on the path. When creating a virtualenv, never use the sudo command... | 2 | 1 | 0 | I am trying to deploy my Django projects on Amazon AWS using Ubuntu 16.04. I am running Python 2.7.12 and Django 1.10.5. I created my virtualenv named venv and then activated it.
I get this error when I try to run python manage.py runserver.
Traceback (most recent call last):
File "manage.py", line 17, in
"Couldn't import Django. Are you sure it's installed and "
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
Then I realize Django might not be in my python path. So I added
export PYTHONPATH="/usr/local/lib/python2.7/dist-packages/django"
into my venv/bin/activate script. Now with the virtualenv activated I can go into python and type
import sys
sys.path
['', '/usr/local/lib/python2.7/dist-packages/django', '/home/ubuntu/TravelBuddy/venv/lib/python2.7', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/plat-x86_64-linux-gnu', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-old', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/local/lib/python2.7/site-packages', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/site-packages']
As you can see now django is indeed in my python path. I thought this was going to fix the problem but it didn't: it still says couldn't import Django. Now I am confused because when I deactivate my virtualenv and import Django it does work.
this is what prints out when I deactivate my virtualenv and do sys.path
['', '/usr/local/lib/python2.7/dist-packages/django', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] | couldn't import django in virtualenv but works when deactivated | 0.197375 | 0 | 0 | 13,406 |
42,405,551 | 2017-02-23T01:41:00.000 | 0 | 0 | 0 | 0 | python,django,ubuntu,virtualenv,pythonpath | 44,610,084 | 2 | false | 1 | 0 | 1- install python3
brew install python3
2- install django
pip3 install django | 2 | 1 | 0 | I am trying to deploy my Django Projects on Amazon AWS using Ubuntu 16.04. I am running python version 2.7.12 and Django 1.10.5. I created my virtualenv named venv and then activated it.
I get this error when I try to run python manage.py runserver.
Traceback (most recent call last):
File "manage.py", line 17, in
"Couldn't import Django. Are you sure it's installed and "
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
Then I realize Django might not be in my python path. So I added
export PYTHONPATH="/usr/local/lib/python2.7/dist-packages/django"
into my venv/bin/activate script. Now with the virtualenv activated I can go into python and type
import sys
sys.path
['', '/usr/local/lib/python2.7/dist-packages/django', '/home/ubuntu/TravelBuddy/venv/lib/python2.7', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/plat-x86_64-linux-gnu', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-old', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/home/ubuntu/TravelBuddy/venv/local/lib/python2.7/site-packages', '/home/ubuntu/TravelBuddy/venv/lib/python2.7/site-packages']
As you can see now django is indeed in my python path. I thought this was going to fix the problem but it didn't: it still says couldn't import Django. Now I am confused because when I deactivate my virtualenv and import Django it does work.
this is what prints out when I deactivate my virtualenv and do sys.path
['', '/usr/local/lib/python2.7/dist-packages/django', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] | couldn't import django in virtualenv but works when deactivated | 0 | 0 | 0 | 13,406 |
42,406,596 | 2017-02-23T03:42:00.000 | 1 | 0 | 0 | 1 | python,hadoop,mapreduce | 42,406,958 | 3 | false | 0 | 0 | your job is generating 1 file per mapper, you have to force a reducer phase using 1 reducer to do this, you can accomplish this emitting the same key in all the mappers. | 2 | 1 | 0 | I have about 170 GB data. I have to analyze it using hadoop 2.7.3. There are 14 workers. I have to find total of unique MIME type of each document e.g. total number of documents that are text/html type. When I run mapreduce job(written in python), Hadoop returns many output files instead of single one that I am expecting. I think this is due to many workers that process some data seprately and give output. I want to get single output. Where is the problem. How I can restrict hadoop to give single output (by combining all small output files). | How to combine hadoop mappers output to get single result | 0.066568 | 0 | 1 | 750 |
42,406,596 | 2017-02-23T03:42:00.000 | 1 | 1 | 0 | 1 | python,hadoop,mapreduce | 42,414,979 | 3 | false | 0 | 0 | Make your mapper emit (doc-mime-type, 1) for each document processed, then count up all such pairs in the reduce phase. In essence, it is a standard word-count exercise, except your mappers emit 1s for each doc's MIME type.
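A minimal Hadoop Streaming sketch of that idea (the file names and the position of the MIME type in each record are assumptions):
#!/usr/bin/env python
# mapper.py - emit (mime_type, 1); assumes the MIME type is the second
# tab-separated field of every input line.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) > 1:
        print("%s\t1" % fields[1])

#!/usr/bin/env python
# reducer.py - sum the counts per MIME type; streaming delivers the mapper
# output sorted by key, so equal keys arrive together.
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    if key != current_key and current_key is not None:
        print("%s\t%d" % (current_key, count))
        count = 0
    current_key = key
    count += int(value)
if current_key is not None:
    print("%s\t%d" % (current_key, count))
Running the job with a single reducer (e.g. -numReduceTasks 1) then yields one output file.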
Regarding number of reducers to set: Alex's way of merging reducers' results is preferable as allows to utilize all your worker nodes at reduce stage. However, if job to be run on 1-2 nodes then just one reducer should work fine. | 2 | 1 | 0 | I have about 170 GB data. I have to analyze it using hadoop 2.7.3. There are 14 workers. I have to find total of unique MIME type of each document e.g. total number of documents that are text/html type. When I run mapreduce job(written in python), Hadoop returns many output files instead of single one that I am expecting. I think this is due to many workers that process some data seprately and give output. I want to get single output. Where is the problem. How I can restrict hadoop to give single output (by combining all small output files). | How to combine hadoop mappers output to get single result | 0.066568 | 0 | 1 | 750 |
42,408,247 | 2017-02-23T06:05:00.000 | 5 | 0 | 0 | 1 | python-2.7,lftp | 42,412,125 | 1 | false | 0 | 0 | You can either suspend the whole lftp process (command suspend) or limit transfer rate to e.g. 1Bps (set net:limit-total-rate 1). In either case the files being transferred remain open.
You can also stop the transfer and continue it later using -c option of get or mirror. | 1 | 4 | 0 | Googled around and looked on this forum but couldn't find if I can pause a download using lftp.
Currently downloading tons of logs and would like to pause, add more drives to the system and continue downloading.
Thanks | lftp pause and resume download | 0.761594 | 0 | 0 | 3,150 |
42,417,542 | 2017-02-23T13:45:00.000 | 0 | 0 | 0 | 0 | python,ffmpeg,frame | 42,419,760 | 1 | false | 0 | 0 | There is no such thing as a frame number. You can count frames from the start of the video, but the frame itself does not record that information. If you extract a frame to make a stand-alone image, its frame number is now 1 of 1. | 1 | 1 | 0 | I have a video file from which I've extracted a specific frame to analyze. However, I want to know what the frame number of this frame is.
I can't seem to find anything. I've had a look at ffmpeg showinfo, but that doesn't seem to work.
I've also looked into exifread, which produced information about the frame - except for the frame number.
Any ideas? | Identify a specific frame number of a video file | 0 | 0 | 0 | 491 |
42,418,948 | 2017-02-23T14:46:00.000 | 1 | 0 | 0 | 0 | python,pattern-matching,classification,svm,image-recognition | 42,419,250 | 1 | true | 0 | 0 | If you want to use SVM as a classifier it does not make a lot of sense to make one average histogram for male and one for female because when you train you SVM classifier you can take all the histograms into account, but if you compute the average histograms you can use a nearest neighbor classifier instead. | 1 | 1 | 1 | I am working on a personal project: gender classification (male | female) in python. I'm beginner in this domain
I computed histograms for every image in training data.
Now, to test whether an image is male or female, is it possible to make an average histogram for male/female and compare the test histogram against it? Or must I compare all histograms with the test histogram?
If it is possible to make an average, how should I do it?
Also, is it OK to use SVM for classification?
PS. I am looking for free faces databases.
Thanks | python lbp image classification | 1.2 | 0 | 0 | 470 |
42,421,036 | 2017-02-23T16:19:00.000 | 0 | 0 | 1 | 0 | python,django | 42,421,652 | 3 | false | 1 | 0 | The first query does not return a dictionary with two keys. On the contrary, it returns a ValuesQuerySet; each element of that queryset is a dictionary.
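Concretely, using the Item model from the question (a sketch; nothing hits the database until the loop iterates):
from django.db.models import Count

qs = Item.objects.values('type', 'state')  # lazy; each element will be a dict
qs = qs.annotate(nb=Count('id'))           # still lazy; adds COUNT(id) and a GROUP BY to the SQL
for row in qs:                             # the query actually runs here
    print(row['type'], row['state'], row['nb'])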
The ValuesQuerySet, like any other queryset, retains a connection with the model, and it is therefore able to add any other elements to the query as necessary. The query as a whole is not executed until the queryset is iterated. | 1 | 0 | 0 | I have spent 5+ hours of training trying to explain how Item.objects.values('type', 'state') returns a dictionary that contains only two keys.
However, Item.objects.values('type', 'state').annotate(nb=Count('id')) works!
How does the interpreter know that the id attribute exists if it's not returned by the values function? | django values function strange behaviors? | 0 | 0 | 0 | 37 |
42,423,038 | 2017-02-23T18:04:00.000 | 3 | 0 | 1 | 0 | ipython,pycharm | 42,793,520 | 2 | true | 0 | 0 | (tl;dr: Use jupyter console --existing in the PyCharm "Terminal" tool window (not the "Python Console" tool window) to connect to an existing iPython kernel running in a local Jupyter Notebook server.)
I can confirm that the comment by @john-moutafis suggesting ipython console --existing is the right idea. The command gives "WARNING | You likely want to use jupyter console in the future" so I tried that.
I have a project using a conda environment as its interpreter. Jupyter Notebook is installed in the conda environment.
I open the Terminal tool window. It automatically activates the conda environment.
I type jupyter notebook. The notebook server starts and a browser window opens.
I create a notebook in the browser, and execute a cell containing foo = "bar".
In PyCharm, I open another Terminal tool window by clicking the plus sign to the left of the terminal pane.
In the new terminal I type jupyter console --existing, and it starts an ipython console session.
At the prompt I type dir(), and foo is among the results, confirming that I'm attached to the same kernel as the notebook.
I don't know how it picks which kernel to connect to when there are multiple kernels running in the notebook server.
Don't type exit in the iPython session if you plan to continue using the notebook, it shuts down the kernel.
Unfortunately, tools like Debug and "Execute Line/Selection in Console", which are available for the "Python Console" tool window, are not available for the "Terminal" tool window. In fact, because the Terminal tool window is a simple tool, and that's where I've run my commands, this solution isn't very integrated with PyCharm. The terminal opens in the project directory and activates the conda environment, and it's conveniently adjacent to the editors and tools of the IDE, but otherwise there's no connection to PyCharm's tools.
If anyone can successfully attach PyCharm's integrated PyDev debugger to a running kernel, please chime in.
I'm using PyCharm 2016.3 on macOS 10.12.3. | 2 | 8 | 0 | Is there a way to open an IPython interactive console in pycharm that is connected to an existing running kernel (similar to "python --existing")?
btw: in case it's relevant, in my case, the running kernel is of a Jupyter notebook...
EDIT: To clarify, my question is NOT about how to open an interactive console in PyCharm. It is about how to connect that interactive console to an existing running (Jupyter notebook) kernel. | how to open an IPython console connected to an existing running kernel in PyCharm | 1.2 | 0 | 0 | 2,521 |
42,423,038 | 2017-02-23T18:04:00.000 | 0 | 0 | 1 | 0 | ipython,pycharm | 47,066,267 | 2 | false | 0 | 0 | The easiest way for me is just to type %qtconsole in a jupyter notebook cell and run it. A qt console will open already connected to the running kennel. No PyCharm involved. | 2 | 8 | 0 | Is there a way to open an IPython interactive console in pycharm that is connected to an existing running kernel (similar to "python --existing")?
btw: in case it's relevant, in my case, the running kernel is of a Jupyter notebook...
EDIT: To clarify, my question is NOT about how to open an interactive console in PyCharm. It is about how to connect that interactive console to an existing running (Jupyter notebook) kernel. | how to open an IPython console connected to an existing running kernel in PyCharm | 0 | 0 | 0 | 2,521 |
42,423,373 | 2017-02-23T18:21:00.000 | 0 | 1 | 0 | 0 | c#,python,hex,usb,raspberry-pi3 | 42,424,805 | 1 | true | 0 | 0 | Yes, you can do this by using libraries such as pyserial, as Leon said, for the serial communication.
For the SQL database, you can use sqlalchemy to manage it.
This module (PZEM-004T) uses TTL serial communication, so if yours is not sold with a USB adapter you need one, such as an FTDI232-based adapter, for example.
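A minimal pyserial sketch of such a request/reply exchange (the device path, baud rate and timeout are assumptions; the hex frames are taken from the question):
import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=2.0)  # adapter path is an assumption

request = bytes([0xB1, 0xC0, 0xA8, 0x01, 0x01, 0x00, 0x1B])  # hex command frame
ser.write(request)

reply = ser.read(7)  # e.g. A1 00 11 20 00 00 D2
print(' '.join('%02X' % b for b in reply))  # raw bytes, ready to convert to decimal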
I don't know what your program is intended for, but as it's a datalogger, if you want it to run every time your Raspberry Pi reboots you can call it from your /etc/rc.local | 1 | 0 | 0 | Hi everyone, but first let me say sorry about my English. I hope you guys will understand what I mean :)
Question:
Is it possible for a Raspberry Pi with the Raspbian OS to communicate with a PZEM-004T energy monitor via a USB port? I want to use Python to send hexadecimal codes to request values such as voltage, current, power and energy, then read the data that the module (PZEM-004T) replies with, and keep it in phpMyAdmin.
For example:
If I send the hex command code B1 C0 A8 01 01 00 1B,
the module will reply with the data A1 00 11 20 00 00 D2.
Then I convert the replied data to decimal and keep it in the database.
Please suggest the best way to succeed at this challenge :) | Is it possible? Python send Hex code via usb port (raspberry pi) | 1.2 | 0 | 0 | 593 |
42,426,498 | 2017-02-23T21:27:00.000 | 1 | 0 | 1 | 0 | python,jupyter,rise | 69,008,555 | 2 | false | 0 | 0 | I had the same issue on chrome, somehow the issue for me was solved by simply changing the jupyterlab theme to jupyterlab dark. This issue persisted only on the light theme. Although this might be a temporary fix for the issue. | 1 | 4 | 0 | I am using Jupyter Notebook to build slides. Until yesterday there were two buttons right from the CellToolBar button to start the slide show (one button was RISE.js). All of the sudden these buttons are gone. Now there are no buttons on the right side of the CellToolBar button at all.
I tried conda update jupyter and conda update -c damianavila82 rise using the terminal.
But I get the message # All requested packages already installed. So the issue appears to be somewhere else.
Does anyone know what I could do to restore the buttons?
I am on a mac
The version of the jupyter notebook server is 4.3.1
RISE version is 4.0.0b1
Anaconda is 4.3.0
Python 3.5.2
Thanks! | Python / Jupyter Notebook Slide buttons gone? | 0.099668 | 0 | 0 | 2,507 |
42,430,232 | 2017-02-24T03:29:00.000 | 1 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 56,536,495 | 8 | false | 0 | 0 | Got the solution to this problem.
Here is a bot which automatically forwards messages from one channel to another without the forward tag.
Moreover the copying speed is legit!
@copythatbot
This is the golden tool everyone is looking for. | 3 | 3 | 0 | I have many telegram channels, 24\7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | How can I redirect messages from telegram channels that are in a certain format? [telegram bot] | 0.024995 | 0 | 1 | 36,182 |
42,430,232 | 2017-02-24T03:29:00.000 | 4 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 42,467,337 | 8 | false | 0 | 0 | You cannot scrape from a telegram channel with a bot unless the bot is an administrator in the channel, which only the owner can add.
Once that is done, you can easily redirect posts to your channel by listening for channel_post updates. | 3 | 3 | 0 | I have many telegram channels, 24\7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | How can I redirect messages from telegram channels that are in a certain format? [telegram bot] | 0.099668 | 0 | 1 | 36,182 |
42,430,232 | 2017-02-24T03:29:00.000 | 2 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot | 42,441,369 | 8 | false | 0 | 0 | This is very easy to do with the full Telegram API.
First, on your mobile phone, subscribe to all the channels of interest.
Next you develop a simple Telegram client that receives all the updates from these channels.
Next you build some parsers that can understand the channel messages and filter out what you are interested in (a sketch of such a parser follows this list).
Finally you send the filtered content (re-formatted) to your own channel.
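A sketch of such a parser for the message formats shown in the question (the field names are my own):
import re

SIGNAL = re.compile(
    r'(?=.*\b(?P<side>buy|sell)\b)'        # direction, anywhere in the text
    r'(?=.*\bsl\s+(?P<sl>\d+(?:\.\d+)?))'  # stop loss
    r'(?=.*\btp\s+(?P<tp>\d+(?:\.\d+)?))'  # take profit
    r'(?=.*\b(?P<pair>[a-z]{6})\b)',       # six-letter currency pair
    re.IGNORECASE,
)

def parse_signal(text):
    m = SIGNAL.search(text)
    return m.groupdict() if m else None

print(parse_signal("eurusd sl 145.2 tp 167.4 SELL"))
# {'side': 'SELL', 'sl': '145.2', 'tp': '167.4', 'pair': 'eurusd'}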
that's all that is required. | 3 | 3 | 0 | I have many telegram channels, 24\7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | How can I redirect messages from telegram channels that are in a certain format? [telegram bot] | 0.049958 | 0 | 1 | 36,182 |
42,431,816 | 2017-02-24T05:59:00.000 | 0 | 1 | 0 | 0 | python,selenium,automated-tests,tableau-api | 42,894,435 | 1 | false | 0 | 0 | Automated testing is not really supported by Tableau and can be quite difficult to implement on your own. There are commercial solutions for this at kinesis-ci.com. They can automate functional testing and integrate with Jenkins or other CI tools. | 1 | 0 | 0 | I have a Tableau report with several dropdown lists. I want to look at particular dropdowns and verify that the list values I am seeing are what I want to see.
Is there a way to automate testing on this part of the report? If so, please give me some pointers. I am pretty new to Tableau and I feel doing manual testing on a dropdown with several hundred values is exhausting. Please suggest possible solutions. Thanks. | Automated Testing of Tableau Reports | 0 | 0 | 0 | 1,048 |
42,436,511 | 2017-02-24T10:39:00.000 | 1 | 0 | 1 | 0 | python,atom-editor,pyscripter | 42,436,782 | 1 | false | 0 | 0 | To comment out multiple lines select the code you want to comment and Ctrl + /
To indent multiple lines select the code you would like to indent and press Tab. Press Shift + Tab to indent backwards.
For python I recommend the following packages:
'autocomplete-python' - Useful auto completion package, completes variables, methods, packages,and functions including their arguments.
'python-indent' - Gives indents after the use of ":" like IDLE does, useful so you don't have to Tab every time.
'atom-python-run' - Allows your python programs to be run straight from atom.
For the auto-complete and python-run packages you need to set them up, giving the path to your Python directory, for them to run correctly. | 1 | 1 | 0 | Hi, I am new to programming and I am using Atom for Python. How can I comment out multiple lines and indent multiple lines, like in PyScripter?
Is there any package for that? | how to comment out multiple lines and give indentation in atom editor | 0.197375 | 0 | 0 | 8,714 |
42,438,998 | 2017-02-24T12:44:00.000 | 1 | 0 | 0 | 0 | python,django,python-requests,celery | 42,439,583 | 1 | true | 1 | 0 | You can put these data into the database/memcache and fetch by userid as a key.
If these data are stateless, it's fine: concurrent processes take the authenticating parameters, construct the request and send it.
If the state changes (unique incrementing request id, changing token, etc.) after each request (or in some requests), you need to implement a singleton manager that provides the correct credentials per request. All tasks should request credentials from this manager. It can also limit the rate, for example.
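As an illustration of the stateless case, a hedged sketch that stores the authenticated session's cookies in a shared cache and rebuilds a requests session inside each task (the cache backend and key scheme are assumptions):
import requests
from django.core.cache import cache

def save_session(user_id, session):
    # Persist only the cookies; they are plain data and easy to share.
    cache.set('api-session-%s' % user_id, session.cookies.get_dict())

def load_session(user_id):
    session = requests.Session()
    session.cookies.update(cache.get('api-session-%s' % user_id) or {})
    return session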
If you would like to pass this object to the task as a parameter, then you need to serialize it. Just make sure it is serializable. | 1 | 1 | 0 | How can I persist an API object across different Celery tasks? I have one API object per user with an authenticated session (python requests) to make API calls. A user_id, csrftoken, etc. is sent with each request.
I need to schedule different tasks in Celery to perform API requests without re-authenticating for each task.
How can I do this? | How can I persist an authenticated API object across different Celery tasks? | 1.2 | 0 | 1 | 152 |
42,440,238 | 2017-02-24T13:38:00.000 | 0 | 0 | 1 | 0 | python | 42,440,311 | 2 | true | 0 | 0 | The name dir is a shortcut for directory. The same shortcut is used for the (unrelated) DOS command dir. | 1 | 2 | 0 | There is a function dir() that can be applied to a module. It gives us the names of all that is defined inside the module.
My doubt may be really silly, but I would like to know what dir in dir() stands for. | Python-function(dir) of import module | 1.2 | 0 | 0 | 96 |
42,441,118 | 2017-02-24T14:27:00.000 | 0 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 42,441,206 | 2 | false | 0 | 0 | %run executes a file, as if you were running it on the command line via the python command
import does what it says, it imports the module into your current notebook, allowing you to use code found in the imported module.
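For instance, a tiny importable settings module (the file name and the particular settings are assumptions) could look like:
# plot_settings.py - shared Matplotlib defaults for all notebooks
import matplotlib.pyplot as plt

def init():
    plt.rcParams['figure.figsize'] = (10, 6)
    plt.rcParams['axes.grid'] = True
Every notebook then just runs import plot_settings followed by plot_settings.init().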
From the sounds of it, since you want to import settings, creating a module that has a function that initialises your settings is probably best, since it's more "pythonic" than running a file beforehand. | 1 | 5 | 0 | I'm unable to find anything meaningful by searching these keywords, so I'm asking here.
What is the main difference between IPython's (when running in a Jupyter notebook) %run and Python's import? If I'd like to import some settings (say, for Matplotlib), for multiple notebooks, which one shall I use? | IPython %run vs. import for loading settings | 0 | 0 | 0 | 2,920 |
42,443,006 | 2017-02-24T15:58:00.000 | 0 | 0 | 0 | 0 | python,django,shell | 42,443,116 | 4 | false | 1 | 0 | From the root of the Django project, run python manage.py shell and do the view actions there. | 1 | 6 | 0 | I would like to get an interactive shell with the code, but I don't even know if such a thing exists. Could anyone help me at this point?
EDIT :
I already know we could use python manage.py shell, but I would like something we could insert into the code so that we do not have to re-import all the libraries in the shell. | Interactive shell in Django | 0 | 0 | 0 | 11,245 |
42,443,016 | 2017-02-24T15:58:00.000 | 0 | 0 | 0 | 0 | python,postgresql,google-bigquery | 42,454,600 | 1 | false | 0 | 0 | Make an example project and see what times you get; if you can accept those times, it's too early to optimize. I see all this being possible in about 3-5 minutes if you have 1 Gbit internet access and the server running on an SSD. | 1 | 1 | 0 | I have about 3 GB across 4-5 tables in Google BigQuery and I want to export these tables to Postgres. Reading the docs, I found I have to do the following steps.
create a job that will extract data to CSV in the google bucket.
From google storage to local storage.
Parse all CSV to database
So for the above steps, is there any efficient way to do all this? I know that steps 1 and 2 can't be skipped, so there is no chance of improving them, but for step 3, from reading online, I found that it will take 2-3 hours to do this process.
Can anyone suggest an efficient way to do this? | Dump Data from bigquery to postgresql | 0 | 1 | 0 | 691 |
42,444,796 | 2017-02-24T17:30:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 51,049,687 | 3 | false | 0 | 0 | I had the same issue and was not able to find any resolution for this while the events file kept on growing in size. My understanding is that this file stores the events generated by tensorflow. I went ahead and deleted this manually.
Interestingly, it never got created again while the other files are getting updated when I run a train sequence. | 1 | 8 | 1 | When using estimator.Estimator in tensorflow.contrib.learn, after training and prediction there are these files in the modeldir:
checkpoint
events.out.tfevents.1487956647
events.out.tfevents.1487957016
graph.pbtxt
model.ckpt-101.data-00000-of-00001
model.ckpt-101.index
model.ckpt-101.meta
When the graph is complicated or the number of variables is big, the graph.pbtxt file and the events files can be very big. Is here a way to not write these files? Since model reloading only needs the checkpoint files removing them won't affect evaluation and prediction down the road. | How to turn off events.out.tfevents file in tf.contrib.learn Estimator | 0 | 0 | 0 | 4,050 |
42,445,637 | 2017-02-24T18:17:00.000 | 2 | 0 | 0 | 0 | python,gmail,gmail-api | 42,448,329 | 1 | true | 1 | 0 | Yes, message body is searched.
Try: "from:example.com OR to:example.com"
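For example, combined with the list call shown in the question (service is assumed to be an authorized Gmail API client):
response = service.users().messages().list(
    userId='me',
    q='from:example.com OR to:example.com',
).execute()
ids = [m['id'] for m in response.get('messages', [])]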
No, Gmail UI and API search is not case-sensitive.
Be aware that service.users().threads().list() would be more consistent with Gmail UI search assuming the user has conversations enabled which is the Gmail UI default.
in:anywhere expands the search to Trash and Spam which is not normally included. Archived messages are normally included. | 1 | 1 | 0 | When I use code similar to the example code from the api documentation, the query strings which in the web interface return results don't work. This is listing messages, not retrieving them, so I don't think full vs raw helps. The scope granted is gmail.readonly
Is it possible to search on message body with this function?
Is there a way to search on domain name (i.e. all messages from or to *@example.com)
Is the search case-sensitive?
service.users().messages().list(userId=user_id, pageToken=page_token, q=query).execute()
I use 'me' for the user_id, and I checked that it's certainly the same email. A Query for in:anywhere on its own returns the full mail list.
Thanks for the help!
EDIT: The query in question is a single word like a name. Some of them sometimes work with 'name is:anywhere' but not consistently. | Query string for messages list returns inconsistent results compared to web interface | 1.2 | 0 | 0 | 47 |
42,446,290 | 2017-02-24T18:56:00.000 | 1 | 0 | 1 | 0 | python,ipython,jupyter-notebook,jupyter,qtconsole | 42,631,413 | 1 | true | 0 | 0 | IPython provides its own introspection/help tools with ?, so if you do object?, you should get similar output to help(object), and it will go into a pager area in current versions. | 1 | 1 | 0 | I was wondering if more interactive help(object) pages can be generated in qtconsole or notebook, like the ipython terminal console and the builtin python command-line tool. For example, some temporary popup encompassing the Qt window/browser tab (respectively) that can be scrolled and searched.
Anyone have any ideas? | In the IPython terminal, help(object) brings up a colored man page in a "less" window. Can help() also have interaction in jupyter qtconsole/notebook? | 1.2 | 0 | 0 | 99 |
42,446,403 | 2017-02-24T19:04:00.000 | 0 | 0 | 0 | 1 | python,html,linux,ubuntu | 42,446,617 | 1 | true | 0 | 0 | I'm going to answer your question but also beg you to consider another approach.
The functionality you are looking for is usually handled by a database. If you don't want to use anything more complex, SQLite is often all you need. You would then need a simple web application that connects to the database, grabs the fields, and then injects them into HTML.
I'd use Flask for this as it comes with Jinja and that's a pretty simple stack to get started with.
If you really want to edit the HTML file directly in Python, you will need write permissions for whatever user is running the Python script. On Ubuntu, that folder is typically owned by www-data if you are running Apache.
Then you'd open the file in Python, perform file operations on it, and then close it.
with open("/var/www/html/somefile.txt", "a") as myfile:  # mode "a" appends, preserving existing content
    myfile.write("l33t h4x0r has completed the challenge!\n")
That's an example of how you'd do a simple append operation in Python. | 1 | 0 | 0 | I'm making a "wargame" like the ones on overthewire.org or smashthestack.org. When you finish the game, the user should get a python program that has extra permissions to edit a file in /var/www/html so that they can sign their name. I want to have a program like this so that they can add text to the html file without removing the text of other users and so that it filters offensive words.
How can I make a file editable by a specific program in Linux? And how can I make the program edit the file in python? Do I just use os.system? | Allow a python file to add to a different file linux | 1.2 | 0 | 0 | 31 |
42,449,783 | 2017-02-24T23:06:00.000 | 4 | 0 | 0 | 0 | python,mysql,peewee | 42,451,623 | 2 | false | 0 | 0 | Turns out that I can set the id to None (obj.id = None) which will create a new record when performing save(). | 1 | 2 | 0 | I have an instance of an object (with many attributes) which I want to duplicate.
I copy it using deepcopy() and then modify a couple of attributes.
Then I save my new object to the database using Python / PeeWee save(), but the save() actually updates the original object (I assume it is because the id was copied from the original object).
(btw no primary key is defined in the object model)
How do I force-save the new object? Can I change its id?
Thanks. | Copy object instance and insert to DB using peewee creates duplicate ID | 0.379949 | 1 | 0 | 1,374 |
42,451,790 | 2017-02-25T03:57:00.000 | 0 | 0 | 0 | 0 | android,python,django,networking | 44,104,349 | 4 | false | 1 | 0 | A related issue I ran into was in relation to this error:
"code 400, message Bad HTTP/0.9 request type ('\***\***\***\***\***\***\***\***\***')
[time stamp] You're accessing the development server over HTTPS, but it only supports HTTP"
Just verify that this is also the reason you are not able to view the page. Of course, you would see this in your server messages if so. | 1 | 4 | 0 | I have a Django server running on my PC. I want to access the server through my Android phone. How can I do that? I tried running
python manage.py runserver 0.0.0.0:8000.
After this, I can access the server from my PC through the PC's IP address, but it is not accessible from another device connected to the same WiFi.
42,451,790 | 2017-02-25T03:57:00.000 | 6 | 0 | 0 | 0 | android,python,django,networking | 42,451,815 | 4 | false | 1 | 0 | Can you try python manage.py runserver <your-ip-address>:8000 (e.g. python manage.py runserver 192.168.0.1:8000)? | 2 | 4 | 0 | I have a Django server running on my PC. I want to access the server through my Android phone. How can I do that? I tried running
python manage.py runserver 0.0.0.0:8000.
After this, I can access the server from my PC through the PC's IP address, but it is not accessible from another device connected to the same WiFi. | Accessing django localhost server through lan | 1 | 0 | 0 | 9,759 |
42,452,980 | 2017-02-25T06:53:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,encoding,utf-8 | 42,453,222 | 1 | false | 0 | 0 | Haha yes, thank you. I also realized I had Python 2.7.13 open, and as soon as I closed it the module started working both ways. Thank you again!! | 1 | 0 | 0 | I appreciate any help I can get. I have searched page after page and have not found a solution that works for my code. Sorry to ask somewhat of a redundant question.
I am using Python 3.6.0 and for the life of me cannot get the darn thing to read my special characters. I have a text file with "ā" and am trying to have my module read how many ā are in the line and the location of them. I have the text file saved with UTF-8 encoding and have added UTF-8 encoding just about everywhere I can think of, and it still will not read that the character exists. I am not getting a traceback error or any error at all; that's probably why I'm stumped.
# coding=utf-8  (note: for a coding declaration to take effect it must be on the first or second line of the file, not after the imports)
import sys
import re

with open("text1.txt", "r", encoding='utf-8') as rf, open('text2.txt', 'w', encoding='utf-8') as wf:
    y = 'ā'
    for line in rf:
        VarTest = line.count(y)
        if VarTest == 1:
            VarLocation = [pos for pos, char in enumerate(line) if char == y]
The counter will not register that the character is on the line, and I'm pretty sure my code for "VarLocation" is incorrect, but VarTest won't even read/count the darn thing.
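One thing worth checking, as a hedged aside: "ā" can be stored either as a single code point or as "a" plus a combining macron, and str.count only matches the exact form that is in the file. For example:
import unicodedata

line = 'a\u0304 vs \u0101'  # decomposed "ā" vs precomposed "ā"
print(line.count('\u0101'))  # 1 - only the precomposed form matches
print(unicodedata.normalize('NFC', line).count('\u0101'))  # 2 - both match after normalizing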
Any help would be appreciated thank you! | Python: Text File Reading Special Characters Issues | 0 | 0 | 0 | 644 |
42,453,445 | 2017-02-25T07:52:00.000 | 0 | 1 | 0 | 0 | python,websocket,nat,stun | 64,177,395 | 2 | false | 0 | 0 | NAT punching is used for peer-to-peer (P2P) communication and your audio streaming server seems to be a client-server implementation.
How and if this is going to work heavily depends on your NAT device (which kind of NAT is implemented). Chances are high that your NAT device has short timeouts and you need to punch holes for every client connection (from your raspberry pi).
As you stated you're using WebSockets and these are always TCP, pystun isn't going to work because pystun only supports UDP.
I'd suggest creating a port forwarding rule in your NAT device, tunneling your traffic using a P2P VPN, or hosting your audio streaming server on a different network. | 1 | 0 | 0 | I have a Raspberry Pi, which is set up as an audio streaming server. I have used websockets and Python as the programming language. The client can listen to the live audio stream by connecting to the server hosted on the Raspberry Pi. The system works well in a localhost environment. Now, I want to access the server from the internet, and by searching I got to know about STUN. I tried to use pystun but I couldn't get the proper port for NAT punching. So can anyone help me implement STUN?
Note: the server is listening at localhost:8000 | How to implement stun with python | 0 | 0 | 1 | 3,652 |
42,457,330 | 2017-02-25T14:32:00.000 | 1 | 1 | 0 | 0 | python,eclipse,pytest | 44,041,455 | 1 | true | 0 | 0 | I did this on a Mac.
Go to "Preferences"
After that "PyDev"
Go to "PyUnit"
Select "Py.test runner"
In the text field "Parameters for test runner" delete key "--verbosity 0"
Also, you can delete this key when setting up a debug or run configuration for your test:
"Edit configuration"
"python unittest"
select your config
go to "arguments"
check "override PyUnit"
and delete key "--verbosity 0" | 1 | 0 | 0 | I am trying to setup pytest on Eclipse and I get the following error
usage: runfiles.py [options] [file_or_dir] [file_or_dir] [...]
runfiles.py: error: unrecognized arguments: --verbosity inifile:
None rootdir: D:\EclipseWorkspace\SeleniumPyTest\PyTestSelenium
I have con
I am not looking for an alternative IDE I only want to use Eclipse. | Eclipse PyTest runfiles.py: error: unrecognized arguments: --verbosity | 1.2 | 0 | 0 | 1,215 |
42,457,692 | 2017-02-25T15:05:00.000 | 0 | 0 | 0 | 0 | python,canvas,tkinter | 42,457,856 | 1 | false | 0 | 1 | Yes. Put a frame in the column, and put as many things as you want in the frame.
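A minimal sketch of that idea (the widget sizes and counts are assumptions):
import tkinter as tk

root = tk.Tk()
tk.Canvas(root, width=400, height=300, bg="white").grid(row=0, column=0)

# The frame occupies a single cell of root's grid but has its own grid inside.
button_frame = tk.Frame(root)
button_frame.grid(row=0, column=1, sticky="n")
for i in range(3):
    tk.Button(button_frame, text="Button %d" % (i + 1)).grid(row=i, column=0)

root.mainloop()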
There are other solutions to the problem. For example, if you have 10 narrow buttons and one wide canvas, you can have the canvas span 10 columns rather than fill a single column. | 1 | 1 | 0 | I have a Tkinter window with a couple of widgets, they are all formatted with
.grid(), but because one of the widgets is a large canvas, the column is stretched out. When assigning a button to that column, it stretches as wide as the widget, which is not ideal. Is there a way of putting more than one button in the same column? | Is a grid within a grid possible? | 0 | 0 | 0 | 187 |
42,458,415 | 2017-02-25T16:14:00.000 | 1 | 0 | 0 | 0 | tensorflow,python-3.5,mnist | 42,461,595 | 1 | true | 0 | 0 | Assuming you are using
from tensorflow.examples.tutorials.mnist import input_data
No, there is no function or argument in that file... What you can do is load all data, and select only the ones and zeros. | 1 | 0 | 1 | I am just starting out with tensorflow and I want to test something only on the 0's and 1's from the MNIST images. Is there a way to import only these images? | Is there any way to only import the MNIST images with 0's and 1's? | 1.2 | 0 | 0 | 98 |
42,459,726 | 2017-02-25T18:07:00.000 | 0 | 0 | 1 | 0 | python,keras | 53,147,988 | 2 | false | 0 | 0 | You probably have to look into more factors.
Look at the system resources, e.g. CPU, memory, disk I/O. (If you use Linux, run the sar command.)
For me, I had a different problem with a frozen notebook, and it turned out to be an issue of low memory. | 2 | 0 | 1 | I did 1 nb_epoch with batch sizes of 10 and it successfully completed. The accuracy rate was absolutely horrible coming in at a whopping 27%. I want to make it run on more than one epoch to see if the accuracy will, ideally, be above 80% or so, but it keeps freezing my Jupyter Notebook if I try to make it do more than one epoch. How can I fix this?
My backend is Theano just for clarification.
There is definitely a correlation between performance and batch_size. I tried doing batch_size=1 and it took 12s of horrifying, daunting, unforgivable time out of my day to do 1 epoch. | Why does my keras model terminate and freeze my notebook if I do more than one epoch? | 0 | 0 | 0 | 519 |
42,459,726 | 2017-02-25T18:07:00.000 | 0 | 0 | 1 | 0 | python,keras | 42,460,023 | 2 | false | 0 | 0 | It takes time to run through the epochs and sometimes it looks like it freezes, but it still runs and if you wait long enough it will finish. Increasing the batch size makes it run through the epochs faster. | 2 | 0 | 1 | I did 1 nb_epoch with batch sizes of 10 and it successfully completed. The accuracy rate was absolutely horrible coming in at a whopping 27%. I want to make it run on more than one epoch to see if the accuracy will, ideally, be above 80% or so, but it keeps freezing my Jupyter Notebook if I try to make it do more than one epoch. How can I fix this?
My backend is Theano just for clarification.
There is definitely a correlation between performance and batch_size. I tried doing batch_size=1 and it took 12s of horrifying, daunting, unforgivable time out of my day to do 1 epoch. | Why does my keras model terminate and freeze my notebook if I do more than one epoch? | 0 | 0 | 0 | 519 |
42,461,086 | 2017-02-25T20:12:00.000 | 11 | 0 | 0 | 0 | python,pandas,dataframe,subset | 42,461,103 | 1 | false | 0 | 0 | I will answer my own question, hoping it will help someone. I tried this and it worked.
df[(df['gold']>0) & (df['silver']>0)]
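A self-contained version extending this to the three columns from the question (the data values are made up):
import pandas as pd

df = pd.DataFrame({'gold': [1, 0, 2], 'silver': [2, 3, 0], 'bronze': [1, 1, 0]})
print(df[(df['gold'] > 0) & (df['silver'] > 0) & (df['bronze'] > 0)])
# Only the first row survives - the only one where all three counts are positive.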
Note that I have used & instead of and, and I have used parentheses to separate the different conditions. | 1 | 6 | 1 | I am trying to subset a pandas dataframe based on the values of two columns. I tried this code:
df[df['gold']>0, df['silver']>0, df['bronze']>0] but this didn't work.
I also tried:
df[(df['gold']>0 and df['silver']>0). This didn't work either. I got an error saying:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
What would you suggest? | Subset pandas dataframe using values from two columns | 1 | 0 | 0 | 6,839 |
42,461,809 | 2017-02-25T21:28:00.000 | 0 | 0 | 1 | 0 | python,editor,utf,wing-ide | 42,595,949 | 1 | false | 0 | 0 | On Windows, the process for getting code to display correctly includes all of:
including a comment on the second line of each source file with #coding:utf-8, the first should really be the shebang #!/usr/bin/env python
ensure that you have a font installed with good Unicode support, including the character page that you are going to be using - Consolas (the default for Wing-IDE) is usually a good choice, but AFAIK it does not include the full Arabic character ranges; Lucida Console, however, should provide this.
Ensure that Wing-IDE has the Edit->Preferences->User Interface->Fonts->Editor Font/Size set to the selected font, i.e. Lucida Console. | 1 | 0 | 0 | When I open a file which has multilanguage character contents, the Arabic contents are not rendered correctly. I set the encoding to UTF-8 but it did not help. How do you solve this? | Wing IDE Personal Edition Multilanguage Support | 0 | 0 | 0 | 348 |
42,462,811 | 2017-02-25T23:19:00.000 | 0 | 0 | 1 | 0 | python,windows,spyder | 42,462,843 | 1 | false | 0 | 0 | I had the same problem - it was solved when I reinstalled it for one user only. But don't forget to uninstall it first; if you just install over the previous installation, it doesn't help. | 1 | 0 | 0 | I downloaded Python 3.6 (32-bit) and then Anaconda in order to use Spyder (3.1.3), but it won't open. I tried to run the command spyder in a terminal (cmd.exe) but got the following message in return: ValueError: stat: embedded null character in path
I don't understand what it means. What do I need to do to open Spyder? | Unable to launch spyder on windows after installation | 0 | 0 | 0 | 286 |
42,463,019 | 2017-02-25T23:46:00.000 | 1 | 0 | 1 | 0 | python,pip,installation | 42,463,096 | 3 | false | 0 | 0 | Using pip to install modules is, first of all, easier (you just need to use pip install).
pip also automatically installs all the dependencies needed for the module to run.
It's a lot more work to copy and paste, especially when downloading from PyPI, since most modules are stored in a wheel file and have many versions. pip will install the correct one for your version of Python and run setup.py automatically. | 1 | 7 | 0 | When you're installing modules to Python, you usually use pip install. Does pip install do anything other than put the modules in the right place? Why can't you just copy and paste the modules? | What's the difference between installing files with pip and copy-pasting | 0.066568 | 0 | 0 | 3,249 |
42,463,019 | 2017-02-25T23:46:00.000 | 7 | 0 | 1 | 0 | python,pip,installation | 42,463,047 | 3 | true | 0 | 0 | Using pip not only copies the modules in the right place, it also properly installs dependencies. In addition, the right place varies from one system to another, one version of python to another, and pip handles that as well.
Finally, copying and pasting files takes either manual intervention, or a lot more lines of script than a simple pip install. | 3 | 7 | 0 | When you're installing modules to python usually you use pip install. Does pip install do anything other than put the modules in the right place? Why can't you just copy and paste the modules? | What's the difference between installing files with pip and copy-pasting | 1.2 | 0 | 0 | 3,249 |
42,463,019 | 2017-02-25T23:46:00.000 | 1 | 0 | 1 | 0 | python,pip,installation | 42,463,208 | 3 | false | 0 | 0 | Python packages typically have a setup.py that could do anything from copying a module to building C extensions. It's also common to byte-compile .py files, assuming that later users wouldn't have rights to do so after install. You can build distributions with setup.py, so you could, for instance, build a binary distribution for a specific operating system and distribute that. But these days, a popular way to install things is to build a Python wheel and let pip do the work for you. | 3 | 7 | 0 | When you're installing modules to Python, you usually use pip install. Does pip install do anything other than put the modules in the right place? Why can't you just copy and paste the modules? | What's the difference between installing files with pip and copy-pasting | 0.066568 | 0 | 0 | 3,249 |
42,463,866 | 2017-02-26T01:59:00.000 | 3 | 0 | 1 | 0 | python-3.x,visual-studio-code | 42,464,564 | 2 | true | 0 | 0 | I am pretty sure your problems of VSCode not finding the correct version of Python will be resolved if you add your ( Python 3.6 installation ) location to the system path. | 1 | 13 | 0 | I have python 3.6 installed, I have a python extension installed on Visual Studio code but I still can't use pip on Visual Studio code. It says it is not a recognised command. Any help please?
Update: I tried installing pip manually but a file in python2.7 keeps stopping. What's bothersome is that I uninstalled Python 2.7 ages ago and I've currently removed every folder with it, but python -V still says I have Python 2.7.6 installed.
I'm on windows 10 | How to use pip with Visual Studio Code | 1.2 | 0 | 0 | 77,646 |
42,464,131 | 2017-02-26T02:41:00.000 | 0 | 0 | 0 | 1 | python,security,uuid | 42,464,260 | 1 | true | 0 | 0 | There are probably far simpler and more effective ways to DOS / DDOS your server. Bear that in mind when you decide home much effort to expend on this.
Here are a couple of ideas that may be (partially) effective.
Rate limit the creation of UUIDs ... globally. If you do this, and monitor how close you are to the point where your DB is full, you can keep ahead of that potential DOS vector.
Severely rate limit the UUIDs created by any given client IP address. However, you need to be careful with this. In many / most cases you won't see the real client IP address because of HTTP proxies, NATing and so on.
There are actually a number of ways to rate limit requests.
You can count the requests, and refuse them when the count in a given interval exceeds a given threshold (sketched after this list).
You can record the time since the last request, and refuse requests when the interval is too small. (This is a degenerate version of the previous one.)
You can simply service the requests slowly; e.g. put them into a queue and process them at a fixed rate.
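A minimal sketch of the counting approach (the window, the limit and the client key are assumptions; remember the caveat above about proxies and NAT when choosing the key):
import time
from collections import defaultdict

WINDOW_SECONDS, LIMIT = 60.0, 10
recent_requests = defaultdict(list)  # client key -> timestamps of recent requests

def allow(client_key):
    now = time.time()
    stamps = [t for t in recent_requests[client_key] if now - t < WINDOW_SECONDS]
    allowed = len(stamps) < LIMIT
    if allowed:
        stamps.append(now)
    recent_requests[client_key] = stamps
    return allowed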
However, you also need to beware that your defenses don't create an alternative DDOS mechanism; e.g. hammering the server with UUID requests to prevent real users from getting UUIDs. | 1 | 0 | 0 | I fully recognize that the answer to this question may be "No."
I am writing the client portion of a client-server program that will run on potentially thousands of computers and will periodically report back to the server with system settings and configurations. When the computer first initiates, currently the client code independently generates a UUID value, and reports back to the server with that ID to uniquely identify itself. The server uses this ID number for identify a machine, even when the IP address and other associated data changes.
While each session is protected via TLS, a hacker could trivially identify the protocol and spam the server with thousands of new UUID values, tricking the server into thinking there are an exponential number of new machines on the network - which would eventually fill up the DB and trigger a DoS condition.
Any ideas on how to uniquely identify a server/workstation such that even a hacker could not create "phantom" machines?
Any ideas? Again, I fully understand that the answer may very well be "No".
Using the TPM chip is not an option, primarily because not all machines, architectures or OSs will allow for this option. | Uniquely Identify Computer, prevents hackers | 1.2 | 0 | 0 | 104 |
42,472,265 | 2017-02-26T18:13:00.000 | 3 | 0 | 1 | 0 | python,visual-studio-code | 42,472,350 | 2 | false | 0 | 0 | 1) Install VS Code
2) Go to View > Command Palette
3) Type ext install and click on Install Extensions
4) Search for Python and install it
5) Reload VS
6) Start coding | 1 | 3 | 0 | I'm new to python (and in coding in general). I'd like to ask some help to set up python on VS Code. I've tried to follow several guides but none of them were really helpful.
The following have been downloaded:
Python 3.6
VS Code
Python extensions | How to set up Python in VS Code? | 0.291313 | 0 | 0 | 2,462 |
42,472,958 | 2017-02-26T19:14:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,colors,python-idle | 61,460,514 | 5 | false | 0 | 0 | The strange output of the length is with the return keyword, and NORMAL is also a color. | 1 | 8 | 0 | This is very easily a duplicate question--because it is. However, there are very many inadequate answers to this (e.g. try curses! -- pointing to a 26-page documentation).
I just want to print text in a color other than blue when I'm outputting in IDLE. Is it possible? What's an easy way to do this? I'm running Python 3.6 on Windows.
Please explain with an example.
(I have found that ANSI codes do not work inside IDLE, only on the terminal.) | How do I print colored text in IDLE's terminal? | 0 | 0 | 0 | 19,362 |
42,473,069 | 2017-02-26T19:22:00.000 | 0 | 0 | 0 | 0 | python,django,opencv,django-views,django-rest-framework | 42,482,342 | 1 | false | 1 | 0 | You need to move your settings.py file to the cv_api directory. | 1 | 1 | 0 | How to fix this error?
Traceback (most recent call last):
File "C:/Users/HP/Downloads/cv_api/cv_api/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "F:\Anaconda2Installation\lib\site-packages\django-1.10.5-py2.7.egg\django\core\management\__init__.py", line 367, in execute_from_command_line
utility.execute()
File "F:\Anaconda2Installation\lib\site-packages\django-1.10.5-py2.7.egg\django\core\management\__init__.py", line 316, in execute
settings.INSTALLED_APPS
File "F:\Anaconda2Installation\lib\site-packages\django-1.10.5-py2.7.egg\django\conf\__init__.py", line 53, in __getattr__
self._setup(name)
File "F:\Anaconda2Installation\lib\site-packages\django-1.10.5-py2.7.egg\django\conf\__init__.py", line 41, in _setup
self._wrapped = Settings(settings_module)
File "F:\Anaconda2Installation\lib\site-packages\django-1.10.5-py2.7.egg\django\conf\__init__.py", line 97, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "F:\Anaconda2Installation\lib\importlib\__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named cv_api.settings | How to fix Django Rest API error? | 0 | 0 | 0 | 93 |
42,473,383 | 2017-02-26T19:49:00.000 | 2 | 0 | 0 | 0 | python,django,pandas,amazon-web-services,amazon-elastic-beanstalk | 42,474,523 | 2 | false | 1 | 0 | I've had problems using pandas with Django on a micro AWS EC2 instance because of too little memory. Upgrading the instance solved the problem for me.
If you are using a t2.micro, for example, it might be worth upgrading to a larger instance just to see if the problem magically disappears - like it did for me.
Perhaps not a completely satisfactory answer, but it might help you narrow down the problem. | 2 | 0 | 0 | I am attempting to deploy a Django project on AWS Elastic Beanstalk. One of my views makes use of Pandas to generate some data.
I was able to get Pandas to compile properly on my EBS hosted site. I was noticing however that the browser would become "hung" when I tried to access any pages. I removed the view with the Pandas and the pandas import and the problem went away. However, when I add the Pandas import back, the problem recurs, leading me to believe it is a problem with Pandas. Also, if I remove the view that utilizes Pandas, but keep the "import pandas" statement, the problem remains. As soon as I remove "import pandas as pd" the problem goes away.
When I SSH into the instance and run manage.py shell I can import Pandas properly and have no problems whatsoever - so I know Pandas has compiled properly.
I checked the logs and nothing jumps out at me. Any help would be greatly appreciated! | Django Pandas AWS | 0.197375 | 0 | 0 | 202 |
42,473,655 | 2017-02-26T20:10:00.000 | 0 | 1 | 0 | 0 | python,xml,opencv,raspberry-pi,object-recognition | 42,484,540 | 1 | true | 0 | 0 | Yes, it is possible to have different sizes for positive images, and you are right that it is a very tedious task. The tutorials all tell you to keep the images the same size because specifying a different size for every image takes a lot of time, so they keep the code simple and clean. If you want to try different sizes you can, but first try it with a single size; only if that works should you move on to different sizes. Don't try it blindly - make sure you are on the right track. | 1 | 0 | 0 | As I understand it, in order to generate the XML we have to gather negative and positive images. In every tutorial I read or watched, all images, positive or negative, are resized to the same size, where the negative is usually double the size of the positive.
My question is as follows: can I have different sizes for positive images? I know it is going to be tedious, since you need to specify the size of each image every time. But is it possible? Or would the detection of the object fail?
Imagine I am detecting an object, let's say a bed. A bed can be single or double, king size, queen size, etc. You get my point.
So is it better to create a different XML for each of these sizes? Or can I put them in one positive directory and adjust the parameters according to the size?
The reason I am using Haar cascade features is that they are fast, and I need the detection to run in real time on a Raspberry Pi later on. If there is any other way, I am open to other suggestions too.
Thanks! | creating Haar cascade XML - different sizes | 1.2 | 0 | 0 | 378 |
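For reference, a hedged sketch of the usual training commands (file names and counts are placeholders). Note that opencv_createsamples rescales every positive crop to the -w/-h size you give it, which is one reason tutorials standardise on a single size:
opencv_createsamples -info positives.txt -vec pos.vec -num 1000 -w 24 -h 24
opencv_traincascade -data cascade_out -vec pos.vec -bg negatives.txt -numPos 900 -numNeg 500 -w 24 -h 24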
42,477,956 | 2017-02-27T04:42:00.000 | 6 | 0 | 0 | 1 | python,dll | 42,478,265 | 1 | false | 0 | 0 | If you are using a 32 bit Python and the DLL is a 64 bit DLL you will get this error, likewise if the DLL is 32 bit and your Python is 64 bit.
You can check this using the dumpbin /HEADERS <dll filepath> command from a Visual Studio command prompt. | 1 | 3 | 0 | I get the error [Error 193] %1 is not a valid Win32 application when I run this Python command: windll.LoadLibrary("C:\Windows\System32\plcommpro.dll")
From this error I found that my plcommpro.dll file is not an executable file, but I don't know how to make it one. If someone knows, please share.
Thanks and Best. | Error 193 %1 is not a valid Win32 application | 1 | 0 | 0 | 4,749 |
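Alternatively, you can check your interpreter's bitness from Python itself, to compare against the DLL's:
import platform, struct
print(platform.architecture())   # e.g. ('32bit', 'WindowsPE')
print(struct.calcsize("P") * 8)  # pointer width: 32 or 64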
42,479,954 | 2017-02-27T07:24:00.000 | 3 | 0 | 0 | 0 | python-3.x,tensorflow,keras,gmm | 42,481,143 | 1 | false | 0 | 0 | Are you sure that is what you want - to integrate a GMM into a neural network?
Tensorflow and Keras are libraries to create, train and use neural network models. The Gaussian Mixture Model is not a neural network. | 1 | 4 | 1 | I am trying to implement a Gaussian Mixture Model using Keras with the TensorFlow backend. Is there any guide or example of how to implement it? | Implement Gaussian Mixture Model using keras | 0.53705 | 0 | 0 | 2,619
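If a plain GMM is the actual goal, scikit-learn is the usual tool; a minimal sketch (the data here is a random placeholder):
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.randn(500, 2)                       # placeholder data
gmm = GaussianMixture(n_components=3, covariance_type="full")
gmm.fit(X)
print(gmm.means_)        # fitted component means
print(gmm.predict(X[:5]))  # component labels for a few samples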
42,480,000 | 2017-02-27T07:28:00.000 | 7 | 0 | 1 | 0 | python,colors,ansi-colors | 42,480,001 | 1 | false | 0 | 0 | You can do this by ending the escape code with 49m, the "default background" code. For example, red text on the terminal's default (transparent-looking) background would be \033[1;31;49m.
Happy colouring! | 1 | 3 | 0 | I was wondering if it was possible to set the background colour of text, using ANSI colour codes, to transparent, or just the colour of the terminal, so you can use colours without having to deal with the background colour not being the right colour. | Python ANSI Colour codes transparent background | 1 | 0 | 0 | 2,677 |
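For example, from Python:
print("\033[1;31;49mbright red on the default background\033[0m")
The trailing \033[0m resets all attributes afterwards.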
42,482,398 | 2017-02-27T09:45:00.000 | 0 | 0 | 1 | 0 | python-2.7,python-behave | 42,904,541 | 1 | true | 0 | 0 | As far as I know, you can't pass objects in features, but you can send text and tables in the feature file. The text you send can be a JSON string of your dict/list, from which you can build the objects in the step definition. | 1 | 0 | 0 | If a method takes a dictionary as a parameter, how do we pass it? Do we need to construct the dictionary from values present in the feature file and pass it internally to the method? Is there any direct way to pass objects? | How to pass dictionary or list or custom objects from feature file | 1.2 | 0 | 0 | 479
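A minimal sketch of the JSON doc-string approach described above (the step text and fields are illustrative; context.text is behave's standard way to access a step's triple-quoted block):
# steps/payload_steps.py
import json
from behave import given

@given("the following payload")
def step_payload(context):
    # context.text holds the doc-string written under the step in the .feature file
    context.payload = json.loads(context.text)

# and in the .feature file:
#   Given the following payload
#       """
#       {"name": "example", "count": 3}
#       """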
42,483,272 | 2017-02-27T10:25:00.000 | 0 | 0 | 0 | 0 | python-2.7,python-3.x,keyboard,mouse,pyautogui | 42,882,714 | 1 | true | 0 | 0 | PyAutoGUI will still work if there's no keyboard or mouse connected. However, PyAutoGUI does not have any way to detect whether a keyboard or mouse is connected to your machine. | 1 | 0 | 0 | I wrote a script with Python's PyAutoGUI to automate mouse and keyboard actions. The mouse and keyboard commands work as scripted when a keyboard and mouse are connected, but I noticed the script still works even when they are not connected. If that is by design, how can I set a condition so the script executes only if a keyboard and mouse are connected?
Kindly share your ideas.
Thanks in advance. | Pyautogui commands are working even when no mouse or keyboard is connected | 1.2 | 0 | 0 | 371
42,484,305 | 2017-02-27T11:10:00.000 | 1 | 0 | 0 | 0 | python,github,dataset,energy | 43,356,794 | 1 | false | 0 | 0 | The aim of non-intrusive load monitoring is to obtain a breakdown of the net energy consumption of a building in terms of individual appliance consumption. There has been work on multiple algorithms so as to get this done ( with varying performance) and as always these can be written in any programming language.
NILMTK itself is written in python and is a good toolkit to describe, analyse and integrate nilm algorithms to compare them. | 1 | 1 | 1 | Does anybody know anything about NILM or power signature analysis?
Can I do non-intrusive load monitoring using Python?
I have found one Python toolkit, NILMTK, but I need help understanding NILM itself.
If anybody knows about NILM, please guide me. Thank you. | What is Non-Intrusive Load Monitoring or energy disaggregation or power signature analysis? | 0.197375 | 0 | 0 | 613
42,485,069 | 2017-02-27T11:49:00.000 | 19 | 0 | 1 | 0 | python,windows,pip | 42,485,377 | 1 | true | 0 | 0 | pip list will list all your installed packages. | 1 | 16 | 0 | I have Python installed in Windows and used pip to install lots of things.
How can I know what packages I installed with pip? | How to know what packages are installed with pip | 1.2 | 0 | 0 | 22,379 |
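For completeness, two commonly used variants; the second pins versions so you can recreate the environment elsewhere:
pip list
pip freeze > requirements.txt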
42,489,060 | 2017-02-27T15:06:00.000 | 1 | 0 | 1 | 0 | python-3.x,sqlite,json-api | 42,489,154 | 1 | true | 0 | 0 | Python 3.6 and sqlite both work on a Mac; whether your json api calls will depends on what service you are trying to make calls to (unless you are writing a server that services such calls, in which case you are fine).
Any further recommendations are either a) off topic for SO or b) dependent on what you want to do with these technologies. | 1 | 1 | 0 | I'm starting my Python journey with a particular project in mind;
The title explains what I'm trying to do (make json api calls with python3.6 and sqlite3). I'm working on a mac.
My question is whether or not this setup is possible? Or if I should use MySQL, PostgreSQL or MongoDB?
If it is possible, am I going to have to use any 3rd party software to make it run?
Sorry if this is off topic, I'm new to SO and I've been trying to research this via google and so far no such luck.
Thank you in advance for any help you can provide. | Python3 & SQLite3 JSON api calls | 1.2 | 1 | 0 | 451 |
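No third-party software is strictly required: a minimal standard-library-only sketch (the endpoint, table, and field names below are hypothetical, purely for illustration):
import json
import sqlite3
import urllib.request

with urllib.request.urlopen("https://api.example.com/items") as resp:
    items = json.loads(resp.read().decode("utf-8"))   # parse the JSON response

conn = sqlite3.connect("items.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT OR REPLACE INTO items (id, name) VALUES (?, ?)",
                 [(item["id"], item["name"]) for item in items])
conn.commit()
conn.close()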
42,491,349 | 2017-02-27T16:54:00.000 | 1 | 0 | 0 | 0 | python,ssl,google-chrome-extension | 42,491,557 | 1 | false | 0 | 0 | You shouldn't have to do anything special. Any HTTPS requests made by the Chrome extension will go through the same certificate verification as would any other request made in the browser. | 1 | 0 | 0 | Hey, I am building a simple API server to handle some functionality for a Chrome extension. I need the users of my extension/add-on to be logged in, and for this I want the Python API server to accept HTTPS requests only. How would I go about verifying the certificate for my server from the Chrome extension? Sorry for this broad-ish question; I am very new to web-based programming. | Python Server: Chrome Extension SSL certificate | 0.197375 | 0 | 1 | 221
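If you do want the Python server itself to terminate TLS, here is a hedged stdlib-only sketch (Python 3.6+; cert.pem/key.pem are assumed to exist, e.g. from a CA):
import http.server
import ssl

httpd = http.server.HTTPServer(("0.0.0.0", 4443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("cert.pem", "key.pem")       # server certificate and private key
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()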
42,492,146 | 2017-02-27T17:31:00.000 | 0 | 0 | 1 | 0 | python,string,variables,unique | 42,492,254 | 4 | false | 0 | 0 | Split each string into the numbers you want, for instance by splitting on the _ character and removing any non-numeric characters from each substring. Then you have them in order from left to right, and can add each of them to one of three sets for the left, middle, and right numbers. A set stores each entry only once. You can then print the contents of the sets to get what you want; if needed they can be sorted first. All of these steps can be individually googled. | 1 | 1 | 0 | I am looping through variables of the form V15_1_1. The middle and last number in this string changes for each variable. I want to create a string of all the unique middle numbers.
For example, I may have V15_1_1, V15_2_3, V15_2_6, V15_12_17,V15_12_3 which would return a text string of '1,2,12' | Python String Manipulation - finding unique numbers in many strings | 0 | 0 | 0 | 50 |
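A short sketch of the split-and-set approach from the answer above, using the question's own example strings:
names = ["V15_1_1", "V15_2_3", "V15_2_6", "V15_12_17", "V15_12_3"]
middles = sorted({int(name.split("_")[1]) for name in names})  # the set keeps uniques
print(",".join(str(m) for m in middles))  # -> 1,2,12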
42,493,384 | 2017-02-27T18:40:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7,django-models | 42,494,571 | 2 | false | 1 | 0 | You can have a profile class (say UserProfile) with a foreign key to the user, created only when a user signs up through the website's registration form. That way, a superuser created on the admin site or through the command line wouldn't need an extra profile instance attached to it. | 1 | 2 | 0 | I know that superusers and regular users are both just Django's User objects, but how can I write a custom user class that requires some fields for plain users and doesn't require those fields for superusers? | In Django, is it possible for superusers to have different required fields than non-superusers? | 0.099668 | 0 | 0 | 535
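A hedged sketch of the profile approach suggested above (a OneToOneField is the usual one-profile-per-user special case of the foreign key mentioned; the extra field is hypothetical):
from django.conf import settings
from django.db import models

class UserProfile(models.Model):
    # one profile per regular user; superusers simply never get one
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    phone = models.CharField(max_length=20)   # a required field for plain users only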
42,493,984 | 2017-02-27T19:16:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,indexing,sqlite,whoosh | 51,001,220 | 1 | false | 0 | 0 | You need to add a post-save function, say index_data, to your database writers. This post-save hook should take the data being written to the database, normalize it, and index it.
The searcher could then be an independent script that is given an index and the queries to search for. | 1 | 3 | 0 | Could someone give me an example of using Whoosh with a sqlite3 database? I want to index my database. Just a simple connect-and-search through the database would be great. I searched online and was not able to find any examples for sqlite3. | Using Whoosh with a SQLITE3.db (Python) | 0 | 1 | 0 | 466
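A hedged sketch of indexing sqlite3 rows with Whoosh along the lines described above (the database, table, and column names are assumptions; the index directory must already exist):
import sqlite3
from whoosh import index
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(rowid=ID(stored=True, unique=True), body=TEXT)
ix = index.create_in("indexdir", schema)

conn = sqlite3.connect("mydata.db")
writer = ix.writer()
for rowid, body in conn.execute("SELECT id, content FROM documents"):
    writer.add_document(rowid=str(rowid), body=body)   # index each sqlite row
writer.commit()

with ix.searcher() as searcher:
    query = QueryParser("body", ix.schema).parse("some term")
    for hit in searcher.search(query):
        print(hit["rowid"])    # map hits back to sqlite rows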
42,497,824 | 2017-02-27T23:42:00.000 | 0 | 0 | 1 | 0 | python,pysvn | 42,912,443 | 1 | false | 0 | 0 | The simplest way to call pysvn.Client().checkin() is with the absolute path to the working copy top folder. You should then see from svn log that all the changed files where committed.
By using an absolute path you avoid issues with the current working directory and relative paths.
If this does not help post details of the error message you receive, the version of python, the version of pysvn and your operating system. | 1 | 0 | 0 | After I get the pysvn client, how can I set the working folder to a specific local working folder belonging to a specific repo?
I'd like to set the working folder so I can then commit changes from there.
I have tried passing in the path to the client but that doesn't work. | PYSVN: how to set local working folder so I can commit a file? | 0 | 0 | 0 | 353 |
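A minimal sketch of the call described above (the working-copy path and log message are placeholders):
import pysvn

client = pysvn.Client()
# absolute path to the working copy's top folder, as the answer suggests
client.checkin(["/abs/path/to/working_copy"], "commit message")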
42,499,530 | 2017-02-28T03:14:00.000 | 0 | 0 | 0 | 0 | python,image,opencv,image-processing | 42,501,462 | 2 | false | 0 | 0 | With only one image, accurate depth estimation is near impossible. However, there are various methods of estimating depth under certain assumptions, or given the camera calibration matrix. As mentioned by @WenlongLiu, OpenCV is a very good place to start. | 1 | 1 | 0 | I have an image captured by an Android camera. Is it possible to calculate the depth of an object in the image? The image contains only the object and the background. Any suggestions, explanations, or links that you think can help will be appreciated. | how to calculate depth of object in image captured by android camera | 0 | 0 | 0 | 2,510
42,499,927 | 2017-02-28T03:56:00.000 | 1 | 0 | 0 | 0 | python,opencv,image-processing,object-recognition | 42,501,574 | 2 | false | 0 | 0 | SIFT feature matching might produce better results than ORB. However, the main problem here is that you have only one image of each type (one from the mobile camera and one from the Internet). If you have a large number of images of this car model, then you can train a machine learning system using those images. Later you can submit one image of the car to the machine learning system, and there is a much higher chance of it recognizing the car.
From a machine learning point of view, using only one image as the master and matching another with it is analogous to teaching a child the letter "A" using only one handwritten letter "A", and expecting him/her to recognize any handwritten letter "A" written by anyone. | 1 | 1 | 1 | Suppose I have an image of a car taken from my mobile camera and I have another image of the same car taken downloaded from the internet.
(For simplicity please assume that both the images contain the same side view projection of the same car.)
How can I detect that both images represent the same object (the car, in this case) using OpenCV?
I've tried template matching, feature matching (ORB), etc., but those are not working and do not provide satisfactory results. | How to compare if two images representing the same object if the pictures of the object belongs from two different sources - in OpenCV? | 0.099668 | 0 | 0 | 1,709
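A hedged sketch of SIFT matching with Lowe's ratio test, as the answer suggests (file names are placeholders; in OpenCV builds of this era, SIFT lives in the opencv-contrib xfeatures2d module):
import cv2

img1 = cv2.imread("phone_photo.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("web_photo.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(len(good), "good matches")   # more good matches = more likely the same object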
42,500,030 | 2017-02-28T04:06:00.000 | 0 | 0 | 0 | 1 | python | 42,500,440 | 2 | false | 0 | 0 | You can create an Upstart init script in the /etc/init/ directory.
Example:
# start the job in normal runlevels, stop it outside them
start on runlevel [2345]
stop on runlevel [!2345]
kill timeout 5
# restart the process automatically if it dies
respawn
script
exec /usr/bin/python /path/to/script.py
end script
Save it with a .conf extension. | 1 | 0 | 0 | I am currently using Linux. I have a Python script that I want to run as a background service, such that the script starts running when I start my machine.
Currently I am using Python 2.7 and the command 'python myscript.py' to run the script.
Can anyone give me an idea of how to do this?
Thank you. | Run a python script as a background service in linux | 0 | 0 | 0 | 1,140 |
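On distributions that use systemd instead of Upstart, an equivalent unit file might look like this (paths are placeholders):
# /etc/systemd/system/myscript.service
[Unit]
Description=My Python background script

[Service]
ExecStart=/usr/bin/python /path/to/script.py
Restart=always

[Install]
WantedBy=multi-user.target
Then enable and start it with: sudo systemctl enable myscript.service && sudo systemctl start myscript.service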
42,502,614 | 2017-02-28T07:26:00.000 | 0 | 0 | 1 | 1 | python,terminal,version-control | 49,884,463 | 5 | false | 0 | 0 | As usual on a Mac, Python 2.7 is already installed. However, if you installed Python 3+,
then you can just type python3 in the terminal
to use the newer version that you installed.
If you want to use Python 2.7, just type: python | 2 | 2 | 0 | I am using Python on a Mac, and I know that Python 2 comes preinstalled on the system (and is in fact usable through Terminal). Is there a way to make it so Terminal can run Python 3? Can/should you set this as a default? I know changing the default Python version could break your system, so should I just install Python 3 and then use it through its launch icon instead? | Run Python 3.6 in Terminal on Mac? | 0 | 0 | 0 | 6,719
42,502,614 | 2017-02-28T07:26:00.000 | 0 | 0 | 1 | 1 | python,terminal,version-control | 42,502,774 | 5 | false | 0 | 0 | The best option is to install Python through Anaconda. This allows easy management and much more. You can have virtual environments with different Python versions as well as different modules installed. | 2 | 2 | 0 | I am using Python on a Mac, and I know that Python 2 comes preinstalled on the system (and is in fact usable through Terminal). Is there a way to make it so Terminal can run Python 3? Can/should you set this as a default? I know changing the default Python version could break your system, so should I just install Python 3 and then use it through its launch icon instead? | Run Python 3.6 in Terminal on Mac? | 0 | 0 | 0 | 6,719
42,505,621 | 2017-02-28T10:02:00.000 | 0 | 1 | 0 | 0 | android,python,database,raspberry-pi3 | 42,584,122 | 3 | false | 0 | 0 | One thing to watch out for on the Raspberry Pi is memory usage. I tend to have lots of browser tabs, terminal windows, etc. running, and the Pi gets very unhappy (i.e. slows to a crawl) when it runs low on memory. Even a long scroll-back in IDLE can do it (e.g., logging lots to the shell).
I put the resource monitors on the top right of the task bar - memory usage shows in red. As it approaches the top, it's time to close some things!
Right-click on the task bar, select "Add / Remove Panel Items", "Panel Applets", "Add", scroll down to "Resource Monitors", select it, and click "Add". It defaults to showing CPU, so click "Preferences", tick "Display RAM usage", then "OK", "OK". | 1 | 0 | 0 | I recently got a Raspberry Pi for use with a college project. I'm fairly well versed in coding Python, but unfortunately not with Raspberry Pis. What's the best compiler for Raspberry Pi app creation? What I need it to do is connect with a database and another application I have developed for Android. Is it possible to do this? Or am I better off programming it on my PC and FTPing the files across? | Programming Python on a Raspberry Pi | 0 | 0 | 0 | 454
42,506,097 | 2017-02-28T10:22:00.000 | 2 | 0 | 0 | 0 | python,django,authentication | 42,506,251 | 1 | true | 1 | 0 | Firstly, your title doesn't seem to have anything to do with your question, which is a good thing, because using email as a primary key is an incredibly bad idea. People change email addresses all the time, but a PK must stay constant.
Secondly, you should absolutely not copy the contrib.auth code. Apart from anything else, this wouldn't solve your problems with the migrations; it would just make them worse.
A much better solution would be to add a pre-save signal on User, which you can do from anywhere in your project (ideally in an AppConfig ready method). And you don't need to change the model in order to make email uneditable; you should do that in the forms that use that model. | 1 | 0 | 0 | I read a lot about the topic, but didn't find anything that sounded as satisfactory as an idea of mine, and I also don't see why it would raise problems. So if you can give it a look...
I want to change user authentication mid-project, i.e. avoid using a custom user model, since that has to be done before the first migration.
Can I just modify the email field to editable=False in django.contrib.auth.models and add a modified save(), so the email is updated from the username? Or the other way round? Pro
And another distinct general question: would I do such things in the venv, or can I copy the whole auth folder as a local app?
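A hedged sketch of the pre-save signal wiring described in the answer above (the app name and the sync direction are assumptions for illustration):
# apps.py
from django.apps import AppConfig

class AccountsConfig(AppConfig):
    name = "accounts"

    def ready(self):
        from django.contrib.auth.models import User
        from django.db.models.signals import pre_save

        def sync_email(sender, instance, **kwargs):
            instance.email = instance.username   # or the other way round
        pre_save.connect(sync_email, sender=User)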
42,506,643 | 2017-02-28T10:47:00.000 | 0 | 0 | 0 | 0 | python-2.7,ibm-cloud-infrastructure | 42,534,904 | 1 | true | 0 | 0 | that is not possbile using the Softlayer's API even using the Softlayer's control portal that information is not avaiable.
regards | 1 | 0 | 0 | I would like to know how traffic flows in SoftLayer between the servers, In course of traffic flow how to detect unusual traffic and how to detect ports that are prone to unusual/malicious traffic. Can we retrieve this information using any SoftLayer python API's ? | how to get unusual traffic or traffic information in SoftLayer using python API's | 1.2 | 0 | 1 | 61 |
42,507,360 | 2017-02-28T11:19:00.000 | 0 | 0 | 0 | 0 | python-3.x,streaming,matroska | 64,589,001 | 2 | false | 0 | 0 | u can actually open the file and keep appending data in bytes while the video is playing in the widget | 1 | 0 | 0 | I need to make a video server-client, in which the server sends the video data in chunks, and the client has to receive them and show them. Unlike any of the projects I've ever made, I don't have a main structure for it in my mind, because I can't find an api or module for displaying the video which could be separated in chunks. All the files that are to be displayed are matroska .mkv. I've been searching but all I could find was kivy, which does offer video displaying but it requires the whole video, and gstreamer which has the same drawback. Can anyone point me a module I can rely on for video displaying?
Thanks in advance | Streaming video player | 0 | 0 | 0 | 2,348 |
42,510,042 | 2017-02-28T13:27:00.000 | 2 | 0 | 0 | 0 | python,image,binary,classification,svm | 42,510,284 | 1 | true | 0 | 0 | You should probably post this on Cross Validated.
But as a direct answer: you should probably look into sequence-to-sequence learners, as it is already clear to you that SVM is not the ideal solution for this.
You should look into Markov models for sequential learning if you don't want to go the deep-learning route; however, neural networks have a very good track record with image classification problems.
Ideally, for sequential learning you should look into Long Short-Term Memory recurrent neural networks, and for your current dataset see if pre-training on an existing data corpus (say CIFAR-10) may help.
So my recommendation is to give TensorFlow a try with a high-level library such as Keras/SKFlow.
Neural networks are just another tool in your machine-learning repertoire, and you might as well give them a real chance.
An edit to address your comment:
Your issue there is not a lack of data for the SVM; an SVM will work well for a small dataset, as it will be easier for it to overfit/fit a separating hyperplane on that dataset.
As you increase your data dimensionality, keep in mind that separating it with a hyperplane becomes increasingly difficult [look at the curse of dimensionality].
However, if you are set on doing it this way, try some dimensionality reduction such as PCA.
Although here you're bound to find another face-off with neural networks, since Kohonen self-organizing maps do this task beautifully; you could attempt to project your data into a lower dimension, thereby allowing the SVM to separate it with greater accuracy.
I still stand by saying you may be using the incorrect approach. | 1 | 0 | 1 | Let's say I have two arrays in my dataset:
1) The first one is an array classified as (0,1) - [0,1,0,1,1,1,0.....]
2) The second array consists of grey-scale image vectors with 2500 elements each (numbers from 0 to 300). These numbers are pixels from 50*50 px images. - [[13 160 239 192 219 199 4 60..][....][....][....][....]]
The size of this dataset is quite significant (~12000 elements).
I am trying to build a very basic binary classifier that gives appropriate results. Let's say I want to choose a supervised, non-deep-learning method.
Is it suitable in this case? I've already tried sklearn's SVM with various parameters, but the outcome is inappropriately inaccurate and consists mainly of 1s: [1,1,1,1,1,0,1,1,1,....]
What is the right approach? Isn't the size of the dataset enough to get a nice result with a supervised algorithm? | What algorithm to choose for binary image classification | 1.2 | 0 | 0 | 695
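A hedged sketch of the PCA-plus-SVM route from the answer above (random placeholders stand in for the question's 12000x2500 pixel vectors; class_weight="balanced" is one common guard against the all-1s output described):
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randint(0, 300, size=(12000, 2500)).astype(float)  # placeholder pixels
y = np.random.randint(0, 2, size=12000)                          # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=50),                 # reduce the 2500 dimensions
                    SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))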
42,512,817 | 2017-02-28T15:32:00.000 | 0 | 0 | 1 | 0 | python,windows,fatal-error | 50,874,304 | 9 | false | 0 | 0 | I am not sure why this question is still here without a solution. I just encountered this and solved it like this:
Close all CMD windows or console emulators.
Go to the system environment settings and clear all old Python path or environment settings. Make sure you check PATH in both the User and System settings as well.
Try python -V again and see if it runs.
If you have removed all Python environment settings, I recommend reinstalling Python and turning on the "Add Python to PATH" setting during installation. | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0 | 0 | 0 | 108,659 |
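A few quick checks from a fresh Command Prompt that illustrate the PATH clean-up described in the answers to this question:
rem which python.exe Windows resolves first:
where python
rem these should be unset, or point at the active install:
echo %PYTHONHOME%
echo %PYTHONPATH%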
42,512,817 | 2017-02-28T15:32:00.000 | 34 | 0 | 1 | 0 | python,windows,fatal-error | 51,911,598 | 9 | true | 0 | 0 | I ran into this same issue on Windows 10. Here's how I fixed it:
Open your 'Environment Variables' (Under 'System Properties').
In the window that opens, select the 'Path' row, then click the 'Edit...' button.
There should be two path entries: C:\Python37-32\Scripts\ and C:\Python37-32\. Then click 'OK'. (Make sure to check that these path values correspond to the location and version of your Python install.)
Next, in the top portion of the 'Environment Variables' window, look for the PYTHONHOME variable and make sure that it is also set to C:\Python37-32 | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 1.2 | 0 | 0 | 108,659 |
42,512,817 | 2017-02-28T15:32:00.000 | 0 | 0 | 1 | 0 | python,windows,fatal-error | 50,229,595 | 9 | false | 0 | 0 | I had the same issue when I installed the Python 3.7 beta version, and I resolved it by following these steps:
If you have any previous version of Python installed and the environment variable and path are already set for that version, just remove that path and environment variable
Run the downloaded Python 3.7 EXE file as administrator
At the end of the installation, if it asks for permission regarding the path length limit, just click on that. Now type "python" on the command line; it should work.
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0 | 0 | 0 | 108,659 |
42,512,817 | 2017-02-28T15:32:00.000 | 1 | 0 | 1 | 0 | python,windows,fatal-error | 45,536,117 | 9 | false | 0 | 0 | First, don't forget to select "Add Python 3.x to PATH" before you click on Install now and reboot after installation so that the new path is taken into account by Windows.
Second, I had the the same problem with Python 3 on Windows 7 and 64-bit and I got rid of it by deleting PYTHONPATH and PYTHONHOME from Windows 7 system environment variables, because I had a previous installation of Python 2 and those paths were pointing to my the Python 2 directory. I had to simply to delete the PYTHONPATH and PYTHONHOME variables. | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0.022219 | 0 | 0 | 108,659 |
42,512,817 | 2017-02-28T15:32:00.000 | 0 | 0 | 1 | 0 | python,windows,fatal-error | 67,559,697 | 9 | false | 0 | 0 | If this issue is happening to you in a virtual environment, just delete it and create another. It worked for me. | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0 | 0 | 0 | 108,659 |
42,512,817 | 2017-02-28T15:32:00.000 | 1 | 0 | 1 | 0 | python,windows,fatal-error | 63,207,392 | 9 | false | 0 | 0 | I solved this issue by deleting my virtual environment and creating a new one. I believe in my case the error came because the old virtual environment was running on Python 3.6, which I had recently uninstalled and replaced with Python 3.8.
This is probably bad practice in general, but I don't have any real projects where the version matters. | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0.022219 | 0 | 0 | 108,659 |
42,512,817 | 2017-02-28T15:32:00.000 | 0 | 0 | 1 | 0 | python,windows,fatal-error | 61,180,759 | 9 | false | 0 | 0 | Before installing the Python interpreter, check environment and remove the existing PYTHONHOME, and python path under "PATH" of environment. Or change it to the new path to be installed. | 7 | 28 | 0 | I'm installing Python on my Windows 10 laptop, and when I try to run it I get this:
Fatal Python error: Py_Initialize: unable to load the file system
codec ModuleNotFoundError: No module named 'encodings' Current thread
0x0000037c (most recent call first): | Fatal Python error on Windows 10 ModuleNotFoundError: No module named 'encodings' | 0 | 0 | 0 | 108,659 |
42,514,902 | 2017-02-28T17:14:00.000 | 4 | 0 | 0 | 0 | python,sql,django | 42,515,036 | 1 | true | 1 | 0 | A SQLite database is just a file. To drop the database, simply remove the file.
When using SQLite, python manage.py migrate will automatically create the database if it doesn't exist. | 1 | 1 | 0 | How do I remove and add a completely new db.sqlite3 database to a Django project written in PyCharm?
I did something wrong and I need a completely new database. The flush command just removes data from the database, but it doesn't remove the table schema. So the question is how to get my database back to the starting point (no data, no SQL tables). | How to remove and add completely new db.sqlite3 to django project written in pycharm? | 1.2 | 1 | 0 | 1,356
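Concretely, from the project directory (delete the file however you prefer, e.g. from PyCharm's project pane):
rm db.sqlite3
python manage.py migrate   # recreates the database from your migrations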
42,515,611 | 2017-02-27T17:54:00.000 | 1 | 0 | 0 | 0 | python,selenium,button,youtube | 42,515,710 | 1 | false | 0 | 0 | You can select it with a CSS selector like this:
To click Like:
#watch8-sentiment-actions > span > span:nth-child(1) > button
To cancel a Like:
#watch8-sentiment-actions > span > span:nth-child(2) > button | 1 | 0 | 0 | Does anyone know how to find/click the YouTube Like button in Python using Selenium, given that it doesn't have a real id, etc.?
Thanks for the answers. | Python - Selenium: Find / Click YT-Like Button | 1.2 | 0 | 1 | 378
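A hedged sketch putting the selector above to use (the video URL is a placeholder left elided, and YouTube's markup changes often, so verify the selector first):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.youtube.com/watch?v=...")   # placeholder video URL
button = driver.find_element_by_css_selector(
    "#watch8-sentiment-actions > span > span:nth-child(1) > button")
button.click()   # clicks Like, per the answer's first selector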
42,516,811 | 2017-02-28T19:04:00.000 | 0 | 0 | 1 | 0 | windows,python-3.x,dll,cx-freeze | 52,225,827 | 2 | false | 0 | 1 | I noticed the same issue on some of my builds; one build went well and the other didn't.
So after a bit of searching, I found out that adding import requests suddenly added pywintypes36.dll and VCRUNTIME140.dll to the previously wrong build.
No idea why, and I won't say adding this import is a definitive solution, but some packages such as requests seem to ease cx_Freeze's dependency detection. | 1 | 6 | 0 | I am working on building my application on Windows with Python 3.5.2; I built Python with VC++ Redistributable 2015.24021 installed.
I don't want customers to have to install the redistributable themselves, so I figured that cx_Freeze's include_msvcr option might be the way to go. However, even if I use the include_msvcr option, the .exe is still not executable on Windows machines without the redistributable.
I can see there is a VCRUNTIME140.dll that was copied from my built Python 3.5.2, and executing the .exe on machines without the redistributable complains about a missing api-ms-win-crt-stdio-l1-1-0.dll.
I can find this .dll file on my build machine, so here are some quick questions.
Is it expected that include_msvcr won't bundle dependent .dll files like the above-mentioned one?
Is there any workaround, like adding the DLL to include_files? What should I put as the destination for the DLL?
Thanks a lot. | cx_freeze include_msvcr does not bundle windows VC2015 runtime | 0 | 0 | 0 | 1,364 |
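A hedged setup.py sketch combining include_msvcr with an explicit include_files entry, as the question suggests (the DLL source path is an assumption; include_files accepts (source, destination) pairs, with the destination relative to the build folder, i.e. next to the .exe):
# setup.py
from cx_Freeze import setup, Executable

build_exe_options = {
    "include_msvcr": True,
    "include_files": [(r"C:\dlls\api-ms-win-crt-stdio-l1-1-0.dll",   # assumed path
                       "api-ms-win-crt-stdio-l1-1-0.dll")],
}

setup(name="app",
      version="1.0",
      options={"build_exe": build_exe_options},
      executables=[Executable("main.py")])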
42,518,430 | 2017-02-28T20:46:00.000 | 0 | 0 | 1 | 0 | python,multiprocessing | 42,519,368 | 2 | true | 0 | 0 | For anyone else interested, I performed this simple test:
Download 18 files from an FTP site, each about 114 MB, using Python's multiprocessing module and ftp.retrbinary (times shown for two separate download attempts)
Download time with 1 Processor: 14 minutes, 7.2 minutes
Download time with 2 Processors: 4.0 minutes, 3.8 minutes
Download time with 3 Processors: 2.5 minutes, 4.0 minutes
Download time with 4 Processors: 6.0 minutes, 2.3 minutes
Download speed is impacted by several other factors, but in this small sample it appears adding a few processors reduces the time it takes to download multiple files. | 1 | 1 | 0 | I have a long list of files I want to download from an FTP site. I use Python to execute the download, and I use the multiprocessing module to download four or so files at the same time. My hope in using multiple processors is that the files will download faster than with just one thread. Is there a benefit to using multiprocessing to execute multiple download commands? Or will one thread fill up the download bandwidth? | Is there a benefit (increased speed) to downloading files on multiple processors? | 1.2 | 0 | 0 | 73
42,518,430 | 2017-02-28T20:46:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing | 42,518,619 | 2 | false | 0 | 0 | One thread is probably capable of saturating your bandwidth. You may want to try it anyway: it could be that the FTP server throttles its output per connection, and with multiple connections you get to use more of its resources. | 2 | 1 | 0 | I have a long list of files I want to download from an FTP site. I use Python to execute the download, and I use the multiprocessing module to download four or so files at the same time. My hope in using multiple processors is that the files will download faster than with just one thread. Is there a benefit to using multiprocessing to execute multiple download commands? Or will one thread fill up the download bandwidth? | Is there a benefit (increased speed) to downloading files on multiple processors? | 0.099668 | 0 | 0 | 73
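A hedged sketch of the pooled-download pattern being tested above (the host, credentials, and file names are placeholders):
import ftplib
import multiprocessing

def fetch(filename):
    ftp = ftplib.FTP("ftp.example.com")            # one connection per worker
    ftp.login("user", "password")
    with open(filename, "wb") as f:
        ftp.retrbinary("RETR " + filename, f.write)
    ftp.quit()

if __name__ == "__main__":
    files = ["a.bin", "b.bin", "c.bin", "d.bin"]
    pool = multiprocessing.Pool(processes=4)       # ~4 concurrent downloads
    pool.map(fetch, files)
    pool.close()
    pool.join()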
42,519,094 | 2017-02-28T21:30:00.000 | 0 | 0 | 1 | 0 | python,pip,virtualenv | 42,519,225 | 3 | false | 1 | 0 | The key is that pip installs things for a specific version of Python, and to a very specific location. Basically, the pip command in your virtual environment is set up specifically for the interpreter that your virtual environment is using. So even if you explicitly call another interpreter with that environment activated, it will not pick up the packages pip installed for the default interpreter. | 1 | 0 | 0 | I am trying to start a Python 3.6 project by creating a virtualenv to keep the dependencies. I currently have both Python 2.7 and 3.6 installed on my machine, as I have been coding in 2.7 up until now and I wish to try out 3.6. I am running into a problem with the different versions of Python not detecting modules I am installing inside the virtualenv.
For example, I create a virtualenv with the command: virtualenv venv
I then activate the virtualenv and install Django with the command: pip install django
My problems arise when I activate either Python 2.7 or 3.6 with the commands
py -2 or py -3; neither of the interactive shells detects Django as being installed.
Django is only detected when I run the python command, which defaults to 2.7 when I want to use 3.6. Does anyone know a possible fix for this so I can get my virtualenv working correctly? Thanks! If it matters at all I am on a machine running Windows 7. | Virtualenv installing modules with multiple Python versions | 0 | 0 | 0 | 771 |
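One way to avoid the mismatch on Windows is to point virtualenv at the interpreter explicitly (the install path below is an assumption):
rem point -p at the interpreter the environment should use:
virtualenv -p C:\Python36\python.exe venv
venv\Scripts\activate
rem inside the environment, plain "python" now reports 3.6:
python --version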
42,520,522 | 2017-02-28T23:15:00.000 | 0 | 0 | 0 | 1 | python,linux,numpy,linux-kernel | 42,558,956 | 2 | false | 0 | 0 | So here's one way I've managed to do it.
By using the numpy memmap object you can instantiate an array that directly corresponds to a region of the disk. Calling the flush() method or Python's del causes the array to sync to disk, completely bypassing the OS's buffer. I've successfully written ~280 GB to disk at max throughput using this method.
Will continue researching. | 2 | 2 | 0 | We are building a Python framework that captures data from a framegrabber card through a cffi interface. After some manipulation, we try to write RAW images (numpy arrays, using the tofile method) to disk at a rate of around 120 MB/s. We are well aware that our disks are capable of handling this throughput.
The problem we were experiencing was dropped frames, often entire seconds of data completely missing from the framegrabber output. What we found was that these frame drops were occurring when our Debian system hit the dirty_background_ratio set in sysctl. The system was calling the background flush gang, which would choke up the framegrabber and cause it to skip frames.
Not surprisingly, setting the dirty_background_ratio to 0% got rid of the problem entirely. (It is worth noting that even small numbers like 1% and 2% still resulted in ~40% frame loss.)
So, my question is: is there any way to get this Python process to write in such a way that it is immediately scheduled for writeout, bypassing the dirty buffer entirely?
Thanks | Make python process writes be scheduled for writeback immediately without being marked dirty | 0 | 0 | 0 | 46 |
42,520,522 | 2017-02-28T23:15:00.000 | 0 | 0 | 0 | 1 | python,linux,numpy,linux-kernel | 52,653,381 | 2 | true | 0 | 0 | Another option is to get the OS file descriptor and call os.fsync on it. This will schedule it for writeback immediately. | 2 | 2 | 0 | We are building a Python framework that captures data from a framegrabber card through a cffi interface. After some manipulation, we try to write RAW images (numpy arrays, using the tofile method) to disk at a rate of around 120 MB/s. We are well aware that our disks are capable of handling this throughput.
The problem we were experiencing was dropped frames, often entire seconds of data completely missing from the framegrabber output. What we found was that these frame drops were occurring when our Debian system hit the dirty_background_ratio set in sysctl. The system was calling the background flush gang, which would choke up the framegrabber and cause it to skip frames.
Not surprisingly, setting the dirty_background_ratio to 0% got rid of the problem entirely. (It is worth noting that even small numbers like 1% and 2% still resulted in ~40% frame loss.)
So, my question is: is there any way to get this Python process to write in such a way that it is immediately scheduled for writeout, bypassing the dirty buffer entirely?
Thanks | Make python process writes be scheduled for writeback immediately without being marked dirty | 1.2 | 0 | 0 | 46 |
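A small sketch combining both answers' suggestions (file names and shapes are placeholders):
import os
import numpy as np

# memmap route from the first answer
arr = np.memmap("frames.raw", dtype=np.uint8, mode="w+", shape=(100, 2500))
arr[:] = 0            # ...fill with frame data...
arr.flush()           # push the mapped pages out to disk

# fsync route from the second answer
fd = os.open("frames2.raw", os.O_WRONLY | os.O_CREAT)
os.write(fd, b"\x00" * 4096)
os.fsync(fd)          # schedule/force writeback for this descriptor now
os.close(fd)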
42,522,650 | 2017-03-01T03:30:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 48,599,471 | 9 | false | 0 | 0 | Try using time.sleep(secs); that should work fine. | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | 0.022219 | 0 | 0 | 86,500 |
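For clarity, time needs no installation at all:
import time           # part of the standard library; nothing to pip install

time.sleep(2)         # pause for two seconds
print(time.time())    # seconds since the epoch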
42,522,650 | 2017-03-01T03:30:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 61,544,586 | 9 | false | 0 | 0 | time is pre-installed: when I import it with "import time",
everything goes well. | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | 0 | 0 | 0 | 86,500 |
42,522,650 | 2017-03-01T03:30:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 61,854,329 | 9 | false | 0 | 0 | I think I had the same problem; it was because one of my variables was called time, which shadowed the module... | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | 0 | 0 | 0 | 86,500 |
42,522,650 | 2017-03-01T03:30:00.000 | -1 | 0 | 1 | 0 | python,python-2.7 | 63,048,323 | 9 | false | 0 | 0 | You should first import the library. So add a statement like:
import time | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | -0.022219 | 0 | 0 | 86,500 |
42,522,650 | 2017-03-01T03:30:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 65,613,286 | 9 | false | 0 | 0 | You cannot create an object with the name "time" because it will conflict with the "time" module. | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | 0.022219 | 0 | 0 | 86,500 |
42,522,650 | 2017-03-01T03:30:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 51,969,620 | 9 | false | 0 | 0 | I also got this error while listing time in requirements.txt and pushing the app to Cloud Foundry.
So the error is expected, and it can occur in other scenarios too.
I just removed time from requirements.txt before pushing my app, to make it work. | 6 | 21 | 0 | I've tried pip install time and sudo -H pip install time, but I keep getting the error:
Could not find a version that satisfies the requirement time (from
versions: ) No matching distribution found for time
I'm working in PyCharm, but what really doesn't make sense is that I can import time in the Python Console but not in my actual code. | Can't install time module | 0 | 0 | 0 | 86,500 |
42,522,654 | 2017-03-01T03:30:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,apache-spark,pyspark | 42,522,820 | 1 | true | 0 | 0 | You can create traditional Python data objects such as arrays, lists, tuples, or dictionaries in PySpark.
You can perform most operations on them using ordinary Python functions in PySpark.
You can import Python libraries in PySpark and use them to process data.
You can create an RDD and apply Spark operations to it. | 1 | 1 | 1 | I am currently self-learning Spark programming and trying to recode an existing Python application in PySpark. However, I am still confused about how we use regular Python objects in PySpark.
I understand the distributed data structures in Spark, such as the RDD, DataFrame, Dataset, vector, etc. Spark has its own transformation and action operations, such as .map() and .reduceByKey(), to manipulate those objects. However, what if I create traditional Python data objects such as an array, list, tuple, or dictionary in PySpark? They will only be stored in the memory of my driver program node, right? If I transform them into an RDD, can I still do operations on them with typical Python functions?
If I have a huge dataset, can I use regular Python libraries like pandas or numpy to process it in PySpark? Will Spark only use the driver node to process the data if I directly execute a Python function on a Python object in PySpark? Or do I have to create an RDD and use Spark's operations? | How are Python data structures implemented in Spark when using PySpark? | 1.2 | 0 | 0 | 854
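A minimal sketch of the answer's points - a plain Python list becomes distributed once parallelized, and ordinary Python functions run inside Spark operations:
from pyspark import SparkContext

sc = SparkContext("local[*]", "example")
data = [1, 2, 3, 4, 5]               # a plain Python list lives on the driver
rdd = sc.parallelize(data)           # distribute it as an RDD
squares = rdd.map(lambda x: x * x)   # an ordinary Python function inside a Spark op
print(squares.collect())             # [1, 4, 9, 16, 25]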
42,524,114 | 2017-03-01T05:46:00.000 | 7 | 0 | 0 | 0 | python,webdriver,geckodriver | 42,542,815 | 5 | true | 0 | 0 | For one, make sure you are downloading the build for your OS. Windows is at the bottom of the list; it will say win32. Download that file (32 or 64 bit, it doesn't matter).
After that you are going to want to extract the file. If you get an error saying there is no file in the WinRAR archive, it may be because your WinRAR settings are set not to extract any files with the .exe extension. Go to WinRAR options, then Settings, then Security; there you can delete the entry that says *.exe, and after you delete it you can extract the file. After that is done, look up how to update the PATH so that geckodriver can be found. Then you will most likely need to restart. | 2 | 8 | 0 | I am trying to install webdriver, and in order to open Firefox I need geckodriver to be installed and on the correct path.
Firstly, the download link for geckodriver only gives you a file that is not an executable. So is there a way to make it an executable?
Secondly, I have tried to change my path variables in Command Prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path in system; there is a Path in user variables, so I edited that to point to where the file is located.
I have extracted the geckodriver rar file and received a file with no extension. I don't know how you can have a file with no extension, but they did it. The icon is like a blank sheet of paper with a fold at the top left.
If anyone has a solution for this, including maybe another package like webdriver that will let me open a browser and then refresh the page after a given amount of time - that is all I want to do. | how to install geckodriver on a windows system | 1.2 | 0 | 1 | 66,774
42,524,114 | 2017-03-01T05:46:00.000 | 0 | 0 | 0 | 0 | python,webdriver,geckodriver | 46,927,125 | 5 | false | 0 | 0 | I've wrestled with the same question for the last hour.
Make sure you have the latest version of Firefox installed. I had Firefox 36, which, when checking for updates, said it was the latest version. Mozilla's website had version 54 as the latest. So download Firefox from the website and reinstall.
Make sure you have the latest geckodriver downloaded.
If you're getting the path error, use the code below to figure out which path Python is looking at, and add geckodriver.exe to that working directory.
import os
os.getcwd() | 2 | 8 | 0 | I am trying to install webdriver, and in order to open Firefox I need geckodriver to be installed and on the correct path.
Firstly, the download link for geckodriver only gives you a file that is not an executable. So is there a way to make it an executable?
Secondly, I have tried to change my path variables in Command Prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path in system; there is a Path in user variables, so I edited that to point to where the file is located.
I have extracted the geckodriver rar file and received a file with no extension. I don't know how you can have a file with no extension, but they did it. The icon is like a blank sheet of paper with a fold at the top left.
If anyone has a solution for this, including maybe another package like webdriver that will let me open a browser and then refresh the page after a given amount of time - that is all I want to do. | how to install geckodriver on a windows system | 0 | 0 | 1 | 66,774
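Once geckodriver is on the PATH, the open-and-refresh workflow the question asks for is short (the URL and delay are placeholders):
import time
from selenium import webdriver

driver = webdriver.Firefox()          # assumes geckodriver's folder is on PATH
driver.get("https://www.python.org")
time.sleep(10)                        # the "given amount of time"
driver.refresh()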
42,524,336 | 2017-03-01T06:01:00.000 | 0 | 0 | 0 | 1 | python,architecture | 42,525,826 | 1 | false | 0 | 0 | Let me put it like this: you will have these four operations. In the simplest design you could keep a table of users and a table of hostnames, the latter with the following columns: a foreign key to users, hostname, last_update, and a boolean is_running.
You will need the following actions.
UPDATE:
You will run this periodically on the whole table. You could optimize this by using a select with a filter on the last update column.
INSERT and DELETE:
This is for when the user adds or removes hostnames. When inserting, also ping the hostname and set the last_update column to the current time.
Whenever the above three operations run, they'd take a lock on the respective rows. After each of the latter two operations you could notify the user.
Finally the READ:
This is whenever the user wants to see the status of his hostnames. If he has added or removed a hostname recently he will be notified only after the commit.
Otherwise do a select * from hostnames where user.id = x and send him the result. Every time he hits refresh you could run this query.
You could also put indices on both the tables as the read operation is the one that has to be fastest. You could afford slightly slower times on the other 2 operations.
Do let me know if this works or if you've done it differently. Thank you. | 1 | 1 | 0 | The ping service that I have in mind lets users easily keep track of their cloud application (AWS, GCP, DigitalOcean, etc.) uptime.
The part of the application's design that I am having trouble with is how to effectively read a growing/shrinking list of hostnames from a database and ping each of them every "x" interval. The service itself will be written in Python, with Postgres storing the user-entered hostnames. Keep in mind that the list of hostnames to ping is variable, since a user can add and remove hostnames at will.
How would you set up a system that checks for the most up-to-date list of hostnames, executes pings across that list, and stores the results, at a specific interval?
I am pretty new to programming. Any help or pointers in the right direction will be greatly appreciated. | Designing a pinging service | 0 | 1 | 0 | 45
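A hedged sketch tying the answer's design together (the connection string, table layout, and ping flags assume Linux and psycopg2, and are illustrative only):
import subprocess
import time
import psycopg2

def is_up(host):
    # one ICMP echo; returncode 0 means the host answered
    return subprocess.call(["ping", "-c", "1", host]) == 0

while True:
    conn = psycopg2.connect("dbname=pings")      # hypothetical connection string
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, hostname FROM hostnames")
        for row_id, host in cur.fetchall():      # re-read the list every cycle
            cur.execute("UPDATE hostnames SET is_running = %s, "
                        "last_update = now() WHERE id = %s",
                        (is_up(host), row_id))
    conn.close()
    time.sleep(60)                               # the "x" interval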