Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,856,291 | 2016-10-04T15:29:00.000 | 1 | 0 | 0 | 0 | python-2.7,tensorflow,deep-learning | 39,858,096 | 1 | false | 0 | 0 | The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing w.r.t. the parameters of the network, you optimize w.r.t the input (which has to be a variable with the shape of your input image). Your optimization target is the negative (because you want to maximize, but TF optimizers minimize) logit of your target class. You want to run it with a couple of different initial values for the image.
There's also a few related techniques, if you search for DeepDream and adversarial examples you should find a lot of literature. | 1 | 1 | 1 | In TensorFlow it is pretty straight forward to visualize filters and activation layers given a single input.
But I'm more interested in the opposite way: feeding a class (as one-hot vector) to the output layer and see something like the optimal input image for that specific class.
Is there a way to do so or to run the graph reversed?
Background: I'm using Google's Inception V3 with 15 classes and I've trained the network already with a large amount of data up to a good precision. Now I'm interested in understanding why and how the model distinguishes the different classes. | How to visualize DNNs dependent on the output class in TensorFlow? | 0.197375 | 0 | 0 | 95 |
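To make the mechanics of the answer concrete without a full TensorFlow graph, here is a toy sketch of the same idea: the "network" weights stay fixed and gradient ascent runs on the input to maximize the class logit. The weights, learning rate, and iteration count are all made up for illustration; a real run would make the image a `tf.Variable` and minimize the negative logit as described above.

```python
# Toy activation maximization: keep the "network" weights fixed and run
# gradient ascent on the input x to maximize the class logit sum(W * x).
W = [0.5, -1.0, 2.0]   # hypothetical fixed class weights
x = [0.0, 0.0, 0.0]    # the "input image", initialized to zeros
lr = 0.1

for _ in range(50):
    # For a linear logit, d(logit)/dx_i = W_i, so ascent pushes x along W.
    x = [xi + lr * wi for xi, wi in zip(x, W)]

logit = sum(wi * xi for wi, xi in zip(W, x))
```

After the loop, the input has moved in whichever directions raise the target logit — the one-line analogue of the "optimal input image" for a class.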
39,857,289 | 2016-10-04T16:19:00.000 | 7 | 0 | 1 | 0 | python,anaconda,conda | 57,060,370 | 4 | false | 1 | 0 | The conda-forge channel is where you can find packages that have been built for conda but yet to be part of the official Anaconda distribution.
Generally, you can use any of them. | 1 | 195 | 0 | Conda and conda-forge are both Python package managers. What is the appropriate choice when a package exists in both repositories? Django, for example, can be installed with either, but the difference between the two is several dependencies (conda-forge has many more). There is no explanation for these differences, not even a simple README.
Which one should be used? Conda or conda-forge? Does it matter? | Should conda, or conda-forge be used for Python environments? | 1 | 0 | 0 | 106,246 |
39,859,834 | 2016-10-04T19:00:00.000 | 6 | 0 | 1 | 0 | python,data-structures,stack,python-module | 39,859,868 | 2 | false | 0 | 0 | pip install pythonds.
And then from pythonds.basic.stack import Stack. Note that it's Stack, not stack. | 1 | 2 | 0 | When I run a program containing:
from pythonds.basic.stack import Stack
it says:-
ImportError: No module named pythonds.basic.stack
Please help me out. | How can I install pythonds module? | 1 | 0 | 0 | 7,023 |
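If installing pythonds is not an option, the Stack it provides is small enough to stand in for. This sketch assumes the usual pythonds-style interface (push/pop/peek/isEmpty/size — treat the method names as an assumption, not the library's documented API):

```python
class Stack:
    """Minimal stand-in mirroring the (assumed) pythonds Stack interface."""
    def __init__(self):
        self._items = []
    def isEmpty(self):
        return not self._items
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()    # removes and returns the top item
    def peek(self):
        return self._items[-1]      # returns the top item without removing it
    def size(self):
        return len(self._items)
```

Swapping this in keeps book exercises runnable while you sort out the pip install.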
39,861,106 | 2016-10-04T20:22:00.000 | 3 | 0 | 1 | 0 | python,matplotlib,ipython,jupyter | 50,086,042 | 2 | false | 0 | 0 | %matplotlib auto should switch to the default backend. | 2 | 7 | 0 | Well, I know I can use %matplotlib inline to plot inline.
However, how to disable it?
Sometime I just want to zoom in the figure that I plotted. Which I can't do on a inline-figure. | How to DISABLE Jupyter notebook matplotlib plot inline? | 0.291313 | 0 | 0 | 9,341 |
39,861,106 | 2016-10-04T20:22:00.000 | 1 | 0 | 1 | 0 | python,matplotlib,ipython,jupyter | 39,861,256 | 2 | true | 0 | 0 | Use %matplotlib notebook to change to a zoom-able display. | 2 | 7 | 0 | Well, I know I can use %matplotlib inline to plot inline.
However, how to disable it?
Sometime I just want to zoom in the figure that I plotted. Which I can't do on a inline-figure. | How to DISABLE Jupyter notebook matplotlib plot inline? | 1.2 | 0 | 0 | 9,341 |
39,861,960 | 2016-10-04T21:20:00.000 | 1 | 1 | 0 | 1 | python,c,system,interpreter | 39,862,114 | 1 | true | 0 | 0 | Make sure you have executable permission for python_script.
You can make python_script executable by
chmod +x python_script
Also check if you are giving correct path for python_script | 1 | 2 | 0 | I am trying to invoke python script from C application using system() call
The python script has #!/usr/bin/python3 on the first line.
If I do system(python_script), the script does not seem to run.
It seems I need to do system(/usr/bin/python3 python_script).
I thought I do not need to specify the interpreter externally if I have #!/usr/bin/python3 in the first line of the script.
Am I doing something wrong? | Do we need to specify python interpreter externally if python script contains #!/usr/bin/python3? | 1.2 | 0 | 0 | 46 |
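The interaction between the shebang and the execute bit can be checked from Python before wiring it into C. This sketch writes a throwaway script (using /bin/sh in the shebang purely for portability of the demo), sets the execute bit with chmod, and runs it by path alone — the same thing system() does:

```python
import os
import stat
import subprocess
import tempfile

# Throwaway script with a shebang line; on exec, the kernel reads "#!"
# and hands the file to the named interpreter.
fd, path = tempfile.mkstemp()
os.write(fd, b"#!/bin/sh\necho ok\n")
os.close(fd)

# Without the execute bit, running the file by path fails even with a shebang.
os.chmod(path, stat.S_IRWXU)
proc = subprocess.run([path], capture_output=True, text=True)
output = proc.stdout.strip()
os.remove(path)
```

If this succeeds but system("python_script") from C does not, the usual suspects are a missing execute bit or a relative path resolved against a different working directory.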
39,862,590 | 2016-10-04T22:14:00.000 | 2 | 0 | 0 | 1 | python,airflow | 40,569,324 | 1 | false | 0 | 0 | I ran into this issue as well using the LocalExecutor. It seems to be a limitation in how the LocalExecutor works. The scheduler ends up spawning child processes (32 in your case). In addition, your scheduler performs 20 iterations per execution, so by the time it gets to the end of its 20 runs, it waits for its child processes to terminate before the scheduler can exit. If there is a long-running child process, the scheduler will be blocked on its execution.
For us, the resolution was to switch to the CeleryExecutor. Of course, this requires a bit more infrastructure, management, and overall complexity for the Celery backend. | 1 | 2 | 0 | I am using airflow 1.7.1.3.
I have an issue with concurrency DAGs / Tasks. When a DAG is running, the scheduler does not launch other DAGs any more. It seems that scheduler is totally frozen (no logs anymore) ... until the running DAG is finished. Then, the new DAGrun is triggered. My different tasks are long-running ECS task (~10 minutes)
I used LocalExecutor and I let default config about parallelism=32 and dag_concurrency=16. I use airflow scheduler -n 20 and reboot it automatically and I set 'depends_on_past': False for all my DAGs declaration.
For information, I deployed airflow in containers running in an ECS cluster. max_threads = 2 and I have only 2 CPU available.
Any ideas ? Thanks | Airflow does not trigger concurrent DAGs with `LocalExecutor` | 0.379949 | 0 | 0 | 1,233 |
39,862,803 | 2016-10-04T22:35:00.000 | 0 | 1 | 1 | 0 | python-3.x,exception,pip,imapclient | 49,603,292 | 1 | false | 0 | 0 | If you are using Windows.., try right clicking 'cmd.exe' and select 'Run as Administrator' and click 'Yes' to allow the following program to make changes to this computer. | 1 | 0 | 0 | I am trying to download a couple of packages in python 3.5 but pip keeps throwing an exception(via pip install pyzmail), please see below:
How do I overcome this issue?
Exception:
Traceback (most recent call last):
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\pip\basecommand.py", line 122, in main
status = self.run(options, args)
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\pip\commands\install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\pip\req.py", line 1229, in prepare_files
req_to_install.run_egg_info()
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\pip\req.py", line 292, in run_egg_info
logger.notify('Running setup.py (path:%s) egg_info for package %s' % (self.setup_py, self.name))
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\pip\req.py", line 265, in setup_py
import setuptools
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\setuptools\__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\setuptools\extension.py", line 5, in <module>
from setuptools.dist import _get_unpatched
File "c:\users\chiruld\appdata\local\programs\python\python35\lib\setuptools\dist.py", line 103
except ValueError, e:
^
SyntaxError: invalid syntax | Issues installing pyzmail or imapclient on python 3.5, pip throws a value and syntax error | 0 | 0 | 0 | 328 |
39,868,163 | 2016-10-05T07:45:00.000 | 0 | 0 | 0 | 0 | python-2.7,dbf,arcpy,pyshp | 39,892,009 | 1 | false | 0 | 0 | Not exactly a programmatical solution for my problem but a practical one:
My shapefile is always static, only the attributes of the features will change. So I copy my original shapefile (only the basic files with endings .shp, .shx, .prj) to my output folder and rename it to the name I want.
Then I create my CSV-File with all calculations and convert it to DBF and save it with the name of my new shapefile to the output folder too. ArcGIS will now load the shapefile along with my own DBF file and I don't even need to do any tablejoin at all!
Now my program runs through in only 50 seconds!
I am still interested in more solutions for the table join problem; maybe I will encounter it again in the future where the shapefile is NOT always static. I did not really understand Nan's solution, I am still at "advanced beginner" level in Python :)
Cheers | 1 | 2 | 1 | I have created a rather large CSV file (63000 rows and around 40 columns) and I want to join it with an ESRI Shapefile.
I have used ArcPy but the whole process takes 30 (!) minutes. If I make the join with the original (small) CSV file, join it with the Shapefile, and then do my calculations with ArcPy, continuously adding new fields and calculating the values, it takes 20 minutes. I am looking for a faster solution and found there are other Python modules such as PySHP or DBFPy, but I have not found any way of joining tables there, hoping that could go faster.
My goal is already to get away from ArcPy as much as I can and preferable only use Python, so preferably no PostgreSQL and alikes either.
Does anybody have a solution for that? Thanks a lot! | DBF Table Join without using Arcpy? | 0 | 1 | 0 | 377 |
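Independent of ArcPy, the join itself can be done as a plain dictionary-lookup join in Python, which is linear in the number of rows and handles 63,000 rows in well under a second. The field names below are invented toy data, not the poster's actual columns; the real inputs would come from csv.DictReader and a DBF reader:

```python
# "Left" table (e.g. shapefile attributes) and "right" table (the big CSV),
# joined on a shared key column "id".
left = [{"id": "1", "name": "a"}, {"id": "2", "name": "b"}]
right_rows = [{"id": "1", "area": "10"}, {"id": "2", "area": "20"}]

# Index the right table once, then merge row by row (a hash join).
right = {row["id"]: row for row in right_rows}
joined = [{**row, **right.get(row["id"], {})} for row in left]
```

The one-time dict build avoids the quadratic cost of scanning the CSV for every shapefile record, which is where row-by-row join tools lose most of their time.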
39,869,000 | 2016-10-05T08:30:00.000 | 0 | 0 | 0 | 0 | python,web.py,cx-oracle | 40,708,891 | 1 | false | 1 | 0 | XDB is an Oracle database component. It would appear that on your first PC, you're able to automatically log on to the database which is why you're not prompted. However, the second PC isn't able to, so you're prompted.
Compare using SQL*Plus (or other oracle client) from your two PCs & configure PC #2 so that it won't require a login (or modify your cx_oracle connect() call to provide the correct connection parameters (user, password, dsn, etc.) | 1 | 0 | 0 | I have a webservice (web.py+cx_Oracle) and now I will call it with localhost:8080/...!
On the local pc it is working. But after installation on a second pc for testing purposes it is not working there. All versions are the same!
On the second pc the browser is asking for a username and password from XDB. What is XDB and why is it asking only on the second pc?
On the first pc everything works fine and it does not ask for a username and password... Can someone explain to me what is going on? | Asking for username and password from XDB | 0 | 0 | 1 | 809
39,869,681 | 2016-10-05T09:03:00.000 | 0 | 0 | 0 | 0 | python,django,django-admin | 39,869,931 | 2 | false | 1 | 0 | There is no built-in solution to this problem, if you want the fields to display dynamically you will always need a custom javascript/ajax solution! You might be able to hack the admin view and template to conditionally show/not show widgets for a field, but if you want to do it dynamically based on user behaviors in the admin, you'll be using javascript.
It's not so terrible, though. At least the Django admin templates have model- and instance-specific ids to give you granular control over your show/hide behavior. | 1 | 0 | 0 | I have a simple but problematic question for me. How can I disable checkbox, if input is already filled/checked? I must disable some fields after first filling them. Thank you for all your ideas.
Sierran | Disable checkbox in django admin if already checked | 0 | 0 | 0 | 917 |
39,871,632 | 2016-10-05T10:34:00.000 | 2 | 1 | 0 | 0 | python,sonarqube | 39,873,031 | 1 | false | 1 | 0 | SonarQube doesn't know the concept of "class". This is a logical element, whereas SonarQube manages only "physical" components like files or folders. The consequence is that the Web API allows you to query only components that are "physical". | 1 | 1 | 0 | I want to get the sonar result in the class wise classification or modularized format. I am using python and the sonar web API. Apart from the basic APIs are there any other APIs which give me the results per class | Is there a way to get the sonar result per class or per module | 0.379949 | 0 | 0 | 143 |
39,872,909 | 2016-10-05T11:35:00.000 | 0 | 0 | 0 | 1 | django,celery,python-3.5 | 39,879,475 | 1 | true | 1 | 0 | I would use a model. The user selects the tasks and orders them, creating records in the table. A celery task runs and executes the tasks from the table in the specified order. | 1 | 0 | 0 | I have page that allows user to select tasks which should be executed in selected order, one by one. So, it create group of tasks. User can create several of them. For each group I should make possible to look on tasks progress.
I've looked into several things like chain, chord, and group, but they seem very tricky to me, and I don't see any way to track each task's progress.
What's good solution for this kind of problem? | Best practice for sequential execution of group of tasks in Celery | 1.2 | 0 | 0 | 361 |
39,877,804 | 2016-10-05T15:14:00.000 | 0 | 1 | 0 | 0 | python-2.7,smartsheet-api | 39,885,588 | 1 | false | 0 | 0 | Any attributes described in the Smartsheet API documentation (which do not specifically appear in a Python code example) represent how the attributes appear in the raw API (JSON) requests/responses. The Python SDK itself actually uses Python variable naming conventions: "lowercase with words separated by underscores as necessary to improve readability" (as described here: python.org/dev/peps/pep-0008). So, for example, the raw API response may contain the attribute "modifiedAt" -- but when using this attribute via the Python SDK, you'll refer to it as "modified_at".
So, try using created_at and modified_at (instead of createdAt and modifiedAt). | 1 | 0 | 0 | My Desktop is Debian 8.5 running firefox and running Mozilla Firefox 45.3 and SmartSheet latest version. Lately I have been trying to obtain attributes from a sheet, among them createdAt or modifiedAt, but when I run the code below:
#!/usr/bin/python
import smartsheet
# Token
planilha = smartsheet.Smartsheet(MyToken)
action = planilha.Sheets.list_sheets(include_all=True)
sheets = action.data
# counter
xCount = 0
for row in sheets:
    xCount += 1
    print row.id, row.createdAt
print xCount
I get
.......
print row.id, row.createdAt
File "/usr/local/lib/python2.7/dist-packages/smartsheet/models/sheet.py", line 175, in __getattr__
raise AttributeError(key)
AttributeError: createdAt
......
I just wonder why, or I'm certainly missing something in the Smartsheet API 2.0 docs...
thanks in advance | SmartSheetAPI using python: createdAt or modifiedAt attributes from a sheet | 0 | 0 | 0 | 186 |
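The renaming the answer describes is mechanical: the raw API's camelCase names become the SDK's snake_case names. A small converter (not part of the SDK, just for illustration) shows the mapping:

```python
import re

def to_snake(name):
    """camelCase attribute name -> the SDK's snake_case spelling."""
    # Insert "_" before each uppercase letter (except at the start), then lower.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
```

So `row.createdAt` from the JSON docs becomes `row.created_at` in Python SDK code.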
39,879,034 | 2016-10-05T16:13:00.000 | 1 | 0 | 0 | 0 | python,django,nginx | 39,879,669 | 1 | false | 1 | 0 | Error 502 Bad Gateway means that the NGINX server used to access your site couldn't communicate properly with the upstream server (your application server).
This can mean that either or both of your NGINX server and your Django Application server are configured incorrectly.
Double-check the configuration of your NGINX server to check it's proxying to the correct domain/address of your application server and that it is otherwise configured correctly.
If you're sure this isn't the issue then check the configuration of your application server. Are you able to connect directly to the application server's address? If you are able to log in to the server running the application, you can try localhost:<port> using your app's port number to connect directly. You can try it with curl to see what response code you get back. | 1 | 0 | 0 | I am new to this. I took the image of running django application and spawned the new vm that points to a different database but I am getting this "502 Bad Gateway nginx/1.1.1"
When I tested this in development mode, it works fine, but not otherwise.
I looked into /var/log/nginx/access.log and error.log but found nothing there.
Any help would be appreciated | 502 Bad Gateway nginx/1.1.19 on django | 0.197375 | 0 | 0 | 1,426 |
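The "connect directly to the application server" check from the answer can be scripted. This sketch stands up a throwaway local HTTP server just so the probe has something to hit; in real debugging you would skip that part and point urlopen at localhost:&lt;your app port&gt; directly:

```python
import http.server
import threading
import urllib.request

# Stand-in for the app server (port 0 = pick any free port).
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The probe: a 200 here but a 502 through NGINX points at the proxy config.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
```

If the direct probe fails too, the problem is the application server itself, not NGINX.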
39,879,939 | 2016-10-05T17:06:00.000 | 2 | 0 | 0 | 0 | python,django,postgresql,django-models,sqlite | 40,100,350 | 1 | true | 1 | 0 | This may help you :
I think you have pre-stored migration files (migrated for the sqlite database).
Now you have changed the database configuration, but Django is still looking for the existing tables according to the migration files you have (migrated for the previous database).
It is better to delete all the migration files in your app's migrations folder and migrate again by running the commands python manage.py makemigrations and python manage.py migrate; it may then work fine.
After modifying my settings file, setting up postgres, creating the database and user I get the error below when running manage.py migrate
django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist
financemgr is an app within the project. rate is a table within the app.
If I run this same command but specify sqlite3 as my backend it works fine.
For clarity I will repeat:
Environment Config1
Ubuntu 14.04, Django 1.10
Settings file has 'ENGINE': 'django.db.backends.sqlite3'
Run manage.py migrate
Migration runs and processes all the migrations successfully
Environment Config2
Ubuntu 14.04, Django 1.10
Settings file has 'ENGINE': 'django.db.backends.postgresql_psycopg2'
Run manage.py migrate
Migration runs and gives the error django.db.utils.ProgrammingError: relation "financemgr_rate" does not exist
Everything else is identical. I am not trying to migrate data, just populate the schema etc.
Any ideas? | django migrate failing after switching from sqlite3 to postgres | 1.2 | 1 | 0 | 955 |
39,880,906 | 2016-10-05T18:06:00.000 | 0 | 0 | 1 | 0 | python,package,environment-variables | 39,918,729 | 1 | false | 0 | 0 | I think I'll just detail in the readme file what to insert and where. I tried to find a difficult solution when it was really simple and straightforward | 1 | 0 | 0 | I created a python package for in-house use which relies upon some environmental variables (namely, the user and password to enter an online database). for my company, the convenience of installing a package rather than having it in every project is significant as the functions inside are used in completely separate projects and maintainability is a primary issue.
So, how do I "link" the package with the environment variables? | In-house made package and environmental variables link | 0 | 0 | 0 | 23
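A common way to do this "linking" is to have the package read the credentials from os.environ at call time and fail loudly when they are unset, so every consuming project only has to export the variables. The variable names below are hypothetical:

```python
import os

def get_db_credentials():
    """Read credentials from the environment; fail loudly if unset."""
    try:
        return os.environ["ACME_DB_USER"], os.environ["ACME_DB_PASSWORD"]
    except KeyError as exc:
        raise RuntimeError(f"missing environment variable: {exc}") from exc

# Callers set the variables once (shell profile, systemd unit, CI secret, ...).
os.environ.setdefault("ACME_DB_USER", "demo")
os.environ.setdefault("ACME_DB_PASSWORD", "demo-pass")
user, password = get_db_credentials()
```

This keeps the secret out of the installed package itself, which is the point of using the environment in the first place.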
39,881,589 | 2016-10-05T18:50:00.000 | 0 | 1 | 0 | 0 | python,virtual-machine,antivirus,malware-detection,cuckoo | 39,904,650 | 1 | false | 0 | 0 | You can do it by deleting entries from the mysql database for pending analysis. | 1 | 1 | 0 | I am testing malware detection on guest VM using Cuckoo sandbox platform. To speed up the analysis, I want to remove pending analysis but keep completed analysis.
Cuckoo has a --clean option, but it will clean all tasks and samples. Can you think of a way to remove only the pending analyses?
Thanks | How to clear pending analysis in Cuckoo sandbox platform | 0 | 0 | 0 | 2,686 |
39,882,504 | 2016-10-05T19:49:00.000 | 2 | 0 | 0 | 0 | python,django,django-tables2 | 39,882,505 | 2 | true | 1 | 0 | Im posting this as a future reference for myself and other who might have the same problem.
After searching for a bit I found out that django-tables2 was sending a single query for each row. The query was something like SELECT * FROM "table" LIMIT 1 OFFSET 1 with increasing offset.
I reduced the number of SQL calls by calling query = list(query) before I create the table and pass the query. By evaluating the query in the Python view code, the table now works with the evaluated data instead, and there is only one database call instead of hundreds.
Why is this happening and how can i reduce the number of queries? | django-tables2 flooding database with queries | 1.2 | 1 | 0 | 348 |
39,882,632 | 2016-10-05T19:56:00.000 | 2 | 0 | 1 | 0 | python,numpy,image-processing | 39,883,453 | 1 | true | 0 | 0 | You could just use a list of numpy arrays. Assuming a scale factor of two, for the i,jth pixel at scale n:
The indices of its "parent" pixel at scale n-1 will be (i//2, j//2)
Its "child" pixels at scale n+1 can be indexed by (slice(2*i, 2*(i+1)), slice(2*j, 2*(j+1))) | 1 | 0 | 1 | Suppose I have N images which are a multiresolution representation of a single image (the Nth image being the coarsest one). If my finest scale is a 16x16 image, the next scale is a 8x8 image and so on.
How should I store such data to be able to quickly access, at a given scale and for a given pixel, its unique parent at the next coarser scale and its children at the next finer scale?
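Following the indexing in the answer (scale n−1 coarser, n+1 finer, factor two), a list of per-scale grids plus two tiny index helpers is enough. Plain nested lists stand in for numpy arrays here so the sketch is self-contained:

```python
# pyramid[n] is a (2**n x 2**n) grid; pyramid[4] is the finest 16x16 level.
pyramid = [[[0] * (2 ** n) for _ in range(2 ** n)] for n in range(5)]

def parent(n, i, j):
    """Index of the unique parent pixel one scale coarser."""
    return n - 1, i // 2, j // 2

def children(n, i, j):
    """Slices selecting the 2x2 block of child pixels one scale finer."""
    return n + 1, slice(2 * i, 2 * (i + 1)), slice(2 * j, 2 * (j + 1))
```

With numpy arrays, the returned slices can be used directly, e.g. `pyramid[m][si, sj]` for the child block.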
39,890,923 | 2016-10-06T08:16:00.000 | 1 | 0 | 0 | 0 | python,django,django-models,django-migrations | 39,891,704 | 3 | false | 1 | 0 | The django knows about applied migrations is only through migration history table. So if there is no record about applied migration it will think that this migration is not applied. Django does not check real db state against migration files. | 1 | 4 | 0 | I have migrations 0001_something, 0002_something, 0003_something in a third-party app and all of them are applied to the database by my own app. I simply want to skip these three migrations. One option is to run the following command
python manage.py migrate <third_party_app_name> 0003 --fake
But I don't want to run this command manually. I was thinking if there can be any method by which I can specify something in settings to skip these migrations. I would simply run python manage.py migrate and it would automatically recognize that 3 migrations need to be faked. Or if there is any way to always fake 0001, 0002 and 0003.
If this was in my own app, I could simply remove the migration files but it is a third party app installed via. pip and I don't want to change that. | Skip a list of migrations in Django | 0.066568 | 0 | 0 | 1,632 |
39,891,681 | 2016-10-06T08:54:00.000 | 1 | 0 | 0 | 0 | python,bitmap,wxpython | 39,901,257 | 1 | true | 0 | 1 | Take a look at the wx.lib.agw.supertooltip module. It should help you to create a tooltip-like window that displays custom rich content.
As for triggering the display of the tooltip, you can catch mouse events for the tree widget (be sure to call Skip so the tree widget can see the events too) and reset a timer each time the mouse moves. If the timer expires because the mouse hasn't been moved in that long then you can use tree.HitTest to find the item that the cursor is on and then show the appropriate image for that item. | 1 | 0 | 0 | So i'm programming python program that uses wxPython for UI, with wx.TreeCtrl widget for selecting pictures(.png) on selected directory. I would like to add hover on treectrl item that works like tooltip, but instead of text it shows bitmap picture.
Is there something that already allows this, or would i have to create something with wxWidgets?
I am not too familiar with wxWidgets, so if I have to create something like that, how hard would it be? A lot of code is already using the TreeCtrl, so it needs to keep working the same way.
So how would I go about doing this? And if there is something I might be missing, I'd be happy to know. | wxpython treectrl show bitmap picture on hover | 1.2 | 0 | 0 | 160
39,900,282 | 2016-10-06T15:40:00.000 | 0 | 0 | 0 | 1 | python,docker,ansible | 39,901,858 | 1 | false | 0 | 0 | The error
GetPassWarning: Cannot control echo on the terminal
is raised by Python and indicates that the terminal you are using does not provide stdin, stdout and stderr. In this case it's stderr.
As there is not much information provided in the question, I guess interactive elements like prompt_vars are being used inside a Dockerfile, which is IMHO not possible.
GetPassWarning: Cannot control echo on the terminal
The docker ubuntu version is 14.04 and python version is 2.7 | Ansible prompt_vars error: GetPassWarning: Cannot control echo on the terminal | 0 | 0 | 0 | 941 |
39,900,449 | 2016-10-06T15:48:00.000 | 0 | 1 | 1 | 0 | python,amazon-web-services,pip,aws-lambda,psycopg2 | 39,900,507 | 2 | false | 0 | 0 | It's not possible to do with pip. You have to add the dependency to your zipped Lambda deployment file. You can't modify your Lambda deployment without uploading a new zipped deployment file. | 1 | 0 | 0 | I have deployed my zipped project without psycopg2 package. I want to install this package on my lambda without re-uploading my fixed project (i haven't access to my project right now). How can i install this package on my lambda? Is it possible to do it with pip? | Installing python package on AWS lambda | 0 | 0 | 0 | 958 |
39,902,458 | 2016-10-06T17:42:00.000 | 4 | 0 | 0 | 0 | python,windows-10,wireless,spyder | 40,372,092 | 1 | false | 0 | 0 | I had the same problem when Spyder was open, with all of my Internet browsers, on both Windows 7 and Windows 10. The newest update of Spyder has fixed most of this for me. Try opening up the command prompt and typing:
conda update spyder.
Hope this helps! | 1 | 3 | 0 | I have a repeatable problem with my laptop (an HP G4 250 that came with windows 10). I can be happily on the Internet, but opening Spyder causes the Internet to immediately die. Now, the system does something rather unusual. I am not disconnected from the router, and the wireless icon still says I am connected and have Internet access. But streams crash, webpages refuse to load and say there is no internet connection, and I can;t even access my router's config page.
Closing Spyder fixes the problem. Not instantly, but when Spyder is open, it creates several pythonw.exe network requests (seen from resource manager) and the Internet is restored when those processes close themselves upon exiting Spyder (typically 10 seconds to 2 minutes, depending on system load).
I have added Spyder to my firewall, but that has done nothing. I haven't added (nor found) pythonw.exe, but it's not Spyder that has the problem with connecting, it's my entire machine.
It's not coincidental. It's happened now, 2 days in a row, and is highly repeatable. After a while with Spyder being open, I can sometimes receive intermittent Internet function, but it frequently drops until I close the program.
After experiencing it last night, I purged my driver and reinstalled it fresh, and that has fixed nothing. I am running the latest wireless driver provided by HP for my machine. As this problem only occurs when running Spyder, I doubt it's a driver or hardware issue.
Any ideas? | Anaconda 3 Spyder appears to be causing internet outages on Windows 10 | 0.664037 | 0 | 1 | 1,453 |
39,903,506 | 2016-10-06T18:44:00.000 | 1 | 0 | 1 | 0 | python,nlp,text-analysis,microsoft-cognitive | 41,389,366 | 1 | false | 0 | 0 | I am from the team that develops the Text Analytics API.
Currently, we only return the key phrases found in the input document. These may occur multiple times in a given input document, which would also increase the likelihood that it is selected as a key phrase.
The demo shown on our website highlights any exact occurrence of a key phrase, which you could also do in your scenario. | 1 | 0 | 0 | I am using Microsoft cognitive APIs for finding relevant keywords in my paragraph. I need to know from which sentences those keywords were selected. Is there any way to do that? The demo by Microsoft highlights the places from where each keyword was selected. | How to find the locations from where the word was selected as keyword by Microsoft text analytics API? | 0.197375 | 0 | 0 | 56
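Exact-occurrence highlighting, as the answer suggests, is easy to reimplement client-side once you have the key phrases back from the API. A minimal locator returning (phrase, start, end) spans:

```python
def find_phrase_spans(text, phrases):
    """Locate every exact occurrence of each key phrase in the text."""
    spans = []
    for phrase in phrases:
        start = 0
        while True:
            i = text.find(phrase, start)
            if i == -1:
                break
            spans.append((phrase, i, i + len(phrase)))
            start = i + 1
    return spans
```

Mapping a span back to its sentence is then a matter of checking which sentence boundaries it falls between.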
39,906,167 | 2016-10-06T21:51:00.000 | 0 | 0 | 0 | 0 | java,python,rest,api | 39,906,371 | 2 | false | 1 | 0 | Furthermore, in the future you might want to separate them from the same machine and use network to communicate.
You can use http requests.
Make a contract in Java for the output you will provide to your Python script (or any other language you will use); send the output as JSON to your Python script. That way you can easily change the language, as long as you send the same JSON.
- I wanted to convert this interaction of UI API -> JAVA -> calling python scripts to become end to end a REST one, so that in coming times it becomes immaterial which language I am using instead of Python.
- Any inputs on what's the best way of making the call end-to-end REST based? | Inputs on how to achieve REST based interaction between Java and Python? | 0 | 0 | 1 | 2,327
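The "contract" idea from the answer boils down to agreeing on JSON shapes on both sides of the HTTP call. A sketch of the Python half, with invented field names, independent of any web framework:

```python
import json

# What the Java side would POST as the request body (hypothetical shape).
request_body = json.dumps({"op": "sum", "args": [1, 2, 3]})

# The Python service parses it, does the work, and replies in JSON too.
payload = json.loads(request_body)
result = sum(payload["args"]) if payload["op"] == "sum" else None
response_body = json.dumps({"result": result})
```

As long as both sides honor the same JSON shape, the language behind the endpoint can be swapped out freely.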
39,906,620 | 2016-10-06T22:31:00.000 | 1 | 0 | 1 | 0 | python,file,binaryfiles,file-writing | 39,906,690 | 1 | false | 0 | 0 | What you're doing wrong is assuming that it can be done. :-)
You don't get to insert and shove the existing data over; it's already in that position on disk, and overwrite is all you get.
What you need to do is to mark the insert position, read the remainder of the file, write your insertion, and then write that remainder after the insertion. | 1 | 0 | 0 | I've tried to do this using the 'r+b', 'w+b', and 'a+b' modes for open(). I'm using with seek() and write() to move to and write to an arbitrary location in the file, but all I can get it to do is either 1) write new info at the end of the file or 2) overwrite existing data in the file.
Does anyone know of some other way to do this or where I'm going wrong here? | how do I insert data to an arbitrary location in a binary file without overwriting existing file data? | 0.197375 | 0 | 0 | 56 |
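The read-remainder / write-insertion / write-remainder dance described above looks like this in practice (fine for files whose tail fits in memory; the helper name is made up):

```python
import os
import tempfile

def insert_bytes(path, offset, data):
    """Insert data at offset by rewriting everything from offset onward."""
    with open(path, "r+b") as f:
        f.seek(offset)
        tail = f.read()            # remainder after the insertion point
        f.seek(offset)
        f.write(data + tail)       # overwrite with insertion + old remainder

# Demo on a throwaway file.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"HELLOWORLD")
insert_bytes(path, 5, b"-NEW-")
with open(path, "rb") as f:
    result = f.read()
os.remove(path)
```

For files too large to hold the tail in memory, the same idea works by streaming the remainder through a temporary file instead.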
39,906,836 | 2016-10-06T22:53:00.000 | 2 | 0 | 0 | 0 | proxy,python-requests,socks | 46,545,628 | 2 | false | 0 | 0 | I resolved this problem by removing "socks:" in_all_proxy. | 2 | 3 | 0 | Using proxy connection (HTTP Proxy : 10.3.100.207, Port 8080).
Using python's request module's get function, getting following error:
"Unable to determine SOCKS version from socks://10.3.100.207:8080/" | Unable to determine SOCKS version from socks | 0.197375 | 0 | 1 | 9,897 |
39,906,836 | 2016-10-06T22:53:00.000 | 9 | 0 | 0 | 0 | proxy,python-requests,socks | 40,343,534 | 2 | true | 0 | 0 | Try export all_proxy="socks5://10.3.100.207:8080" if you want to use socks proxy.
Else export all_proxy="" for no proxy.
Hope This works. :D | 2 | 3 | 0 | Using proxy connection (HTTP Proxy : 10.3.100.207, Port 8080).
Using python's request module's get function, getting following error:
"Unable to determine SOCKS version from socks://10.3.100.207:8080/" | Unable to determine SOCKS version from socks | 1.2 | 0 | 1 | 9,897 |
39,907,808 | 2016-10-07T00:57:00.000 | 1 | 0 | 0 | 0 | python,django | 39,910,769 | 1 | false | 1 | 0 | Django has a strict backwards compatibility policy. If it's raising a deprecation warning, then the new version works already in 1.9. You should just switch to it before you upgrade. | 1 | 1 | 0 | My project depends on an OSS reusable app, and that app includes a Django import which is deprecated in Django 1.10:
from django.db.models.sql.aggregates import Aggregate
is changing to:
from django.db.models.aggregates import Aggregate
We get a warning on Django 1.9, which will become an error on Django 1.10. This is blocking our upgrade, and I want to contribute a fix to the app so we can upgrade.
One option would be to modify the requirements in setup.py so that Django 1.10 is required. But I'm sure my contribution would be rejected since it would break for everyone else.
To maintain backwards compatibility, I can do the import as a try/except but that feels hacky. It seems like I need to do some Django version checking in the imports. Should I do a Django version check, which returns a string, convert that to a float, and do an if version > x? That feels hacky too.
What's the best practice on this? Examples? | How to modify deprecated imports for a reusable app? | 0.197375 | 0 | 0 | 29 |
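One common pattern is to isolate the version check in a single compatibility helper. Here is a sketch that maps a django.VERSION-style tuple to the right import path — the (1, 8) cutover is an assumption, so check the deprecation notes for the exact release before hard-coding it:

```python
def aggregate_module(version):
    # version is a tuple like django.VERSION, e.g. (1, 9, 0, "final", 0).
    # Tuples compare element-wise, so this works across patch releases.
    if version >= (1, 8):
        return "django.db.models.aggregates"
    return "django.db.models.sql.aggregates"

new_path = aggregate_module((1, 10, 0))
old_path = aggregate_module((1, 7, 2))
```

The try/except ImportError variant is equally common in reusable apps and is generally not considered hacky there.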
39,908,430 | 2016-10-07T02:27:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 40,723,826 | 1 | true | 0 | 0 | To use Label Spreading you should follow these steps:
1. create a vector of labels (y), where all the unlabeled instances are set to -1.
2. fit the model using your feature data (X) and y.
3. create predict_entropies vector using stats.distributions.entropy(yourmodelname.label_distributions_.T)
4. create an uncertainty index by sorting the predict_entropies vector.
5. send the samples of lowest certainty for label query.
I hope this framework will help you get started. | 1 | 0 | 1 | I have a network edgelist and I want to use the Label Spreading/Label Propagation algorithm from scikit-learn. I have a set of nodes that are labeled and want to spread the labels on the unlabeled portion of the network. I can generate the adjacency matrix or confusion matrix if needed.
Can someone point me in the right direction using scikit? The documentation seems so limited in what I can do with it.
Thank you in advance. | Is it possible to use label spreading scikit algorithm on edgelist? | 1.2 | 0 | 0 | 620 |
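Scikit-learn's LabelSpreading expects a label vector y in which every unlabeled instance is marked -1 (step 1 of the framework above). A sketch of building that vector from an ordered node list and a dict of known labels — the node ids and label values here are made up:

```python
def build_label_vector(nodes, known_labels):
    # Unlabeled instances get -1, which LabelSpreading treats as "spread to me".
    return [known_labels.get(n, -1) for n in nodes]

nodes = ["a", "b", "c", "d"]          # ordered node ids from the edgelist
y = build_label_vector(nodes, {"a": 0, "c": 1})
# y would then be passed to sklearn.semi_supervised.LabelSpreading().fit(X, y),
# where X is your feature matrix (e.g. rows of the adjacency matrix).
```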
39,908,781 | 2016-10-07T03:12:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x | 39,908,801 | 2 | false | 0 | 0 | Semicolons serve the same purpose as the newline character. It is really just bad style to use a semicolon, often from people coming from languages where lines require it. | 1 | 4 | 0 | What's the difference with Python statements ending with ; and those does not? | What's the difference with Python statements ending with ;? | 0.197375 | 0 | 0 | 370 |
39,910,350 | 2016-10-07T05:54:00.000 | 1 | 0 | 0 | 0 | java,python,macos,python-2.7,cpython | 39,929,049 | 2 | true | 1 | 0 | If you have a lot of dependencies on Java/JVM, you can consider using Jython.
If you would like to develop a scalable/maintainable application, consider using microservices and keep the Java and Python components separate.
If your call to Java is simple and it is easy to capture the output and failures, you can go ahead with running a system command to invoke the Java parts.
If there is no solution to call Java from CPython, could I integrate in this way -- wrap the Java function into a Java command line application, Python 2.7 call this Java application (e.g. using os.system) by passing command line parameter as inputs, and retrieve its console output?
regards,
Lin | CPython 2.7 + Java | 1.2 | 0 | 0 | 195 |
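A sketch of the "Java as a command-line application" option — the command below uses echo as a stand-in for however the Java piece would actually be invoked (e.g. ["java", "-jar", "tool.jar", ...]):

```python
import subprocess

def run_tool(cmd):
    # Run the external command, capture stdout, and surface failures loudly.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr)
    return proc.stdout.strip()

out = run_tool(["echo", "result-from-java"])  # placeholder command
```

On Python 2.7 you would use subprocess.check_output instead of subprocess.run, which was added in Python 3.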
39,915,320 | 2016-10-07T10:40:00.000 | 0 | 0 | 1 | 0 | python,string | 39,918,138 | 2 | false | 0 | 0 | I think the safest way is to use a dictionary where the key is the function's name and the value is the function itself. | 1 | 0 | 0 | 1) Function 1 encodes the string
def Encode(String):
..
..code block
..
return String
2) Function 2 return the string, which actually forms function call of Function 1
def FunctionReturningEncodeFuntionCall(String):
..
..code block
..
return EncodeFunctionString
3) In Function 3 parse the string and pass to Function 2 to form Function 1 call and execute the Function 1 and store its returned value
def LastFuntionToAssignValue(String):
..
..code block
..
a = exec FunctionReturningMyFuntionCall("abcd")
print a
Thanks in advance | Python function returning string which is function name, need to execute the returned statement | 0 | 0 | 0 | 51 |
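A sketch of the dictionary approach from the answer: the intermediate function returns a name (a plain string), and a dispatch table maps it to the callable, so no exec is needed. The encoder functions are invented stand-ins for Encode:

```python
def encode_upper(s):
    return s.upper()

def encode_reverse(s):
    return s[::-1]

DISPATCH = {"encode_upper": encode_upper, "encode_reverse": encode_reverse}

def call_by_name(name, arg):
    # Look the function up by its string name and call it with the argument.
    return DISPATCH[name](arg)

result = call_by_name("encode_reverse", "abcd")
```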
39,917,988 | 2016-10-07T13:02:00.000 | 1 | 0 | 1 | 0 | python | 39,918,601 | 4 | false | 0 | 0 | The Python interpreter analyzes each variable when the program runs. Before running, it doesn't know whether you've got an integer, a float, or a string in any of your variables.
When you have a statically typed language background (Java in my case), it's a bit unusual. Dynamic typing saves you a lot of time and lines of code in large scripts. It prevents you from having errors because you have forgotten to define some variable. However, static typing lets you have more control on how data is stored in a computer's memory. | 2 | 3 | 0 | In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.
So how can the Python interpreter distinguish between variables and give it the required space in memory like int or float? | Understanding variable types, names and assignment | 0.049958 | 0 | 0 | 171 |
39,917,988 | 2016-10-07T13:02:00.000 | 2 | 0 | 1 | 0 | python | 39,918,089 | 4 | false | 0 | 0 | Python is dynamically typed language which means that the type of variables are decided in running time. As a result python interpreter will distinguish the variable's types (in running time) and give the exact space in memory needed. Despite being dynamically typed, Python is strongly typed, forbidding operations that are not well-defined (for example, adding a number to a string) .
On the other hand C and C++ are statically typed languages which means that the types of variables are known in compilation time.
Using dynamic typing in programming languages has the advantage that gives more potential to language, for example we can have lists with different types (for example a list that contains chars and integers). This wouldn't be possible with static typing since the type of the list should be known from the compilation time...).
One disadvantage of dynamic typing is that the compiler-interpreter in many cases must keeps a record of types in order to extract the types of variables, which makes it more slow in comparison with C or C++.
A dynamic typed language like python can be also strongly typed. Python is strongly typed as the interpreter keeps track of all variables types and is restrictive about how types can be intermingled. | 2 | 3 | 0 | In Python, if you want to define a variable, you don't have to specify the type of it, unlike other languages such as C and Java.
So how can the Python interpreter distinguish between variables and give it the required space in memory like int or float? | Understanding variable types, names and assignment | 0.099668 | 0 | 0 | 171 |
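The two properties described in the answers above — dynamic typing (the type lives on the object, not the name) and strong typing (ill-defined operations raise instead of silently coercing) — can be seen directly at the interpreter; a small sketch:

```python
x = 3
first_type = type(x).__name__    # dynamic typing: the type lives on the
x = "three"                      # object, so the same name can rebind
second_type = type(x).__name__   # to an object of a different type

try:                             # strong typing: ill-defined operations
    3 + "3"                      # raise a TypeError
    strongly_typed = False
except TypeError:
    strongly_typed = True
```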
39,924,006 | 2016-10-07T18:48:00.000 | 0 | 0 | 1 | 0 | python,ibm-cloud | 39,925,339 | 1 | false | 0 | 0 | Solved it.
os.getenv('name of the key')
where 'name of the key' is the key defined in the Bluemix UI. | 1 | 0 | 0 | How can we access USER-DEFINED variables in IBM Bluemix in Python? I have made a token in IBM Bluemix, but I am unable to access it from my Python script.
In the bluemix UI,
token = <actual value of token> | IBM Bluemix user defined variables | 0 | 0 | 0 | 100 |
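A minimal sketch of the os.getenv approach from the answer — the key name BLUEMIX_DEMO_TOKEN is made up, and setting it here just stands in for Bluemix injecting the user-defined variable into the process environment:

```python
import os

os.environ["BLUEMIX_DEMO_TOKEN"] = "dummy-value"  # Bluemix does this for you
token = os.getenv("BLUEMIX_DEMO_TOKEN")
```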
39,924,599 | 2016-10-07T19:33:00.000 | 0 | 0 | 1 | 0 | python,logging,rotation,handler,hierarchy | 39,930,135 | 1 | false | 0 | 0 | Create a logging.Handler subclass which determines which file to write to based on the details of the event being logged, and write the formatted event to that file. | 1 | 0 | 0 | I have different loggers (log1, log2, log3, ..., logN) which are being logged to "registry.log" for a big N. I would like to divide "registry.log" into N different files as "registry.log" can become really large.
Is there a way to accomplish this automatically, for instance, with a rotating handler? | Python logging: Best way to organize logs in different files? | 0 | 0 | 0 | 278 |
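The answer's Handler-subclass idea dispatches on the log record inside one handler; an even simpler route is one FileHandler per logger. A sketch (file names are made up; for size-based rotation, swap in logging.handlers.RotatingFileHandler(path, maxBytes=..., backupCount=N)):

```python
import logging
import os
import tempfile

def make_logger(name, path):
    # One file per logger, instead of everything going to registry.log.
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

path = os.path.join(tempfile.mkdtemp(), "log1.log")
log1 = make_logger("log1", path)
log1.info("sensor reading 42")
log1.handlers[0].flush()
contents = open(path).read()
```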
39,924,826 | 2016-10-07T19:49:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 68,478,024 | 10 | false | 0 | 0 | You can do this via the command line:
jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace *.ipynb | 3 | 94 | 0 | Does anyone know what is the keyboard shortcut to clear (not toggle) the cell output in Jupyter Notebook? | Keyboard shortcut to clear cell output in Jupyter notebook | 0 | 0 | 0 | 139,414 |
39,924,826 | 2016-10-07T19:49:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 69,380,076 | 10 | false | 0 | 0 | To delete/clear individual cell outputs in JupyterLab (without going to Edit > Clear Output), go to Settings>Advanced Settings Editor (Ctrl+,)>Keyboard Shortcuts and add this to "shortcuts": [...]
{
"command": "notebook:clear-cell-output",
"keys": [
"Shift D",
"Shift D"
],
"selector": ".jp-Notebook:focus"
}
And save it! (Ctrl + S)
Then when you are in the editor, just press Esc to escape the edit mode and press Shift + d + d. | 3 | 94 | 0 | Does anyone know what is the keyboard shortcut to clear (not toggle) the cell output in Jupyter Notebook? | Keyboard shortcut to clear cell output in Jupyter notebook | 0 | 0 | 0 | 139,414 |
39,924,826 | 2016-10-07T19:49:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 65,677,579 | 10 | false | 0 | 0 | I just looked and found cell|all output|clear which worked with:
Server Information:
You are using Jupyter notebook.
The version of the notebook server is: 6.1.5
The server is running on this version of Python:
Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)]
Current Kernel Information:
Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help. | 3 | 94 | 0 | Does anyone know what is the keyboard shortcut to clear (not toggle) the cell output in Jupyter Notebook? | Keyboard shortcut to clear cell output in Jupyter notebook | 0 | 0 | 0 | 139,414 |
39,925,325 | 2016-10-07T20:29:00.000 | 0 | 0 | 0 | 0 | python-3.x,machine-learning,nlp,stanford-nlp,opennlp | 39,926,150 | 3 | false | 0 | 0 | I think the easy solution is to remove the navbar-inverse class and place this CSS.
.navbar {
background-color: blue;
} | 1 | 0 | 1 | I am engaged in a competition where we have to build a system using given data set. I am trying to learn the proceedings in linguistics research.
The main goal of this task is to identify the sentence level sentiment polarity of the code-mixed dataset of Indian languages pairs. Each of the sentences is annotated with language information as well as polarity at the sentence level.
Anyone interested in participating with me?
If anyone can help me with it, that would be great.
Please reach out to me as soon as possible. | NLP Code Mixed : Code Switching | 0 | 0 | 0 | 165
39,925,847 | 2016-10-07T21:09:00.000 | 3 | 0 | 0 | 0 | android,python,kivy | 39,929,515 | 1 | true | 0 | 1 | You should update to kivy 1.9.2-dev, the problem is fixed there. In buildozer.spec file, write requirement kivy==master. | 1 | 0 | 0 | I have written a simple application and deployed it on Sony Xperia Z and Galaxy Prime devices. On both it's really very hard (I've got to click many times before it reacts) to:
put focus on a TextInput
select a ToggleButton
click a Button
etc.
The same time a ScrollView (that is the container for the mentioned widgets) works perfectly smooth. And when run on desktop then it's alright.
I use kivy 1.9.1, python 2.7, built on Ubuntu 16 using buildozer. I don't know what else I could say... (Let me know, please.)
Have you experienced such an issue? | Kivy - hard to select Widgets | 1.2 | 0 | 0 | 77 |
39,928,710 | 2016-10-08T04:33:00.000 | 0 | 0 | 1 | 0 | python,tkinter,module | 68,200,579 | 14 | false | 0 | 1 | Just go to cmd and type pip install Tk interface;
I think this is the full, true name of the tkinter module
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 0 | 0 | 1 | 0 | python,tkinter,module | 52,298,239 | 14 | false | 0 | 1 | I had to install python3-tk manually before it worked (via apt-get) | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 1 | 0 | 1 | 0 | python,tkinter,module | 67,446,536 | 14 | false | 0 | 1 | This just has to do with the version changes
python 2.x: import Tkinter as tk
python 3.x: import tkinter as tk
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.014285 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 2 | 0 | 1 | 0 | python,tkinter,module | 67,346,820 | 14 | false | 0 | 1 | On a MacBook use brew install python-tk
The error will be sorted out. | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.028564 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 2 | 0 | 1 | 0 | python,tkinter,module | 66,579,584 | 14 | false | 0 | 1 | import Tkinter as tk
Python 3 changed Tkinter (the Python 2.7 name) to tkinter | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.028564 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 2 | 0 | 1 | 0 | python,tkinter,module | 55,162,847 | 14 | false | 0 | 1 | to find your package run:
sudo yum search python|grep tk
mine was:
yum install python3-tkinter.x86_64 | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.028564 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 0 | 0 | 1 | 0 | python,tkinter,module | 55,540,040 | 14 | false | 0 | 1 | pip shown
Could not find a version that satisfies the requirement python--tkinter (from versions: )
No matching distribution found for python--tkinter
You are using pip version 10.0.1, however version 19.0.3 is available.
You should consider upgrading via the python -m pip install --upgrade pip command. | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 1 | 0 | 1 | 0 | python,tkinter,module | 67,800,600 | 14 | false | 0 | 1 | import Tkinter as tk
Notice the capital T: that is the Python 2 spelling; it was changed to lowercase tkinter in Python 3.x | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.014285 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 3 | 0 | 1 | 0 | python,tkinter,module | 52,841,363 | 14 | false | 0 | 1 | Follow this guide to install "tkinter". However now with Python version 3.1 onwards, it is part of the standard python distribution.
You can also install it using sudo apt-get install python3-tk-dbg, if you are in virtualenv. (Same can be done for normal installation, not just virtualenv) | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.042831 | 0 | 0 | 98,413 |
39,928,710 | 2016-10-08T04:33:00.000 | 1 | 0 | 1 | 0 | python,tkinter,module | 62,074,234 | 14 | false | 0 | 1 | I was able to fix this on Amazon Linux 2 with python2.7 by running this sudo yum install python-tools -y command. | 10 | 41 | 0 | I am having a problem during the installation of tkinter. I have version 2.7.11. I entered the pip install tkinter on dos but it shows the following message:
collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions:)
No matching distribution found for tkinter
I have successfully installed flask with the same procedure, but for tkinter it is showing problem. How can I get rid of this problem? | Why is there no tkinter distribution found? | 0.014285 | 0 | 0 | 98,413 |
39,929,166 | 2016-10-08T05:52:00.000 | 1 | 0 | 0 | 0 | python,opencv | 43,068,639 | 2 | false | 0 | 0 | I think you are right and Hector-the-Inspector of PyCharm IDE is wrong. So go to the line with warning and suppress the warning for this statement: put cursor on the statement, go to the bulb icon, click on triangle in the right corner, in the menu choose "Suppress for statement". | 1 | 4 | 0 | I have a short python script that will open the webcam and display a live feed on a local web site. I am using PyCharm IDE which offers corrections and notify's you in case of syntax error. When I pass an argument to VideoCapture it highlights it and says 'unexpected argument'.
self.video = cv2.VideoCapture(0)
This is in a class, and the 'unexpected argument' warning is caused by the 0 that is passed to the OpenCV function. Is there any way I can fix this?
By the way it works fine as is - when you run it, it works the way it should. If you remove the zero the error goes away but it no longer initializes the webcam. | CV2 Python VideoCapture(0) unexpected argument | 0.099668 | 0 | 0 | 1,371 |
39,930,952 | 2016-10-08T09:46:00.000 | 4 | 0 | 0 | 0 | python,ubuntu,tensorflow,anaconda,keras | 50,913,862 | 4 | false | 0 | 0 | I had pip referring by default to pip3, which made me download the libs for Python 3. However, I launched the shell as python (which opened Python 2), and the library obviously wasn't installed there.
Once I matched the names (pip3 -> python3, pip -> python 2), everything worked. | 2 | 29 | 1 | I'm trying to setup keras deep learning library for Python3.5 on Ubuntu 16.04 LTS and use Tensorflow as a backend. I have Python2.7 and Python3.5 installed. I have installed Anaconda and with help of it Tensorflow, numpy, scipy, pyyaml. Afterwards I have installed keras with command
sudo python setup.py install
Although I can see /usr/local/lib/python3.5/dist-packages/Keras-1.1.0-py3.5.egg directory, I cannot use keras library. When I try to import it in python it says
ImportError: No module named 'keras'
I have tried to install keras usingpip3, but got the same result.
What am I doing wrong? Any Ideas? | Cannot import keras after installation | 0.197375 | 0 | 0 | 129,794 |
39,930,952 | 2016-10-08T09:46:00.000 | 0 | 0 | 0 | 0 | python,ubuntu,tensorflow,anaconda,keras | 55,900,347 | 4 | false | 0 | 0 | First, check the list of installed Python packages with:
pip list | grep -i keras
If keras is shown, upgrade it with:
pip install keras --upgrade --log ./pip-keras.log
Now check the log: if any pending dependencies are present, they will affect your installation, so resolve those dependencies and then install it again. | 2 | 29 | 1 | I'm trying to setup keras deep learning library for Python3.5 on Ubuntu 16.04 LTS and use Tensorflow as a backend. I have Python2.7 and Python3.5 installed. I have installed Anaconda and with help of it Tensorflow, numpy, scipy, pyyaml. Afterwards I have installed keras with command
sudo python setup.py install
Although I can see /usr/local/lib/python3.5/dist-packages/Keras-1.1.0-py3.5.egg directory, I cannot use keras library. When I try to import it in python it says
ImportError: No module named 'keras'
I have tried to install keras usingpip3, but got the same result.
What am I doing wrong? Any Ideas? | Cannot import keras after installation | 0 | 0 | 0 | 129,794 |
39,930,958 | 2016-10-08T09:47:00.000 | 3 | 0 | 1 | 0 | python,pycharm,anaconda | 39,931,237 | 1 | false | 0 | 0 | I found the solution. There is a file under the project directory: .idea/workspace.xml which contains the older setting. Deleting this file caused pycharm to recreate it with the new settings. | 1 | 0 | 0 | I imported a python project from github into pycharm. I developed this project in windows but now i am using mac.
I changed the project interpreter in the settings but when running it says: "error running the project" and the error message says that it points to the python interpreter of my old windows directory.
I have already tried to delete my /user/USR_NAME/Library/Caches but it didn't help. I also changed the project interpreter but it didn't help
I am using python2.7 with anaconda
Is there any project properties file with the old settings? | Cannot change python interpreter | 0.53705 | 0 | 0 | 1,146 |
39,933,160 | 2016-10-08T13:47:00.000 | 0 | 0 | 1 | 0 | python,macos,comments,python-idle,beep | 39,938,365 | 1 | true | 0 | 0 | Various programs, including IDLE, sometimes ask the computer to 'beep' when the user does something that the program considers an error. In general, to not hear a beep, you can 1) stop doing whatever provokes the beep, 2) turn your speaker down or off, or 3) plug in earphones and not wear them while editing. | 1 | 0 | 0 | Why, when I type an end parantheses in a commented out area in IDLE, does mac sound the error beep, and how can I stop it? | Mac beeps when I type end parantheses in commented block of Python | 1.2 | 0 | 0 | 50 |
39,934,906 | 2016-10-08T16:51:00.000 | 1 | 0 | 1 | 0 | python,django,virtualenv | 53,875,262 | 5 | false | 1 | 0 | I had installed Django 2 via pip3 install Django, but I was running python manage.py runserver instead of python3 manage.py runserver. Django 2 only works with python 3+. | 1 | 17 | 0 | I cloned my Django project from my GitHub account and activated the virtualenv using the famous command source nameofenv/bin/activate
And when I run python manage.py runserver
It gives me an error saying:
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? | Installed Virtualenv and activating virtualenv doesn't work | 0.039979 | 0 | 0 | 97,105 |
39,936,352 | 2016-10-08T19:12:00.000 | 1 | 0 | 0 | 0 | python,filemaker | 39,941,551 | 1 | true | 1 | 0 | This is not a standard requirement and there is no easy way of doing this. The best way to track changes is a Source Control system like git, but it is not applicable to FileMaker Pro as the files are binary.
You can try your approach, or you can try to add the new records in FileMaker instead of updating them and flag them as current or use only the last record
There are some amazing guys here, but you might want to take it to one of the FileMaker forums, as the FileMaker audience there is much larger than on SO | 1 | 0 | 0 | This is quite a general question, though I'll give the specific use case for context.
I'm using a FileMaker Pro database to record personal bird observations. For each bird on the national list, I have extracted quite a lot of base data by website scraping in Python, for example conservation status, geographical range, scientific name and so on. In day-to-day use of the database, this base data remains fixed and unchanging. However, once a year or so I will want to re-scrape the base data to pick up the most recent published information on status, range, and even changes in scientific name (that happens).
I know there are options such as PyFilemaker or bBox which should allow me to write to the FileMaker database from Python, so the update mechanism itself shouldn't be a problem.
It would be rather dangerous simply to overwrite all of last year’s base data with the newly scraped data, and I'm looking for general advice as to how best to provide visibility for the changes before manually importing them. What I have in mind is to use pandas to generate a spreadsheet using the base data, and to highlight the changed cells. Does that sound a sensible way of doing it? I suspect that this may be a very standard requirement, and if anybody could help out with comments on an approach which is straightforward to implement in Python that would be most helpful. | Providing visibility of periodic changes to a database | 1.2 | 1 | 0 | 46 |
39,936,494 | 2016-10-08T19:28:00.000 | 0 | 0 | 0 | 0 | python,django,raspberry-pi | 39,936,550 | 1 | false | 1 | 0 | You can use the urllib or requests module in Python to send a POST request to your Django server.
You can then have a Django view respond to that POST request. Inside this view, you can have a method that adds the data sent from your Raspberry Pi program into the database on the Django server side.
Therefore you don't need another GET method to deal with adding data into the database in this case. | 1 | 0 | 0 | I am working on a project that will have a raspberry PI collect data from a set of sensors and then send the data to a django server.
I need the server to then take that data and add it to a database and perform ARIMA time series forecasting on the updated dataset every x seconds after a number of new entries are added.
Can I use POST in the raspberry PI program to send the data to that url, and then use GET in a django view to add the incoming data into a database? | Using GET and POST to add data to a database in Django | 0 | 0 | 0 | 467 |
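A sketch of the Pi side using only the standard library (the URL and the "readings" field are invented — they must match the Django view's URLconf and whatever that view parses). Attaching a data payload to urllib.request.Request makes it a POST; actually sending it would be urllib.request.urlopen(req):

```python
import json
import urllib.request

def build_post(url, readings):
    # Serialize the sensor readings and wrap them in a POST request.
    body = json.dumps({"readings": readings}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = build_post("http://example.com/sensors/", [21.5, 21.7])
method = req.get_method()                      # POST, because data is set
decoded = json.loads(req.data.decode("utf-8"))
```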
39,936,697 | 2016-10-08T19:49:00.000 | 1 | 0 | 0 | 0 | python,sqlite | 39,938,696 | 1 | true | 0 | 0 | SQLite database files can be read by many different readers at the same time; there is no concurrency problem in that respect with sqlite3. The problem native to sqlite3 concerns writing to the file: only one writer is allowed.
So if you only read, you're fine.
If you are planning to lock the database (and succeed at it) while you compute the hash, you become a writer with exclusive access. | 1 | 0 | 0 | I have a function like the following that I want to use to compute the hash of a sqlite database file, in order to compare it to the last backup I made to detect any changes.
import hashlib

def get_hash(file_path):
    # http://stackoverflow.com/a/3431838/1391717
    hash_sha1 = hashlib.sha1()  # note: sha1() must be called, not just referenced
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_sha1.update(chunk)
    return hash_sha1.hexdigest()
I plan on locking the database, so no one can write to it while I'm computing the hash. Is it possible for me to cause any harm while doing this?
# http://codereview.stackexchange.com/questions/78643/create-sqlite-backups
connection = sqlite3.connect(database_file)
cursor = connection.cursor()
cursor.execute("begin immediate")
db_hash = get_hash(args.database) | How to compute a hash of a sqlite database file without causing harm | 1.2 | 1 | 0 | 1,538 |
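Putting the two snippets together as a runnable sketch with only the standard library: BEGIN IMMEDIATE takes a write lock, so no other connection can modify the file while its raw bytes are hashed, while plain readers remain unaffected. The table and file names are made up:

```python
import hashlib
import os
import sqlite3
import tempfile

def file_sha1(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            h.update(chunk)
    return h.hexdigest()

db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db, isolation_level=None)  # autocommit: we manage txns
conn.execute("CREATE TABLE readings (value REAL)")

conn.execute("BEGIN IMMEDIATE")   # blocks other writers until we finish
digest = file_sha1(db)
conn.execute("ROLLBACK")          # nothing was written; just release the lock
conn.close()
```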
39,936,967 | 2016-10-08T20:19:00.000 | 0 | 1 | 0 | 0 | python,algorithm,opencv,raspberry-pi,computer-vision | 41,820,410 | 1 | false | 0 | 0 | I have done a similar project in my Master's degree.
I used a Raspberry Pi 3 because it is faster than the Pi 2 and has more resources for image processing.
I used the KNN algorithm in OpenCV for number detection. It was fast and had good efficiency.
The main advantage of the KNN algorithm is that it is very lightweight. | 1 | 1 | 1 | I'm trying to do object recognition in an embedded environment, and for this I'm using Raspberry Pi (Specifically version 2).
I'm using OpenCV Library and as of now I'm using feature detection algorithms contained in OpenCV.
So far I've tried different approaches:
I tried different keypoint extraction and description algorithms: SIFT, SURF, ORB. SIFT and SURF are too heavy and ORB is not so good.
Then I tried using different algorithms for keypoint extraction and then description. The first approach was to use FAST algorithm to extract key points and then ORB or SURF for description, the results were not good and not rotation invariant, then i tried mixing the others.
I am now at the point where I get the best results, time permitting, using ORB for keypoint extraction and SURF for description. But it is still really slow.
So do you have any suggestions or new ideas to obtain better results? Am I missing something?
As additional information, I'm using Python 3.5 with OpenCV 3.1 | Feature detection for embedded platform OpenCV | 0 | 0 | 0 | 406 |
39,939,523 | 2016-10-09T02:52:00.000 | 1 | 0 | 0 | 0 | android,python,security,kivy | 61,085,242 | 2 | false | 0 | 1 | You can't really secure it, because p4a compiles it into private.mp3.
You can rename that file from private.mp3 to private.tar.gz and still get access to all the code and information.
39,939,523 | 2016-10-09T02:52:00.000 | 0 | 0 | 0 | 0 | android,python,security,kivy | 40,034,392 | 2 | true | 0 | 1 | After disassembling my apk file, I figured out that python-for-android stores all of its stuff, including the Python installation and the project itself, in a binary file named private.mp3, so the source is not fully open and I might be good to go. | 2 | 0 | 0 | I've made an app using the Kivy cross-platform tool and I built the apk file using python-for-android. I want to store a secret key locally in the application, but since the apk file can be disassembled, how can I make sure my secret key is safe? | Securing android application made by kivy | 1.2 | 0 | 0 | 353
39,940,303 | 2016-10-09T05:21:00.000 | 0 | 0 | 1 | 0 | python,windows-server-2008-r2,msvcr100.dll | 54,189,960 | 1 | false | 0 | 0 | The "Failed to write all bytes for (random DLL name)" error generally indicates that the disk is full. Would be nice if Microsoft had bothered to add an extra sentence indicating such, but this is usually the problem.
If your disk isn't full, then it may be a permissions issue -- make sure the user you're running the program as has write access to wherever it's trying to write to. | 1 | 0 | 0 | I made a Python console exe. It does not work on a Windows Server 2008 R2 machine.
I copied MSVCR100.dll and MSVCP100.dll from another computer into the directory containing the exe file, and it worked correctly for a long time.
Today, when I start it, it shows "Failed to write all bytes for MSVCR100.dll".
I don't know what caused it or how to deal with it.
Thanks for any suggestions. | Failed to write all bytes for MSVCR100.dll | 0 | 0 | 0 | 2,132 |
39,944,576 | 2016-10-09T14:16:00.000 | 3 | 0 | 1 | 0 | python,macos,jupyter-notebook | 65,419,541 | 5 | false | 0 | 0 | That usually happens when I edit the program after leaving the output running mid-way and then run it again. This will cause the kernel to get stuck. I generally just restart the kernel and then it works fine.
To restart the kernel,press Esc to enter command mode, then press 0 0 (zero) to restart the kernel, now your program will run.
To avoid this in the future, remember to complete the execution of your program before editing the code. | 2 | 14 | 0 | I'm using Jupyter notebook 4.0.6 on OSX El Capitan.
Once in a while I'll start running a notebook and the cell will simply hang, with a [ * ] next to it and no output.
When this happens, I find that only killing Jupyter at the command line and restarting it solves the problem. Relaunching the kernel doesn't help.
Has anyone else had this problem? If so, any tips? | What to do when Jupyter suddenly hangs? | 0.119427 | 0 | 0 | 19,641 |
39,944,576 | 2016-10-09T14:16:00.000 | -1 | 0 | 1 | 0 | python,macos,jupyter-notebook | 71,687,724 | 5 | false | 0 | 0 | I have been facing the same problem recently: the Jupyter notebook often gets stuck for no apparent reason, even when executing very simple code (such as x = 1), so there is no possibility that I fell into an infinite loop.
I noticed that every time this situation occurred, there was a solid circle on the top right of the page, which meant the kernel was busy and working, but it was abnormal that Jupyter took over twenty seconds to process code like x = 1. I need to wait several minutes to see if the kernel comes back, while sometimes it never comes back, so I have to shut down and restart the kernel, losing all of my data. It is weird that although the kernel sometimes takes a long time to come back on its own, there is literally NO output; instead, "In []" is shown in front of that block. Besides, since I am using the ExecuteTime extension, I am able to see how long Jupyter takes to execute a code block, and when this awful situation happens it turns out
executed in 0ms, finished 01:38:14 2022-03-31
I guess it may be caused by memory problems, because it seems to happen more frequently when I am processing large datasets, or it may be due to an unknown incompatibility with nbextensions; noted here as a reference for other people who meet the same issue.
Once in a while I'll start running a notebook and the cell will simply hang, with a [ * ] next to it and no output.
When this happens, I find that only killing Jupyter at the command line and restarting it solves the problem. Relaunching the kernel doesn't help.
Has anyone else had this problem? If so, any tips? | What to do when Jupyter suddenly hangs? | -0.039979 | 0 | 0 | 19,641 |
39,945,389 | 2016-10-09T15:40:00.000 | 1 | 0 | 0 | 0 | python,django,django-rest-framework | 39,953,931 | 3 | false | 1 | 0 | There are some parts you can use without a Django project, though Django itself might still need to be installed.
This question feels like it isn't the real question. Why would you need DRF without Django? | 2 | 1 | 0 | Do I need to have a Django website in order to use Django REST framework, or can I use DRF by itself as a standalone app? Sorry, but it is not so obvious to me. Thanks for the help. | Django Rest Framework standalone? | 0.066568 | 0 | 0 | 323
39,945,389 | 2016-10-09T15:40:00.000 | -1 | 0 | 0 | 0 | python,django,django-rest-framework | 52,072,780 | 3 | false | 1 | 0 | Django REST framework is a wrapper around Django for REST APIs. Django is required for Django REST framework. | 2 | 1 | 0 | Do I need to have a Django website in order to use Django REST framework, or can I use DRF by itself as a standalone app? Sorry, but it is not so obvious to me. Thanks for the help. | Django Rest Framework standalone? | -0.066568 | 0 | 0 | 323
39,949,845 | 2016-10-10T00:18:00.000 | 0 | 0 | 1 | 0 | python-3.x | 39,950,009 | 6 | false | 0 | 0 | Figured it out: if you just installed Python, then you probably did not add Python to your PATH.
To fix this, uninstall Python and then reinstall it. This time, tick "Add Python to PATH" at the bottom of the install screen. | 5 | 4 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 1 | 0 | 1 | 0 | python-3.x | 43,901,308 | 6 | false | 0 | 0 | Simply restart your VS Code. This error shows that some packages have been downloaded but are not installed until you restart it. | 5 | 4 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0.033321 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 1 | 0 | 1 | 0 | python-3.x | 43,998,477 | 6 | false | 0 | 0 | Add python path by following these steps.
1. Go to uninstall a program.
2. Go to Python 3.6.1 (this is my python version). Select and click on Uninstall/change.
3. Click on Modify.
4. Click Next > In advanced options > tick "Add Python to environment variables". Click Install. Restart VS Code. | 5 | 4 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0.033321 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 2 | 0 | 1 | 0 | python-3.x | 44,814,591 | 6 | false | 0 | 0 | For those who are having this error after the recent (May-June of 2017) update of Visual Studio Code.
Your old launch.json file might be causing this issue, due to the recent updates of launch.json file format and structure.
Try to delete launch.json file in the .vscode folder. The .vscode folder exists in your workspace where your source code exists, not to be confused with the one in your user home folder (C:\Users\{username}\.vscode).
This workaround worked fine for me with Windows10 + Visual Studio Code + Python extension. Just delete the existing launch.json and restart Visual Studio Code, and then start your debugging. The launch.json file might be regenerated again, but this time it should be in the correct shape. | 5 | 4 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0.066568 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 7 | 0 | 1 | 0 | python-3.x | 41,195,399 | 6 | false | 0 | 0 | Do not uninstall!
1) Go to the location where you installed the program and copy that location.
*example: C:\Program Files (x86)\Microsoft VS Code
2) Right click on Computer > Properties > Advanced System Settings > Environment Variables. Under User variables, find "Path", click Edit, go to the end of the variable value, add ; and then paste your location, then click OK. Then, under System variables, find "Path" and do the same thing: add ; and then paste your location.
FOR EXAMPLE: ;C:\Program Files (x86)\Microsoft VS Code
3) Restart your Visual Studio Code | 5 | 4 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 1 | 0 | 0 | 27,825 |
39,950,769 | 2016-10-10T03:06:00.000 | 0 | 0 | 0 | 0 | python,github,configuration,travis-ci,configuration-files | 39,951,058 | 1 | false | 1 | 0 | Let's take a Linux environment as an example. Often, the user-level configuration of an application is placed under your home folder as a dot file. So you can do something like this:
In your git repository, track a sample configuration file, e.g., config.sample.yaml, and put the configuration structure there.
When deploying, either in the test environment or the production environment, copy and rename this file as a dot file, e.g., $HOME/.{app}.config.yaml. Then, in your application, you can read this file.
If you are developing a Python package, you can perform the file copy in setup.py. There are some advantages:
You can always track the structure changes of your configuration file.
Separate configuration between test and production environments.
More security: you do not need to put your db connection information in a public file.
Hope this is helpful. | 1 | 0 | 0 | I want to use a test db in my test environment, and the production db in the production environment in my Python application.
How should I handle routing to two dbs? Should I have an untracked config.yml file that has the test db's connection string on my test server, and the production db's connection string on production server?
I'm using github for version control and travis ci for deployment. | Using different dbs on production and test environment | 0 | 1 | 0 | 68 |
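The copy-then-read flow from the answer above can be sketched like this. The app name is made up for illustration, and JSON stands in for YAML purely so the example needs only the standard library (reading a real config.yml would additionally require PyYAML):

```python
import json
import os
import shutil
import tempfile

home = tempfile.mkdtemp()  # stand-in for the real $HOME

# 1. The tracked sample file, checked into the git repository.
sample = os.path.join(home, "config.sample.json")
with open(sample, "w") as f:
    json.dump({"db_url": "sqlite:///test.db"}, f)

# 2. On deployment, copy/rename it to a dot file under the home folder,
#    then edit it per environment (test vs production).
target = os.path.join(home, ".myapp.config.json")
shutil.copy(sample, target)

# 3. The application reads the dot file at startup.
with open(target) as f:
    config = json.load(f)
print(config["db_url"])
```

Because the dot file is untracked, the test server and production server can each carry their own connection string without it ever entering version control.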
39,951,211 | 2016-10-10T04:11:00.000 | 0 | 0 | 1 | 0 | windows-10,python-3.4 | 39,973,480 | 1 | false | 0 | 0 | How are the X and Y values separated in the file? | 1 | 0 | 0 | How would I go about taking data from a file, separating the x's and y's, and still be able to use those numbers to find the slope, y-intercept, and correlation coefficient? I have all the equations down, but I just can't seem to use the data as an integer. I'm not home so I don't have my program on me yet, and if it'll make it easier then I can post what I already have when I get home, but I've been stuck on this for 4 days and nothing will work. | Using data in file for equations. Python 3.4 | 0 | 0 | 0 | 18
39,952,675 | 2016-10-10T06:51:00.000 | 0 | 0 | 1 | 0 | python | 39,952,942 | 2 | false | 0 | 0 | If you don't know whether you need a new class, then you should not write it. | 1 | 0 | 0 | So far I've been running main() as a plain script (a separate .py file, not in a class or anything) and then calling the different instances from my other modules.
Is this right or should I make a class just for the main too?
Regards and thanks. | Should I put the main in a class in Python? | 0 | 0 | 0 | 374
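For reference, the convention the answers hint at is to keep main() as a plain module-level function guarded by the __name__ check, with no class wrapper; a minimal sketch:

```python
def main():
    # Module-level entry point; a class adds nothing here.
    print("doing the work")
    return 0

if __name__ == "__main__":
    main()
```

Other modules can then `import` the file and call `main()` (or any other function) directly, without the guard block running on import.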
39,954,942 | 2016-10-10T09:13:00.000 | 1 | 0 | 0 | 1 | ipython,zeromq,pyzmq,ipython-parallel | 39,958,173 | 1 | true | 1 | 0 | If you are using --reuse, make sure to remove the files if you change settings. It's possible that it doesn't behave well when --reuse is given and you change things like --ip, as the connection file may be overriding your command-line arguments.
When setting --ip=0.0.0.0, it may be useful to also set --location=a.b.c.d where a.b.c.d is an ip address of the controller that you know is accessible to the engines. Changing the
If registration works and subsequent connections don't, this may be due to a firewall only opening one port, e.g. 5900. The machine running the controller needs to have all the ports listed in the connection file open. You can specify these to be a port-range by manually entering port numbers in the connection files. | 1 | 0 | 0 | I am trying to use the ipyparallel library to run an ipcontroller and ipengine on different machines.
My setup is as follows:
Remote machine:
Windows Server 2012 R2 x64, running an ipcontroller, listening on port 5900 and ip=0.0.0.0.
Local machine:
Windows 10 x64, running an ipengine, listening on the remote machine's ip and port 5900.
Controller start command:
ipcontroller --ip=0.0.0.0 --port=5900 --reuse --log-to-file=True
Engine start command:
ipengine --file=/c/Users/User/ipcontroller-engine.json --timeout=10 --log-to-file=True
I've changed the interface field in ipcontroller-engine.json from "tcp://127.0.0.1" to "tcp://" for ipengine.
On startup, here is a snapshot of the ipcontroller log:
2016-10-10 01:14:00.651 [IPControllerApp] Hub listening on tcp://0.0.0.0:5900 for registration.
2016-10-10 01:14:00.677 [IPControllerApp] Hub using DB backend: 'DictDB'
2016-10-10 01:14:00.956 [IPControllerApp] hub::created hub
2016-10-10 01:14:00.957 [IPControllerApp] task::using Python leastload Task scheduler
2016-10-10 01:14:00.959 [IPControllerApp] Heartmonitor started
2016-10-10 01:14:00.967 [IPControllerApp] Creating pid file: C:\Users\Administrator\.ipython\profile_default\pid\ipcontroller.pid
2016-10-10 01:14:02.102 [IPControllerApp] client::client b'\x00\x80\x00\x00)' requested 'connection_request'
2016-10-10 01:14:02.102 [IPControllerApp] client::client [b'\x00\x80\x00\x00)'] connected
2016-10-10 01:14:47.895 [IPControllerApp] client::client b'82f5efed-52eb-46f2-8c92-e713aee8a363' requested 'registration_request'
2016-10-10 01:15:05.437 [IPControllerApp] client::client b'efe6919d-98ac-4544-a6b8-9d748f28697d' requested 'registration_request'
2016-10-10 01:15:17.899 [IPControllerApp] registration::purging stalled registration: 1
And the ipengine log:
2016-10-10 13:44:21.037 [IPEngineApp] Registering with controller at tcp://172.17.3.14:5900
2016-10-10 13:44:21.508 [IPEngineApp] Starting to monitor the heartbeat signal from the hub every 3010 ms.
2016-10-10 13:44:21.522 [IPEngineApp] Completed registration with id 1
2016-10-10 13:44:27.529 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (1 time(s) in a row).
2016-10-10 13:44:30.539 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (2 time(s) in a row).
...
2016-10-10 13:46:52.009 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (49 time(s) in a row).
2016-10-10 13:46:55.028 [IPEngineApp] WARNING | No heartbeat in the last 3010 ms (50 time(s) in a row).
2016-10-10 13:46:55.028 [IPEngineApp] CRITICAL | Maximum number of heartbeats misses reached (50 times 3010 ms), shutting down.
(There is a 12.5 hour time difference between the local machine and the remote VM)
Any idea why this may happen? | ipyparallel displaying "registration: purging stalled registration" | 1.2 | 0 | 0 | 264 |
39,957,657 | 2016-10-10T11:53:00.000 | 0 | 0 | 0 | 0 | python,image,algorithm,opencv,image-processing | 39,973,094 | 2 | false | 0 | 0 | I used several lists and list.append() for storing the image.
For finding the white regions in the black & white images I used cv2.findNonZero(). | 2 | 0 | 1 | I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates.
So that means a lot of switching back and forth the original image and the binary images.
I don't want to read every time the images from file because it might be very time consuming.
Any suggestion which data structure should be used for storing several types of images in Python? | store multiple images efficiently in Python data structure | 0 | 0 | 0 | 783 |
39,957,657 | 2016-10-10T11:53:00.000 | 1 | 0 | 0 | 0 | python,image,algorithm,opencv,image-processing | 39,973,408 | 2 | true | 0 | 0 | PIL and Pillow are only marginally useful for this type of work.
The basic algorithm used for "finding and counting" objects like you are trying to do goes something like this: 1. Conversion to grayscale 2. Thresholding (either automatically via Otsu method, or similar, or by manually setting the threshold values) 3. Contour detection 4. Masking and object counting based on your contours.
You can just use a Mat of integers (Mat1i); that data structure fits this scenario.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates.
So that means a lot of switching back and forth the original image and the binary images.
I don't want to read every time the images from file because it might be very time consuming.
Any suggestion which data structure should be used for storing several types of images in Python? | store multiple images efficiently in Python data structure | 1.2 | 0 | 0 | 783 |
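For the kind of in-memory bookkeeping described in this question, a plain dict of NumPy arrays keyed by image name is usually enough. The sketch below uses small synthetic arrays instead of files loaded with OpenCV, so the shapes and names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

grayscale = {}   # image name -> 2-D uint8 array (grayscale image)
masks = {}       # image name -> 2-D bool array (binary annotation mask)

for name in ("img_0", "img_1"):
    grayscale[name] = rng.integers(0, 256, size=(4, 5), dtype=np.uint8)
    masks[name] = rng.integers(0, 2, size=(4, 5)).astype(bool)

# Collect (image, row, col, gray value) for every white mask pixel,
# ready to be dumped into a pandas DataFrame / CSV later.
records = []
for name, mask in masks.items():
    rows, cols = np.nonzero(mask)
    for r, c in zip(rows, cols):
        records.append((name, int(r), int(c), int(grayscale[name][r, c])))
print(records[:3])
```

Keeping both dicts in memory avoids re-reading files while switching back and forth between the original images and their masks.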
39,958,650 | 2016-10-10T12:46:00.000 | 0 | 0 | 1 | 0 | python,multithreading,console-application | 39,958,943 | 1 | true | 0 | 0 | In a console, standard output (produced by the running program(s)) and standard input (produced by your keypresses) are both sent to screen, so they may end up all mixed.
Here your thread 1 writes one "x" per line every second, so if you take more than 1 second to type HELLO, then that will produce the in-console output that you submitted.
If you want to avoid that, a few non-exhaustive suggestions:
temporarily interrupt thread1 output when a keypress is detected
use a library such as ncurses to create separate zones for your program output and the user input
just suppress thread1's output, or send it to a file instead. | 1 | 0 | 0 | I have a problem with a console app with threading. In the first thread I have a function which writes the symbol "x" to the output. In the second thread I have a function which waits for user input. (The symbol "x" is just a random choice for this question.)
For ex.
Thread 1:
while True:
    print "x"
    time.sleep(1)
Thread 2:
input = None
while input != "EXIT":
    input = raw_input()
    print input
But when i write text for thread 2 to console, my input text (for ex. HELLO) is rewroted.
x
x
HELx
LOx
x
x[enter pressed here]
HELLO
x
x
Is there any way I can prevent my input text from being overwritten by the symbol "x"?
Thanks for answers. | One of threads rewrites console input in Python | 1.2 | 0 | 0 | 201 |
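One way to implement the first suggestion from the answer above (pausing the background printer while the user types) is with a threading.Event. This is a minimal sketch, not the asker's program: output goes into a list so the pause can be checked, and detecting the actual keypress (which is platform-specific) is simulated by setting the event manually:

```python
import threading
import time

typing = threading.Event()   # set while the user is typing
stop = threading.Event()
lines = []

def printer():
    # Background writer: stays quiet whenever `typing` is set.
    while not stop.is_set():
        if not typing.is_set():
            lines.append("x")
        time.sleep(0.02)

t = threading.Thread(target=printer)
t.start()

time.sleep(0.1)          # printer runs freely for a while
typing.set()             # e.g. set this when the first keypress arrives
time.sleep(0.1)          # let any in-flight append land
paused_at = len(lines)
time.sleep(0.2)          # "user is typing": printer stays silent
assert len(lines) == paused_at
typing.clear()           # input finished; printing resumes
time.sleep(0.2)
stop.set()
t.join()
print(len(lines), "lines written")
```

With a real keyboard, you would set `typing` on the first keypress and clear it after Enter, so the "x" lines never interleave with what the user is typing.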
39,963,972 | 2016-10-10T17:50:00.000 | 3 | 0 | 0 | 0 | python | 39,964,037 | 2 | false | 1 | 0 | You can't do what you want. Beautiful Soup is a text processor which has no way to run JavaScript. | 2 | 0 | 0 | I don't want to use Selenium since I don't want to open any browsers.
The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.
Example (not what the button actually do) :
I enter a name such as "John", press the button and it changes "John" to "nhoJ".
so I already managed to change the value of the input to John but I have no clue how I could simulate a button click so I can get the output.
Thanks. | Python: How to simulate a click using BeautifulSoup | 0.291313 | 0 | 1 | 5,581 |
39,963,972 | 2016-10-10T17:50:00.000 | 0 | 0 | 0 | 0 | python | 39,964,061 | 2 | false | 1 | 0 | BeautifulSoup is an HTML parser; you can't do such a thing with it. But if that button calls an API, you could make a request to that API, and I guess that would simulate clicking the button. | 2 | 0 | 0 | I don't want to use Selenium since I don't want to open any browsers.
The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.
Example (not what the button actually do) :
I enter a name such as "John", press the button and it changes "John" to "nhoJ".
so I already managed to change the value of the input to John but I have no clue how I could simulate a button click so I can get the output.
Thanks. | Python: How to simulate a click using BeautifulSoup | 0 | 0 | 1 | 5,581 |
39,964,360 | 2016-10-10T18:16:00.000 | 2 | 0 | 1 | 0 | python | 39,964,406 | 1 | true | 0 | 0 | Avoid it if possible.
You can get and set such attributes via getattr and setattr, but they can't be accessed with ordinary dot syntax (something like obj.class is a syntax error), so they're a pain to use.
As Aurora0001 mentioned in a comment, a convention if you "need" to use them is to append an underscore. The most common reason to "need" to have such attributes is that they're generated programatically from an external data source.
(Note that type is not a keyword, so you can do self.type just fine.) | 1 | 0 | 0 | I checked the Python style guide, and I found no specific references to having instance variable names with reserved words e.g. self.type, self.class, etc.
What's the best practice for this? | Instance variable name a reserved word in Python | 1.2 | 0 | 0 | 935 |
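The two options from the answer (trailing underscore vs. getattr/setattr for a true keyword-named attribute) can be illustrated side by side:

```python
class Record:
    def __init__(self):
        # Trailing-underscore convention: usable with normal dot syntax.
        self.class_ = "warrior"
        # A true keyword-named attribute: only reachable via setattr/getattr,
        # e.g. when generated programmatically from an external data source.
        setattr(self, "class", "mage")

r = Record()
print(r.class_)                 # warrior
print(getattr(r, "class"))      # mage
# r.class would be a SyntaxError, which is why such names are a pain to use
```

In practice the trailing underscore is almost always preferred, since the setattr-only attribute cannot appear in ordinary expressions.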
39,964,635 | 2016-10-10T18:35:00.000 | 11 | 0 | 1 | 1 | python,python-2.7,pip | 39,977,369 | 12 | false | 0 | 0 | As mentioned in the comments, you've got the virtualenv module installed properly in the expected environment since python -m venv allows you to create virtualenv's.
The fact that virtualenv is not a recognized command is a result of the virtualenv.py not being in your system PATH and/or not being executable. The root cause could be outdated distutils or setuptools.
You should attempt to locate the virtualenv.py file, ensure it is executable (chmod +x) and that its location is in your system PATH. On my system, virtualenv.py is in the ../Pythonx.x/Scripts folder, but this may be different for you. | 5 | 33 | 0 | This has been driving me crazy for the past 2 days.
I installed virtualenv on my Macbook using pip install virtualenv.
But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found".
I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.
Any ideas what might be going wrong here? | Error "virtualenv : command not found" but install location is in PYTHONPATH | 1 | 0 | 0 | 102,543 |
39,964,635 | 2016-10-10T18:35:00.000 | 85 | 0 | 1 | 1 | python,python-2.7,pip | 39,972,160 | 12 | true | 0 | 0 | The only workable approach I could figure out (with help from @Gator_Python) was to do python -m virtualenv venv. This creates the virtual environment and works as expected.
I have custom python installed and maybe that's why the default approach doesn't work for me. | 5 | 33 | 0 | This has been driving me crazy for the past 2 days.
I installed virtualenv on my Macbook using pip install virtualenv.
But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found".
I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.
Any ideas what might be going wrong here? | Error "virtualenv : command not found" but install location is in PYTHONPATH | 1.2 | 0 | 0 | 102,543 |
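The module-invocation workaround from the accepted answer sidesteps PATH lookup of the console script entirely: only the interpreter itself needs to be findable. A quick self-contained check of that pattern (using the stdlib venv module as a stand-in, since the virtualenv package may not be installed):

```python
import os
import subprocess
import sys
import tempfile

# Invoke the module through the interpreter, so PATH only needs to find
# the interpreter; `python -m virtualenv` works the same way once the
# virtualenv package is installed.
env_dir = os.path.join(tempfile.mkdtemp(), "demo-venv")
subprocess.run(
    [sys.executable, "-m", "venv", "--without-pip", env_dir],
    check=True,
)

# The environment's own interpreter should now exist (Scripts\python.exe
# on Windows; bin/python on POSIX systems).
python_bin = os.path.join(env_dir, "bin", "python")
print(os.path.exists(python_bin))
```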
39,964,635 | 2016-10-10T18:35:00.000 | 20 | 0 | 1 | 1 | python,python-2.7,pip | 54,281,271 | 12 | false | 0 | 0 | On macOS Mojave
First check python is in the path.
python --version
Second check pip is installed.
pip --version
If it is not installed.
brew install pip
Third install virtualenv
sudo -H pip install virtualenv | 5 | 33 | 0 | This has been driving me crazy for the past 2 days.
I installed virtualenv on my Macbook using pip install virtualenv.
But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found".
I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.
Any ideas what might be going wrong here? | Error "virtualenv : command not found" but install location is in PYTHONPATH | 1 | 0 | 0 | 102,543 |
39,964,635 | 2016-10-10T18:35:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,pip | 64,741,790 | 12 | false | 0 | 0 | Had the same problem on Windows. Command not found and can't find the executable in the directory given by pip show.
Fixed it by adding "C:\Users\{My User}\AppData\Roaming\Python\Python39\Scripts" to the PATH environment variable.
I installed virtualenv on my Macbook using pip install virtualenv.
But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found".
I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.
Any ideas what might be going wrong here? | Error "virtualenv : command not found" but install location is in PYTHONPATH | 0.016665 | 0 | 0 | 102,543 |
39,964,635 | 2016-10-10T18:35:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,pip | 57,953,946 | 12 | false | 0 | 0 | I tried to have virtualenv at a random location & faced the same issue on a UBUNTU machine, when I tried to run my 'venv'. What solved my issue was :-
$virtualenv -p python3 venv
Also, instead of using $ activate, try:
$source activate
If you look at the activate script (or run $ cat activate), you will find the same note in a comment.
I installed virtualenv on my Macbook using pip install virtualenv.
But when I try to create a new virtualenv using virtualenv venv, I get the error saying "virtualenv : command not found".
I used pip show virtualenv and the location of the installation is "Location: /usr/local/lib/python2.7/site-packages" but I can't figure out where the executable is. I tried dozens other similar looking posts but those solutions do not work for me.
Any ideas what might be going wrong here? | Error "virtualenv : command not found" but install location is in PYTHONPATH | 0 | 0 | 0 | 102,543 |
39,968,806 | 2016-10-11T00:39:00.000 | 2 | 0 | 0 | 1 | python,google-cloud-dataflow | 39,970,763 | 1 | false | 0 | 0 | As a general approach, you should try running the pipeline locally, using the DirectPipelineRunner on a small dataset to debug your custom transformations.
Once that passes, you can use the Google Cloud Dataflow UI to investigate the pipeline state. In particular, you can look at the Elements Added field in the Step tab to see whether your transformations are producing output.
In this particular job, there's a step that doesn't seem to be producing output, which normally indicates an issue in the user code. | 1 | 0 | 0 | I set up a job on Google Cloud Dataflow, and it needed more than 7 hours to finish. My Job ID is 2016-10-10_09_29_48-13166717443134662621. It didn't show any error in the pipeline; it just keeps logging "oauth2client.transport : Refreshing due to a 401". Is there a problem with my workers, or is something else wrong? If so, how can I solve it? | "oauth2client.transport : Refreshing due to a 401" what exactly this log mean? | 0.379949 | 0 | 0 | 557
39,968,808 | 2016-10-11T00:39:00.000 | 3 | 0 | 1 | 0 | python,multiprocessing | 39,969,183 | 1 | true | 0 | 0 | While best practice is to use as many threads as you have virtual cores available, you don't have to stick to that. Using less means you could be under-utilizing your available processor capacity. Using more means you'll be over-utilizing your available processor capacity.
Both these situations mean you'll be doing work at a slower rate than would otherwise be possible. (Though using more threads than you have cores has less of an impact than using fewer threads than you have cores.) | 1 | 3 | 0 | Say I start 10 processes in a loop using Process() but I only have 8 cores available. How does Python handle this? | Python multiprocessing start more processes than cores | 1.2 | 0 | 0 | 2,018
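To see that nothing special happens when workers outnumber cores, you can simply ask for more processes than os.cpu_count() reports; the OS time-slices them and all the work still completes, just with some scheduling overhead. A minimal sketch:

```python
import multiprocessing as mp
import os

def square(n):
    return n * n

def run_demo(num_procs=10):
    # Asking for more worker processes than cores is fine: the OS simply
    # time-slices them among the available cores.
    with mp.Pool(processes=num_procs) as pool:
        return pool.map(square, range(20))

if __name__ == "__main__":
    print("cores available:", os.cpu_count())
    print(run_demo())
```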
39,969,168 | 2016-10-11T01:29:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,recommendation-engine,data-science | 40,001,529 | 1 | true | 1 | 0 | I would keep it simple and separate:
Your focus is collaborative filtering, so your recommender should generate scores for the top N recommendations regardless of location.
Then you can re-score using distance among those top-N. For a simple MVP, you could start with an inverse distance decay (e.g. final-score = cf-score * 1/distance), and adjust the decay function based on behavioral evidence if necessary. | 1 | 0 | 0 | I am currently building a recommender engine in Python and I am facing the following problem.
I want to incorporate the collaborative filtering approach, specifically its user-user variant. To recap, the idea is that we have information on different users and which items they liked (and, if applicable, which ratings these users assigned to items). When a new user has liked a couple of items, we find users who liked the same items and recommend to this new user the items that those similar users liked.
But I want to add a twist to it. I will be recommending places to users, namely 'where to go tonight'. I know the user's preferences, but I also want to incorporate the distance to each item I could recommend. The farther the place I recommend is from the user, the less attractive it should be.
So, in general, I want to incorporate a penalty into the recommendation engine, where the amount of penalty for each place is based on the distance from the user to the place.
I tried to google whether anyone has done something similar, but wasn't able to find anything. Any advice on how to properly add such a penalty? | Recommender engine in python - incorporate custom similarity metrics | 1.2 | 0 | 0 | 114
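A minimal sketch of the re-scoring step the answer suggests (all helper and place names are hypothetical; the inverse-distance decay is shifted by 1 so that a zero distance doesn't divide by zero):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points.
    r = 6371.0
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def rescore_by_distance(cf_scores, user_loc, place_locs, decay=1.0):
    # cf_scores: {place_id: collaborative-filtering score}
    # place_locs: {place_id: (lat, lon)}; user_loc: (lat, lon)
    # Inverse-distance decay: final = cf_score / (1 + decay * km)
    final = {}
    for pid, score in cf_scores.items():
        km = haversine_km(*user_loc, *place_locs[pid])
        final[pid] = score / (1.0 + decay * km)
    return sorted(final.items(), key=lambda kv: kv[1], reverse=True)

ranked = rescore_by_distance(
    cf_scores={"near_cafe": 0.8, "far_club": 0.9},
    user_loc=(40.0, -74.0),
    place_locs={"near_cafe": (40.0, -74.01), "far_club": (40.0, -75.0)},
)
print(ranked)
```

Here the nearby cafe overtakes the slightly better-scored but distant club; tuning `decay` (or swapping in another decay function) controls how aggressively distance dominates the collaborative-filtering score.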
39,970,515 | 2016-10-11T04:29:00.000 | 0 | 0 | 0 | 0 | python,matplotlib | 39,971,118 | 3 | false | 0 | 0 | It seems to me heatmap is the best candidate for this type of plot. imshow() will return u a colored matrix with color scale legend.
I don't get your stretched-ellipses problem; shouldn't it be a colored square for each data point?
You can try a log color scale if it is sparse. Also plot the 12 classes separately to analyze whether there are any inter-class differences. | 2 | 1 | 1 | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto') will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | Python: Plot a sparse matrix | 0 | 0 | 0 | 2,877 |
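A sketch of one way to get the plot described above (the data here is randomly generated to match the stated shape and sparsity; assumes numpy and matplotlib are installed): scatter the nonzero coordinates with tiny markers, map each entry's magnitude to colour, and mark the class boundaries every 500 rows.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = np.zeros((6000, 300))
for i in range(X.shape[0]):
    # plant 1-16 nonzero positive entries per row, as described
    cidx = rng.choice(300, size=rng.integers(1, 17), replace=False)
    X[i, cidx] = rng.random(cidx.size)

rows, cols = np.nonzero(X)
fig, ax = plt.subplots(figsize=(5, 9))
# tiny round markers avoid the stretched-ellipse look of imshow(aspect='auto');
# colour encodes the magnitude of each nonzero entry
sc = ax.scatter(cols, rows, c=X[rows, cols], s=1, cmap="viridis")
fig.colorbar(sc, ax=ax, label="magnitude")
for k in range(500, 6000, 500):
    ax.axhline(k, color="grey", lw=0.5)  # class boundaries
ax.invert_yaxis()  # row 0 at the top, like imshow
fig.savefig("sparsity.png", dpi=100)
```

`plt.spy(X, markersize=1)` gives a similar dot-per-nonzero view in one call, but scatter makes it easy to add the magnitude colouring and class lines asked for in the bonus.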
39,970,515 | 2016-10-11T04:29:00.000 | 0 | 0 | 0 | 0 | python,matplotlib | 40,127,976 | 3 | false | 0 | 0 | plt.matshow also turned out to be a feasible solution. I could also plot a heatmap with colorbars and all that. | 2 | 1 | 1 | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto') will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | Python: Plot a sparse matrix | 0 | 0 | 0 | 2,877 |
39,972,261 | 2016-10-11T07:25:00.000 | 0 | 0 | 1 | 0 | python,pandas,upgrade,arcmap | 39,972,738 | 1 | false | 0 | 0 | I reinstalled Python directly from python.org and then installed pandas, which seems to work.
I guess this might stop the ArcMap version of python working properly but since I'm not using python with ArcMap at the moment it's not a big problem. | 1 | 0 | 1 | I recently installed ArcGIS10.4 and now when I run python 2.7 programs using Idle (for purposes unrelated to ArcGIS) it uses the version of python attached to ArcGIS.
One of the programs I wrote needs an updated version of the pandas module. When I try to update the pandas module in this version of python (by opening command prompt as an administrator, moving to C:\Python27\ArcGIS10.4\Scripts and using the command pip install --upgrade pandas) the files download ok but there is an access error message when PIP tries to upgrade. I have tried restarting the computer in case something was open. The error message is quite long and I can't cut and paste from command prompt but it finishes with
" Permission denied: 'C:\Python27\ArcGIS10.4\Lib\site-packages\numpy\core\multiarray.pyd' "
I've tried the command to reinstall pandas completely which also gave an error message. I've tried installing miniconda in the hope that I could get a second version of python working and then use that version instead of the version attached to ArcMap. However I don't know how to direct Idle to choose the newly installed version.
So overall, I don't mind having two versions of Python if someone could tell me how to choose which one runs; if there's some way to update the ArcMap version, that would be even better. I don't really want to uninstall ArcMap at the moment.
Any help is appreciated! Thanks! | How to update pandas when python is installed as part of ArcGIS10.4, or another solution | 0 | 0 | 0 | 212 |
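As a first diagnostic sketch (the function name is just illustrative), printing `sys.executable` from inside Idle shows which of the two installs is actually running, and therefore which site-packages directory pip needs to target:

```python
import sys

def interpreter_info():
    # Report which interpreter is running and where it lives; useful when
    # a python.org install and an ArcGIS-bundled install coexist.
    return {
        "executable": sys.executable,  # e.g. C:\Python27\ArcGIS10.4\python.exe
        "version": sys.version.split()[0],
        "prefix": sys.prefix,
    }

print(interpreter_info())
```

Once you know which interpreter you want, invoking pip through it with its full path, e.g. `C:\Python27\python.exe -m pip install --upgrade pandas`, ensures the upgrade lands in that interpreter's site-packages rather than whichever install happens to be first on PATH.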
39,973,597 | 2016-10-11T08:52:00.000 | 0 | 0 | 1 | 0 | python,terminal,edit,python-idle,undo | 39,973,715 | 3 | false | 0 | 0 | In Idle (at least my version, Python 2.7.10 on windows), you can simply copy and paste your code. In the python interpreter, you can't AFAIK; however, you can use the up/down arrow keys to recall lines you previously "submitted" (i.e. typed and pressed enter). | 3 | 1 | 0 | When reading a book or just coding on terminal/IDLE it's common to make a typo, forget a brace or comma, etc. After I get an error, everything I wrote before is lost.
Then I have to write the code again.
Is there any way/option to get back everything I wrote before, just edit the mistake, and continue coding?
39,973,597 | 2016-10-11T08:52:00.000 | 0 | 0 | 1 | 0 | python,terminal,edit,python-idle,undo | 39,974,277 | 3 | false | 0 | 0 | If I understood correctly, IDLE is a GUI (graphical user interface - a visual representation of a program rather just through text) made to have a bit more features for programming in Python. You can use IDLE interactively, like in Terminal (a.k.a command line), or use it to write your script rather than in a separate text editor. Then once you save your script/program you can do neat things like run it directly from IDLE. There's nothing more special about the Terminal, you just have to do some more work.
Furthermore, all the code you have written in the GUI is held only in memory, so if you don't save it, you can't recover it; you'll have to write your code again.
To avoid these kinds of problems, use Git!
Git is a version control system that is used for software development and other version control tasks. | 3 | 1 | 0 | When reading a book or just coding on terminal/IDLE it's common to make a typo, forget a brace or comma, etc. After I get an error, everything I wrote before is lost.
Then I have to write the code again.
Is there any way/option to get back everything I wrote before, just edit the mistake, and continue coding?
39,973,597 | 2016-10-11T08:52:00.000 | 0 | 0 | 1 | 0 | python,terminal,edit,python-idle,undo | 39,990,408 | 3 | false | 0 | 0 | IDLE's Shell window is statement rather that line oriented. One can edit any line of a statement before submitting it for execution. After executing, one may recall any statement by either a) placing the cursor anywhere on the statement and hitting Enter, or b) using the history-next and history-prev actions. On Windows, these are bound, by default, to Alt-p and Alt-p. To check on your installation, Select Options => IDLE preferences on the menu. In the dialog, select the Keys tab. Under Custom Key Bindings, find the 'histor-xyz' actions in the alphabetical list.
For short, one-off scripts, I have a scratch file called tem.py. Since I use it often, it is usually accessible via File => Recent files. | 3 | 1 | 0 | When reading a book or just coding on terminal/IDLE it's common to make a typo, forget a brace or comma, etc. After I get an error, everything I wrote before is lost.
Then I have to write the code again.
Is there any way/option to get back everything I wrote before, just edit the mistake, and continue coding?