Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,217,743 | 2017-11-10T07:31:00.000 | 1 | 0 | 0 | 0 | python,grpc | 48,276,627 | 1 | false | 0 | 0 | revert fly tiger resolve this issue | 1 | 0 | 0 | The Python client throws "call dropped by load balancing policy grpc" if the remote server restarts, and the connection never recovers afterwards.
The problem is hard to reproduce consistently, but we confirmed that if the remote server restarts, the Python client may start sending error messages like this.
Other gRPC clients, like the Java one, work fine. I searched online and it seems related to the load balancing policy; the suggestion is to change from 'roundrobin' to 'pick first', but I cannot find where to pass this argument in the Python client. | python client throws error "call dropped by load balancing policy grpc" when remote server restarts | 0.197375 | 0 | 1 | 267 |
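In the Python client the load balancing policy can be selected through a channel option; a minimal sketch, assuming an insecure channel and a hypothetical target address:

```python
import grpc

# 'grpc.lb_policy_name' selects the client-side load balancing policy;
# 'pick_first' replaces the round_robin behaviour described above.
channel = grpc.insecure_channel(
    "my-server:50051",  # hypothetical target address
    options=[("grpc.lb_policy_name", "pick_first")],
)
```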
47,219,769 | 2017-11-10T09:42:00.000 | 1 | 1 | 0 | 0 | python,pywinauto | 47,247,409 | 1 | true | 0 | 0 | Solved conventionally by manual installation/update of "comtypes" modules. | 1 | 0 | 0 | I developed automation soft on python 3.6.2 and used pywinauto module for that. I shared this soft with python 3.6.3 user. When tried to ran my app on 3.6.3 it crashed. The crash was on "from pywinauto.application import Application" line. It pointed on lack of some attributes in "comtype". I solved it simply by copy-paste all related files of "comtype" from my 3.6.2 to 3.6.3 version of user. Then it worked perfect. My question is: Is there any conventional way to solve this issue? | comtype error on python 3.6.3 | 1.2 | 0 | 0 | 341 |
47,219,945 | 2017-11-10T09:51:00.000 | 1 | 0 | 0 | 0 | python,django,cron | 47,220,220 | 5 | false | 1 | 0 | Too many moving pieces and consequently options to consider. But problem 1 means you need some form of external way to track success (otherwise one option would have been for your server task - say a bash script - to keep retrying N times and sleeping in between retries, until the report generation is successful).
If you want a full-blown solution you can use for many different future needs, you can look into the available third party schedulers like Jenkins or SOS Berlin.
If you are looking for a simpler solution, you can schedule the report script via cron to run many times (say every hour for a few days at the end of the month), then have it keep track of whether the report was generated and sent successfully (this could be as simple as creating a file and checking for its existence, or writing a value to a database). | 1 | 3 | 0 | Our customer wants us to create a report every month.
In the past, we used a @monthly cron job for this task.
But this is not reliable:
The server could be down at that minute; cron does not re-run missed jobs.
If the server is up, the database could be unreachable at that moment.
If the server and the DB are up, a third-party system could be unreachable.
There could be a software bug.
What can I do to be sure that the report gets created monthly?
It is a Django-based web application. | @monthly cron job is not reliable | 0.039979 | 0 | 0 | 1,394 |
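A minimal sketch of the marker-file idea from the answer, meant to be run hourly from cron; the marker name and the report function are placeholders:

```python
import datetime
import pathlib

def generate_report():
    ...  # build and send the report; raise on failure (placeholder)

month = datetime.date.today().strftime("%Y-%m")
marker = pathlib.Path(f"report-sent-{month}.marker")  # hypothetical marker file

# Idempotent: if this month's marker exists, a previous run already succeeded.
if not marker.exists():
    generate_report()  # any exception leaves the marker absent,
    marker.touch()     # so the next hourly cron run retries
```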
47,220,213 | 2017-11-10T10:04:00.000 | 0 | 0 | 1 | 0 | python-3.x,windows-10 | 47,220,954 | 2 | false | 0 | 0 | Thanks @chaim, you are right. I checked my Task Manager again: the consumed memory is attributed to python, not to my Eclipse process. Now it is more than 1 GB, so this is not a Windows 10 limitation. | 1 | 0 | 0 | I have a Python program which runs slowly. I added caching with the @lru_cache(maxsize=2056) decorator to increase the cache of my process, but when I run my code the consumed memory of my program is 260 MB in Task Manager (not 2 GB). Is it a limitation of Windows 10 that does not permit a large cache?
I run my code using Eclipse Luna + PyDev. My Python version is 3.5. | Python caching in windows 10 using lru_cache | 0 | 0 | 0 | 683 |
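Worth noting that maxsize counts cached call results, not bytes, so 2056 never meant 2 GB; a minimal sketch showing how to inspect the cache:

```python
from functools import lru_cache

@lru_cache(maxsize=2056)   # at most 2056 cached results, not 2 GB of memory
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(300)
print(fib.cache_info())    # CacheInfo(hits=..., misses=..., maxsize=2056, currsize=...)
```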
47,224,175 | 2017-11-10T13:43:00.000 | 1 | 0 | 1 | 0 | python-3.x,import,anaconda,environment,pygrib | 47,246,262 | 1 | false | 0 | 0 | This is a linking error with the HDF5 library. Are you building pygrib from source or using the conda-forge channel to install it via conda? When I use the conda-forge build of pygrib I get the same issue. The GRIB API from ECMWF (on conda-forge it is listed as ecmwf_grib) is what pygrib depends on and the HDF5 dependency comes from netCDF4 being used in the GRIB API library. Specifically, using the latest HDF5 (1.10.0 at this time) is what is causing a problem. Using HDF5 1.8.* instead allows pygrib to import properly.
To force conda to grab a specific version, just do:
conda install pygrib hdf5=1.8
This will get conda to solve the package specifications again with the older HDF5 library and likely clear up the issue. This assumes you are in the conda environment that you installed pygrib into. You could also create a new environment with conda create -n <env name> pygrib hdf5=1.8 if you wanted to.
In general, when you see these errors where a library is not found, it is often a matter of getting the right version of a library installed. With conda, this sort of thing happens when updating packages and a newer version of a library gets installed that a package you are using has not been properly linked with. As long as you can track down the package/library that is causing trouble, you can use the above procedure to start requiring certain versions of things get installed and conda should then update or downgrade things so that things work together again. Hopefully this makes sense and helps.
This part may or may not interest you, but what I cannot say for certain is where this problem originates. My guess is that it is something with ecmwf_grib and how it is built. That is where ldd shows the old HDF5 dependency showing up for my installation. If I can figure out the exact issue, I'll update this answer. | 1 | 0 | 0 | I have the following issue: I have installed anaconda 3 and installed a package called "pygrib" into my anaconda environment. Now when importing pygrib in a file in my environment, it will show me this error:
import pygrib
ImportError: libhdf5.so.10: cannot open shared object file: No such file or directory
As I am a newbie, I don't really know what to do with this information. I installed the h5py package and some other related ones, but it didn't resolve the issue. What should I do? | importing pygrib anaconda throws dependency issues | 0.197375 | 0 | 0 | 681 |
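A quick way to see which HDF5 version an environment is actually built against, assuming h5py is installed there:

```python
import h5py

# The libhdf5.so.10 import error above is an HDF5 1.10 vs 1.8 mismatch;
# this prints the HDF5 version the environment links against.
print(h5py.version.hdf5_version)
```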
47,228,529 | 2017-11-10T17:52:00.000 | 0 | 0 | 0 | 0 | python,django,url,url-routing,google-search | 47,229,387 | 2 | false | 1 | 0 | You're completely overthinking this. There's nothing different here from any other request: the form submits via GET to the results view, the view gets the search query from request.GET["q"] or wherever, does the search and passes the results to the template. | 1 | 0 | 0 | I'm trying to use Google's Custom Search Engine product to display results from a query on a page for a personal web app.
The way CSE works is that it takes parameters from the URL as search terms: i.e., www.mydomain.com/results.html?q=hello+world would return results for a "hello world" query on the page. You put some JS code they give you on your page, so it's a bit of a black box.
However, with URL routing and render() in Django, I'm guessing the basic flow is that www.mydomain.com/results gets routed to a views.results call which renders results.html as www.mydomain.com/results.
What is the best practice for submitting a query through a form and passing it to www.mydomain.com/results.html?q=hello+world, instead of redirecting to www.mydomain.com/results and having Django render the results.html file?
Sorry, I'm relatively new. I can try to piece things together, but I feel there has got to be a very efficient way of handling this situation. Thanks for understanding | How to pass parameters into url after render() in Django? | 0 | 0 | 0 | 1,022 |
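A minimal sketch of the flow the answer describes; the view and template names are taken from the question, and the form simply submits with method="get" and a field named q:

```python
# views.py -- minimal sketch of the GET-based search flow
from django.shortcuts import render

def results(request):
    query = request.GET.get("q", "")  # e.g. /results?q=hello+world
    return render(request, "results.html", {"q": query})
```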
47,229,767 | 2017-11-10T19:19:00.000 | 1 | 0 | 1 | 0 | python,optimization,code-analysis,deep-copy | 47,231,020 | 1 | false | 0 | 0 | Such a tool doesn't and cannot exist until you give an objective definition to "(obvious) unnecessary calls to deepcopy".
E.g. the example you've given is plain wrong. If you read the original object instead of a deepcopy and that object is mutated, you'll be getting the new data when reading it instead of what it held at the moment the copy was made. | 1 | 0 | 0 | I'm for the first time working on a rather large python project and I noticed that I used a lot of deepcopys, probably more than I should.
I'm wondering if there is a code analysis tool/method that lets me find obvious unnecessary calls to deepcopy. For example if the result of a deepcopy is only read from and never changed, in which case just passing the original object would have been fine.
Thanks a lot. | How to find unnecessary deepcopys in python? | 0.197375 | 0 | 0 | 47 |
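A small demonstration of the answer's point: a plain reference and a deepcopy diverge as soon as the original mutates, so "only read from" is not enough to make the copy unnecessary:

```python
from copy import deepcopy

original = {"items": [1, 2, 3]}
snapshot = deepcopy(original)  # frozen view of the current state
alias = original               # just another reference, no copy

original["items"].append(4)

print(alias["items"])     # [1, 2, 3, 4] -- sees the later mutation
print(snapshot["items"])  # [1, 2, 3]    -- keeps the copied state
```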
47,230,523 | 2017-11-10T20:20:00.000 | 1 | 0 | 1 | 0 | python,django | 53,996,953 | 2 | false | 1 | 0 | Late response, but I want to share my workaround for this issue. Thanks to Thomas's response I understood the problem by looking at the activate script (the activate script and the shebang lines of the scripts in .env/bin hardcode the environment's absolute path, which is why moving it breaks things). So in my case it was easier to restore the original path and follow these steps:
Before to move the project and with the virtual environment activated
pip freeze > requirements.txt
Then deactivate the environment and remove it
deactivate
rm -rf .env
Move the project to the desired directory
Generate a fresh new virtual env and activate it
python3 -m venv .env
source .env/bin/activate
Install the dependencies
pip install -r requirements.txt | 1 | 2 | 0 | Background: I installed Python 3.5.2 on my Mac (which already contained 2.7.10) and have been apparently running the two installations side-by-side without any apparent issues. Everything works fine until I move the project folder somewhere else, and then when I try to do anything I get the following error:
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
My normal setup workflow is as follows:
Install a virtual environment in the directory containing the Django project folder (not the directory containing manage.py - one level up from that) with python3 -m venv <venv-name>
Activate the virtual environment and install Django, Pillow, whatever I need for the project.
I know I'm missing something because I thought the way virtual environments worked was that you installed them locally and then as long as all of that accompanied your project folder, everything would be a-okay. But everything stops working when I move the directory, and if I move it back it works again.
Can anyone tell me what kind of issue I'm dealing with here based on this? Is this just normal behavior and I just need to get used to not moving Django project folders?
UPDATE: If I delete the virtual environment folder and re-install it once the folder is in the new location everything seems to work fine. I guess it's some issue with the creation of virtual environments and some kind of link to my Python installation? I have no idea. | Django not working after moving directory. Do I have a path/Python installation issue? | 0.099668 | 0 | 0 | 1,389 |
47,230,690 | 2017-11-10T20:32:00.000 | 4 | 0 | 0 | 0 | python,opencv,anaconda | 50,078,794 | 1 | true | 0 | 0 | you can try conda install -c conda-forge opencv, this one also will install opencv version 3.
For your error, you can fix it by installing pango using conda install -c conda-forge pango | 1 | 2 | 1 | I have a working anaconda environment on my ubuntu 17.10 machine and installed opencv3 using conda install -c menpo opencv3
When I try to import cv2 the following error shows up
import cv2
ImportError: /usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0: undefined symbol: hb_font_funcs_set_variation_glyph_func | OpenCV - undefined symbol: hb_font_funcs_set_variation_glyph_func | 1.2 | 0 | 0 | 2,740 |
47,231,950 | 2017-11-10T22:15:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 47,231,983 | 1 | true | 0 | 0 | asyncio only runs one coroutine at a time and only switches at points you define, so race conditions aren't really a thing. Since you're not worried about race conditions, you're not really worried about locks (although technically you could still get into a deadlock situation if you have 2 coroutines that wake each other, but you'd have to try really hard to make that happen) | 1 | 1 | 0 | I'm new to asyncio and I was wondering how you prevent race conditions from occurring. I don't see an implementation of locks; is there a different way this is handled? | Race Conditions with asyncio | 1.2 | 0 | 0 | 704 |
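A minimal demonstration of the answer's point (assuming Python 3.7+ for asyncio.run): because the event loop only switches coroutines at await points, the read-modify-write below can never interleave, so no lock is needed:

```python
import asyncio

counter = 0

async def worker():
    global counter
    for _ in range(100_000):
        # No await between the read and the write, so the event loop
        # cannot switch to the other coroutine mid-update.
        counter += 1

async def main():
    await asyncio.gather(worker(), worker())
    print(counter)  # always 200000

asyncio.run(main())
```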
47,232,558 | 2017-11-10T23:16:00.000 | 0 | 0 | 1 | 1 | python,python-3.x | 47,232,633 | 3 | false | 0 | 0 | You can read stdin (or any file) in chunks using f.read(n), where n is the integer number of bytes you want to read as an argument. It will return the empty string if there is nothing left in the file. | 1 | 2 | 0 | I want to read from standard input chunk by chunk until EOF. For example, I could have a very large file, and I want to read in and process 1024 bytes at a time from STDIN until EOF is encountered. I've seen sys.stdin.read() which saves everything in memory at once. This isn't feasible because there might not be enough space available to store the entire file. There is also for "line in sys.stdin", but that separates the input by newline only, which is not what I'm looking for. Is there any way to accomplish this in Python? | Python: How to read from stdin by byte chunks until EOF? | 0 | 0 | 0 | 2,344 |
47,235,597 | 2017-11-11T07:39:00.000 | -1 | 0 | 0 | 0 | python,flask | 47,235,883 | 2 | false | 1 | 0 | A 500 means there is something wrong in your code; you may as well post your error message. | 1 | 0 | 0 | Simple question, I hope...
I am getting 500 Internal Server Errors while developing and want to flash them to the browser screen for a faster turnaround time. What's the easiest way to do this with Flask and Python?
Thanks. | Flashing 500 Internal Server Error to screen Flask with Python | -0.099668 | 0 | 0 | 495 |
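For development, Flask's built-in debug mode is the usual way to surface the full traceback in the browser instead of a bare 500 page; a minimal sketch:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return 1 / 0  # deliberate error to demonstrate the traceback page

if __name__ == "__main__":
    # debug=True renders the interactive traceback in the browser
    # (development only -- never enable it in production)
    app.run(debug=True)
```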
47,235,661 | 2017-11-11T07:48:00.000 | 0 | 0 | 1 | 1 | python,linux,python-3.x,package | 47,235,704 | 1 | false | 0 | 0 | It is much easier to install or uninstall modules with the Pip, or Pip3 command. | 1 | 0 | 0 | I am working with python3 on Debian Stable Linux. I see that I can install most packages from Debian repository as well as using pip install packagename command.
Is there any difference between these two approaches and should I prefer one over the other? Are the location of packages installed different in two methods? Thanks for your answers/comments. | Intalling python packages with pip versus installing from Linux repository | 0 | 0 | 0 | 44 |
47,240,282 | 2017-11-11T16:41:00.000 | 0 | 0 | 1 | 1 | python,python-3.x | 47,240,379 | 3 | false | 0 | 0 | A possible strategy :
foo build todo.foo.
it renames it todo.bar when done.
bar look after todo.bar, compute it and delete it when done. | 1 | 2 | 0 | I have two scripts inside a bigger system I'm developing. Lets call one foo.py and the other one bar.py.
foo creates files that bar will afterwards read and delete. Now, if foo is running and working on a file, bar will mess things up by working on the same file before foo is finished.
foo and bar get started automatically by other stuff, not manually by someone that runs them. How can I make sure that, if foo is working on a file, bar will not start?
I have thought about editing both of them so that foo writes a 1 to a text file at the beginning of the execution and a 0 when it finishes, and to make bar read the text file to check wheter it can start. But I'm not really sure this is the most elegant solution, and also I don't want to block bar if foo ever fails in the middle of execution, leaving the file at 1. | Prevent python script from running if another one is being executed | 0 | 0 | 0 | 878 |
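A minimal sketch of the rename hand-off (file names taken from the answer; a same-filesystem rename is atomic on POSIX, so bar never sees a half-written file):

```python
import os

# --- in foo: write under the "in progress" name, rename when complete ---
with open("todo.foo", "w") as f:
    f.write("work item\n")
os.replace("todo.foo", "todo.bar")  # atomic hand-off to bar

# --- in bar: only ever pick up completed files ---
for name in os.listdir("."):
    if name.endswith(".bar"):
        with open(name) as f:
            print(f.read())  # placeholder for bar's real work
        os.remove(name)
```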
47,241,274 | 2017-11-11T18:16:00.000 | 1 | 0 | 1 | 0 | python,pip,installation,updates,setuptools | 47,252,287 | 1 | true | 0 | 0 | I very much doubt that eggs or wheels "installers" can do that. They are rather primitive distribution formats suitable for simple things (uninstall the previous version, install the new one overriding files) but that's all.
To do what you want you probably need a real installer (rpm or deb) — they can preserve changed config files. But they are complex, hard to create formats.
If you insist on using simple wheels I can recommend stop distributing config files at all. Instead distribute config files' templates and teach users to create config files from these templates. Then new versions will only override templates but not real config files. | 1 | 3 | 0 | I'm using the data_files argument of setuptools.setup() to install config files to /etc and the users home directory. However updating the package with pip install <package-name> uninstalls the old version and all config files before it installs the new version.
How do I keep the config files during updates if they have been changed? | How to keep data_files between package updates? | 1.2 | 0 | 0 | 103 |
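A minimal sketch of the template approach with a hypothetical package and paths: ship only .template files in data_files, so upgrades never touch the live config the user created from them:

```python
from setuptools import setup

setup(
    name="mypackage",  # hypothetical package name
    version="1.0",
    data_files=[
        # Only the template is installed/overwritten on upgrade; the real
        # /etc/mypackage/app.conf is created by the user from this template.
        ("/etc/mypackage", ["conf/app.conf.template"]),
    ],
)
```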
47,247,790 | 2017-11-12T10:35:00.000 | 2 | 0 | 0 | 0 | python,sqlite,ms-access | 47,248,148 | 1 | true | 0 | 0 | Access has built-in ODBC support.
You can use ODBC to export tables to SQLite. You do need to create the database first. | 1 | 0 | 0 | I am looking for another method to convert .accdb to .db without exporting to CSV and using a separator method to create the new database. Does Access have a built-in option to export files into .db? | How to convert .accdb to db | 1.2 | 1 | 0 | 1,624 |
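A Python route is also possible, sketched under the assumption that the Access ODBC driver is installed and that the file, table, and column names below are placeholders:

```python
import sqlite3
import pyodbc

src = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\mydb.accdb"  # hypothetical path
)
dst = sqlite3.connect("mydb.db")

rows = src.cursor().execute("SELECT id, name FROM mytable").fetchall()
dst.execute("CREATE TABLE mytable (id INTEGER, name TEXT)")
dst.executemany("INSERT INTO mytable VALUES (?, ?)", [tuple(r) for r in rows])
dst.commit()
```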
47,248,056 | 2017-11-12T11:07:00.000 | 0 | 0 | 0 | 1 | python,linux,bash,gcc | 47,265,562 | 3 | false | 0 | 0 | Today I found the answer. I had installed security software that prohibited creating and executing binary files in the system directory. I uninstalled it and my system went back to normal. | 1 | 1 | 0 | CentOS release 6.9 (Final) : Linux version 2.6.32-696.10.2.el6.i686 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1 SMP Tue Sep 12 13:54:13 UTC 2017
Install command :
wget https://www.python.org/ftp/python/3.5.4/Python-3.5.4.tgz
tar -zxvf Python-3.5.4.tgz
cd Python-3.5.4
mkdir /usr/local/python3.5
./configure --prefix=/usr/local/python3.5
Error step :
make
gcc -pthread -c -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Werror=declaration-after-statement -I. -I./Include -DPy_BUILD_CORE \
-DGITVERSION="\"LC_ALL=C \"" \
-DGITTAG="\"LC_ALL=C \"" \
-DGITBRANCH="\"LC_ALL=C \"" \
-o Modules/getbuildinfo.o ./Modules/getbuildinfo.c
/bin/sh: /usr/bin/gcc: Permission denied
make: *** [Modules/getbuildinfo.o] Error 126
The permissions of /bin/sh and /usr/bin/gcc:
[root@iZ2814clj1uZ Python-3.5.4]# ll /bin/sh
lrwxrwxrwx 1 root root 4 May 11 2017 /bin/sh -> bash
[root@iZ2814clj1uZ Python-3.5.4]# ll /bin/bash
-rwxr-xr-x 1 root root 872372 Mar 23 2017 /bin/bash
[root@iZ2814clj1uZ Python-3.5.4]# ll /usr/bin/gcc
-rwxr-xr-x 2 root root 234948 Mar 22 2017 /usr/bin/gcc
I have tried chmod 777 on /bin/sh, /bin/bash and /usr/bin/gcc and rebooted the system, but it doesn't work. Has anyone else had this problem?
Thanks for any help/suggestions.
update 2017-11-13: selinux audit log
type=USER_AUTH msg=audit(1510502012.108:840): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct="root" exe="/usr/sbin/sshd" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'
type=USER_ACCT msg=audit(1510502012.108:841): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting acct="root" exe="/usr/sbin/sshd" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'
type=CRYPTO_KEY_USER msg=audit(1510502012.108:842): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=destroy kind=session fp=? direction=both spid=4274 suid=74 rport=31432 laddr=[ip address] lport=5676 exe="/usr/sbin/sshd" hostname=? addr=[ip address1] terminal=? res=success'
type=USER_AUTH msg=audit(1510502012.108:843): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=success acct="root" exe="/usr/sbin/sshd" hostname=? addr=[ip address1] terminal=ssh res=success'
type=CRED_ACQ msg=audit(1510502012.109:844): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct="root" exe="/usr/sbin/sshd" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'
type=LOGIN msg=audit(1510502012.109:845): pid=4273 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new ses=106
type=USER_START msg=audit(1510502012.111:846): user pid=4273 uid=0 auid=0 ses=106 msg='op=PAM:session_open acct="root" exe="/usr/sbin/sshd" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'
type=USER_LOGIN msg=audit(1510502012.189:847): user pid=4273 uid=0 auid=0 ses=106 msg='op=login id=0 exe="/usr/sbin/sshd" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success | Centos6.9 install python3.5 display error permission denied after make | 0 | 0 | 0 | 162 |
47,249,474 | 2017-11-12T13:39:00.000 | 0 | 0 | 0 | 0 | python-3.x,tkinter,hebrew,bidi | 49,477,501 | 2 | false | 0 | 1 | As on of the main authors of FriBidi and a contributor to the bidi text support in Gtk, I strongly suggest that you don't use TkInter for anything Hebrew or any other text other than Latin, Greek, or Cyrillic scripts. In theory you can rearrange the text ordering with the stand alone fribidi executable on on Linux, or use the fribidi binding, but bidi and complex language support goes well beyond that. You might need to support text inserting, cut and paste, shaping, just to mention a few of the pitfalls. You are much better off using the excellent gtk or Qt bindings to python. | 1 | 3 | 0 | I'm working on a python GUI application, using tkinter, which displays text in Hebrew.
On Windows (10, python 3.6, tkinter 8.6) Hebrew strings are displayed fine.
On Linux (Ubuntu 14, both python 3.4 and 3.6, tkinter 8.6) Hebrew strings are displayed incorrectly - with no BiDi awareness - am I missing something?
I installed pybidi, and via bidi.algorithm.get_display(hebrew_string) - the strings are displayed correctly.
But then, on Windows, get_display(hebrew_string) is displayed incorrectly.
Is BiDi not supported on python-tkinter-Linux?
Must I wrap each string with get_display(string)?
Must I wrap get_display(string) with a only_on_linux(...) function? | Hebrew with tkinter - BiDi | 0 | 0 | 0 | 877 |
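For the wrapping question, a minimal sketch of such a platform-conditional helper, based only on the question's observation that Windows Tk reorders Hebrew correctly while Linux Tk does not:

```python
import sys
from bidi.algorithm import get_display

def bidi_display(text):
    # Linux Tk lacks BiDi reordering, so apply it manually there;
    # Windows Tk already reorders, and reordering twice would garble it.
    if sys.platform.startswith("linux"):
        return get_display(text)
    return text
```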
47,251,937 | 2017-11-12T17:45:00.000 | 1 | 0 | 0 | 0 | python,opencv,blob,contour | 50,935,626 | 1 | false | 0 | 0 | After a lot of programming, I realized the procedure is:
After a contour is extracted, create a black image with same size as the original image.
Draw the contour in black image.
Find the coordinate of the contour center by contour feature : moments, x0 and y0.
Use Floodfill() to white-fill the interior of the contour using (x0,y0) as seed point.
The resulting image contains the white BLOB to the corresponding contour. | 1 | 0 | 1 | Hi everyone,
I am trying to convert a contour to a blob in an image. There are several blobs in the image; the proper one is extracted by applying contour features. The blob is required to mask a grayscale image.
I have tried extracting each non-zero pixel and using pointPolygonTest() to find the blob points, but it takes >70 ms to complete the process. The application works on 30 fps video, so I need to do the conversion within 30 ms. I am using OpenCV in Python. Is there a way to convert a contour into a blob within 30 ms in OpenCV?
Thanks in advance. | Convert Contour into BLOB OpenCV | 0.197375 | 0 | 0 | 532 |
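A runnable sketch of the answer's four steps, with a synthetic image and contour standing in for the question's pipeline:

```python
import cv2
import numpy as np

# Synthetic stand-ins for the question's image and extracted contour
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 60, 255, 2)
contour = cv2.findContours(img, cv2.RETR_EXTERNAL,
                           cv2.CHAIN_APPROX_SIMPLE)[-2][0]

mask = np.zeros(img.shape[:2], np.uint8)       # step 1: black image
cv2.drawContours(mask, [contour], -1, 255, 1)  # step 2: draw the contour
M = cv2.moments(contour)                       # step 3: contour center
cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
cv2.floodFill(mask, None, (cx, cy), 255)       # step 4: fill the interior
```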
47,252,394 | 2017-11-12T18:32:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,windows-subsystem-for-linux | 48,129,500 | 1 | true | 0 | 0 | Well, since no one is answering this question, I have to close the question. What I did to overcome the issue was just to uninstall the whole WSL and reinstall it. | 1 | 2 | 0 | I am using python 3.6.2 on WSL (Windows Linux subsystem) and trying to set tensorflow environment (and installing some other libraries as well). However, i always get an error when I exit and login again:
ModuleNotFoundError: No module named 'tensorflow'
So I have to reinstall the libraries again, and the problem is fixed until I log out again. This problem only happens with my python3. I also tried python3 and used import tensorflow to find the library, but it returned the same error.
I think the problem may be related to the system path, because Python cannot find the library in its original search directories. When I enter sys.path it returns:
['', '/home/jeoker/anaconda3/lib/python36.zip', '/home/jeoker/anaconda3/lib/python3.6', '/home/jeoker/anaconda3/lib/python3.6/lib-dynload', '/home/jeoker/anaconda3/lib/python3.6/site-packages']
But when I do conda list, the result always shows the files in /home/jeoker/anaconda2. I tried sudo pip3 install tensorflow, but it gave me this: Requirement already satisfied. It seems that the path where the libraries are installed is not the same as the one Python is looking in.
Does anyone know how I can fix this problem? Thanks in advance!! | WSL python3 ModuleNotFoundError: No module named xxx | 1.2 | 0 | 0 | 1,381 |
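One way to see which interpreter and site-packages are actually in play, and to make pip target that exact interpreter:

```python
import sys

print(sys.executable)  # the python binary actually running
print(sys.prefix)      # its installation prefix (where site-packages lives)

# Running `<sys.executable> -m pip install <pkg>` in a shell guarantees the
# package lands in this interpreter's site-packages, not another Python's.
```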
47,253,169 | 2017-11-12T19:45:00.000 | 0 | 0 | 1 | 0 | python,function,numpy,heap-memory | 47,270,384 | 4 | true | 0 | 0 | I see what I missed now: Objects are created on the heap, but function frames are on the stack. So when methodB finishes, its frame will be reclaimed, but that object will still exist on the heap, and methodA can access it with a simple reference. | 1 | 2 | 1 | I am coding in Python trying to decide whether I should return a numpy array (the result of a diff on some other array) or return numpy.where(diff)[0], which is a smaller array but requires that little extra work to create. Let's call the method where this happens methodB.
I call methodB from methodA. The rub is that I won't necessarily always need the where() result in methodA, but I might. So is it worth doing this work inside methodB, or should I pass back the (much larger memory-wise) diff itself and then only process it further in methodA if needed? That would be the more efficient choice assuming methodA just gets a reference to the result.
So, are function results ever not copied when they are passed back the the code that called that function?
I believe that when methodB finishes, all the memory in its frame will be reclaimed by the system, so methodA has to actually copy anything returned by methodB into its own frame in order to be able to use it. I would call this "return by value". Is this correct? | Are all objects returned by value rather than by reference? | 1.2 | 0 | 0 | 2,293 |
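A tiny demonstration that Python returns references, not copies:

```python
def method_b():
    data = [1, 2, 3]
    print(id(data))   # identity of the object created inside the frame
    return data

returned = method_b()
print(id(returned))   # same id: the return handed back a reference, no copy
```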
47,254,242 | 2017-11-12T21:33:00.000 | 3 | 0 | 1 | 0 | python,macos,jupyter-notebook | 47,254,623 | 1 | false | 0 | 0 | I figured it out. You can just paste the markdown code into a cell in a Jupyter notebook and change the dropdown menu below "Widgets" in the notebook from "Code" to "Markdown". Then hit Shift+Enter and it renders the markdown. You can then download the notebook as PDF. | 1 | 4 | 0 | I'm working in a jupyter notebook, and I'm working on a markdown file. I'm using a mac with Sierra 10.12.16. I'd like to convert the markdown to a pdf and I'm wondering what the easiest way to do that would be. I've read a lot of posts about installing things like pandoc or using packages in atom. I'm wondering if there isn't a simple way with the jupyter notebook to convert the markdown file to pdf. It really seems like there should be, but I'm having trouble finding it. | Convert Markdown to pdf with Jupyter Notebook | 0.53705 | 0 | 0 | 4,353 |
47,254,930 | 2017-11-12T22:55:00.000 | 0 | 0 | 0 | 1 | python,wordpress,google-app-engine,charts,google-cloud-datastore | 47,257,054 | 1 | false | 1 | 0 | Welcome to "it depends".
You have some choices. Imagine the classic four-quadrant chart. Along one axis is data size, along the other is staleness/freshness.
If your time-series data changes rapidly but is small enough to safely be retrieved within a request, you can query for it on demand, convert it to JSON, and squirt it to the browser to be rendered by the JavaScript charting package of your choice. If the data is large, your app will need to do some sort of server-side pre-processing so that, when the data is needed, it can be retrieved in few enough requests that the request won't time out. This might involve something data-dependent, like pre-bucketing the time series.
If the data changes slowly, you have the option of generating your chart on the server side, perhaps using matplotlib. When new data is ingested, or perhaps at intervals, spawn off a task to generate and cache the chart (or the JSON to hand to the front end) as a blob in the datastore. If the data is sufficiently large that a task will time out, you might need to use a backend process. If the data is sufficiently large and you don't pre-process, you're in the quadrant of unhappiness.
In my experience, GAE memcache is best for caching data between requests where the time between requests is very short. Don't rely on generating artifacts, stuffing them in memcache, and hoping that they'll be there a few minutes later; I've rarely seen that work. | 1 | 0 | 0 | I've got a Google App Engine application that loads time-series data in real time into a Google Datastore NoSQL-style table. I was hoping to get some feedback on the right type of architecture to pull this data into a web-application-style chart (and ideally something I could also plug into a content management system like WordPress).
Most of my server-side code is Python. What's a reasonable client-server setup to pull the data from the Datastore database and display it on my webpage? Ideally I'd have something that scales and doesn't cause an unnecessary number of reads on my database (potentially using Google App Engine's built-in caching, etc.).
I'm guessing this is a common use case, but I'd like to get an idea of what some best practices around it might be. I've seen some examples using client-side JavaScript/AJAX with server-side PHP to read the DB. Is this really the best way? | Load chart data into a webapp from google datastore | 0 | 1 | 0 | 70 |
47,256,232 | 2017-11-13T02:19:00.000 | 3 | 0 | 1 | 0 | python,ide,spyder | 47,311,970 | 1 | true | 1 | 0 | You have to go to the menu
Source > Show todo list
to get the full list of TODO's in your file. | 1 | 3 | 0 | I use '#TODO: xxx' format for marking todo-s in Spyder. Is there any way to see full list of TODO-s in a file, in Spyder? | How to see list of #TODO-s in Spyder? | 1.2 | 0 | 0 | 2,523 |
47,257,766 | 2017-11-13T05:43:00.000 | 2 | 0 | 1 | 1 | python,python-3.x,compiler-errors,pycharm,cython | 61,422,234 | 4 | false | 0 | 0 | For python 3.7 sudo apt install libpython3.7-dev
solved my problem | 1 | 38 | 0 | Non-zero exit code (1):
_pydevd_bundle/pydevd_cython.c:13:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Please help me resolve this error of trying to install Cython in PyCharm. | Compile Cython Extensions Error - Pycharm IDE | 0.099668 | 0 | 0 | 8,886 |
47,258,026 | 2017-11-13T06:07:00.000 | 0 | 0 | 0 | 0 | javascript,python,node.js,microservices,financial | 47,258,699 | 2 | false | 1 | 0 | @wizard I would share here something .
If you have complex calculations, or any type of calculation, do it on the DB end if feasible.
The best approach
postgres
for this type of processing is to write a function (PL/pgSQL) block. If you are using Postgres, implement all calculations in Postgres.
This is how node.js was designed: no calculation on the node.js end.
mongodb
If you are working with MongoDB with this approach, then I recommend you go with
Stored JavaScript in MongoDB | 1 | 0 | 0 | I'm developing a tourist web app with node.js and microservices.
I need to develop a pricing service which will do all the calculations for the guest (price per night, taxes, VAT, discounts etc.).
Moreover, those calculations need to be easy to change and control dynamically.
From my past experience, doing those calcs in common web programming languages and/or storing math formulas in the DB ends in a mess.
Are there any alternative solutions for this? From what I have read so far, languages like Python, R or Java can fit the job. However, is there any specific library within those that is aimed at pricing? | implement financial microservice using node.js | python | R | 0 | 0 | 0 | 155 |
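If any of the pricing work does land in Python, decimal arithmetic is the usual choice for money; a minimal sketch with hypothetical rates:

```python
from decimal import Decimal, ROUND_HALF_UP

price_per_night = Decimal("100.00")  # hypothetical nightly rate
nights = 3
vat_rate = Decimal("0.17")           # hypothetical VAT rate

subtotal = price_per_night * nights
total = (subtotal * (1 + vat_rate)).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)  # round to cents, exactly
print(total)  # 351.00
```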
47,259,157 | 2017-11-13T07:33:00.000 | -1 | 0 | 1 | 0 | python-2.7 | 47,675,840 | 3 | false | 0 | 0 | example = [{'apple':1,'mango':2},{'apple':3,'orange':10,'mango':15}]
all_keys = set().union(*example)  # every key that appears in any dict
new = {k: [d.get(k, 0) for d in example] for k in all_keys}  # missing keys default to 0
print(new)  # {'apple': [1, 3], 'mango': [2, 15], 'orange': [0, 10]} | 1 | 0 | 0 | mylist=[{'apple':1,'mango':2},{'apple':3,'orange':10,'mango':15}]
I need it like this:
mylist=[{'apple':[1,3],'mango':[2,15],'orange':[0,10]}] | i need to combine unique keys and their values grouping as list in python | -0.066568 | 0 | 0 | 43 |
47,262,474 | 2017-11-13T10:44:00.000 | -1 | 1 | 1 | 0 | python,unit-testing,testing,syntax-checking | 47,262,706 | 3 | false | 0 | 0 | By default, Python completes a syntax check on the functions within your code. Additionally, this syntax check will look for any open brackets or unfinished lines of code to make sure the code will work properly. However, the syntax checker does not always highlight the problem directly. To fix this you will just have to look at the code yourself.
If you need to spell-check things like variable names or strings, I would advise copying the code into a spell-checking program like Microsoft Word or Grammarly.
I hope this helps. | 3 | 2 | 0 | I've had a dispute with some colleagues over whether I should run a syntax check after every code modification as a distinct step, or whether I'd better rely on unit tests to catch possible syntax breakage ... ( of course unit tests do that ).
This is not a question of whether to use tests for your code at all ( of course yes, but see below ).
My use case is pretty simple: I am a newbie in Python and try to play with a web application (MVC) by modifying its model file. The changes are quite simple, indeed a couple or so lines at once; however, as I am pretty unfamiliar with Python syntax, some stupid errors ( like wrong indentation, and so on ) easily creep in.
The real hurdle is that I deploy the application via Google Cloud Platform ( as App Engine ), so it takes a while before the application comes up with the new version and I can hit an endpoint and finally look into its error logs to understand what's happening. So I'd like a shorter and simpler way - at least for these types of errors - of just running a syntax check before I deploy.
The central part of the dispute was that I was advised not to run a syntax check explicitly, but rather to rely on higher-level tests for this ( unit tests, for example ), to which I say:
1) I don't have unit tests yet ( and even if I had them, I'd treat them as a next step in my test chain ).
2) I only edit a certain file ( one and only one ), with minimal modifications, and MOSTLY the types of errors I make ( as I said, I am a newbie in Python ) are syntax errors, not logic or semantic ones, so I assume I am safe to do just a syntax check and then deploy. And even if I hit semantic/logic problems with my code later ( even though the syntax check passes ), that is undoubtedly a question for higher-level tests ( unit / acceptance and so on ) - but for now I'm just playing with code written in a language I don't feel quite at home in.
3) I don't commit my changes outside and don't trouble anyone.
4) The last argument is controversial because it's a bit out of scope; however, I'd like to leave it here - in some cases even lightweight unit tests might be overkill if ALL you need is a syntax check in a single file. Consider this example: one just edits the code in place, deployed on a server, trying to fix immediate issues, so no unit or other tests exist, and you want to be sure that your modification won't break the code syntactically. | Could syntax check for your code be taken as distinct step? | -0.066568 | 0 | 0 | 300 |
47,262,474 | 2017-11-13T10:44:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,testing,syntax-checking | 47,296,655 | 3 | false | 0 | 0 | Personally speaking, if you drill into your head the PEP8 than most of the time syntax will come naturally i.e. like touch typing. However, most of you know multiple languages so I can imagine it get hard remembering all :) | 3 | 2 | 0 | I've had a dispute with some colleges on should I take a syntax check after every code modification as a distinct step or I'd better rely on unit tests to catch possible syntax breakage ... ( of course unit tests do that ).
This is not a question of whether to use tests for your code at all ( of course yes, but see below ).
My use case is pretty simple: I am a newbie in Python and try to play with a web application (MVC) by modifying its model file. The changes are quite simple, indeed a couple or so lines at once; however, as I am pretty unfamiliar with Python syntax, some stupid errors ( like wrong indentation, and so on ) easily creep in.
The real hurdle is that I deploy the application via Google Cloud Platform ( as App Engine ), so it takes a while before the application comes up with the new version and I can hit an endpoint and finally look into its error logs to understand what's happening. So I'd like a shorter and simpler way - at least for these types of errors - of just running a syntax check before I deploy.
The central part of the dispute was that I was advised not to run a syntax check explicitly, but rather to rely on higher-level tests for this ( unit tests, for example ), to which I say:
1) I don't have unit tests yet ( and even if I had them, I'd treat them as a next step in my test chain ).
2) I only edit a certain file ( one and only one ), with minimal modifications, and MOSTLY the types of errors I make ( as I said, I am a newbie in Python ) are syntax errors, not logic or semantic ones, so I assume I am safe to do just a syntax check and then deploy. And even if I hit semantic/logic problems with my code later ( even though the syntax check passes ), that is undoubtedly a question for higher-level tests ( unit / acceptance and so on ) - but for now I'm just playing with code written in a language I don't feel quite at home in.
3) I don't commit my changes outside and don't trouble anyone.
4) The last argument is controversial because it's a bit out of scope; however, I'd like to leave it here - in some cases even lightweight unit tests might be overkill if ALL you need is a syntax check in a single file. Consider this example: one just edits the code in place, deployed on a server, trying to fix immediate issues, so no unit or other tests exist, and you want to be sure that your modification won't break the code syntactically. | Could syntax check for your code be taken as distinct step? | 0 | 0 | 0 | 300 |
47,262,474 | 2017-11-13T10:44:00.000 | 3 | 1 | 1 | 0 | python,unit-testing,testing,syntax-checking | 47,421,593 | 3 | false | 0 | 0 | If syntax problems are causing you problems and the costs and delay to find those problems are annoying enough, check the syntax first. This isn't just about syntax checks. Anything that causes an annoying enough problem deserves some sort of mitigation. Computers are really handy for boring, repetitive tasks. I wouldn't not do this because some language community thought solving problems was stupid or weird (though every community seems to have some version of it except for the Smalltalkers ;)
Other people might think it's bad practice because their cost of doing it explicitly doesn't offset the cost of finding out later. I rarely do a syntax check explicitly but then I have unit tests and run those frequently.
Perhaps your solution is to automate your deployment and throw a syntax check in there. If that fails the deployment stops. You don't have an additional step because it's already baked into the step you're doing now. | 3 | 2 | 0 | I've had a dispute with some colleges on should I take a syntax check after every code modification as a distinct step or I'd better rely on unit tests to catch possible syntax breakage ... ( of course unit tests do that ).
This is not a question of whether to use tests for your code at all ( of course yes, but see below ).
My use case is pretty simple: I am a newbie in Python and try to play with a web application (MVC) by modifying its model file. The changes are quite simple, indeed a couple or so lines at once; however, as I am pretty unfamiliar with Python syntax, some stupid errors ( like wrong indentation, and so on ) easily creep in.
The real hurdle is that I deploy the application via Google Cloud Platform ( as App Engine ), so it takes a while before the application comes up with the new version and I can hit an endpoint and finally look into its error logs to understand what's happening. So I'd like a shorter and simpler way - at least for these types of errors - of just running a syntax check before I deploy.
The central part of the dispute was that I was advised not to run a syntax check explicitly, but rather to rely on higher-level tests for this ( unit tests, for example ), to which I say:
1) I don't have unit tests yet ( and even if I had them, I'd treat them as a next step in my test chain ).
2) I only edit a certain file ( one and only one ), with minimal modifications, and MOSTLY the types of errors I make ( as I said, I am a newbie in Python ) are syntax errors, not logic or semantic ones, so I assume I am safe to do just a syntax check and then deploy. And even if I hit semantic/logic problems with my code later ( even though the syntax check passes ), that is undoubtedly a question for higher-level tests ( unit / acceptance and so on ) - but for now I'm just playing with code written in a language I don't feel quite at home in.
3) I don't commit my changes outside and don't trouble anyone.
4) The last argument is controversial because it's a bit out of scope; however, I'd like to leave it here - in some cases even lightweight unit tests might be overkill if ALL you need is a syntax check in a single file. Consider this example: one just edits the code in place, deployed on a server, trying to fix immediate issues, so no unit or other tests exist, and you want to be sure that your modification won't break the code syntactically. | Could syntax check for your code be taken as distinct step? | 0.197375 | 0 | 0 | 300 |
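For this exact need, the standard library can do the syntax-only check before deploying; a minimal sketch (the file name is a placeholder for the one model file being edited):

```python
import py_compile

try:
    # Compiles (and therefore syntax-checks) the file without running it,
    # equivalent to `python -m py_compile model.py` on the command line.
    py_compile.compile("model.py", doraise=True)
except py_compile.PyCompileError as err:
    print("Syntax error:", err)
    raise SystemExit(1)
```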
47,263,281 | 2017-11-13T11:29:00.000 | 0 | 0 | 1 | 0 | python-2.7,visual-studio-code,pyqt5,qtwidgets | 47,269,525 | 1 | true | 0 | 1 | I ran another QT5 program I was working on that I already knew worked in VSC and added a dialog box. This worked fine, so I created a UI using QT Designer to add it to. It seems that QFileDialog needs an instance of class Ui_Frame() to instantiate. The fact that it worked in Spyder and not VSC may be related to the fact that the UI of Spyder is built on QT. | 1 | 0 | 0 | I'm trying to use VSCode as my main IDE for Anaconda Custom Python 2.7.13 on MacOS High Sierra. I'm trying to make a file open dialogue box appear using PyQt5. In Spider the following works fine, but not in VS Code:
from PyQt5 import QtWidgets
files = QtWidgets.QFileDialog.getOpenFileNames()
The error I get in the VSC console is simply Not Available whereas in the context of a larger program I get
E1101:Module 'PyQt5.QtWidgets' has no 'QFileDialog' member.
I was wondering whether anyone had any idea where this issue is arising from?
Oli | Anaconda Python in Visual Studio Code PyQt5 import error | 1.2 | 0 | 0 | 664 |
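A common cause here is that QFileDialog's static functions need a QApplication instance, which Spyder's Qt-based console already provides; a minimal sketch for a standalone script:

```python
import sys
from PyQt5 import QtWidgets

app = QtWidgets.QApplication(sys.argv)  # Qt widgets require an application instance
files, _ = QtWidgets.QFileDialog.getOpenFileNames()  # returns (paths, filter) in PyQt5
print(files)
```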
47,264,196 | 2017-11-13T12:16:00.000 | 1 | 0 | 1 | 1 | python,macos,visual-studio-code | 47,381,497 | 1 | true | 0 | 0 | I managed to do it by opening the VS Code Command Palette (Command+Shift+P), using the command Python: Select Interpreter, and choosing 3.6. | 1 | 1 | 0 | I'm using VSCode on OSX to start Python development. I'm supposed to be using Python 3.6.xxxx, but when I use python -V I get 2.7.10.
My path variable is
bash: /Library/Frameworks/Python.framework/Versions/3.6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:/Library/Frameworks/Mono.framework/Versions/Current/Commands:/Users/leonardo/.rbenv/shims:/Library/Frameworks/Python.framework/Versions/3.6/bin:/Users/leonardo/bin:/Users/leonardo/bin: No such file or directory
I also can't get pylint to work in VSCode because, when I try to install it, it complains that pip is unavailable...
What should I do? | How to force python 3.6 on MAC OSX | 1.2 | 0 | 0 | 420 |
47,267,716 | 2017-11-13T15:21:00.000 | 6 | 0 | 1 | 0 | python,installation,anaconda,spyder | 47,269,281 | 1 | true | 0 | 0 | The problem is that you have two Python versions installed:
C:\Users\afsan\Anaconda3\
C:\Users\afsan\AppData\Local\Programs\Python\Python36
Given that it seems you want to use Spyder with Anaconda, please remove your second Python version (manually, if necessary). That should fix your problem. | 1 | 3 | 1 | I am still getting this error: An error occurred while starting the kernel
Things I tried:
setuptools command
updating spyder
Uninstalled everything that had the word python in it from the "Uninstall or change a program" panel
Uninstalling and reinstalling Anaconda
Reading people's responses on how they tried to fix it
Tried not to get frustrated.
This started occurring after I updated Spyder, which I shouldn't have, but now I am stuck with the issue. I will share the complete message that's coming up on my IPython console screen.
Traceback (most recent call last):
File "C:\Users\afsan\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in <module>
main()
File "C:\Users\afsan\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 213, in main
from ipykernel.kernelapp import IPKernelApp
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\ipykernel\__init__.py", line 2, in <module>
from .connect import *
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\ipykernel\connect.py", line 18, in <module>
import jupyter_client
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\jupyter_client\__init__.py", line 4, in <module>
from .connect import *
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\jupyter_client\connect.py", line 22, in <module>
import zmq
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\__init__.py", line 34, in <module>
from zmq import backend
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\__init__.py", line 40, in <module>
reraise(*exc_info)
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise
raise value
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\__init__.py", line 27, in <module>
_ns = select_backend(first)
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\select.py", line 26, in select_backend
mod = __import__(name, fromlist=public_api)
File "C:\Users\afsan\AppData\Local\Programs\Python\Python36\Lib\site-packages\zmq\backend\cython\__init__.py", line 6, in <module>
from . import (constants, error, message, context,
ImportError: cannot import name 'constants'
(The same traceback is printed four more times.) | Spyder: An error occurred while starting the kernel | 1.2 | 0 | 0 | 19,126 |
47,270,103 | 2017-11-13T17:26:00.000 | 0 | 1 | 0 | 0 | c#,python,asp.net,django,raspberry-pi3 | 47,270,170 | 1 | false | 1 | 0 | Considering how easy that is with something like django, I would go that route. Unless you're a C# wizard (I am most certainly not) or need it for something else. | 1 | 0 | 0 | I am interested in making a website that activates a python script. I am most familiar with c# but I have a python scraping script I would like to use that currently runs well on my raspberry pi.
My question is:
How do I make a C# ASP.NET website that activates the Python script? What is the best approach? Should I just forget C# altogether and use something like Django to host the site? | website to activate my python scraper using c# and possibly a raspberry pi | 0 | 0 | 0 | 28 |
47,270,175 | 2017-11-13T17:30:00.000 | 0 | 0 | 0 | 0 | python,api,networking,request | 47,270,840 | 2 | false | 0 | 0 | GLOBAL ANSWER (read through these methods; besides them, you can try many modules that send/receive your data to the target server; a small Python sketch follows the list):
Check your internet connection speed; pause, stop or disable all of your active downloads.
Temporarily disable your firewall/antivirus or any security program that scans your internet protocols, since it can suspend your network I/O.
Check your VPN/proxy or any kind of tunneling program; they can inspect and suspend your connections and make them slower.
Check your default DNS IPs and choose a fast DNS server, such as Google's 8.8.8.8.
Free up RAM and check your OS disk I/O speed (close heavy programs such as converters).
Consider the language itself: Python is good, but it is not the fastest at sending/receiving data over the web, because requests pass through your OS services via an extra layer; lower-level code or OS-native languages (such as the .NET languages on Windows) avoid part of that bridge.
You can compress your data to make each request smaller and faster.
Finally, it also depends on the target server's response time and your OS; Linux handles network I/O differently from Windows.
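One more Python-level tactic that complements the list above is reusing a single connection across requests; a minimal hedged sketch (the URL is a placeholder):

import requests

# Reusing one requests.Session keeps the underlying TCP (and TLS) connection
# alive between calls, so repeated requests skip the handshake overhead that
# often dominates the latency of a single request.
session = requests.Session()
for _ in range(5):
    response = session.get("https://api.example.com/data")  # hypothetical endpoint
    payload = response.json()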
Good luck! | 1 | 2 | 0 | I have the following question: is there any way to increase the speed of sending a single API request? I want to send a request and obtain JSON as a result. When I use requests.get(url) the time is about 150 ms.
I would like to lower this time; my aim is to speed up a single request. Is there any way to do this? | Python, api requests | 0 | 0 | 1 | 79
47,270,367 | 2017-11-13T17:42:00.000 | 0 | 0 | 1 | 0 | python,file,networking | 47,270,457 | 2 | false | 0 | 0 | If you can remove the file without Python, then you can remove it with Python. Otherwise, the answer is "no". | 2 | 0 | 0 | Recently, I learnt how to write/delete/read a file in Python (I am a beginner). However, something has got me thinking: is it possible to write/delete/read a file in a different user account from the same network (i.e same ip address)? If so, how? Don't worry, I'll just try it at home. ;) | How can you write/delete/read a file in a different user account with the same ip address with python? | 0 | 0 | 0 | 35 |
47,270,367 | 2017-11-13T17:42:00.000 | 0 | 0 | 1 | 0 | python,file,networking | 47,270,609 | 2 | false | 0 | 0 | This is running into more of an OS question. The answer is if you have permission to do so then yes you can but if you do not have permissions over the file then you cannot. | 2 | 0 | 0 | Recently, I learnt how to write/delete/read a file in Python (I am a beginner). However, something has got me thinking: is it possible to write/delete/read a file in a different user account from the same network (i.e same ip address)? If so, how? Don't worry, I'll just try it at home. ;) | How can you write/delete/read a file in a different user account with the same ip address with python? | 0 | 0 | 0 | 35 |
47,270,827 | 2017-11-13T18:10:00.000 | 1 | 0 | 1 | 0 | django,github,python-venv,virtual-environment | 47,271,051 | 2 | false | 0 | 0 | You don't have to do that. What you can do is :
Remember your project's Python version.
Generate your Django project's dependencies file, requirements.txt.
- Create the requirements.txt file with pipreqs /path/to/your/project/ (I recommend pipreqs; it creates a project-level requirements.txt. You can also use pip freeze or other commands).
- Install all dependencies from it with pip install -r requirements.txt, making sure pip belongs to your virtualenv's Python rather than the OS default pip.
Then you can easily install a brand new virtual env and install all dependencies. | 2 | 0 | 0 | Is it necessary to upload venv folder that itself contains 100's of files along with other folders and files of the same project to GitHub? | Uploading venv folder on github | 0.099668 | 0 | 0 | 1,462 |
47,270,827 | 2017-11-13T18:10:00.000 | 3 | 0 | 1 | 0 | django,github,python-venv,virtual-environment | 47,271,134 | 2 | false | 0 | 0 | Simple answer: no. Add venv to your .gitignore file to ignore everything inside the venv folder. The venv folder just stores your project's dependencies; you can use pip freeze to generate a requirements.txt that others can use to reproduce the same environment. Besides, the files inside your venv will be huge because the folder contains every package you installed. | 2 | 0 | 0 | Is it necessary to upload venv folder that itself contains 100's of files along with other folders and files of the same project to GitHub? | Uploading venv folder on github | 0.291313 | 0 | 0 | 1,462
47,272,966 | 2017-11-13T20:26:00.000 | 5 | 0 | 1 | 0 | javascript,python,npm,pip,virtualenv | 47,284,674 | 2 | false | 0 | 0 | Node installs
local: npm install <pkg>
global: npm install -g <pkg>
Python installs
local: . <envName>/bin/activate then pip install <pkg>
global: pip install <pkg>
Node usage
local: npm start (w/ path to binary specified in package.json e.g. "start":"./node_modules/.bin/<pkg>")
global: <pkg> <cmd>
Python usage
local: . <envName>/bin/activate then <pkg> <cmd>
global: <pkg> <cmd>
main takeaway: once you activate virtualenv you don't have to worry about package commands slipping into global scope
NVM: way to specify Node version using .nvmrc file in project root | 1 | 9 | 0 | I've been using Python for a while and I've learned we should always use a virtual env for each project where we pip install <name> the packages as needed, etc
I'm new to JS but would downloading packages using npm install <name> without the -g option mean it will only download it in the specific project directory, similarly to how Python's virtual env is keeping the pip packages separate? or is there also some sort of virtual env that needs to be created?
Sorry if I'm misunderstanding anything here... just want to make sure that installing packages using npm install isn't going to mess w/ anything globally or something! | Is installing NodeJS packages locally equivalent to Python's virtualenv? | 0.462117 | 0 | 0 | 2,440 |
47,273,835 | 2017-11-13T21:28:00.000 | 0 | 0 | 0 | 0 | python,windows,printing,zebra-printers | 47,274,306 | 1 | false | 0 | 0 | Pull down the driver for a GK420d printer. Basically all Zebra ZPL printers use the same driver. If that driver does not recognize your printer automatically just manually install the GK420d driver and point it at your printer. | 1 | 0 | 0 | The title is the basic info. I made a Python program on my Windows 7 computer that prints out labels, and it works when I connect my own printer to it. However, I tried connecting a label printer that is the one I mentioned in the title, and for some reason it's not working. My device manager categorizes it as unspecified, and trying to print with it won't work. This is an old model that was discontinued, so I tried finding a driver for it. Even that didn't work. Does someone have any suggestions? Thanks. | How to connect a Zebra TLP 2844-Z label printer to Windows 7 laptop? | 0 | 0 | 0 | 343 |
47,274,094 | 2017-11-13T21:46:00.000 | 0 | 0 | 0 | 0 | python,pandas,csv,dataframe | 47,274,239 | 3 | false | 0 | 0 | Some solutions:
Go through all the files, change the columns names, then save the result in a new folder. Now when you read a file, you can go to the new folder and read it from there.
Wrap the normal file read function in another function that automatically changes the column names, and call that new function when you read a file (a sketch of this approach follows the list below).
Wrap column selection in a function. Use a try/except block to have the function try to access the given column, and if it fails, use the other form. | 1 | 1 | 1 | Recently I have been developing some code to read a csv file and store key data columns in a dataframe. Afterwards I plan to have some mathematical functions performed on certain columns in the dataframe.
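A minimal hedged sketch of the wrapper idea from the second solution above (the numeric-header heuristic is an assumption about your files):

import pandas as pd

def read_csv_normalized(path):
    # Normalize numeric-looking headers so '300.0' and '300' both become '300';
    # downstream code can then always use df['300'] regardless of the source file.
    df = pd.read_csv(path)
    df.columns = [
        str(int(float(col))) if col.replace('.', '', 1).isdigit() else col
        for col in df.columns
    ]
    return df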
I've been fairly successful in storing the correct columns in the dataframe. I have been able to have it do whatever maths is necessary such as summations, additions of dataframe columns, averaging etc.
My problem lies in accessing specific columns once they are stored in the dataframe. I was working with a test file to get everything working and managed this no problem. The problems arise when I open a different csv file, it will store the data in the dataframe, but the accessing the column I want no longer works and it stops at the calculation part.
From what I can tell the problem lies with how it reads the column name. The column names are all numbers. For example, df['300'], df['301'] etc. When accessing the column df['300'] works fine in the testfile, while the next file requires df['300.0']. If I switch to a different file it may require df['300'] again. All the data was obtained in the same way so I am not certain why some are read as 300 and the others 300.0.
Short of constantly changing the column labels each time I open a different file, is there anyway to have it automatically distinguish between '300' and '300.0' when opening the file, or force '300.0' = '300'?
Thanks | python distinguish between '300' and '300.0' for a dataframe column | 0 | 0 | 0 | 213 |
47,275,128 | 2017-11-13T23:16:00.000 | 3 | 0 | 1 | 0 | python,github,gitignore | 47,275,261 | 1 | true | 0 | 0 | Each line in the .gitignore file establishes a pattern. When deciding if a file should be ignored, Git looks to see if it matches one of the patterns in the .gitignore file.
If you added the text .gitignore into the .gitignore file, the .gitignore file itself would not be controlled within the Git version control system, and would not get uploaded to a remote repository.
Controlling the .gitignore file within Git is no bad thing, however, in my humble opinion. It means all users of the repository have access to the same configuration. | 1 | 2 | 0 | I have received feedback on my first ever programming assignment in Python. In general, it was pretty good but I don't fully understand one point. To mark the assignment my teacher cloned my repo via github. The feedback advised using "an appropriate .gitignore that doesn't upload .gitignore itself".
Can anyone expand on this point for me? Still trying to get familiar with git and don't fully understand gitignore. | Selecting an appropriate .gitignore so it doesn't upload itself. | 1.2 | 0 | 0 | 106 |
47,275,909 | 2017-11-14T00:47:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 47,275,939 | 3 | false | 0 | 0 | One way to do this is move x.py - but then create a new x.py (or in this case, __init__.py that resides in the x directory) that imports everything it used to expose.
For example, if x.py contained def example you could import this using from x import example. This would basically be a proxy to the real definition.
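A minimal sketch of that layout (file names mirror the question; the re-exported name is illustrative):

# Directory layout:
#   x/
#     __init__.py   <- re-exports the old module's names
#     x.py          <- the original module, moved here unchanged
#
# Contents of x/__init__.py:
from .x import *  # existing `import x` / `from x import example` keep working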
Let me know if I haven't explained this clearly enough. | 1 | 2 | 0 | Is it possible to change a Python module to a package without changing the import statements in other files?
Example: other files in the project use import x, and x is x.py, but I want to move it to something like ./x/x.py without changing the import statements in other files to from x import x.
Even if I use __all__ = ["x"] in __init__.py, it needs from x import x to work.
Is there a way or do I have to update the import statements in the other files? | How to change a Python 3 module to a package without changing the imports? | 0.066568 | 0 | 0 | 107 |
47,277,332 | 2017-11-14T03:49:00.000 | 0 | 0 | 0 | 0 | python,opencv,opticalflow,cv2 | 47,689,288 | 1 | false | 0 | 0 | Yes, it's possible. cv2.calcOpticalFlowPyrLK() will be the optical flow function you need. Before you make that function call, you will have to create an image mask. I did a similar project, but in C++, though I can outline the steps for you:
Create an empty matrix with same width and height of your images
Using the points from your ROI, create a shape out of it (I did mine using cv2.fillPoly()) and fill the inside of the shape with white (Your image mask should only be comprised of black and white color)
If you are planning on using corners as features, then call cv2.goodFeaturesToTrack() and pass in the mask you've made as one of its arguments.
If you're using the Feature2D module to detect features, you can use the same mask to only extract the features in that masked area.
By this step, you should now have a collection of features/points that are only within the bounds of the shape! Call the optical flow function and then process the results.
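A rough Python translation of those steps (file names and ROI corners are placeholders):

import cv2
import numpy as np

prev_gray = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

# Steps 1-2: empty mask of the same size, then fill the ROI polygon with white
mask = np.zeros(prev_gray.shape, dtype=np.uint8)
roi = np.array([[10, 10], [200, 10], [200, 150], [10, 150]], dtype=np.int32)
cv2.fillPoly(mask, [roi], 255)

# Step 3: detect corners only inside the masked region
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7, mask=mask)

# Final step: track those points into the next frame and keep the good ones
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
good_new = p1[status == 1]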
I hope that helps. | 1 | 0 | 1 | I am using OpenCV's Optical Flow module. I understand the examples in the documentation but those take the entire image and then get the optical flow over the image.
I only want to pass it over some parts of an image. Is it possible to do that? If yes, how do I go about it?
Thanks! | cv2 running optical flow on particular rectangles | 0 | 0 | 0 | 279 |
47,278,382 | 2017-11-14T05:36:00.000 | 1 | 1 | 0 | 0 | python,pip,discord.py | 47,361,577 | 1 | true | 0 | 0 | I had a similar problem on a different machine. What I did to make the Python 3.6 interpreter the default one for the python3 command was this:
First, edit your .bashrc file to include the line export PATH=/path/to/python/bin:$PATH (in this case, I will be using /home/pi/python). Then, download Python 3.6 using wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz. Unarchive it using tar -zxvf Python-3.6.3.tgz and cd into the directory. Then, configure it by running ./configure --prefix=$HOME/python (or the path you used in .bashrc), and build it using make and sudo make install. Afterwards, reboot the Raspberry Pi, and you should now be able to use Python 3.6 with the python3 command. | 1 | 0 | 0 | I wanted to run a discord bot from my raspberry pi so I could have it always running, so I transferred over the bot file. BTW this bot was made in python. I get an error saying no module named discord. This is because I don't have discord installed. Whenever I try to use pip3 install discord I get a message saying that it was successful, but was installed under Python 3.4. I need it to be installed under Python 3.5 so that my bot's code will run properly. If I try to use python3 -m pip install discord I get the error /usr/local/bin/python3: No module named pip. When I run pip -V I get 3.4. I want to make the version 3.5 instead of 3.4, but even after running the get-pip.py file I still am on pip 3.4. Any help? | Pip Not Working On Raspberry Pi For Discord Bot | 1.2 | 0 | 1 | 1,062
47,280,587 | 2017-11-14T08:08:00.000 | 0 | 0 | 1 | 0 | python-2.7,jupyter-notebook | 47,280,879 | 1 | true | 0 | 0 | check If your "Untitled.ipynb” is open or in use somewhere in system.If yes close it. | 1 | 0 | 0 | I am learning python and Installed Anaconda and Trying to generate new python 2 notebook using jupyter notebook. I am using windows OS. | I am new to Python and getting this error while creating new Python 2 notebook. The Error is "Permission denied: Untitled.ipynb". | 1.2 | 0 | 0 | 716 |
47,282,399 | 2017-11-14T09:46:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 51,707,973 | 2 | true | 0 | 0 | Recently I have come across this function in tensorflow API.
tf.keras.callbacks.EarlyStopping (tf version is r1.9); a usage sketch follows the argument list below.
Arguments:
monitor: quantity to be monitored
min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta, will count as no improvement.
patience: number of epochs with no improvement after which training
will be stopped.
verbose: verbosity mode.
mode: one of {auto, min, max}. In min mode, training will stop when
the quantity monitored has stopped decreasing; in max mode it will
stop when the quantity monitored has stopped increasing; in auto
mode, the direction is automatically inferred from the name of the
monitored quantity. | 2 | 5 | 1 | I am using tensorflow v1.4. I want to use early stopping using the validation set with a patience of 5 epochs.
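A hedged usage sketch of the callback described above (the tiny model and the random data are placeholders, included only to make the snippet runnable):

import numpy as np
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',  # quantity to watch
    min_delta=0.0,       # any improvement counts
    patience=5,          # stop after 5 epochs without improvement
    verbose=1,
    mode='min')

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='adam', loss='mse')

x, y = np.random.rand(200, 10), np.random.rand(200, 1)
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])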
I have searched on the web and found out that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this? | Early stopping using tensorflow tf.estimator ? | 1.2 | 0 | 0 | 1,059
47,282,399 | 2017-11-14T09:46:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 49,161,168 | 2 | false | 0 | 0 | There doesn't seem to be a good way of doing this, unfortunately. One method to consider is to save checkpoints quite often during training, and then later iterate over them and evaluate them. Then you can discard the checkpoints that do not have the best eval performance. This doesn't save you time during training, but at least the resulting model you are left with is an early-stopping model. | 2 | 5 | 1 | I am using tensorflow v1.4. I want to use early stopping using the validation set with a patience of 5 epochs.
I have searched on the web and found out that there used to be a function called ValidationMonitor, but it is deprecated now. So is there a way to achieve this? | Early stopping using tensorflow tf.estimator ? | 0 | 0 | 0 | 1,059
47,286,547 | 2017-11-14T12:58:00.000 | 1 | 0 | 0 | 0 | python,pandas,csv | 64,614,856 | 5 | false | 0 | 0 | Putting this into the read_csv function does work:
dtype={"count": pandas.Int64Dtype()}
i.e.
df = pd.read_csv('file.csv', dtype={"count": pd.Int64Dtype()})
This type supports both integers and pandas.NA values, so you can import the column without the integers becoming floats.
If necessary, you can then use regular DataFrame commands to clean up the missing values, as described in other answers here.
BTW, my first attempt to solve this changes the integers into strings. If that's of interest:
df = pd.read_csv('file.csv', na_filter=False)
(It reads the file without replacing any missing values with NaN). | 1 | 14 | 1 | I have some data in csv file.
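For the literal question asked, replacing '-' with zero at read time, a minimal hedged sketch:

import pandas as pd

# Treat '-' as a missing-value marker while parsing, then fill with zero.
df = pd.read_csv('file.csv', na_values=['-'])
df = df.fillna(0)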
Because it is collected from machine,all lines should be number but some NaN values exists in some lines.And the machine can auto replace these NaN values with a string '-'.
My question is how to set params of pd.read_csv() to auto replace '-'values with zero from csv file? | Is there a way in pd.read_csv to replace NaN value with other character? | 0.039979 | 0 | 0 | 31,191 |
47,289,424 | 2017-11-14T15:24:00.000 | 0 | 0 | 0 | 1 | python,amazon-web-services,cygwin,aws-cli | 47,595,992 | 2 | false | 0 | 0 | Ok, so I spent ages trying to do this as well because I wanted to get setup on the fast.ai course. Nothing seemed to work. However, I uninstalled Anaconda3 and installed Anaconda2 instead. That did the trick! | 2 | 0 | 0 | I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin.
I need to use AWS CLI with Cygwin but any time I input any aws command, I get the following error:
C:\users\myusername\appdata\local\programs\python\python36\python.exe:
can't open file
'/cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws':
[Errno 2] No such file or directory
I've seen other questions about getting Cygwin working with AWS, but they seem to talk about AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? Thanks. | AWS CLI not working in Cygwin | 0 | 0 | 0 | 942 |
47,289,424 | 2017-11-14T15:24:00.000 | 0 | 0 | 0 | 1 | python,amazon-web-services,cygwin,aws-cli | 47,312,502 | 2 | false | 0 | 0 | You are mixing a Cygwin POSIX path with a non-Cygwin Python.
C:\users\myusername\appdata\local\programs\python\python36\python.exe
is not the Cygwin Python, so it can't open the file, because
/cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws is not a Windows path that it can understand. Only Cygwin programs understand it.
Two possible solutions:
1. Use a Windows path
2. Use a Cygwin Python | 2 | 0 | 0 | I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin.
I need to use AWS CLI with Cygwin but any time I input any aws command, I get the following error:
C:\users\myusername\appdata\local\programs\python\python36\python.exe:
can't open file
'/cygdrive/c/Users/myusername/AppData/Local/Programs/Python/Python36/Scripts/aws':
[Errno 2] No such file or directory
I've seen other questions about getting Cygwin working with AWS, but they seem to talk about AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? Thanks. | AWS CLI not working in Cygwin | 0 | 0 | 0 | 942 |
47,290,296 | 2017-11-14T16:09:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,path,anaconda,conda | 47,290,366 | 1 | true | 0 | 0 | C:\ProgramData\Anaconda3;C:\ProgramData\Anaconda3\Scripts;C:\ProgramData\Anaconda3\Library\bin;
These three entries should be added to PATH automatically by Anaconda; if that didn't happen, put them at the very beginning of the list. | 1 | 2 | 0 | I have anaconda 3 with python 3 that works perfectly in anaconda prompt but I want to make anaconda python my default python. The file 'python.exe' is located at 'C:\ProgramData\Anaconda3\python.exe' but when I go to PATH Variables there's no file named python in Anaconda3 folder. Any suggestions on how to do it? | How can I set Anaconda3 python my default on windows | 1.2 | 0 | 0 | 4,281
47,291,361 | 2017-11-14T17:01:00.000 | 1 | 0 | 0 | 1 | python,powershell,tfs | 47,295,271 | 1 | false | 0 | 0 | You'd have to make the Start-Process call asynchronously either using Jobs or Runspaces, and then constantly monitor the job/runspace child session for output to display. | 1 | 0 | 0 | Currently i have the stuff in place in my TFS like below :
TFS Job calls powershell script1
powershell script1 calls powershell script2 (having module)
Powershell module calls the python script using Start process which returns the standard output through -RedirectStandardOutput. And the same is capturing/returning to TFS.
But the output from the python script is returned all in one go, and I am not able to get line-by-line logs into TFS instantly.
Can anyone suggest if there is a way to return the output from the python script to TFS instantly, as line-by-line logs? | Capturing the output from python script which called through powershell | 0.197375 | 0 | 0 | 138
47,292,171 | 2017-11-14T17:46:00.000 | -1 | 0 | 1 | 0 | python,visual-studio | 48,472,766 | 1 | false | 0 | 0 | I think you should change your python environment of the VS 2017 to python 3.5 or 3.6 for 64-bit or something like that (python 3 instead of python 2 in general). | 1 | 0 | 0 | Could someone please advise me how or direct me to an easy to follow resource for getting scikit learn set up as a library on VS 2017 Python? I am not talking about importing it into my own code, but actually allowing VS to recognise the library.
I have Python 2.7 on the 64x VS environment set up.
I've looked into the installation page on scikit learn website and it is very unclear to me.
I have already installed it on PyCharm , which was very simple with a few commands, yet VS is the sole IDE which is unable to see this library.
I'm fairly new to VS yet am finding it much more complicated to do simple things compared to other IDEs | Installing scikit learn on VS 2017 Python | -0.197375 | 0 | 0 | 731 |
47,296,071 | 2017-11-14T22:09:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,syntax | 47,296,127 | 5 | false | 0 | 0 | Seems like something mistakenly copied from another language such as C. It's not actually invalid syntax in Python but the parentheses around int literally do nothing and are ignored. | 1 | 1 | 0 | I saw this in an exercise problem and had never seen this syntax used in Python before. Haven't had any luck googling it | what is this syntax called? (int)(value) casts the same as int(value) | 0.07983 | 0 | 0 | 62 |
47,298,329 | 2017-11-15T02:16:00.000 | -1 | 0 | 1 | 0 | python,python-3.x,csv | 47,298,418 | 2 | false | 0 | 0 | I believe you will have to use a combination of the solutions that you have already found, such as 'os.listdir(path)' to get the contents of a directory, 'os.lstat(path).st_size' to get file size, and 'os.path.isdir(path)' and 'os.path.isfile(path)' to determine the type. | 1 | 0 | 0 | After doing my research for this specific task I found that most of the solutions given for this kind of problem either return the list of all the files or the TOTAL size of the folder/file.
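A rough hedged sketch combining those pieces into a CSV of per-folder sizes (the output file and the root path are placeholders):

import csv
import os

with open('tree_sizes.csv', 'w') as out:
    writer = csv.writer(out)
    writer.writerow(['path', 'type', 'size_bytes'])
    for root, dirs, files in os.walk('/path/to/share'):  # hypothetical root
        folder_total = 0
        for name in files:
            full_path = os.path.join(root, name)
            size = os.lstat(full_path).st_size
            folder_total += size
            writer.writerow([full_path, 'file', size])
        # size of the files directly inside this folder (not recursive)
        writer.writerow([root, 'folder', folder_total])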
What I am trying to achieve is get an output in the CSV file stating the folder structure i.e. folders - sub folders - files (optional) along with the size information for EACH.
There is no specific format for the CSV. I just need to know the tree structure with the size of the folder/sub-folder.
The reason behind this is that we are moving from physical servers to the cloud. In order to verify whether all the data was retained correctly during conversion I need to make a similar list of all SHARED DRIVES which can later be validated.
Looking forward to meaningful insights. Thanks! | Get folder structure along with folder/file sizes in python | -0.099668 | 0 | 0 | 2,202
47,299,415 | 2017-11-15T04:34:00.000 | 2 | 0 | 0 | 0 | python,faker | 51,228,688 | 1 | true | 0 | 0 | I had this same issue and looked more into it.
In the en_US provider there are about 1000 last names and 750 first names, for about 750,000 unique combos. If you randomly select a first and last name, there is a chance you'll get duplicates. But in reality, that's how the real world works: there are many John Smiths and Robert Doyles out there.
There are 7203 first names and 473 last names in the en profile which can kind of help. Faker chooses the combo of first name and last name meaning there are about 7203 * 473 = 3407019.
But still, there is a chance you'll get duplicates.
I solve this problem by adding numbers to names.
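A minimal sketch of that numbering trick (the counter format is arbitrary):

from faker import Faker

fake = Faker()
seen = set()

def unique_name():
    name = fake.name()
    if name in seen:                       # collision: append a number
        name = '%s %d' % (name, len(seen))
    seen.add(name)
    return name

names = [unique_name() for _ in range(100000)]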
I need to generate huge dataset.
Keep in mind that in reality, any huge dataset of names will have duplicates. I work with large datasets (> 1 million names) and we see a ton of duplicate first and last names.
If you read the faker package code, you can probably figure out how to modify it so you get all 3M distinct names. | 1 | 6 | 0 | I have used Python Faker for generating fake data. But I need to know what is the maximum number of distinct fake data (eg: fake names) can be generated using faker (eg: fake.name() ).
I have generated 100,000 fake names and I got less than 76,000 distinct names. I need to know the maximum limit so that I can know how much we can scale using this package for generating data.
I need to generate huge dataset. I also want to know is Php faker, perl faker are all same for different environments?
Other packages for generating huge dataset will be highly appreciated. | Maximum Limit of distinct fake data using Python Faker package | 1.2 | 0 | 0 | 1,947 |
47,301,667 | 2017-11-15T07:28:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,python-2.7 | 47,301,713 | 1 | true | 0 | 0 | Here, int/int returns an int; that is what is happening: 22/7 gives 3, and you're casting it to float(3), which gives 3.0. But if you perform float/int or int/float, the result will be a float, so convert either operand to float as shown below.
Replace float(22/7) with float(22)/7. | 1 | 0 | 0 | How to make the output of 22//7 on Python 2.7 become 3.14159? I tried float(22/7) but it just gives me 3.0. I tried Decimal but it just gives me 3, and round(x, 6) only gives 3.0, just like float. | different result 22/7 on python 2.7.12 and 3.6 | 1.2 | 0 | 0 | 780
47,307,441 | 2017-11-15T12:28:00.000 | 0 | 0 | 0 | 0 | python,algorithm,graph | 47,309,192 | 1 | false | 0 | 0 | There would be perfect algorithms for your problem of course, but you could prune some processing by finding the strongly connected components of your graph. There is no chance for a node of one SCC to form a cycle with a node of another SCC. So you can then run your current algorithm on the different SCCs separately, which could save a lot of processing and traversing. | 1 | 0 | 0 | I want to ask whether it is possible to list all cycles in a really large directed graph. The size of the graph is around 600 nodes and around 7000 edges.
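A hedged sketch of that SCC pruning, here using networkx (the toy edges are placeholders; singleton components with self-loops would need a separate check):

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])  # toy graph

for component in nx.strongly_connected_components(G):
    if len(component) < 2:
        continue  # no cross-SCC cycles are possible
    subgraph = G.subgraph(component)
    for cycle in nx.simple_cycles(subgraph):  # run the cycle search per SCC
        print(cycle)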
I have tried a naive DFS algorithm and multithreaded it using Python processes. I set it to use 10 processes. The problem is, I still cannot finish the program; it takes too long. I noticed that it listed many duplicate cycles. For example, I found a cycle consisting of 200 nodes, which means it computes the same cycle 200*200 times! I believe this is useless because it is all the same cycle.
I have tried Johnson algorithm but my 64GB RAM can not handle it. It said it runs out of memory.
Other things that I have done include listing whether a cycle exists between 2 nodes.
I detect if node A can reach node C and node C can reach node A, it means there is at least a cycle that involves A and C. This algorithm is quite fast and I can list there are around 600,000 possibilities although I know there are many duplicates too. For example if A can reach B, B can reach C, and C can reach A, it means there is a cycle between A and C and B and C.
What should I do? | Listing all cycles in a really large directed graph | 0 | 0 | 0 | 318 |
47,307,882 | 2017-11-15T12:49:00.000 | 2 | 0 | 1 | 1 | python,anaconda | 47,308,553 | 1 | true | 0 | 0 | Problem
pdb package comes with Anaconda 3's Python installation, so when you run import pdb in the command line it successfully loads it.
Running pdb <script> from command line makes Windows look for a pdb.exe executable in any of the paths listed in your PATH environment variable.
As far as I know, pdb does not come with a script executable. ipdb does.
Solution
PDB:
Run python -m pdb myscript.py from an Anaconda terminal.
IPDB:
Install the ipdb package in your Anaconda Python environment.
Try running your script in the command line with ipdb <script>.
If the second step does not work for you, this is probably because the anaconda script folder does not appear in your PATH environment variable.
Press winkey
Search for Edit the system environment variables
Click on environment variables....
Look for a PATH variable in either your user or system variables. (If it doesn't exist, create a new variable).
Double click it and add a new path pointing to your anaconda Scripts folder (for me it's in %localappdata%\Continuum\anaconda3\Scripts).
And you're done. | 1 | 1 | 0 | I installed Anaconda (python 3) on a Windows 10 machine. Whilst import pdb works inside a script I cant use pdb <script> on the command line. I would have thought is should work right out of the box. If not, what is the right way to install it on an Anaconda environment? I would also like to use ipdb. | Should pdb and ipdb be installed by Anaconda on Windows? | 1.2 | 0 | 0 | 797 |
47,309,651 | 2017-11-15T14:12:00.000 | 0 | 0 | 1 | 0 | python,function | 47,309,838 | 3 | false | 0 | 0 | When to create a function depends a lot on the specifics of the situation. It isn't just about saving lines of code - often, more lines of code is better because it makes it easier to maintain or easier to understand.
One reason to make a function is that it means if you need to change your code, there is just one place you have to change it.
It's unclear to me from your situation if you should create a function or not. It depends on the line of code. If the line of code is just calling another function, then wrapping that in another function doesn't make sense. If it's some complicated line that has lots of parts subject to change, then it might make sense. | 2 | 2 | 0 | I'm relatively new to python and I was wondering if it's always better to never write something twice. I'm currently making a program where at two points in the program the same sentence may need to be saved to a file (consisting only of that sentence).
Is it worth it to make a function for that? I was wondering since it now takes up 2 lines of code, but making a function would make it 4 (2 to define the function and 2 to call it). | Should you use a function for only one line of code? | 0 | 0 | 0 | 1,559 |
47,309,651 | 2017-11-15T14:12:00.000 | 4 | 0 | 1 | 0 | python,function | 47,309,787 | 3 | true | 0 | 0 | Wrap even a one-liner in a function if
Calling the function looks simpler than the code it wraps;
It is likely to be repeated several times in your code;
It is likely to change (then you change only the body of the function and not every place where it is used).
AFAIK, unlike in languages like C where simple functions can get inlined by the compiler, in Python there will be a (sometimes important) performance overhead in calling a wrapper function. | 2 | 2 | 0 | I'm relatively new to python and I was wondering if it's always better to never write something twice. I'm currently making a program where at two points in the program the same sentence may need to be saved to a file (consisting only of that sentence).
Is it worth it to make a function for that? I was wondering since it now takes up 2 lines of code, but making a function would make it 4 (2 to define the function and 2 to call it). | Should you use a function for only one line of code? | 1.2 | 0 | 0 | 1,559 |
47,310,767 | 2017-11-15T15:05:00.000 | 4 | 0 | 1 | 0 | python,pycharm | 47,310,871 | 3 | false | 0 | 0 | One way you could try is: Run --> Edit Configurations --> set your interpreter there. That "sticks" for me... | 2 | 3 | 0 | Every time I open PyCharm I get this message:
No Python interpreter configured for the project.
So I set the interpreter, everything works fine, then I close PyCharm and reopen it, and the message pops up again.
Reinstalling Python and Pycharm didn't fix the problem. | Pycharm: "No Python interpreter configured for the project" everytime | 0.26052 | 0 | 0 | 21,937 |
47,310,767 | 2017-11-15T15:05:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 47,310,982 | 3 | false | 0 | 0 | If you are opening individual files then try setting the default settings.
File > Default Settings > Project Interpreter.
I think when you are opening individual files you are not loading the project. This sets a default interpreter so opening a file without loading the project should give it this interpreter.
If this doesn't work then open pycharm normally and select the project that you are working on. | 2 | 3 | 0 | Every time I open PyCharm I get this message:
No Python interpreter configured for the project.
So I set the interpreter, everything works fine, then I close PyCharm and reopen it, and the message pops up again.
Reinstalling Python and Pycharm didn't fix the problem. | Pycharm: "No Python interpreter configured for the project" everytime | 0.066568 | 0 | 0 | 21,937 |
47,312,007 | 2017-11-15T16:01:00.000 | 2 | 0 | 1 | 0 | python,parsing,numbers,integer,token | 47,312,146 | 1 | true | 0 | 0 | Python 3 standardized how all integer literals (other than base 10) were defined: 0?dddd..., where ? is a single letter indicating the base, and each d is interpreted as a digit in the appropriate base. 0... was retained as an exception, since 0 is 0 in any base, and was accepted as such before explicit base specifiers were required.
The biggest change related to this is that a number with a leading zero but no explicit base specifier is no longer assumed to be an octal number. Python 2 accepts both 042 and 0o42 as octal representations of decimal 34. (Early in Python's history, there were only three valid literals, with hexadecimal being the only one with a specifier. 0o... and 0b... were both later additions.) | 1 | 1 | 0 | I noticed the following fact with CPython3 and Pypy3, contrasting with the behaviour of both CPython2 and Pypy2:
In Python 3, it looks like leading zeros when parsing code yield an error, except for the single number 0. Thus 00 is valid but not 042.
In Python2, leading zeros are allowed for all integers. Thus 00 and 042 are valid.
Why did Python change its behaviour between both versions? | Odd behaviour of Python tokenizer when parsing integer numbers | 1.2 | 0 | 0 | 19 |
47,312,023 | 2017-11-15T16:02:00.000 | 0 | 0 | 1 | 0 | python,linux,virtual-machine,python-3.5 | 51,295,721 | 1 | true | 0 | 0 | Although I'm not sure the source of the issue, I did a full purge of python3 and reinstalled it and all packages with it and that fixed the issue! | 1 | 4 | 1 | I am on a virtual machine running on Ubuntu 16.04. I have installed pandas, sklearn, and conda using pip3. When I try to run a python3 program using these packages, I get the error "Illegal instruction (core dump)."
Not sure how to fix this. Simple python3 programs (aka no imports) run fine. I also tried importing but not using these packages, and that works.
The only other thing I have done on this VM is opencv development with C++. | python3 Illegal instruction (core dump) | 1.2 | 0 | 0 | 2,258 |
47,313,243 | 2017-11-15T17:02:00.000 | 2 | 0 | 1 | 0 | python,compression | 47,313,623 | 1 | false | 0 | 0 | Sometimes you can help gzip and other algorithms by preprocessing the data before compression.
For instance, if you have an image, instead of compressing the raw pixel data, you can try to compress the differences between the current pixel and the previous one.
So, instead of just compressing your string data, try pre-processing it before using your knowledge about the data itself.
Do not just compute deltas between characters: try normalizing stuff to reduce variance (remove unneeded characters, whitespace between the last character and end of line, unneeded whitespace, etc).
If your string data is composed of fields (it usually is), another technique that works is compressing the columns instead of the rows. Columnar data tends to have less variance and gzip will exploit that easily.
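A toy sketch of that columnar idea (the data is illustrative; actual gains depend on your fields):

import gzip

rows = [('alice', '42', 'NY'), ('bob', '37', 'LA'), ('carol', '42', 'NY')]

row_blob = '\n'.join(','.join(r) for r in rows).encode()
col_blob = '\n'.join(','.join(c) for c in zip(*rows)).encode()

# Columns group similar values together, which gzip usually compresses better.
print(len(gzip.compress(row_blob)), len(gzip.compress(col_blob)))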
Keep in mind that going from 2 GB of compressed data to 300 MB compressed will be quite hard to achieve, and you might need to process the data after decompressing for it to be usable. | 1 | 0 | 0 | I have one million strings that I am writing into a file. Using Python gzip compression the file comes to around 2 GB in size, and I want to reduce it to 250-300 MB.
Is there any way to compress more and bring it to 300 MB?
Any help would be much appreciated.
Thanks! | How to compress 2GB to 300 mb using python library | 0.379949 | 0 | 0 | 130 |
47,313,447 | 2017-11-15T17:12:00.000 | 2 | 0 | 0 | 0 | python,python-2.7,zebra-printers,zpl | 47,314,727 | 1 | true | 0 | 0 | OK I am not a python expert here but the general process is:
Open a TCP connection to port 9100
Write ZPL to your connection
Close your connection
You will want to look at the ^DF and ^XF commands in the ZPL programming guide to make sure you are using the templates right, but it is a pretty simple process.
If you are concerned about whether or not the printer is ready to print, you could look at the ~HS command to get the current status.
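A minimal Python sketch of those steps (the IP address and the ZPL string are placeholders; the ^XF here recalls a hypothetical stored format named MYFORMAT):

import socket

zpl = b'^XA^XFE:MYFORMAT.ZPL^FS^FN1^FDHello^FS^XZ'  # illustrative ZPL only

conn = socket.create_connection(('192.168.1.50', 9100), timeout=10)
try:
    conn.sendall(zpl)  # write raw ZPL to the printer's TCP port 9100
finally:
    conn.close()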
In the end there is a C# and Java SDK available for the printer which has helper functions to push variables stored in Maps to a template, but the JNI calls are probably more involved than just opening a TCP connection... | 1 | 0 | 0 | What I have
Printer internal IP
ZPL code on printer
parameters to plug into ZPL code
Is there a way to define a label and send it to the printer via Python? I would need to specify which label type to use on the printer since it can have multiple .zpl label codes stored on it.
Are there dedicated libraries? Otherwise, what are some basic socket functions to get me started? | Python send label to Zebra IP printer | 1.2 | 0 | 1 | 2,231
47,316,562 | 2017-11-15T20:22:00.000 | 1 | 0 | 1 | 0 | python | 47,316,644 | 1 | false | 0 | 0 | You have to read it, change the specific line and write a new file or overwrite the old one. | 1 | 0 | 0 | I want to just change a single value, do I have to make a whole new file? Say if I am collecting data about people and someones information changes or they want to change their password, what do I do? | How do i change one value in a csv file? | 0.197375 | 0 | 0 | 44 |
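A minimal sketch of that read-modify-write cycle (the file name and the row/column indices are placeholders):

import csv

with open('people.csv') as f:
    rows = list(csv.reader(f))

rows[2][1] = 'new_value'  # change a single cell

with open('people.csv', 'w') as f:
    csv.writer(f).writerows(rows)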
47,317,503 | 2017-11-15T21:25:00.000 | 2 | 0 | 1 | 0 | python,logging | 47,317,554 | 2 | false | 0 | 0 | You will have a performance penalty because list(x for x in long_generator) will be created before calling logging.debug() even if debug messages are ignored.
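A hedged sketch of how to sidestep that cost (the generator here is a stand-in for the real one):

import logging

logger = logging.getLogger(__name__)
long_generator = (x for x in range(10 ** 6))  # stand-in data source

# Guard explicitly, so the list is never built when DEBUG is off:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug('My very long list: %s', list(long_generator))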
There is no lazy evaluation in Python. Every line is evaluated/executed before the next line. It means that expressions are evaluated when they are bound to variables. | 1 | 3 | 0 | How lazy is evaluation in Python? If I have code that looks like
logging.debug('My very long list: %s' % list(x for x in long_generator))
and the logging effective level is such that debug messages are ignored, do I incur a performance penalty for having this line? | Performance penalty of long list in python logging.debug() | 0.197375 | 0 | 0 | 235 |
47,319,322 | 2017-11-15T23:54:00.000 | 1 | 0 | 0 | 0 | java,python,excel,api,bloomberg | 47,331,288 | 1 | true | 0 | 0 | I have only used the Python API, and via wrappers. As such I imagine there are ways to get data faster than what I currently do.
But for what I do, I'd say I can get a few years of daily data for roughly 50 securities in a matter of seconds.
So I imagine it could improve your workflow to move to a more robust API.
Regarding intraday data on the other hand I don't find much improvement. But I am not using concurrent calls (which I'm sure would help my speed on that front). | 1 | 0 | 0 | I've used Excel in the past to fetch daily price data on more than 1000 equity securities over a period of a month and it was a really slow experience (1 hour wait in some circumstances) since I was making a large amount of calls using the Bloomberg Excel Plugin.
I've always wondered if there was a substantial performance improvement to do the same task if I was using Python or Java to pull data from Bloomberg's API instead.
Would like to know from those who have had experience with both Excel and a programming language before I dive head first into trying to implement a Python or Java solution. | Accessing Bloomberg's API through Excel vs. Python / Java / other programming languages | 1.2 | 1 | 0 | 418 |
47,319,351 | 2017-11-15T23:58:00.000 | 0 | 0 | 0 | 0 | python,css,web,automation,bots | 47,319,389 | 3 | false | 1 | 0 | One possible solution:
Crawl website for .css files, save change dates and/or filesize.
After each crawl, compare the information and, if changes are detected, use the Slack API to notify. I haven't worked with Slack yet, so for this part of the solution maybe someone else can give advice. | 1 | 0 | 0 | I wish to monitor the CSS of a few websites (these websites aren't my own) for changes and receive a notification of some sort when they do. If you could please share any experience you've had with this to point me in the right direction for how to code it I'd appreciate it greatly.
I'd like this script/app to notify a Slack group on change, which I assume will require a webhook.
Not asking for code, just any advice about particular APIs and other tools that may be of benefit. | How to monitor changes to a particular website CSS? | 0 | 0 | 1 | 197 |
47,319,351 | 2017-11-15T23:58:00.000 | 0 | 0 | 0 | 0 | python,css,web,automation,bots | 47,319,375 | 3 | false | 1 | 0 | I would suggest using Github in your workflow. That gives you a good idea of changes and ways to revert back to older versions. | 2 | 0 | 0 | I wish to monitor the CSS of a few websites (these websites aren't my own) for changes and receive a notification of some sort when they do. If you could please share any experience you've had with this to point me in the right direction for how to code it I'd appreciate it greatly.
I'd like this script/app to notify a Slack group on change, which I assume will require a webhook.
Not asking for code, just any advice about particular APIs and other tools that may be of benefit. | How to monitor changes to a particular website CSS? | 0 | 0 | 1 | 197 |
47,321,613 | 2017-11-16T04:27:00.000 | 3 | 0 | 0 | 0 | python,rest,google-cloud-storage | 47,358,658 | 1 | false | 1 | 0 | Turns out that you can safely remove all objects ending with / if you don't need them once empty. "Content" will not be deleted.
If you are using Google Console, you must create a folder before uploading to it. That folder is therefore an explicit object that will remain even if empty. Same behavior apparently when uploading with tools like Cyberduck.
But if you upload the file using REST API and its full path i.e. bucket/folder/file, the folder is implicit visually but it's not getting created as such. So when removing the file, there is no folder left behind since it wasn't there in the first place.
Since the expected behavior for my use case is to auto-remove empty folders, I just have a pre-processing routine that deletes all blobs ending with / | 1 | 3 | 0 | When I delete through the Console all files from a "folder" in a bucket, that folder is gone too since there is no such thing as directories - the whole path after the bucket is the key.
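A hedged sketch of such a cleanup routine using the google-cloud-storage client (the bucket name is a placeholder):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')  # hypothetical bucket

for blob in bucket.list_blobs():
    if blob.name.endswith('/'):  # placeholder "folder" objects
        blob.delete()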
However when I move (copy & delete method) programmatically these files through REST API, the folder remains, empty. I must therefore write additional logic to check for these and remove explicitly.
Isn't that a bug in the REST API handling ? I had expected the same behavior regardless of the method used. | Empty 'folder' not removed in GCS | 0.53705 | 0 | 1 | 1,252 |
47,321,964 | 2017-11-16T05:01:00.000 | 1 | 0 | 0 | 0 | python,build,sdk,oculus | 48,023,152 | 1 | false | 1 | 1 | I was having the exact same problem. Same output and everything.
After tracing the issue down for hours, I finally found my problem to be that my PATH did not include the java bin directory. The error was being caused because the keytool.exe program could not be reached.
On Windows 10, right click on This PC, select Advanced System Settings, select Environment Variables. In the System variables window on the lower half of the screen, find the Path variable. Ensure the java bin directory is in the list.
In my case I had to add:
C:\Program Files\Java\jdk1.8.0_112\bin
The problem was not immediately corrected. I still had Android Studio open. I shut down the program and restarted the computer. After reboot and opening Android Studio again, everything started to build!
Hope this helps somebody else. | 1 | 0 | 0 | I am trying to develop a native gear VR application in android studio using oculus native mobile SDK version 1.9.0. On running the VR samples enclosed within the SDK, I ran into the error mentioned below and the build failed.
Traceback (most recent call last):
File "C:\ovr_sdk_mobile_1.9.0\bin\scripts\build\ovrbuild_keystore.py", line 86, in
genDebugKeystore()
File "C:\ovr_sdk_mobile_1.9.0\bin\scripts\build\ovrbuild_keystore.py", line 84, in genDebugKeystore
debug_props['storepass'], debug_props['keypass'], replace=False)
File "C:\ovr_sdk_mobile_1.9.0\bin\scripts\build\ovrbuild_keystore.py", line 71, in create_keystore
return execfn(cmd)
File "C:\ovr_sdk_mobile_1.9.0\bin\scripts\build\ovrbuild_keystore.py", line 82, in
create_keystore(lambda x: ovrbuild.call(x),
File "ovrbuild.py", line 169, in call
gradleTask = "clean" if command_options.should_clean else "assembleDebug" if command_options.is_debug_build else "assembleRelease"
NameError: global name 'command_options' is not defined
:VrSamples:Native:VrTemplate:Projects:Android:genDebugKeystore FAILED
FAILURE: Build failed with an exception.
* Where:
Script 'C:\ovr_sdk_mobile_1.9.0\VrApp.gradle' line: 314
* What went wrong:
Execution failed for task ':VrSamples:Native:VrTemplate:Projects:Android:genDebugKeystore'.
Process 'command 'C:\ovr_sdk_mobile_1.9.0/bin/scripts/build/ovrbuild_keystore.py.bat'' finished with non-zero exit value 1
Could anyone of you help me to solve this error?
This error is also present in oculus SDK version 1.7.0. | Build error in oculus native mobile sdk version 1.9.0 | 0.197375 | 0 | 0 | 330 |
47,322,558 | 2017-11-16T05:53:00.000 | 0 | 0 | 1 | 0 | python-3.x,pyinstaller | 47,362,941 | 1 | true | 0 | 0 | It seems that the problem happens when you install both the 32-bit and 64-bit versions of PyInstaller, and PyInstaller then fails to select which version of the dependencies is required for the current build. The problem in my situation was VCRUNTIME140.dll. I couldn't find a way to replace the vcruntime140.dll, but I found a workaround: add the correct file manually to the C:\Users\<User>\AppData\Roaming\pyinstaller directory and rebuild with PyInstaller; it will then be replaced with the newly copied one. This fixes the issue temporarily, and the directory should not be deleted. | 1 | 0 | 0 | I want to create a 32bit executable app from my script to run on Windows 10 with X86 or X64 architectures. I've generated the X64 version of my script and it worked fine. My host machine is X64 but I installed Python X86 version to generate X86 app. Then I generated the executable with Pyinstaller but when I run the executable it throws the following error:
C:\Users\Name\Appdata\local\Temp_MEI51162\VCRUNTIME140.dll is
either not designed to run on Windows or it contains an error...
and in the console I see this error:
Error loading Python DLL
'C:\Users\Name\AppData\Local\Temp_MEI51162\python36.dll'.
LoadLibrary:
I've checked the _MEI51162, both VCRUNTIME140.dll and python36.dll is there but the python36.dll has a size of about 1 MB instead of 3 MB. It doesn't matter if I generate the app as a standalone executable or not and still give me the same error. | Pyinstaller missing dll files | 1.2 | 0 | 0 | 2,932 |
47,324,077 | 2017-11-16T07:39:00.000 | 26 | 0 | 1 | 0 | python,r,rstudio,spyder | 47,341,200 | 1 | false | 0 | 0 | (Spyder maintainer here) There's no function similar to view() in Spyder. To view the contents of a Dataframe, you need to double-click on it in the Variable Explorer. | 1 | 16 | 1 | I have been using R Studio for quite some time, and I find View() function very helpful for viewing my datasets.
Is there a similar View() counterpart in Spyder? | Viewing dataframes in Spyder using a command in its console | 1 | 0 | 0 | 18,421 |
47,324,511 | 2017-11-16T08:07:00.000 | 0 | 0 | 0 | 0 | c#,python,cntk | 47,371,621 | 3 | false | 0 | 0 | Checked that CNTKLib provides those learners in the CPUOnly package.
Nesterov is missing there but is present in Python.
There is a difference when creating the trainer object with a CNTKLib learner function vs. a Learner class.
If a Learner class is used, net parameters are provided as an IList; this can be obtained using netout.parameter().
If CNTKLib is used, parameters are provided as a ParameterVector; build the ParameterVector while building the network and provide it while creating the Trainer object.
ParameterVector pv = new ParameterVector();
pv.Add(weightParameter);
pv.Add(biasParameter);
Thanks everyone for your answers. | 2 | 0 | 1 | I am using C# CNTK 2.2.0 API for training.
I have installed Nuget package CNTK.CPUOnly and CNTK.GPU.
I am looking for following learners in C#.
1. AdaDelta
2. Adam
3. AdaGrad
4. Nesterov
Looks like Python supports these learners but C#
package is not showing them.
I can see only SGD and SGDMomentum learners in C# there.
Any thoughts, how to get and set other learners in C#.
Do I need to install any additional package to get these learners?
Appreciate your help. | Learners in CNTK C# API | 0 | 0 | 0 | 447 |
47,324,511 | 2017-11-16T08:07:00.000 | 0 | 0 | 0 | 0 | c#,python,cntk | 47,324,718 | 3 | false | 0 | 0 | Download the NCCL 2 app to configure it in C#: see www.nvidia.com, or google "NCCL download". | 2 | 0 | 1 | I am using C# CNTK 2.2.0 API for training.
I have installed Nuget package CNTK.CPUOnly and CNTK.GPU.
I am looking for following learners in C#.
1. AdaDelta
2. Adam
3. AdaGrad
4. Nesterov
Looks like Python supports these learners but C#
package is not showing them.
I can see only SGD and SGDMomentum learners in C# there.
Any thoughts, how to get and set other learners in C#.
Do I need to install any additional package to get these learners?
Appreciate your help. | Learners in CNTK C# API | 0 | 0 | 0 | 447 |
47,324,895 | 2017-11-16T08:30:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x,python-2.7,django-models | 48,651,402 | 1 | true | 1 | 0 | I have solved this problem by editing the installation script.
If it is not a fresh installation (in other words, if the user is updating), the loaddata step is not run.
I think this is the only solution, rather than using migrations. | 1 | 0 | 0 | I'm deploying a Django application as a deb file. The user is installing it using dpkg. When there is an update the user installs it using dpkg and the application is updated.
In every install process, the user loads the default data from fixtures automatically.
Consider s/he changed the default admin password. When s/he updates the deb package the password is set to default.
I have tried to check if there is an old version already installed in the system, so I can skip the loaddata step.
However the solution I provided above is not a good one. Does Django provide a mechanism or option for this? | How to prevent fixtures to update the DB in Django? | 1.2 | 0 | 0 | 106
47,327,413 | 2017-11-16T10:36:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,pypy | 47,327,550 | 1 | false | 0 | 0 | Are you using PyPy3?
If you use PyPy it will execute your code with a Python 2.7 interpreter. And if you install a library for Python 3 it does not mean that the library is installed for Python 2 too. You need to specify which interpreter you are installing the library for (e.g. using pip or pip3). | 1 | 0 | 0 | I am trying to run my code written in Python 3 with pypy, but I got an error:
ImportError: No module named 'osgeo'
but running it with python3 test.py works (but slow) .
My installed version of python is 3.5.2
pypy comes with its own Python version, 3.5.3
Is there any way I could tell pypy to use my own version of Python, and will it resolve the problem? | How to run pypy with an existing installation of Python | 0 | 0 | 0 | 526
47,328,170 | 2017-11-16T11:11:00.000 | 1 | 0 | 1 | 0 | python,visual-studio-code | 47,328,462 | 1 | true | 0 | 0 | Guess I have found a solution: I just copied the folder from .extensions into C:\Program Files\VS Code ....extensions. Now they are showing twice in the extension manager, but they are working. Can someone provide an explanation for this? | 1 | 1 | 0 | Hi, I have installed Visual Studio Code. I can't download plugins from the internal extension manager; however, I am able to install them using files downloaded from the marketplace. After installation they show up in installed extensions (Python and Code-Runner) but don't work.
For example, not a single shortcut from Code-Runner works. I tried to reinstall but it didn't help. Can someone please advise on this? | Visual studio code extension not working | 1.2 | 0 | 0 | 15,922
47,331,033 | 2017-11-16T13:33:00.000 | 0 | 0 | 0 | 0 | python-2.7,selenium-chromedriver | 47,506,732 | 1 | true | 0 | 0 | Implicit_wait was set to 10 mins, which caused this behaviour. Reducing the value gave expected results. | 1 | 1 | 0 | Using Python 2.7.12, Selenium 3.4.3, Chrome Version 62.0.3202.94 (Official Build) (32-bit).
While trying to ensure the presence of an element as follows, no exception is raised when x is not present:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as WDW
from selenium.webdriver.support import expected_conditions as EC
try:
WDW(driver, 20).until(EC.presence_of_element_located((By.XPATH, x)))
except Exception as e:
print(e)
2nd option:
try:
we = driver.find_element(By.XPATH, x)
except Exception as e:
print(e)
Is there an issue with the syntax or a logical issue? | Python Selenium 3.4.3 do not raise timeouts, and goes to hung state | 1.2 | 0 | 1 | 30
47,334,254 | 2017-11-16T16:08:00.000 | 0 | 0 | 0 | 0 | python-2.7,pyomo,coin-or-cbc | 47,336,205 | 2 | false | 0 | 0 | You could try to set the bound gap tolerance such that it will accept the other answer. I'm surprised that the solver status is coming back with error if there is a feasible solution found. Could you print out the whole results object? | 1 | 0 | 1 | I am trying to solve Optimisation problem with pyomo (Pyomo 5.3 (CPython 2.7.13 on Linux 3.10.0-514.26.2.el7.x86_64)) using CBC solver (Version: 2.9.8) and specifying a time limit in solver of 60 sec. The solver is getting a feasible solution (-1415.8392) but apparently not yet optimal (-1415.84) as you can see below.
After the time limit ends, the model seemingly exits with an error code. I want to print or get the values of all variables of the feasible solution found by CBC within the specified time limit. Or is there any other way I can make it exit and print the feasible solution once the model reaches 99% of the optimal value?
The error code is posted below.
Cbc0004I Integer solution of -1415.8392 found after 357760 iterations and 29278 nodes (47.87 seconds)
Cbc0010I After 30000 nodes, 6350 on tree, -1415.8392 best solution, best possible -1415.84 (48.87 seconds)
Cbc0010I After 31000 nodes, 6619 on tree, -1415.8392 best solution, best possible -1415.84 (50.73 seconds)
Cbc0010I After 32000 nodes, 6984 on tree, -1415.8392 best solution, best possible -1415.84 (52.49 seconds)
Cbc0010I After 33000 nodes, 7384 on tree, -1415.8392 best solution, best possible -1415.84 (54.31 seconds)
Cbc0010I After 34000 nodes, 7419 on tree, -1415.8392 best solution, best possible -1415.84 (55.73 seconds)
Cbc0010I After 35000 nodes, 7824 on tree, -1415.8392 best solution, best possible -1415.84 (57.37 seconds)
Traceback (most recent call last):
File "model_final.py", line 392, in
solver.solve(model, timelimit = 60*1, tee=True)
File "/home/aditya/0r/lib/python2.7/site-packages/pyomo/opt/base/solvers.py", line 655, in solve
default_variable_value=self._default_variable_value)
File "/home/aditya/0r/lib/python2.7/site-packages/pyomo/core/base/PyomoModel.py", line 242, in load_from
% str(results.solver.status))
ValueError: Cannot load a SolverResults object with bad status: error
When I run the model generated by pyomo manually, using the same command-line parameters as pyomo (/usr/bin/cbc -sec 60 -printingOptions all -import /tmp/tmpJK1ieR.pyomo.lp -import -stat=1 -solve -solu /tmp/tmpJK1ieR.pyomo.soln), it seems to exit normally and also writes the solution as shown below.
Cbc0010I After 35000 nodes, 7824 on tree, -1415.8392 best solution, best possible -1415.84 (57.06 seconds)
Cbc0038I Full problem 205 rows 289 columns, reduced to 30 rows 52 columns
Cbc0010I After 36000 nodes, 8250 on tree, -1415.8392 best solution, best possible -1415.84 (58.73 seconds)
Cbc0020I Exiting on maximum time
Cbc0005I Partial search - best objective -1415.8392 (best possible -1415.84), took 464553 iterations and 36788 nodes (60.11 seconds)
Cbc0032I Strong branching done 15558 times (38451 iterations), fathomed 350 nodes and fixed 2076 variables
Cbc0035I Maximum depth 203, 5019 variables fixed on reduced cost
Cbc0038I Probing was tried 31933 times and created 138506 cuts of which 0 were active after adding rounds of cuts (4.431 seconds)
Cbc0038I Gomory was tried 30898 times and created 99534 cuts of which 0 were active after adding rounds of cuts (4.855 seconds)
Cbc0038I Knapsack was tried 30898 times and created 12926 cuts of which 0 were active after adding rounds of cuts (8.271 seconds)
Cbc0038I Clique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Cbc0038I MixedIntegerRounding2 was tried 30898 times and created 13413 cuts of which 0 were active after adding rounds of cuts (3.652 seconds)
Cbc0038I FlowCover was tried 100 times and created 4 cuts of which 0 were active after adding rounds of cuts (0.019 seconds)
Cbc0038I TwoMirCuts was tried 30898 times and created 15292 cuts of which 0 were active after adding rounds of cuts (2.415 seconds)
Cbc0038I Stored from first was tried 30898 times and created 15734 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Cbc0012I Integer solution of -1411.9992 found by Reduced search after 467825 iterations and 36838 nodes (60.12 seconds)
Cbc0020I Exiting on maximum time
Cbc0005I Partial search - best objective -1411.9992 (best possible -1415.4522), took 467825 iterations and 36838 nodes (60.12 seconds)
Cbc0032I Strong branching done 476 times (1776 iterations), fathomed 1 nodes and fixed 18 variables
Cbc0035I Maximum depth 21, 39 variables fixed on reduced cost
Cuts at root node changed objective from -1484.12 to -1415.45
Probing was tried 133 times and created 894 cuts of which 32 were active after adding rounds of cuts (0.060 seconds)
Gomory was tried 133 times and created 1642 cuts of which 0 were active after adding rounds of cuts (0.047 seconds)
Knapsack was tried 133 times and created 224 cuts of which 0 were active after adding rounds of cuts (0.083 seconds)
Clique was tried 100 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.001 seconds)
MixedIntegerRounding2 was tried 133 times and created 163 cuts of which 0 were active after adding rounds of cuts (0.034 seconds)
FlowCover was tried 100 times and created 5 cuts of which 0 were active after adding rounds of cuts (0.026 seconds)
TwoMirCuts was tried 133 times and created 472 cuts of which 0 were active after adding rounds of cuts (0.021 seconds)
ImplicationCuts was tried 25 times and created 41 cuts of which 0 were active after adding rounds of cuts (0.003 seconds)
Result - Stopped on time limit
Objective value: -1411.99922848
Lower bound: -1415.452
Gap: 0.00
Enumerated nodes: 36838
Total iterations: 467825
Time (CPU seconds): 60.13
Time (Wallclock seconds): 60.98
Total time (CPU seconds): 60.13 (Wallclock seconds): 61.01
The top few lines of the CBC solution file are:
Stopped on time - objective value -1411.99922848
0 c_e_x1454_ 0 0
1 c_e_x1455_ 0 0
2 c_e_x1456_ 0 0
3 c_e_x1457_ 0 0
4 c_e_x1458_ 0 0
5 c_e_x1459_ 0 0
6 c_e_x1460_ 0 0
7 c_e_x1461_ 0 0
8 c_e_x1462_ 0 0
Can anyone tell me how I can get these values without generating an error?
Thanks in advance. | Error reported while running pyomo optimization with cbc solver and using timelimit | 0 | 0 | 0 | 1,241 |
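A sketch of setting the gap tolerance the answer suggests, through Pyomo. The option names mirror CBC's command-line flags (-sec, -ratioGap) and may need checking against your CBC build; model is the Pyomo model from the question:

from pyomo.opt import SolverFactory

solver = SolverFactory('cbc')
solver.options['seconds'] = 60      # time limit, equivalent to -sec 60
solver.options['ratioGap'] = 0.01   # accept a solution within 1% of the best bound
results = solver.solve(model, tee=True, load_solutions=False)
if len(results.solution) > 0:       # CBC returned a feasible solution
    model.solutions.load_from(results)  # load it even if the status is not "optimal"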
47,335,080 | 2017-11-16T16:47:00.000 | 1 | 1 | 0 | 0 | python,windows,python-import | 47,335,172 | 2 | false | 0 | 0 | A common way to fix this quickly is to use
sys.path.append("path/to/module")
Be careful with '\\' if you are using Windows.
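With a Windows path like the questioner's, a raw string sidesteps the backslash problem:

import sys
sys.path.append(r"H:\Myname\Python\myscripts")  # raw string: backslashes stay literal
import myscripts  # should now resolve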
Not exactly answering your question, but this could fix the problem. | 1 | 1 | 0 | I am running Python 3.6.1 on a Windows 7 machine. I have some scripts in H:\Myname\Python\myscripts.
I have created the user variable PYTHONPATH and set it to the folder above. I do not have admin rights so I can create a user variable only.
In that folder, I have a myscripts.py file with some functions.
If I try to access it by running import myscripts from a file stored elsewhere, it doesn't work: I get a ModuleNotFoundError.
If I print sys.path, the folder I have set in PYTHONPATH is not there.
Why? What am I doing wrong? Isn't sys.path supposed to show PYTHONPATH?
Does the fact that H is a network drive have anything to do with it?
I can't seem to find anything on the web for this problem in relation to Windows (but lots for Unix systems). | Cannot import module - why does sys.path not find PYTHONPATH? | 0.099668 | 0 | 0 | 2,246 |
47,335,731 | 2017-11-16T17:25:00.000 | 3 | 0 | 0 | 1 | python,docker,virtual-machine,jupyter,gcp | 47,336,914 | 1 | true | 1 | 0 | I started the Jupyter server within the docker image with
What was the docker command you ran? A common gotcha here would be not mapping a host port to a container port.
For example, if you did this:
docker run -p 8888 jupyter/notebook
Then docker would assign a random host port mapping to port 8888 in the container. In this case, you can see what port was mapped by running docker ps. The port will be much higher than 8888 though, so you won't be able to reach jupyter because your firewall will block the traffic.
What you probably want to do is go ahead and map a host port like so:
docker run -p 8888:8888 jupyter/notebook
This should map any traffic reaching the GCE host on port 8888 to port 8888 in your jupyter container. | 1 | 2 | 0 | I'm running an Ubuntu 16.04 VM on Google Compute Engine. I've created a static IP address <my_static_ip_address> and my firewall settings allow tcp:80-8888.
I started the Jupyter server within the docker image with
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root
and got this URL
http://0.0.0.0:8888/?token=8b26c453d278eae1da71b80f26a4ef8ea06734e5c636d897
I'm not able to access it from an external browser at http://<my_static_ip_address>:8888
What am I missing? | Running Jupyter notebook in docker image on Google Cloud | 1.2 | 0 | 0 | 1,502 |
47,337,197 | 2017-11-16T18:57:00.000 | 3 | 0 | 0 | 0 | python-3.x,tkinter,tkinter-canvas | 47,337,880 | 1 | true | 0 | 1 | You can't pause the update of the canvas without pausing the entire GUI.
A simple solution would be for you to not draw to the canvas until you're ready for the update. Instead of calling canvas commands, push those commands onto a queue. When you're ready to refresh the display, iterate over the commands and run them.
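A minimal sketch of that queueing idea (widget sizes and drawing calls are illustrative):

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()

pending = []  # drawing commands queued instead of executed immediately

def draw_later(func, *args, **kwargs):
    pending.append((func, args, kwargs))

def refresh():
    while pending:  # run every queued command in order
        func, args, kwargs = pending.pop(0)
        func(*args, **kwargs)

draw_later(canvas.create_line, 0, 0, 400, 300, fill="red")
draw_later(canvas.create_oval, 50, 50, 150, 150)
root.after(1000, refresh)  # flush the queue only when you decide to update
root.mainloop()

If the commands come from another thread, as in the question, push them onto a queue.Queue and drain it from the Tk thread, since tkinter widgets are not thread-safe.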
You could also do your own double-buffering, where you have two canvases. The one you are actively drawing would be behind the visible one. When you are ready to display the results, swap the stacking order of the canvases. | 1 | 0 | 0 | I'm writing a graphics program in Python and I would like to know how to make a Canvas update only on-demand; that is, stop a canvas from updating every run of the event loop and instead update only when I tell it to.
I want to do this because in my program I have a separate thread that reads graphics data from standard input to prevent blocking the event loop (given that there's no reliable, portable way to poll standard input in Python, and polling sucks anyway), but I want the screen to be updated only at intervals of a certain amount of time, not whenever the separate thread starts reading input. | How to make tkinter Canvas update only on-demand? | 1.2 | 0 | 0 | 467 |
47,337,762 | 2017-11-16T19:37:00.000 | 4 | 1 | 0 | 0 | python,python-3.x,hash | 47,337,902 | 1 | true | 0 | 0 | You probably don't want to provide the actual keys and other private information at all, hashed or otherwise. Besides being a security issue, it probably violates all sorts of agreements you implicitly made with the provider.
Anyone using the application should be able to get their own key in the same manner you did. You can (and should) provide instructions for how to do this.
Your application should expect the key information in a well-documented format, usually a configuration file, that the user must provide once they have obtained the keys in question. You should document the exact format in which you expect to read the keys as well.
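A rough sketch using the standard library's configparser; the file name, section, and key names here are hypothetical:

# keys.ini -- ship only a documented template, never real values:
# [reddit]
# client_id = YOUR_CLIENT_ID_HERE
# client_secret = YOUR_CLIENT_SECRET_HERE
# user_agent = myapp/0.1 by yourusername

import configparser

config = configparser.ConfigParser()
config.read("keys.ini")
client_id = config["reddit"]["client_id"]
client_secret = config["reddit"]["client_secret"]
user_agent = config["reddit"]["user_agent"]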
Your documentation could also include a dummy file with empty or obvious placeholder values for the user to fill in. It may be overkill, but I would recommend making sure that any placeholders you use are guaranteed not to be valid keys. This will go a long way to avoiding any accidents with the agreements you could violate otherwise. | 1 | 4 | 0 | I would like to release the code for an application I made in Python to the public, just as a pastebin link. However, the app needs a client ID, secret and user agent to access the API (for reddit in this case).
How can I store the key strings in the Python code without giving everyone free rein on my API access? Maybe a hashing function or similar? | How to release a Python application that uses API keys to the public? | 1.2 | 0 | 1 | 51
47,340,391 | 2017-11-16T22:45:00.000 | 2 | 1 | 0 | 0 | python,praw | 47,340,599 | 1 | true | 0 | 0 | Didn't find it in the docs, but a friend who knows a bit of raw helped me out.
Use for message in r.inbox.messages() (where r is an instance of praw.Reddit) to get the messages, as in the sketch below.
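A short sketch, assuming r is an already-authenticated praw.Reddit instance:

for message in r.inbox.messages():
    print(message.subject)  # the message title
    print(message.body)     # the message content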
Use message.body to get the content, and message.subject to get the title. | 1 | 1 | 0 | I'm creating a Reddit bot with praw, and I want to have the bot do things based on private messages sent to it. I searched the documentation for a bit, but couldn't find anything on reading private messages, only sending them. I want to get both the title and the content of the message. So how can I do this? | How to read private messages with PRAW? | 1.2 | 0 | 1 | 527
47,341,608 | 2017-11-17T00:53:00.000 | 0 | 0 | 1 | 0 | python-2.7,windows-7,anaconda,pythonpath | 47,356,934 | 1 | false | 0 | 0 | Okay, thanks to my instructor I found something that works for me.
Open a command prompt (just a regular Windows cmd) and add a path like this; where you see J:\pythonTest, substitute your own path. This will put it into your PYTHONPATH.
SETX PYTHONPATH "%PYTHONPATH%;J:\pythonTest"
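To verify it took effect, open a new prompt (SETX only affects new processes, not the current one) and run:

import sys
print([p for p in sys.path if "pythonTest" in p])  # the added folder should appear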
One thing I noticed: because I had been trying all sorts of ways to solve this problem, my user variables and system variables held a few duplicates of my custom path. I would advise removing them once you run the above command. I suggest removing them because I discovered some conflicts and my modules wouldn't run; removing the duplicates seemed to fix this. | 1 | 0 | 0 | This question has been asked, but oddly enough the answers are out of date or not sufficient.
I have installed anaconda, I have setup an environment running py2.7. I launch a win7 cmd prompt and type activate python27 (my custom python environment). I then import sys and then sys.path to see what my python paths are. They all point to variations of E:\Users\myname\Anaconda3\....
I want to add a custom path to this list so that it becomes permanent. Doing sys.path.append is not good enough, as it's not permanent. I have been reading that adding PYTHONPATH to the environment variables is not done any more, and that I should be adding my custom paths to the PATH system variables.
So could someone advise where and how I can assign my custom paths? | assigning permanent custom paths to anaconda | 0 | 0 | 0 | 55
47,345,119 | 2017-11-17T07:08:00.000 | 1 | 1 | 0 | 0 | python-3.x,jpeg,steganography | 47,356,538 | 1 | true | 0 | 0 | The best place to hide a message in JPEG is in the blocks that extend beyond the edge of the image (unless the image dimensions are multiples of 8). | 1 | 0 | 0 | I am working on writing a program that embeds data in a photo (steganography). This worked completely fine when I was using PNG's lossless compression; however, I would like this to work with the JPEG file format. Previously I would read in my image file and replace the last two bits in every color channel with part of my message, then compress it and output it. With lossy compression, however, I assume I cannot embed a message pre-compression because, without a doubt, the message would become unreadable.
My question is: do I need to embed the message post compression/encoding, somewhere in the SOS YCbCr data? If not there, then where must I store the message? Thank you in advance. | Hiding a message in a JPEG image | 1.2 | 0 | 0 | 704
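For contrast, a toy sketch of the pre-compression LSB scheme the question describes, using Pillow. This survives a lossless format like PNG but not JPEG re-encoding; the function and variable names are illustrative:

from PIL import Image

def embed(in_path, out_path, two_bit_chunks):
    # two_bit_chunks: iterable of ints in 0..3, one chunk per modified pixel
    img = Image.open(in_path).convert("RGB")
    chunks = iter(two_bit_chunks)
    out = []
    for r, g, b in img.getdata():
        chunk = next(chunks, None)
        if chunk is not None:
            r = (r & ~0b11) | chunk  # overwrite the two lowest bits of the red channel
        out.append((r, g, b))
    img.putdata(out)
    img.save(out_path, "PNG")  # lossless, so the embedded bits survive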
47,346,321 | 2017-11-17T08:33:00.000 | 0 | 1 | 0 | 0 | python,r,logistic-regression | 59,307,014 | 2 | false | 0 | 0 | Try the 'VGAM' package. There is a function called vglm.
Example: vglm(Var2 ~ factor(Var1), family = cumulative(parallel = FALSE), data = data) fits a generalized ordered model.
cumulative(parallel = TRUE) will fit a proportional odds model.
parallel = FALSE is the default. | 1 | 3 | 1 | I would like to fit a generalized ordered logit model to some data I have. I first tried the ordered logit model from R's MASS package, but it seems that the proportional odds assumption is violated by the data: not all independent variables exert the same effect across all categories of the dependent variable. Hence I am a bit stuck. I saw that I could use the generalized ordered logit model instead, but could not find how to use it. Indeed, I cannot find any package in either R or Python that could help me with that.
If someone has any hints on packages I could use, it would be of great help!
Thank you very much! | generalized ordered logit in R or python | 0 | 0 | 0 | 1,939 |
47,349,740 | 2017-11-17T11:35:00.000 | 0 | 0 | 0 | 0 | python,io,stream | 47,747,060 | 1 | true | 0 | 0 | Actually, in my case this was happening because something else was unexpectedly getting into the data after it was pickled and before it was sent to the other end, so I was trying to unpickle garbage data. That's why it contained an unidentified character. | 1 | 1 | 1 | I am creating a tuple of two objects and pickling it. When I try to unpickle it on the server side, after passing it through sockets, it gives the above-mentioned error. | _pickle.UnpicklingError: invalid load key, 'xff' | 1.2 | 0 | 0 | 862
47,350,977 | 2017-11-17T12:45:00.000 | 2 | 0 | 0 | 0 | python,elasticsearch,bulkinsert,rollback | 47,353,564 | 1 | true | 0 | 0 | There is no such thing; a bulk query contains a list of operations to insert, update, or delete documents.
Elasticsearch is not a database (no ACID, no transactions, etc.), so you must create a rollback feature yourself; a rough sketch follows below. | 1 | 0 | 0 | Use case: Right now I am using the Python Elasticsearch API for bulk inserts.
I have a try/except wrapped around my bulk insert code, but whenever an exception occurs, Elasticsearch doesn't roll back the inserted documents.
It also doesn't try to insert the remaining documents beyond the corrupted one that throws the exception. | Rollback on elasticsearch bulk insert failure | 1.2 | 0 | 0 | 1,957
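A hand-rolled sketch of that idea with the Python client: tolerate per-document failures, then delete whatever was inserted. Here actions and the index/type names are assumptions, and each action is assumed to carry an explicit _id:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
ok, errors = helpers.bulk(es, actions, raise_on_error=False)
if errors:  # manual "rollback": remove every document the bulk call created
    for action in actions:
        es.delete(index="my-index", doc_type="my-type",
                  id=action["_id"], ignore=[404])  # 404 = was never inserted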
47,351,601 | 2017-11-17T13:20:00.000 | 1 | 0 | 0 | 0 | python,r,stan | 47,353,749 | 1 | true | 0 | 0 | As far as I know, there are not any Python libraries that do what the brms and rstanarm R packages do, i.e. come with precompiled Stan models that allow users to draw from the predictive distribution in the high-level language. There are things like prophet and survivalstan, but those are for a pretty niche set of models. | 1 | 0 | 1 | I'm trying to move from using Stan in R (I've mainly used the brms package) to pystan, but I'm having trouble locating any methods in pystan that will give me predictive posterior distributions for new data. In R, I've been using brms::posterior_linpred() to do this. Does anyone know if this is possible in pystan?
If not, can it be bolted on without too much trouble? | how to obtain predictive posteriors in pystan? | 1.2 | 0 | 0 | 174
47,353,479 | 2017-11-17T14:55:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,tensorflow,deep-learning,data-science | 47,355,283 | 1 | false | 0 | 0 | Very briefly, you have three sets of data: training, testing, and validation. These inputs should be distinct; any duplication between data sets will reduce some desirable aspect of your final model.
Training data is used just the way you think: these are the inputs that are matched with the expected outputs (after filtering through your model). The model weights are adjusted until you've converged at the greatest accuracy you can manage.
Testing data is used during training to determine when training is done. If the testing error is still dropping, then your model is still learning things; when it starts to rise, you're over-fitting.
Validation data is the final check on whether your training phase got to a useful point. This data set is entirely independent of the previous two, and is representative of not-previously-seen, real-world input.
The short answer is no. If you fold your testing and validation data into your training set (which is what I think you're trying to ask) and re-train the model, you've changed the training conditions, and you have no evidence that the resulting training is still valid. How do you know how many epochs you need to train with this larger data set? When you're ready to deploy the trained model, how do you know your new accuracy? | 0 | 0 | 1 | I would like to know if the validation set influences in any way the training of the network, or if its sole purpose is to monitor the training progress (with the tensorflow for poets project in question).
If it doesn't, is it okay/good practice to reduce the validation and testing sets to zero once I know the accuracy for my dataset? That way I can get a model trained with the most data I can provide. | Is the validation set used for anything besides monitoring progress on the Tensorflow for poets google code lab's project? | 0 | 0 | 0 | 31
47,357,097 | 2017-11-17T18:32:00.000 | 1 | 0 | 1 | 0 | python-3.x,first-class-functions | 47,357,178 | 2 | false | 0 | 0 | Usually, when we say "language X has first-class functions", we actually mean that "language X treats functions as first-class" (just like it does with other types). It usually means that you can pass functions as arguments, store them in arrays, etc.
I don't see any reason why a particular language (that treated functions as first-class) would also have second-class functions (i.e. those that couldn't be passed as arguments), but I can't say whether Python does or doesn't. | 1 | 3 | 0 | When people say that Python, for instance, has first class functions, it makes it sound like Python also has second class functions. However, I've never yet (knowingly) encountered a second class function in Python. When we say this, does it really mean "All Python functions are first class?" Or is there an example of a second class function in Python? | The vocabulary of first class functions | 0.099668 | 0 | 0 | 202
47,357,097 | 2017-11-17T18:32:00.000 | 6 | 0 | 1 | 0 | python-3.x,first-class-functions | 47,357,250 | 2 | true | 0 | 0 | First, to clarify the nomenclature.
The term "first class functions" means that it is "first class" in the type system: that is, that functions themselves are values. In languages where functions are not first class, functions are defined as a relationship between values, which makes them "second class".
Put another way, "first class" functions implies that you can use a function as a value, which means (among other things) that you can pass functions into functions. You couldn't do this in Java before Java 8 (reflection doesn't count), so that's an example of a programming language only having "second class" functions. In order to "pass a function", you had to define a type where that function could live (as a method), and then pass an instance of that type.
So, in Python, all functions are first class, because all functions can be used as a value.
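A tiny illustration (names are arbitrary):

def shout(text):
    return text.upper()

speak = shout            # a function stored in a variable, like any other value
print(speak("hi"))       # HI

def apply(func, value):  # a higher-order function: it takes a function as input
    return func(value)

print(apply(shout, "hi"))  # HI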
Now, there is another concept which you might be getting confused with. There is also a concept of a "higher order function". A higher order function is a function that takes a function as an argument. Not all functions in Pythons are higher-order, because not all functions take another function as an argument. But even the functions that are not higher-order are first class. (It's a square/rectangle thing.) | 2 | 3 | 0 | When people say that Python, for instance, has first class functions, it makes it sound like Python also has second class functions. However, I've never yet (knowingly) encountered a second class function in Python. When we say this, does it really mean "All Python functions are first class?" Or is there an example of a second class function in Python? | The vocabulary of first class functions | 1.2 | 0 | 0 | 202 |
47,359,677 | 2017-11-17T21:38:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,cx-freeze | 47,395,605 | 1 | false | 0 | 0 | If you are using cx_Freeze 5.1, there was a bug that resulted in this error. It has been corrected in the source, so if you check out the latest source and compile it yourself, it should work for you. If not, let me know! | 1 | 1 | 0 | When I run my compiled program (cx_Freeze) it says __init__ line 31: no module named codecs.
I have Python 3.6; does anybody know why it says that, and how I might fix it?
I have seen the other questions here on StackOverflow, but they don't seem to solve the problem for me and possibly others too.
Thanks in advance! | Python cx_Freeze __init__ "no module named codecs" | 0.197375 | 0 | 0 | 541 |