Dataset schema (column: dtype, observed range or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
48,084,973
2018-01-03T20:53:00.000
1
0
1
0
python
48,085,172
2
false
0
0
In Python, input() returns a string, and a string can be treated as a sequence of characters. When a sequence is converted to a set, only the unique elements are kept. So when your code turns the string of digits into a set, only the distinct digits remain, and len() then returns the number of unique digits.
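A tiny sketch of that check, assuming Python 3's input() (the prompt and variable names are made up for the example):

guess = input("Enter a 4-digit guess: ")
# set(guess) keeps one copy of each character, so its length is the
# number of distinct digits typed; comparing it to 4 rejects repeats.
if guess.isdigit() and len(guess) == 4 and len(set(guess)) == 4:
    print("valid: four unique digits")
else:
    print("invalid: need exactly four different digits")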
1
2
0
Quick question: I was coding a small program in Python and was looking for a way to write a condition that only accepts an input of 4 numbers if that input has the length I want and all the digits in it are unique. After searching the web for a bit, I found several people giving the same solution, which in my case was len(set(input)) == size (the size of my number). It works well, but I don't understand how it really works and I'd like to know. I've also tried removing the len from it and it still worked, so I have little idea what effect each part of this small piece of code has. So even though it lets me create a condition that makes sure each of the 4 numbers is unique, I would love it if someone could explain how it works. Thanks in advance. PS: If anyone wonders, I'm practicing by making my own version of Cows & Bulls: given a random number generated with random.sample, the user has to try to guess what the number is. The conditions I mentioned are applied to the user's input.
Explanation for len(set())
0.099668
0
0
11,652
48,085,374
2018-01-03T21:28:00.000
5
0
0
0
javascript,python,plotly
48,085,729
1
true
1
0
I found the solution. The following code hides the rangeslider shown below the candlestick chart: xaxis : {fixedrange: true, rangeslider: {visible: false}}
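When the chart is built from Python rather than hand-written JavaScript, the same option can be set through the figure layout; a small sketch assuming a reasonably recent plotly version (the OHLC values are made up):

import plotly.graph_objects as go

fig = go.Figure(go.Candlestick(
    x=["2018-01-02", "2018-01-03", "2018-01-04"],
    open=[10, 11, 12], high=[12, 13, 14],
    low=[9, 10, 11], close=[11, 12, 13],
))
# Candlestick charts show a rangeslider by default; hide it here.
fig.update_layout(xaxis_rangeslider_visible=False)
fig.show()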
1
1
0
By default, candlestick and ohlc charts display a rangeslider. It seems like there's no parameter to change the setting. So I've looked at javascript code in html file but was not able to find a clue to remove it. Can someone explain how to remove the rangeslider from candlestick chart?
Plotly - How to remove the rangeslider
1.2
0
0
2,723
48,088,137
2018-01-04T03:10:00.000
0
0
0
0
python,excel,pivot-table,openpyxl
52,813,212
3
false
0
0
Worksheets("SheetName").PivotTables("PivotTableName").PivotCache().Refresh()
1
0
0
I have a workbook that has several tabs with pivot tables. I can put data on the tab that holds the data for each pivot. My problem is that I don't know how to refresh the pivot tables. I assume I would need to cycle through each sheet, check whether there is a pivot table, and refresh it; I just can't find how to do that. All of the examples I find use win32 options, but I'm using a Mac and Linux. I would like to achieve this with openpyxl if possible.
Refresh Excel Pivot Tables
0
1
0
3,004
48,089,780
2018-01-04T06:25:00.000
0
0
0
0
python,sockets,tcp,tcpclient
48,090,538
2
false
0
0
The problem is not specific to Python; it is caused by the underlying socket machinery, which does its best to hide low-level network events from the program. The best I can imagine would be to try a higher-level protocol handshake (send a hello string and set a timeout for receiving the answer), but that cannot distinguish between the following cases: (1) the connection is queued on the peer and still not accepted; (2) the connection has been accepted, but for some other reason the server could not process it in the allocated time (this only matters if the timeout is very short); (3) congestion on the machines (including the sender) or the network added a delay greater than the timeout. My advice is simply not to worry about such low-level details. Since problems can arise on the server side after the connection has been accepted, you will have to deal with possible higher-level protocol errors, timeouts or connection loss anyway. Just treat a timeout after the connection has been accepted and a timeout waiting for the connection to be accepted as the same thing.
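A rough sketch of the handshake idea mentioned above; the greeting protocol, host and port are invented for illustration, and a real server would have to play along:

import socket

sock = socket.create_connection(("example.com", 9000), timeout=5)
try:
    sock.sendall(b"HELLO\n")      # application-level greeting
    reply = sock.recv(1024)       # blocks at most 5 s (timeout set above)
    if not reply:
        print("server closed the connection")
    else:
        print("server answered:", reply)
except socket.timeout:
    # Cannot tell whether the connection was never accepted or the
    # server accepted it but was too slow to answer.
    print("no reply within the timeout")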
1
1
0
In Python, a TCP connect() returns success even though the connect request may still be sitting in the accept queue at the server end. Is there any way for the client to know whether accept() has happened or the connection is still queued at the server?
How to know the status of tcp connect in python?
0
0
1
1,179
48,092,110
2018-01-04T09:20:00.000
1
0
1
0
python,python-3.x,exe,executable
48,123,001
1
false
0
1
They should all work; py2exe and py2app are the ones that don't. If they don't work, then you haven't used them properly, particularly cx_Freeze, which requires you to tune things manually. Here are some debug steps that will help you resolve your error: When freezing for the first time, don't hide the console; hiding it hides any errors that occur, and you need to see those. When building, look for any errors that appear at the end; these may give you a clue as to how to solve the problem. If there are errors, the terminal will appear briefly and then close, so run the executable from a terminal and the window will stay open, allowing you to read the messages. This can be done as follows: C:\Location>cd \Of\App then C:\Location\Of\App>NameOfExecutable (cd is a command that stands for change directory, and this assumes your .exe is called NameOfExecutable; under PowerShell you would run ./NameOfExecutable instead). See what errors appear. If you get an error that says a package is missing, the includes option often does the trick (remember to include the top-level package as well as the exact one reported missing). If you use external files or images, remember to use include_files to add them along as well; note that you can add runtimes (DLLs) this way too. Attempt a build folder before going for an msi: get the build folder working first, then go for the msi.
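A minimal cx_Freeze setup.py sketch along the lines described above; the file names, the pygame dependency and the music file are assumptions about the asker's project:

from cx_Freeze import setup, Executable

build_options = {
    "packages": ["pygame"],           # include the top-level package explicitly
    "include_files": ["music.ogg"],   # bundle external assets next to the exe
}

setup(
    name="Tetris",
    version="1.0",
    options={"build_exe": build_options},
    # base=None keeps the console visible so errors can be read;
    # switch to base="Win32GUI" only once everything works.
    executables=[Executable("tetris.py", base=None)],
)

Build with python setup.py build and get the build folder working before trying bdist_msi.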
1
1
0
I am kind a stuck. I have tried to make Tetris game with music to .exe, but I really don't know how to do it. Can someone give some tips, how to make .py to .exe? I have tried Pyinstaller, cx_Freeze and none of them work.
Converting Python 3.6.1 Tetris game with music, to exe
0.197375
0
0
88
48,092,543
2018-01-04T09:45:00.000
1
0
0
0
python,html,python-2.7,gtk,webkitgtk
48,123,502
1
true
0
1
Done. I used Flask, Socket.IO and GTK to make an app that shows an HTML file in full screen with Python variables in it.
1
0
0
for a project i have to make a GUI for python. It should show some variables (temp etc). But I don't know how I can pass variables trough GTK to the window. Any answers appreciated :) some info: I am using a RPi3, but that's nothing which is important, or is it? I have a 7" display attached, on which the program should be seen in full screen. In the end, there should stand sth like temp, humidity, water etc I don't exactly know which GTK i use, but it's in python. So I think it's pygtk Thanks for reading, Fabian
Give python variables to GTK+ with an html file
1.2
0
0
104
48,093,726
2018-01-04T10:55:00.000
1
0
1
1
python,cygwin
48,096,083
1
true
0
0
Just make sure you are in admin mode, i.e. right-click on Cygwin and select Run as administrator. Then install your package specifically using pip3 (for Python 3), i.e. pip3 install your_package. To get an updated version, run pip3 install --upgrade your_package.
1
1
0
I'm working on a windows 7 and using Cygwin for unix-like functionality. I can write and run Python scripts fine from the Cygwin console, and the installation of Python packages using pip installis successful and the installed package appears under pip list. However, if I try to run a script that imports these packages, for example the 'aloe' package, I get the error "no such module named 'aloe'". I have discovered that the packages are being installed to c:\python27\lib\site-packages, i.e. the computer's general list of python packages, and not to /usr/lib/python3.6/site-packages, i.e. the list of python packages available within Cygwin. I don't know how to rectify this though. If I try to specify the install location using easy_install-3.6 aloe I get the error [Errno 13] Permission denied: '/usr/lib/python3.6/site-packages/test-easy-install-7592.write-test'. In desperation also tried directly copying the 'aloe' directory to the Cygwin Python packages directory using cmd with cp -r \python27\lib\site-packages\aloe \cygwin\lib\python3.6\site-packages and the move was successful, but the problem persists and when I check in the Cygwin console using ls /usr/lib/python3.6/site-packages I can't see 'aloe'. I have admin rights to the computer in general (sudo is not available in Cygwin anyway) so really can't figure out what the problem is. Any help would be greatly appreciated. Thanks.
permission denied when installing python packages through cygwin
1.2
0
0
3,417
48,095,790
2018-01-04T12:53:00.000
1
0
1
0
python,pycharm,pylint
48,096,798
3
false
0
0
Under Run -> Edit Configurations... you can see the configuration on the right side. At the bottom is a section called Before launch: Activate tool window, where you can hit the green plus button and configure pylint to be executed before the run.
1
0
0
I'm trying to integrate pylint with PyCharm, but I want it to work as a live tool. What do I mean? I want it to detect errors and check code standards as I write the code. Until now, I have done it by clicking "Tools --> External Tools --> pylint". Is there an option to do this, or maybe to call pylint when I run the script? Thanks.
Integrate Pylint with PyCharm
0.066568
0
0
2,617
48,099,019
2018-01-04T15:59:00.000
1
0
0
0
python,flask,server-side,client-side,webassembly
48,099,607
2
false
1
0
You can use Brython in the browser; it's pretty spiffy. Full DOM manipulation from Python, fully compatible with libraries written in pure Python. Really neat stuff. As for the server side, if you want to keep it all Python, you'll need to use something like Flask, Bottle, CherryPy, aiohttp, etc. If you find yourself struggling, maybe try starting out by writing a simple socket-based microservice; you'll then be able to either farm requests out to it from any other server or incorporate the code into your (Python) server code. Good luck!
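For the server-side half, a minimal Flask sketch of the round trip described above; the route name and the "processing" step are placeholders:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    # Data posted from the HTML form (or from fetch/AJAX) arrives here,
    # can be run through any Python library, and the result goes back.
    text = request.form.get("user_input", "")
    result = text.upper()          # stand-in for the real computation
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(debug=True)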
1
0
0
So basically, what I want to do is have a user input some data in an HTML form or something (on client end). Have that data be carried over to a server, where the data is put through some python code and the processed result is sent back to the client. I know, I could use javascript to do this on the user side itself, but I want to experiment a bit and make use of some libraries like tensorflow, matplotlib and so on. Also, is there some way, you know like Web Assembly to run python code on the client side. Like maybe, send data from server or have it fed by the user, and on some virtual environment type setup and processed ?? Note: I know flask exists and I've tried it, but I can't see the same flexibility as you know regular python code. Thanks in advance
Running Python on Server and sending results to Client and vice versa
0.099668
0
0
400
48,101,387
2018-01-04T18:25:00.000
0
0
1
0
python,linux
48,101,465
3
false
0
0
Not sure if you would consider switching editors, but you can customize Vim so it handles auto-indentation and easily converts tabs to spaces or vice versa using the vimrc file. Vim also has a convenient fix-indentation command for your issue.
1
1
0
I edit all my Python files on a remote Centos server using nano. This seems to work well. Occasionally I need to make a small change that changes the indention over the entire file. Is there an easy way to convert all the one space indents to 4 space etc? I have looked at PythonTidy.py but it seems to change too many things.
Python Tidy Code
0
0
0
386
48,109,228
2018-01-05T07:42:00.000
1
0
1
0
python,normalize
48,109,501
3
false
0
0
You can use sklearn.preprocessing for a lot of types of pre-processing tasks including normalization.
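A small scikit-learn sketch using the 0.25-0.50 target range from the question:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

values = np.array([0.92323, 0.7232322, 0.93832, 0.4344433]).reshape(-1, 1)
# feature_range sets the interval the data is rescaled into.
scaler = MinMaxScaler(feature_range=(0.25, 0.50))
scaled = scaler.fit_transform(values)
print(scaled.ravel())   # all values now lie between 0.25 and 0.50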
1
6
1
I am new to Python; is there any function that can normalize data? For example, I have a list of values in the range 0 - 1, e.g. [0.92323, 0.7232322, 0.93832, 0.4344433], and I want to normalize all those values to the range 0.25 - 0.50. Thank you.
Normalizing data to certain range of values
0.066568
0
0
18,367
48,111,026
2018-01-05T09:51:00.000
0
1
0
1
python,python-2.7,raspberry-pi
49,945,959
1
false
0
0
The lib is not compatible with Raspberry Pi as the architectures are different. You need to find an ARM based version of that driver if you want to use it on the Pi.
1
1
0
Here I am trying to import 'libEpsonFiscalDriver.so' file in raspberry pi using python 2.7. Here are my steps in python >>>import ctypes >>>ctypes.cdll.LoadLibrary('/home/pi/odoo/my_module/escpos/lib/libEpsonFiscalDriver.so') Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary return self._dlltype(name) File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__ self._handle = _dlopen(self._name, mode) OSError: /home/pi/odoo/my_module/escpos/lib/libEpsonFiscalDriver.so: cannot open shared object file: No such file or directory So here I am getting this error. Extra Information: Header Information of libEpsonFiscalDriver.so file. readelf -h libEpsonFiscalDriver.so ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: DYN (Shared object file) Machine: Intel 80386 Version: 0x1 Entry point address: 0x5de0 Start of program headers: 52 (bytes into file) Start of section headers: 125176 (bytes into file) Flags: 0x0 Size of this header: 52 (bytes) Size of program headers: 32 (bytes) Number of program headers: 7 Size of section headers: 40 (bytes) Number of section headers: 29 Section header string table index: 26 For more, I have tested this same code with my other system with Ubuntu installed having Intel Processor and it works fine there. As, lib header listed Machine as Intel 80386. I dought that this lib will only work with Intel Architecture. Is it the thing or am I missing something? Any help will be more than appreciated! Thank you.
cannot open shared object file: No such file or directory in raspberry pi with python
0
0
0
1,606
48,115,753
2018-01-05T14:37:00.000
5
0
0
0
python,redis,iterator,redis-py
48,117,192
1
true
0
0
Scanning the Sorted Set with an iterator does not guarantee any order. Use ZREVRANGEBYSCORE for that.
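A minimal redis-py sketch of that approach; the key name and page size are assumptions:

import redis

r = redis.StrictRedis(host="localhost", port=6379)
# ZREVRANGEBYSCORE returns members from highest to lowest score;
# paginate with start/num instead of relying on zscan_iter's order.
page = r.zrevrangebyscore("priorities", max="+inf", min="-inf",
                          start=0, num=100, withscores=True)
for member, score in page:
    print(member, score)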
1
1
1
I have a sorted set in Redis with priorities starting from 0 up to 3. I would like to traverse this sorted set from highest to lowest priority using the python iterator zscan_iter. However, using zscan_iter gives me the items starting from 0. Is there a way to reverse the order? Unfortunately, reverse() only works on iterators and not on python generators. I see two solutions: Use negative priorities (so instead of 3 use -3) Paginate through slices of keys using ZREVRANGEBYSCORE, however I would prefer to use an iterator. Are there any other ways of doing this?
How to traverse a sorted set in Redis in reverse order using zscan?
1.2
0
0
884
48,116,484
2018-01-05T15:21:00.000
0
0
0
0
python,neural-network,keras,kernel,conv-neural-network
48,118,131
2
false
0
0
The actual kernel values are learned during the learning process, that's why you only need to set the number of kernels and their size. What might be confusing is that the learned kernel values actually mimic things like Gabor and edge detection filters. These are generic to many computer vision applications, but instead of being engineered manually, they are learned from a big classification dataset (like ImageNet). Also the kernel values are part of a feature hierarchy that can be used directly as features for a variety of computer vision problems. In that terms they are also generic.
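A small Keras sketch illustrating the point: only the number and size of kernels are specified, and the learned kernel values can be inspected afterwards. The input shape and layer sizes are arbitrary choices for the example:

from tensorflow.keras import layers, models

model = models.Sequential([
    # 32 kernels of size 3x3; their values start random and are learned.
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

weights, biases = model.layers[0].get_weights()
print(weights.shape)   # (3, 3, 3, 32): one 3x3x3 kernel per output channel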
1
1
1
I am new to Convolutional Neural Networks. I am reading some tutorial and testing some sample codes using Keras. To add a convolution layer, basically I just need to specify the number of kernels and the size of the kernel. My question is what each kernel looks like? Are they generic to all computer vision applications?
Does convolution kernel need to be designed in CNN (Convolutional Neural Networks)?
0
0
0
605
48,116,487
2018-01-05T15:21:00.000
0
0
0
0
python,selenium-webdriver,python-behave
48,119,458
1
true
1
0
Somehow, somewhere, you have to put in code to say the things are different. Add another variable to the statement (I fill in "field" on "page" with "text") Use two statements (I fill in homepage "field" with "text" && I fill in property "field" with "text") Add logic to the method to dynamically figure out what page it is looking at to know what field is which All have advantages and drawbacks. You'll need to decide which works best for your situation. The problem with having a generic statement is that it can be applied to too many areas and eventually something has to give.
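A sketch of the first option, adding a page qualifier to the step; the decorator text is illustrative and context.pages is assumed to be set up elsewhere (e.g. in environment.py):

from behave import when

@when('I fill in "{field}" on the "{page}" page with "{text}"')
def step_fill_field(context, field, page, text):
    # Dispatch to the right page object based on the extra "page" variable.
    page_object = context.pages[page]      # e.g. "homepage" or "property"
    page_object.fill(field, text)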
1
0
0
I have two scenarios for two pages (homepage and property page). And in these scenarios I have the same steps (I fill in "field" with "text"). I need to implement one for homepage and one for property page. But behave sees only one implementation. How could I do the different implementation for the same named steps? I dont want to do hardcode and call the same action differently. My stack: behave + python + selenium + pageObject
How to create several implementation for the same named steps (behave)
1.2
0
0
75
48,118,849
2018-01-05T17:54:00.000
1
0
0
1
python-2.7,google-app-engine,windows-10,gcloud
48,203,280
2
true
1
0
I was executing the project locally in PyCharm, which is why I got the above error (the google.appengine.api import error). Basically it has to be executed on the dev server, which can be started from your terminal: 1) Go to the project path (the root folder of all files in the project, where the app.yaml file is located, e.g. appengine). 2) Start the server using $ dev_appserver.py app.yaml; it starts the server at localhost port 8000 by default. 3) Depending on the handler and the path specified (like '/' or '/testjob'), try localhost:8000/ or localhost:8000/testjob. 4) All the logs written in the program will be shown in the terminal. For logging, try the 'logging' module, and make sure to set the logging level, otherwise basic-level logs are not shown.
1
0
0
I want to use google appengine.api in local machine. i have installed google cloud SDK and started it ,the authentication is successful . I have executed $dev_appserver.py app.yaml at the project path which has started a google app engine server at localhost:8000 . when i want to execute the program it gives an error message " ImportError: No module named appengine.api " I appreciate your help .
ImportError: No module named appengine.api -windows 10,local env, python 2.7
1.2
0
0
536
48,119,073
2018-01-05T18:10:00.000
1
0
1
0
python,sympy
48,119,110
2
false
0
0
Not really; you're asking to alter the default display of integers to be something outside the standard set of choices. Regardless of the implementation details, this will boil down to you writing a function that accepts an integer and produces the exponent form you want to see, as a character string.
1
4
0
I would like to set the output in sympy for expression 2**3 * 2**4 = to 2**7 instead of 128. Is there an easy way?
set output in sympy for 2**3 * 2**4 = 2**7 instead of 128
0.099668
0
0
77
48,119,447
2018-01-05T18:38:00.000
0
1
1
1
python,atom-editor
48,119,925
1
false
0
0
I use Atom for Python. The best terminal plugin I have found is PlatformIO. I also have a plugin named script that allows me to run Python scripts from the editor window; load script and it will appear under the Packages menu, and it defines a shortcut for 'run script' (Cmd-I) to speed up the process. Lately I've been running Python scripts using the Hydrogen plugin, which allows inline plotting along with running your script; answers and plots appear in the editor pane. It's very nice if you have to plot.
1
0
0
Right now I'm running Python in Atom with Platformio as my terminal. First off, what's the keyboard shortcut to run the file? Secondly, is there a better terminal package out there? I'd like more features such as right clicking and selecting run the file or something that makes running the file easier (maybe a run button). I switch between files quite frequently so I'd like an alternative other than using the up arrow to run previous command.
Python- Alternative terminal in Atom text editor
0
0
0
685
48,122,283
2018-01-05T22:38:00.000
1
0
0
0
python,apache,flask,virtualhost
48,142,441
2
false
1
0
It seems that having SERVER_NAME set in the os environment was causing this problem in conjunction with subdomains in blueprint registration. I removed SERVER_NAME from /etc/apache2/envvars and the subdomain logic and it worked.
1
2
0
I have a Flask app and want it to work for www.domain-a.net and www.domain-b.net behind Apache + WSGI. I can get it to work for one or the other, but can't find a way to get it to work for both. It seems that the domain which registers first is the only one that works. Preferably this would work by having two Apache VirtualHosts set up to use the same WSGI config. I can get that part to work. But Flask just returns 404 for everything sent from the second VirtualHost.
how do I get Flask blueprints to work with multiple domains?
0.099668
0
0
392
48,124,257
2018-01-06T04:26:00.000
12
0
1
0
java,python,concurrency,hashmap
53,954,750
1
false
0
0
In Python, due to the Global Interpreter Lock (GIL), a process can only execute one python bytecode at a time, regardless of how many threads it has. This means inserting/updating/reading a key to a dictionary is thread-safe, which is what people usually mean by saying a dictionary's get/put are "atomic".† But this means that, exactly as you suspected, multiple threads trying to update different keys to the same dictionary will not be concurrent. Java, of course, doesn't have the GIL issue so multiple threads can update different keys in the ConcurrentHashMap at the same time. This doesn't always happen; it's just possible. The ConcurrentHashMap implementation shards the set of keys and locks each shard. Each shard can be read concurrently but only one thread can write at a time. †: Sometimes it's pointed out objects with a __hash__ method written in Python will require multiple Python bytecodes, so then puts and gets are not atomic per se; however simple puts and gets will still be thread-safe in the sense that they will not cause crashes or garbage values, although you can still have race-conditions.
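A small illustration of the point about the GIL: several threads updating different keys of one dict are thread-safe, but the bytecodes still run one at a time, so the updates are interleaved rather than truly parallel:

import threading

shared = {}

def writer(key, n):
    for i in range(n):
        shared[key] = i        # a single dict store is atomic under the GIL

threads = [threading.Thread(target=writer, args=(k, 100000)) for k in "abcd"]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)   # {'a': 99999, 'b': 99999, 'c': 99999, 'd': 99999}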
1
6
0
I know that dictionaries are atomic in python but (correct me if I'm wrong) that means only a single addition to the dictionary can be completed at a time. According to the Java page for the concurrentHashMap : "The table is internally partitioned to try to permit the indicated number of concurrent updates without contention." Wouldn't solely atomic insertion in python not compare in speed to the Java implementation EDIT: When I wrote "that means only a single addition to the dictionary can be completed at a time," I meant to say that the state of the dictionary is going to be discretized based on the individual dictionary addition
Python equivalent of concurrentHashMap from Java?
1
0
0
4,025
48,125,175
2018-01-06T07:21:00.000
0
0
0
0
python,amazon-web-services,boto,boto3
48,161,874
2
false
1
0
We just need to use a YAML decoder in place of the JSON decoder, as follows: with open(template_path) as yaml_data: template = yaml.load(yaml_data); template_body = yaml.dump(template). Don't forget to import yaml.
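A short boto3 sketch of that idea; the stack and file names are placeholders. Note that CloudFormation accepts the raw YAML text directly as TemplateBody, so the yaml.safe_load call is only an optional local sanity check, and it will choke on short-form intrinsic tags such as !Ref:

import boto3
import yaml

with open("template.yaml") as f:
    template_body = f.read()

# Optional check that the file parses as plain YAML before sending it.
yaml.safe_load(template_body)

cf = boto3.client("cloudformation")
cf.create_stack(StackName="my-stack", TemplateBody=template_body)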
1
0
0
I am using boto to deploy json templates in AWS. My question is Can i use the same boto library to deploy Templates in Yaml as well.When i tried i got the error no json could be decoded. If there is a way then please share an example Thanks.
Can we use Boto to deploy Yaml templates?
0
0
0
1,498
48,126,563
2018-01-06T10:48:00.000
3
0
0
1
python,pandas,pip
48,126,571
3
false
0
0
Please change your Python path from IDLE.
1
1
0
After installing Python I have it located in two different directories. One is visible, which is the install path I chose during installation, and the second one is hidden (C:Users\username....). I now have a problem using pandas because the program is working in the hidden directory. Is it possible to change the working directory somehow? I am using Python IDLE to run my Python scripts. I get this message when I run my scripts with pandas: module 'pandas' from 'C:\Users\Alex\AppData\Local\Programs\Python\Python35\lib\site-packages\pandas\__init__.py
Python in two different directories( one specified and one hidden)
0.197375
0
0
655
48,126,663
2018-01-06T11:01:00.000
4
0
1
0
python,struct,pickle
48,126,846
3
false
0
0
Because they do quite different things. You can serialize objects in different ways: text serialization formats: here the serialized object is human readable. Common formats are json and xml, or csv for lists of simple rows. But except for very simply objects (arrays, dictionaries and simple data), you need to define a marshalling protocol to save the relevant part of an object and then rebuild the object from its serialized version binary serialization formats: pickle is intended to automatically serialize an object, and allow it to be automatically deserialized back provided the class is available at deserialization time. Its major drawback is that it is only useable from Python struct is the opposite: you must specifically decide what you save and in what format. And at deserialization time, you also have to know what format was used. But it can be used to exchange binary streams with any other language, provided the format is clearly defined TL/DR the question is not about performance (even if some conversions could be slightly more resource consuming than others) but more on what it the objective of serialization: pickle for local backups, struct for external exchanges
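A tiny side-by-side illustration of the two modules:

import pickle
import struct

record = {"id": 7, "name": "widget", "price": 9.99}

# pickle: serializes (almost) any Python object automatically,
# but the format is Python-specific.
blob = pickle.dumps(record)
print(pickle.loads(blob) == record)        # True

# struct: you choose an explicit binary layout (big-endian int,
# 6-byte string, double), which any language can read back.
packed = struct.pack(">i6sd", record["id"], record["name"].encode(), record["price"])
i, name, price = struct.unpack(">i6sd", packed)
print(i, name.decode(), price)             # 7 widget 9.99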
1
4
0
I am unable to understand the use of pickle module vs. the struct module. Both convert a Python object into a byte stream. It seems easier to use pickle than to do the packing and unpacking of the struct module. So when is pickle used and when is struct used?
Why not use pickle instead of struct?
0.26052
0
0
2,731
48,126,838
2018-01-06T11:25:00.000
-2
0
0
0
python,computational-geometry,intersection,plane
48,129,094
3
false
0
0
This is solved by elementary vector computation and is not substantial/useful enough to deserve a library implementation. Work out the math with Numpy. The line direction is given by the cross product of the two normal vectors (A, B, C), and it suffices to find a single point, say the intersection of the two given planes and the plane orthogonal to the line direction and through the origin (by solving a 3x3 system). Computation will of course fail for parallel planes, and be numerically unstable for nearly parallel ones, but I don't think there's anything you can do.
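A minimal NumPy sketch of the construction described above, for planes given as (A, B, C, D) with Ax + By + Cz + D = 0; a second point on the line is simply point + direction:

import numpy as np

def plane_intersection(p1, p2):
    """Return a point on the intersection line and the line direction."""
    n1, d1 = np.array(p1[:3], float), p1[3]
    n2, d2 = np.array(p2[:3], float), p2[3]
    direction = np.cross(n1, n2)            # line direction
    if np.allclose(direction, 0):
        raise ValueError("planes are parallel")
    # Intersect the two planes with the plane through the origin
    # orthogonal to the line: solve a 3x3 system for one point.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    return np.linalg.solve(A, b), direction

point, direction = plane_intersection((1, 0, 0, -1), (0, 1, 0, -2))
print(point, direction)   # point (1, 2, 0) on the line, direction (0, 0, 1)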
1
8
1
I need to calculate intersection of two planes in form of AX+BY+CZ+D=0 and get a line in form of two (x,y,z) points. I know how to do the math, but I want to avoid inventing a bicycle and use something effective and tested. Is there any library which already implements this? Tried to search opencv and google, but with no success.
Plane-plane intersection in python
-0.132549
0
0
8,472
48,126,924
2018-01-06T11:34:00.000
1
0
0
0
python,neural-network,deep-learning
48,127,014
1
true
0
0
Learning rate: that code does not use a learning rate, or rather it uses a learning rate of 1. Lines 48,49 just add the adjustment (gradient) value without any rate. That is an uncommon choice and can sometimes work in practice, though in general it is not advised. Technically this is the simplest version of gradient descent, and there are many elaborations to simple gradient descent, most of which involve a learning rate. I won't elaborate further as this comment answers the question that you are asking, but you should know that building an optimizer is a big area of research, and there are lots of ways to do this more elaborately or with more sophistication (in the hopes of better performance). Error threshold: instead of stopping optimization when an error threshold is reached, this algorithm stops after a fixed number of iterations (60,000). That is a common choice, particularly when using something like stochastic gradient descent (again, another big topic). The basic choice here is valid though: instead of optimizing until a performance threshold is reached (error threshold), optimize until a computational budget is reached (60,000 iterations).
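A one-variable sketch of the difference, minimizing f(w) = w**2 by gradient descent; a rate of 1.0 corresponds to "add the raw gradient step", as in the tutorial code being discussed:

def gradient_descent(w, learning_rate, iterations):
    for _ in range(iterations):
        grad = 2 * w                    # derivative of w**2
        w = w - learning_rate * grad    # rate 1.0 means "apply the raw gradient"
    return w

print(gradient_descent(3.0, 0.1, 60))   # converges smoothly toward 0
print(gradient_descent(3.0, 1.0, 60))   # for this f, oscillates between +3 and -3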
1
1
1
I'm new to coding and I've been guided to start with Python because it is good for a beginner and very versatile. I've been watching some tutorials online on how to create a neural network with Python, however I've just got stuck in this example. I've seen and worked out tutorials where you have the learning rate and the error threshold which are constant variables. For example learning rate = 0.1 and error threshold = 0.1, however in this particular example there are no constant learning rate and error threshold variables that I can see. Can someone explain why the learning rate and error threshold aren't being used?
Trying to understand neural networks
1.2
0
0
90
48,128,705
2018-01-06T15:17:00.000
0
0
0
0
python-3.x,postgresql,flask-sqlalchemy
48,283,838
1
true
1
0
It turned out that the problem has nothing to do with the session, but with the filter() method: # Necessary import for passing a string to the filter() function: from sqlalchemy import text # Solution or workaround: model_list = db.session.query(MyModel).filter(text('foreign_key = ' + str(this_id))).all() I could not figure out the problem with filter(MyModel.foreign_id == this_id), but that's another problem. I think this way is better than executing raw SQL.
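If the textual filter is kept, it may be safer to bind the value instead of concatenating it into the SQL string; a sketch along the lines of the workaround above, reusing its names:

from sqlalchemy import text

# Same textual filter, but with a bound parameter instead of string
# concatenation (avoids quoting and injection problems).
model_list = (
    db.session.query(MyModel)
    .filter(text("foreign_key = :fid"))
    .params(fid=this_id)
    .all()
)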
1
1
0
I'm rather new to the whole ORM topic, and I've already searched forums and docs. The question is about a flask application with SQLAlchemy as ORM for the PostgreSQL. The __init__.py contains the following line: db = SQLAlchemy() the created object is referenced in the other files to access the DB. There is a save function for the model: def save(self): db.session.add(self) db.session.commit() and also an update function: def update(self): for var_name in self.__dict__.keys(): if var_name is not ('_sa_instance_state' or 'id' or 'foreign_id'): # Workaround for JSON update problem flag_modified(self, var_name) db.session.merge(self) db.session.commit() The problem occurs when I'm trying to save a new object. The save function writes it to DB, it's visible when querying the DB directly (psql, etc.), but a following ORM query like: model_list = db.session.query(MyModel).filter(MyModel.foreign_id == this_id).all() gives an empty response. A call of the update function does work as expected, new data is visible when requesting with the ORM. I'm always using the same session object for example this: <sqlalchemy.orm.scoping.scoped_session object at 0x7f0cff68fda0> If the application is restarted everything works fine until a new object was created and tried to get with the ORM. An unhandsome workaround is using raw SQL like: model_list = db.session.execute('SELECT * FROM models_table WHERE foreign_id = ' + str(this_id)) which gives a ResultProxy with latest data like this: <sqlalchemy.engine.result.ResultProxy object at 0x7f0cf74d0390> I think my problem is a misunderstanding of the session. Can anyone help me?
SQLAlchemy scoped_session is not getting latest data from DB
1.2
1
0
766
48,131,867
2018-01-06T20:59:00.000
5
0
1
0
python,automation,pycharm,python-import
48,132,854
1
true
0
0
Do File | Settings | Tools | Startup Tasks (or Ctrl-Alt-S). Then: Build, Execution, Deployment > Console > Python Console This gives you a dialogue with an edit box Starting script. Put your import code there. That will run every time you open a new console.
1
2
0
I have just started using PyCharm because I want to get more into Python (have more experience with other languages), and have run into a conundrum that there has to be a solution for. Some custom function I created relies on a module being imported in a particularly verbose way, so I don't want to have to copy-paste the same two lines every time I want to code, as opposed to simply typing import numpy as np. So is there a way to automatically run some code on my environment's start? Particularly to import some modules? I looked around PyCharm's settings, and have found nothing.
Automatically run code on Pycharm environment start
1.2
0
0
1,548
48,139,087
2018-01-07T16:17:00.000
-1
0
0
0
python,qt,datetime,pyqt,pyqt5
48,147,393
2
false
0
1
The short answer is no, you cannot. Here's the long answer. You cannot set the date to 00.00.0000 00:00, because it makes no damn sense. By default, the widget shows an unknown date-time as 12:00 AM on the first day of the first month of year 0000. If you really badly want to show "00.00.00 0:00", then do this: dt.setDisplayFormat( "00.00.00 0:00" ). What's the problem with this? It shows 00.00.00 0:00 for any date-time you set, so you'll have to set a sensible format once the user starts interacting with your widget or you want to set a valid date-time. If you want behaviour the standard Qt widgets don't provide, subclass them and implement your own code.
1
1
0
I need to set empty value to datetimeedit widget (01.01.00 0:00 is not really what I want to see). Using dtwidget.setDateTime(QDateTime()) or dtwidget.clear() is not affected. How can I do this?
How to clear value in QDateTimeEdit?
-0.099668
0
0
1,423
48,139,998
2018-01-07T17:57:00.000
1
1
1
0
python,ipython
48,140,853
1
false
0
0
Do not use %load, instead type %run. From their docstrings: %load Load code into the current frontend. %run Run the named file inside IPython as a program.
1
2
0
I'm running ipython through an ssh terminal. %load in ipython prints the code to the screen. Is there a way of loading a script while surpressing the verbosity (preferably with few keystrokes)?
How to %load a script in ipython without printing code to the screen?
0.197375
0
0
616
48,140,133
2018-01-07T18:12:00.000
3
0
0
0
python,tensorflow,machine-learning
48,141,696
1
false
0
0
A gradient descent minimizer will typically try to find the minimum loss irrespective of the sign of the loss surface. It sounds like you either want to a) assign a large loss to encourage your model to pick something else or b) assign a fifth no-action category.
1
1
1
I am implementing a reinforcement agent that takes actions based on classes. so it can take action 1 or 2 or 3 or 4. So my question is can I use negative loss in tensorflow to stop it from outputting an action. Example: Let's say the agent outputs action 1 I want to very strongly dissuade it from taking action 1 in that situation again. but there is not a known action that it should have taken instead. So I can't just choose a different action to make it learn that. So my question is: does tensorflow gradient computation handle negative values for loss. And if it does will it work the way I describe?
use negative loss in tensorflow
0.53705
0
0
1,647
48,140,731
2018-01-07T19:23:00.000
0
0
1
0
python,conda,gurobi
48,141,155
1
false
0
0
I suggest the following: conda list to see what's in the environment, and how it was installed. If gurobi was installed as a conda package then use conda uninstall gurobi, if using pip then use pip uninstall gurobi.
1
0
0
I am using gurobi with anaconda and python, and recently downloaded an updated version (7.5.2) to update the already installed 7.0.2 version that is on my computer. I can find the conda command line prompts to remove the conda installed package, but cannot find any code anywhere to remove the 7.0.2 version from my computer so that it doesn't keep referencing 7.0.2 when I try to install new version via conda again. If anyone can offer any advice it would be much appreciated! Interesting that there is nothing in the gurobi docs that states how to do this.
How to remove old version of gurobi in Windows
0
0
0
1,726
48,142,421
2018-01-07T23:11:00.000
1
0
0
0
python,tensorflow,machine-learning,neural-network,mouse
48,142,657
2
false
0
0
Using a neural network for this task seems like total overkill to me. It sounds like you have 2 inputs, each of which has an X and a Y coordinate, with one representing the initial position and one the final position of the mouse. There are a ton of ways to introduce randomness into this path in hard-to-detect ways that are much simpler than a neural network: use some random number generator with your own quirky logic in if statements to determine the amounts to add, within some range, to the current value on each iteration. You could use a neural net, but again I think it's overkill. As far as what type of neural net to use, I would just start with an out-of-the-box one from a tutorial online (TensorFlow and sklearn are what I've used) and tweak the hyperparameters to see what makes the model better.
2
3
1
I am trying to make a function that takes in 2 (x,y) coordinates and returns an array of coordinates for where the mouse should be every 0.05 seconds (or about that)... (The whole aim of the program is to have random/not-smooth mouse movement to mimic a human's mouse movement.) E.G. INPUTS: [(800,600), (300,400)] OUTPUTS: [(800,600),(780,580),...,(300,400)] I would like to know which type of neural network I should use to get the correct outputs. I'm kind of new to the subject of neural nets but I have a decent understanding and will research the suggestions given. A pointer in the right direction and some links would be greatly appreciated!
What Neural Network to use for AI Mouse Movement
0.099668
0
0
2,533
48,142,421
2018-01-07T23:11:00.000
0
0
0
0
python,tensorflow,machine-learning,neural-network,mouse
48,143,029
2
false
0
0
If you're trying to predict where the mouse should be based on the position of something else, a simple ANN will do the job. Are you trying to automate tasks, like have a script that controls a game? A Recurrent Neural Network like an LSTM or GRU will take history into account. I know you're doing this as a learning exercise, but if you're just trying to smooth mouse movement a simple interpolation algorithm might work.
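A sketch of the simple-interpolation route this answer (and the previous one) hints at: linear interpolation between the two points with a little random jitter added to each intermediate step. The step count and noise scale are arbitrary choices:

import numpy as np

def mouse_path(start, end, steps=20, jitter=5.0):
    start, end = np.array(start, float), np.array(end, float)
    t = np.linspace(0.0, 1.0, steps).reshape(-1, 1)
    path = start + t * (end - start)          # straight line
    noise = np.random.normal(0.0, jitter, path.shape)
    noise[0] = noise[-1] = 0.0                # keep the endpoints exact
    return [tuple(p) for p in path + noise]

print(mouse_path((800, 600), (300, 400)))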
2
3
1
I am trying to make a function that takes in 2 (x,y) coordinates and returns an array of coordinates for where the mouse should be every 0.05 seconds (or about that)... (The whole aim of the program is to have random/not-smooth mouse movement to mimic a human's mouse movement.) E.G. INPUTS: [(800,600), (300,400)] OUTPUTS: [(800,600),(780,580),...,(300,400)] I would like to know which type of neural network I should use to get the correct outputs. I'm kind of new to the subject of neural nets but I have a decent understanding and will research the suggestions given. A pointer in the right direction and some links would be greatly appreciated!
What Neural Network to use for AI Mouse Movement
0
0
0
2,533
48,143,892
2018-01-08T03:30:00.000
0
0
1
0
python,date,datetime
49,190,456
1
false
0
0
Easy: dt = dateutil.parser.parse("2017-12-12T00:00:00+01:00") then timestamp = int(time.mktime(dt.timetuple())). Now you have seconds; I guess you can convert them to milliseconds yourself.
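One caveat with the snippet above: time.mktime() interprets the time tuple as local time, so for a string with an explicit +01:00 offset the result can be shifted. A Python 2.7 sketch that converts through UTC instead:

import calendar
import dateutil.parser

dt = dateutil.parser.parse("2017-12-12T00:00:00+01:00")
# utctimetuple() converts the aware datetime to UTC; timegm() treats
# the tuple as UTC, so the offset is handled correctly.
ms = calendar.timegm(dt.utctimetuple()) * 1000
print(ms + 10500 * 1000)    # the computation from the question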
1
0
0
I have this date string in python "2016-12-12T00:00:00+01:00" How do I convert the said date string with timestamp to timestamp in milliseconds so I can compute use the value so I can compute it? Purpose: to_timestamp_milliseconds("2017-12-12T00:00:00+01:00") + (10500 * 1000)
How to Converting Date String with timezone to timestamp milliseconds in Python 2.7
0
0
0
63
48,145,081
2018-01-08T06:13:00.000
0
0
1
0
autocomplete,sublimetext3,python-3.6,jedi
53,689,784
1
true
0
0
Sorry for the late reply. Reinstalling Sublime and rebooting my Windows 10 machine seems to have fixed this issue. I tried to install jedi again, but in vain, so I reinstalled Sublime Text 3, rebooted Windows 10 and was then able to install the jedi autocomplete package.
1
0
0
I'm trying to install the jedi autocomplete package in Sublime Text 3. Each time I try to install it using Package Control it just doesn't complete installation even after several HOURS!!! It shows me no error messages . The package just doesn't install . I've installed Various Packages but never faced this issue.It just Keeps displaying at the bottom of the Sublime Text as Installing but never completes the installation. How to solve this problem? Is there any other way to install the package?
Can't Install Jedi -Autocomplete in Sublime Text 3
1.2
0
0
534
48,145,777
2018-01-08T07:15:00.000
2
0
0
0
python,nlp,nltk,sentiment-analysis
48,146,211
2
false
0
0
The models used for sentiment analysis are generally the result of a machine-learning process. You can produce your own model by running the model creation on a training set where the sentiments are tagged the way you like, but this is a significant undertaking, especially if you are unfamiliar with the underpinnings. For a quick and dirty fix, maybe just make your code override the sentiment for an individual word, or (somewhat more challenging) figure out how to change its value in the existing model. Though if you can get a hold of the corpus the NLTK maintainers trained their sentiment analysis on and can modify it, that's probably much simpler than figuring out how to change an existing model. If you have a corpus of your own with sentiments for all the words you care about, even better. In general usage, "quick" is not superficially a polarized word -- indeed, "quick and dirty" is often vaguely bad, and a "quick assessment" is worse than a thorough one; while of course in your specific context, a service which delivers quickly will dominantly be a positive thing. There will probably be other words which have a specific polarity in your domain, even though they cannot be assigned a generalized polarity, and vice versa -- some words with a polarity in general usage will be neutral in your domain. Thus, training your own model may well be worth the effort, especially if you are exploring utterances in a very specific register.
2
2
1
I've been working with NLTK in Python for a few days for sentiment analysis and it's a wonderful tool. My only concern is the sentiment it has for the word 'Quick'. Most of the data that I am dealing with has comments about a certain service and MOST refer to the service as being 'Quick' which clearly has Positive sentiments to it. However, NLTK refers to it as being Neutral. I want to know if it's even possible to retrain NLTK to now refer to the Quick adjective as having positive annotations?
Change sentiment of a single word
0.197375
0
0
915
48,145,777
2018-01-08T07:15:00.000
3
0
0
0
python,nlp,nltk,sentiment-analysis
48,154,073
2
true
0
0
I have fixed the problem. Found the vader Lexicon file in AppData\Roaming\nltk_data\sentiment. Going through the file I found that the word Quick wasn't even in it. The format of the file is as following: Token Mean-sentiment StandardDeviation [list of sentiment score collected from 10 people ranging from -4 to 4] I edited the file. Zipped it. Now NLTK refers to Quick as having positive sentiments.
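Instead of (or in addition to) editing the lexicon file on disk, the analyzer's in-memory lexicon can be updated from code; a small sketch with NLTK's VADER, where the score 2.0 is an arbitrary choice:

from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Requires a one-time nltk.download('vader_lexicon').
sia = SentimentIntensityAnalyzer()
# sia.lexicon is a plain dict of word -> valence; add or override entries.
sia.lexicon.update({"quick": 2.0})
print(sia.polarity_scores("The service was quick"))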
2
2
1
I've been working with NLTK in Python for a few days for sentiment analysis and it's a wonderful tool. My only concern is the sentiment it has for the word 'Quick'. Most of the data that I am dealing with has comments about a certain service and MOST refer to the service as being 'Quick' which clearly has Positive sentiments to it. However, NLTK refers to it as being Neutral. I want to know if it's even possible to retrain NLTK to now refer to the Quick adjective as having positive annotations?
Change sentiment of a single word
1.2
0
0
915
48,149,281
2018-01-08T11:21:00.000
0
0
1
0
python-3.x,stanford-nlp,named-entity-recognition
53,168,687
1
true
0
0
Scrape data from sites like Wikipedia, etc., create a scoring model, and then use it for context prediction.
1
1
1
I am working on Stanford NER, My question is regarding ambiguous entities. For example, I have 2 sentences: I love oranges. Orange is my dress code for tomorrow. How can i train these 2 sentences to give out, first orange as Fruit, second orange as Color. Thanks
Ambiguous Entity in stanfors NER
1.2
0
0
154
48,150,348
2018-01-08T12:30:00.000
3
0
1
1
python,continuous-integration,continuous-deployment,endevor,quali-cloudshell
48,150,417
1
true
0
0
Cloudshell-Shell-Core & Cloudshell-Core are not dependent on server version. This means that you can develop your shells with the latest versions of Cloudshell-Shell-Core (and Cloudshell-Core) for all Cloudshell server versions.
1
2
0
I'm developing a CloudShell shell and would like it to run on all CloudShell version >= 8.1. I require both cloudshell-shell and cloudshell-shell-core packages. Which version of both packages should I use?
which version of cloudshell-shell and cloudshell-shell-core should I use?
1.2
0
0
40
48,152,403
2018-01-08T14:32:00.000
1
1
1
0
python,python-2.7,base64
48,152,634
2
false
0
0
That's right: you can still see the readable characters even if you save them to a file opened in binary mode, because they are ASCII characters. Writing with 'wb' does not by itself give you a string of 1s and 0s. What you should do is take the ASCII value of every character and convert it to a binary number.
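A Python 2.7 sketch of both readings of "binary": decoding the base64 back to raw bytes, and rendering those bytes as a string of 0s and 1s:

import base64

encoded = "aW0ganVzdCBhIGJhc2UgNjQgZmlsZQ=="
raw = base64.b64decode(encoded)                 # the original bytes

# Each byte -> its 8-bit representation, e.g. 'i' -> '01101001'.
bits = "".join(format(ord(c), "08b") for c in raw)
print(bits)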
1
3
0
I'm using Python 2.7 (that means that there is no base64.decodebytes()) I need to convert my base64 string, for example aW0ganVzdCBhIGJhc2UgNjQgZmlsZQ== into binary (i.e string of 1's and 0's). I thought to try and write the base 64 string to a file in mode wb and then read it back with rb but even when used wb to write - I still see the original base 64 string when opening the file.. What am I missing? Thanks
Python 2.7 - convert base 64 to binary string
0.099668
0
0
1,624
48,152,513
2018-01-08T14:40:00.000
6
1
1
0
python,python-3.x,python-2.7
48,152,682
1
true
0
0
You cannot "switch the interpreter to python 2.7". You're either using one or the other. Your choices are effectively: Come up with an alternative that doesn't require the pytan module. Modify the pytan module so that it runs under Python 3. Modify your code so that it runs under Python 2. Isolate the code that requires pytan such that you can run it as a subprocess under the python 2 interpreter. There are a number of problems with this solution: It requires people to have two versions of Python installed. It will complicate things like syntax highlighting in your editor. It will complicate testing. It may require some form of IPC (pipes, sockets, files, etc...) between your main code and your python 2 subprocess (this isn't terrible, but it's a chunk of additional complexity that wouldn't be necessary if you could make one of the other options work).
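A rough sketch of option 4, isolating the pytan call in a Python 2 helper script and exchanging JSON over a pipe; the script name, interpreter path and request format are assumptions:

import json
import subprocess

request = {"action": "get_sensors"}

# Run the Python 2 helper under its own interpreter, passing data via stdin/stdout.
proc = subprocess.Popen(
    ["python2.7", "pytan_helper.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
)
out, _ = proc.communicate(json.dumps(request).encode("utf-8"))
response = json.loads(out.decode("utf-8"))
print(response)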
1
3
0
I've written a 40k line program in python3. Now I need to use a module throughout my program that is called pytan which will impart a functionality addition. The problem is that pytan is written in python2. So is it possible to switch the interpreter to python 2.7 inside one script that is called by another running in python 3? What's the best way to handle this situation.
How do you use a python2 module inside a python3 program
1.2
0
0
122
48,152,674
2018-01-08T14:50:00.000
18
0
0
0
python,memory-management,gpu,nvidia,pytorch
66,533,975
14
false
0
0
Does PyTorch see any GPUs? torch.cuda.is_available(). Are tensors stored on the GPU by default? torch.rand(10).device. Set the default tensor type to CUDA: torch.set_default_tensor_type(torch.cuda.FloatTensor). Is this tensor a GPU tensor? my_tensor.is_cuda. Is this model stored on the GPU? all(p.is_cuda for p in my_model.parameters())
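A compact script version of the checks above; device index 0 is an assumption:

import torch

print(torch.cuda.is_available())            # does PyTorch see a GPU at all?
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # which GPU
    x = torch.rand(10).to("cuda")           # move a tensor onto the GPU
    print(x.device, x.is_cuda)              # cuda:0 True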
1
375
1
How do I check if pytorch is using the GPU? It's possible to detect with nvidia-smi if there is any activity from the GPU during the process, but I want something written in a python script.
How to check if pytorch is using the GPU?
1
0
0
569,617
48,155,275
2018-01-08T17:34:00.000
0
0
0
0
python,django,django-models,django-views
48,155,554
2
false
1
0
Running the makemigrations command will return any errors on the model. At least that's where I find that I forgot a default value for required fields, etc.
1
1
0
An a interviewer asked me what is the command to check your model has no errors. Does Django have any type of command like this?
How to check your model has no errors in Django
0
0
0
241
48,157,291
2018-01-08T20:00:00.000
-1
0
1
0
python-3.x,macos,anaconda,python-import,pygrib
55,633,320
2
false
0
0
sudo python -m pip install pygrib
2
3
0
I've installed pygrib by using conda install -c conda-forge pygrib and no issues were raised. However, when importing it in order to use it I get this message: ImportError: dlopen(/Users/andrea1994/anaconda3/lib/python3.6/site-packages/pygrib.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libpng16.16.dylib Referenced from: /Users/andrea1994/anaconda3/lib/python3.6/site-packages/pygrib.cpython-36m-darwin.so Reason: Incompatible library version: pygrib.cpython-36m-darwin.so requires version 51.0.0 or later, but libpng16.16.dylib provides version 49.0.0 I've gone through several procedures that were thought to solve similar issues but none worked (updating libpng, uninstalling and installing back Anaconda,...). Does anyone have any clue? I'm not an expert in this field: most of the times I manage to get things working, but as you see sometimes I fail. Thank you!
Unable to import pygrib on python3 (Mac)
-0.099668
0
0
1,096
48,157,291
2018-01-08T20:00:00.000
0
0
1
0
python-3.x,macos,anaconda,python-import,pygrib
49,723,408
2
false
0
0
I know this is old, but I had the same issue and was finally able to import pygrib after I started a clean environment, installed pygrib from conda with conda install -c conda-forge pygrib, and then installed jasper with conda install jasper -c conda-forge. I believe jasper is installed along with pygrib, but I am not sure whether the correct version gets installed.
2
3
0
I've installed pygrib by using conda install -c conda-forge pygrib and no issues were raised. However, when importing it in order to use it I get this message: ImportError: dlopen(/Users/andrea1994/anaconda3/lib/python3.6/site-packages/pygrib.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libpng16.16.dylib Referenced from: /Users/andrea1994/anaconda3/lib/python3.6/site-packages/pygrib.cpython-36m-darwin.so Reason: Incompatible library version: pygrib.cpython-36m-darwin.so requires version 51.0.0 or later, but libpng16.16.dylib provides version 49.0.0 I've gone through several procedures that were thought to solve similar issues but none worked (updating libpng, uninstalling and installing back Anaconda,...). Does anyone have any clue? I'm not an expert in this field: most of the times I manage to get things working, but as you see sometimes I fail. Thank you!
Unable to import pygrib on python3 (Mac)
0
0
0
1,096
48,157,435
2018-01-08T20:14:00.000
0
0
0
1
python,arrays,endianness
48,177,252
1
true
0
0
I found a solution: place the data into a byte array and simply reverse it. In hexadecimal, each pair of hex digits (e.g. 1F or 40) is one byte, so reversing the order of those pairs converts big endian to little endian and vice versa. Reversing a byte array, where each element is one byte (two hex digits), is therefore equivalent. Example: 0000 1F40 (big endian) -> 401F 0000 (little endian).
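A small illustration with the standard struct module, using the 0x00001F40 value from the answer. Note that reversing the whole buffer is only correct when the buffer holds a single multi-byte value; individual packet fields are normally converted one at a time:

import struct

big = b"\x00\x00\x1f\x40"            # 8000 stored big-endian

# Whole-buffer reversal (what the answer describes).
little = big[::-1]                   # b"\x40\x1f\x00\x00"

# Field-by-field: unpack as big-endian, repack as little-endian.
(value,) = struct.unpack(">I", big)  # network/big-endian unsigned int -> 8000
repacked = struct.pack("<I", value)
print(little == repacked)            # True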
1
1
0
I have a byte stream, or more specifically an RTP packet. How can I change it from big endian to little endian?
Changing Python byte stream from big endian to little endian
1.2
0
0
2,697
48,159,562
2018-01-08T23:34:00.000
2
0
0
0
python,numpy,tensorflow,batch-processing,channel
48,159,738
3
false
0
0
I assume you want the mean over multiple axes (if I understood you correctly). numpy.mean(a, axis=None) already supports a multi-axis mean if axis is a tuple. I'm not sure what you mean by the naive method.
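A sketch for the shapes in the question, computing per-channel mean and std over a whole [N, 32, 32, 3] batch in one call; the random data is a stand-in for real images:

import numpy as np

batch = np.random.rand(128, 32, 32, 3)

# Reduce over batch, height and width, keeping the channel axis.
mean_rgb = batch.mean(axis=(0, 1, 2))           # (meanR, meanG, meanB)
std_rgb = batch.std(axis=(0, 1, 2))             # (stdR, stdG, stdB)

# Example follow-up arithmetic: subtract the mean from every image.
centered = batch - mean_rgb                     # broadcasts over the last axis
print(mean_rgb, std_rgb, centered.shape)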
1
2
1
Looking to calculate Mean and STD per channel over a batch efficiently. Details: batch size: 128 images: 32x32 3 channels (RGB) So each batch is of size [128, 32, 32, 3]. There are lots of batches (naive method takes ~4min over all batches). And I would like to output 2 arrays: (meanR, meanG, meanB) and (stdR, stdG, stdB) (Also if there is an efficient way to perform arithmetic operations on the batches after calculating this, then that would be helpful. For example, subtracting the mean of the whole dataset from each image)
Calculating Mean & STD for Batch [Python/Numpy]
0.132549
0
0
3,580
48,161,673
2018-01-09T04:59:00.000
-1
0
0
0
python,postgresql,sqlalchemy
48,161,699
2
false
0
0
You could do some clever introspection of the table's columns to do that, but why not just select everything and ignore the one column you don't need?
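If you do want to avoid fetching that column, the table metadata can be used to build the column list programmatically; a SQLAlchemy Core sketch matching the 1.0-era API mentioned in the question (the table, column and engine names are placeholders):

from sqlalchemy import select

# my_table is an existing Table object; keep every column except "big_col".
wanted = [c for c in my_table.columns if c.name != "big_col"]
stmt = select(wanted)                     # 1.x style: select() takes a column list
with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()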
1
4
0
I use mostly SQLAlchemy core(v.1.0.8) expression language with flask(0.12) to create API calls. For a particular case where the table has 20 columns, I wish to select all except 1 particular column. How can this be done in the 'select' clause? Is there anything like 'except' that can be used instead of explicitly selecting the columns by names?
How to use SQLAlchemy core to select all table columns except 1 specific column in postgresql?
-0.099668
1
0
3,459
48,161,770
2018-01-09T05:11:00.000
1
0
1
0
php,python,database,mongodb
48,162,606
3
false
0
0
Long named attributes (or, "AbnormallyLongNameAttributes") can be avoided while designing the data model. In my previous organisation we tested keeping short named attributes strategy, such as, organisation defined 4-5 letter encoded strings, eg: First Name = FSTNM, Last Name = LSTNM, Monthly Profit Loss Percentage = MTPCT, Year on Year Sales Projection = YOYSP, and so on..) While we observed an improvement in query performance, largely due to the reduction in size of data being transferred over the network, or (since we used JAVA with MongoDB) the reduction in length of "keys" in MongoDB document/Java Map heap space, the overall improvement in performance was less than 15%. In my personal opinion, this was a micro-optimzation that came at an additional cost (huge headache) of maintaining/designing an additional system of managing Data Attribute Dictionary for each of the data models. This system was required to have an organisation wide transparency while debugging the application/answering to client queries. If you find yourself in a position where upto 20% increase in the performance with this strategy is lucrative to you, may be it is time to scale up your MongoDB servers/choose some other data modelling/querying strategy, or else to choose a different database altogether.
1
0
0
Sometime, we have many fields and large data set in DB (i am using mongoDB). One thing come in my mind regarding to save some bytes in DB by keeping shorten name in DB. Like year : yr Month : mn isSameCity : isSmCt So, Is this approach good or bad. Or, that depends on case base. Please mentor me on this.
is it a good idea to shorten attribute names in MongoDB database?
0.066568
1
0
495
48,162,075
2018-01-09T05:44:00.000
1
0
0
0
python,amazon-rds,apache-nifi
48,170,299
2
true
0
0
Question might not have been clear based on the feedback, but here is the answer to get a NiFi (running on an AWS EC2 instance) communicating with an Amazon RDS instance: On the EC2 instance, download the latest JDBC driver (wget "https://driver.jar") (If needed) Move the JDBC driver into a safe folder. Create the DBCPConnectionPool, referencing the fully-resolved file path to the driver.jar (helpful: use readlink -f driver.jar to get the path). Don't forget -- under your AWS Security Groups, add an inbound rule that allows your EC2 instance to access RDS (under Source, you should put the security group of your EC2 instance).
1
0
0
I have a FlowFile and I want to insert the attributes into RDS. If this was a local machine, I'd create a DBCPConnectionPool, reference a JDBC driver, etc. With RDS, what am I supposed to do? Something similar (how would I do this on AWS)? Or am I stuck using ExecuteScript? If it's the later, is there a Python example for how to do this?
How best to interact with AWS RDS Postgres via NiFi
1.2
1
0
1,019
48,163,061
2018-01-09T07:15:00.000
0
0
1
1
python,package,python-wheel
48,163,292
1
false
0
0
.whl: a compressed file, like a zip archive. How to install? Install WinRAR or other software that can extract compressed files, extract your .whl file into a safe folder, and add that directory to your path variables. Now everything is done. It's bad practice to install a .whl file this way; you should use pip. That said, I recommend extracting your .whl file into the folder where all your Python modules are installed.
1
1
0
I have downloaded .whl and trying to install it. Can you tell me the possible ways of installing wheel without using pip. For tar.gz files, I executed python setup.py install and it installed my package. How the same can be done for wheel when there is no setup.py file in it?
How to install python wheel without using pip?
0
0
0
6,130
48,164,176
2018-01-09T08:36:00.000
0
0
1
1
python,pip
60,337,104
2
false
0
0
I had the same problem and tried all the steps to fix it; eventually I found out that the antivirus program was denying access to the executable file. So disabling it worked for me.
1
0
0
When attempting to do absolutely anything using pip in the terminal, I instantly receive the message "Access is denied." No other messages, just "Access is denied." I've tried using administrator terminal, going to different directories, but the same issue happens. I have attempted to run "python -m pip install --upgrade pip" from looking at a solution in a different question but it said pip is up to date and no change occurred. I can get around installing using the "python -m pip install (something)" command but I would like to know the cause of this issue and how to resolve it so it doesn't impede me in future when I try doing something other than installing with pip. Any ideas? Help is greatly appreciated, thank you :) I'm running Python 3.6 and Windows 10. So sorry if this is a duplicate question, I've spent a while attempting to see if the answer is already here but for the most part they appear to all be for a different OS or have a different error. Of course, I could have been searching using the wrong query but like Jon Snow, I know nothing, especially about technical stuff. EDIT: I've gone into the pip.exe files locations and have adjusted security permissions so that any user on my device has full control, still no luck. Many reboots have been done but no luck there either. I can't seem to log into the admin account in terminal as it doesn't accept my password, even though I am the sole account and an admin account.
pip Access denied Windows 10
0
0
0
4,171
48,165,302
2018-01-09T09:43:00.000
0
0
1
1
python,file,concurrency
48,165,446
1
false
0
0
To build a custom solution to this you will probably need to write a short new queuing module. This queuing module alone will have write access to the file(s) and will be passed write actions from the existing modules in your code. The queue logic itself should be a pretty straightforward queue architecture. There may also be libraries in Python that handle this problem and would save you writing your own queue class. Finally, it is possible that this whole thing will be/could be handled in some way by your OS, independent of Python.
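A minimal sketch of that single-writer idea using only the standard library is shown below; all file and function names are hypothetical.

# Worker processes never touch the record file themselves; they push update
# messages onto a queue and one dedicated writer process applies them.
import json
from multiprocessing import Process, Queue


def writer(queue, path):
    """Sole owner of the file: drains the queue and appends each record."""
    with open(path, "a") as fh:
        while True:
            record = queue.get()
            if record is None:        # sentinel -> shut down
                break
            fh.write(json.dumps(record) + "\n")
            fh.flush()


def worker(queue, worker_id):
    """Stands in for one of the independent computation scripts."""
    queue.put({"worker": worker_id, "status": "done"})


if __name__ == "__main__":
    q = Queue()
    w = Process(target=writer, args=(q, "records.jsonl"))
    w.start()
    workers = [Process(target=worker, args=(q, i)) for i in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    q.put(None)                       # tell the writer to finish
    w.join()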
1
0
0
I have several scripts. Each of them does some computation and it is completely independent from the others. Once these computations are done, they will be saved to disk and a record updated. The record is maintained by an instance of a class, which saves itself to disks. I would like to have a single record instance used in multiple scripts (for example, record_manager = RecordManager(file_on_disk). And then record_manager.update(...) ); but I can't do this right now, because when updating the record there may be concurrent write accesses to the same file on disk, leading to data loss. So I have a separate record manager for every script, and then I merge the records manually later. What is the easiest way to have a single instance used in all the scripts that solves the concurrent write access problem? I am using macOS (High sierra) and linux (Ubuntu 16.04). Thanks!
Concurrent file accesses from different scripts python
0
0
0
204
48,165,947
2018-01-09T10:18:00.000
1
0
0
1
python,pyspark,azure-hdinsight,azure-data-factory,azure-data-lake
48,221,294
2
true
0
0
Currently, we don't have support for an ADLS data store with an HDI Spark cluster in ADF v2. We plan to add that in the coming months. Till then, you will have to continue using the workaround you mentioned in your post above. Sorry for the inconvenience.
2
1
1
I am trying to execute spark job from on demand HD Insight cluster using Azure datafactory. Documentation indicates clearly that ADF(v2) does not support datalake linked service for on demand HD insight cluster and one have to copy data onto blob from copy activity and than execute the job. BUT this work around seems to be a hugely resource expensive in case of a billion files on a datalake. Is there any efficient way to access datalake files either from python script that execute spark jobs or any other way to directly access the files. P.S Is there a possiblity of doing similar thing from v1, if yes then how? "Create on-demand Hadoop clusters in HDInsight using Azure Data Factory" describe on demand hadoop cluster that access blob storage but I want on demand spark cluster that access datalake. P.P.s Thanks in advance
Access datalake from Azure datafactory V2 using on demand HD Insight cluster
1.2
0
0
343
48,165,947
2018-01-09T10:18:00.000
0
0
0
1
python,pyspark,azure-hdinsight,azure-data-factory,azure-data-lake
49,116,105
2
false
0
0
The Blob storage is used for the scripts and config files that the on-demand cluster will use. The scripts you write and store in the attached Blob storage can, for example, read from ADLS and write to SQL DB.
2
1
1
I am trying to execute spark job from on demand HD Insight cluster using Azure datafactory. Documentation indicates clearly that ADF(v2) does not support datalake linked service for on demand HD insight cluster and one have to copy data onto blob from copy activity and than execute the job. BUT this work around seems to be a hugely resource expensive in case of a billion files on a datalake. Is there any efficient way to access datalake files either from python script that execute spark jobs or any other way to directly access the files. P.S Is there a possiblity of doing similar thing from v1, if yes then how? "Create on-demand Hadoop clusters in HDInsight using Azure Data Factory" describe on demand hadoop cluster that access blob storage but I want on demand spark cluster that access datalake. P.P.s Thanks in advance
Access datalake from Azure datafactory V2 using on demand HD Insight cluster
0
0
0
343
48,166,780
2018-01-09T11:04:00.000
0
0
0
0
python,tkinter,combobox,multiple-columns
58,595,746
1
false
0
0
You can use PyQt or PySide in Python to create a QComboBox with one or more columns.
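As a hedged sketch (assuming PyQt5), one common approach is to back the combo box with a multi-column model and use a QTableView as its popup view; the data below is made up for illustration.

# QComboBox shows one column when closed, but its popup can be a QTableView
# that displays every column of the underlying model.
import sys
from PyQt5.QtGui import QStandardItem, QStandardItemModel
from PyQt5.QtWidgets import QApplication, QComboBox, QTableView

app = QApplication(sys.argv)

model = QStandardItemModel(0, 2)
model.setHorizontalHeaderLabels(["ID", "Name"])
for row in [("1", "Alice"), ("2", "Bob"), ("3", "Carol")]:
    model.appendRow([QStandardItem(cell) for cell in row])

combo = QComboBox()
combo.setModel(model)
view = QTableView()
combo.setView(view)          # the popup now shows every column of the model
combo.setModelColumn(1)      # column shown in the closed combo box
view.resizeColumnsToContents()

combo.show()
sys.exit(app.exec_())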
1
0
0
I would like to know whether it is possible to create a combobox in python that uses multiple columns. A feature similar to what I am requesting is the combo box from MS Access, as those combo boxes can contain multiple columns. Does such a combo box exist in python or is there an alternative solution that I may use?
Python combobox with multiple columns
0
0
0
902
48,171,283
2018-01-09T15:16:00.000
1
0
0
0
python,numpy,derivative
48,172,836
1
true
0
0
Thanks to the discussions in the comments, the problem with np.gradient has been solved by updating the numpy package from version 1.12.1 to 1.13.3. This update is especially relevant if you are also getting the ValueError "distances must be scalars" when using gradient. Thus, in order to extract the exponent of the power law, computing np.gradient(logy, logx) remains a valid way of going about it.
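For reference, here is a short check of that approach on the data from the question, assuming numpy >= 1.13, where the second argument to np.gradient can be the coordinate array itself.

import numpy as np

x = np.array([3., 4., 5., 6., 7., 8., 9., 10., 11.])
y = np.array([1.05654, 1.44989, 1.7939, 2.19024,
              2.62387, 3.01583, 3.32106, 3.51618, 3.68153])

logx, logy = np.log10(x), np.log10(y)
slope = np.gradient(logy, logx)   # local exponent t of y ~ x**t
print(slope)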
1
0
1
We have the x and y values, and I am taking their log, by logx = np.log10(x) and logy = np.log10(y). I am trying to compute the derivative of logy w.r.t logx, so dlogy/dlogx. I used to do this successfully using numpy gradient, more precisely derivy = np.gradient(logy,np.gradient(logx)) but for some strange reason it doesn't seem to work anymore yielding the error: "Traceback (most recent call last): File "derivlog.py", line 79, in <module> grady = np.gradient(logy,np.gradient(logx)) File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1598, in gradient raise ValueError("distances must be scalars") ValueError: distances must be scalars" Context: When trying to detect power-laws, of the kind y ~ x^t, given the values of y as a function of x, one wants to exctract essentially the power t, so we take logs which gives log y ~ t*log x and then take the derivative in order to extract t. Here's a minimal example for recreating the problem: x=[ 3. 4. 5. 6. 7. 8. 9. 10. 11.] y = [ 1.05654 1.44989 1.7939 2.19024 2.62387 3.01583 3.32106 3.51618 3.68153] Are there other (more suited) methods in python for taking such numerical derivatives?
Derivative of log plot in python
1.2
0
0
1,101
48,173,038
2018-01-09T16:52:00.000
1
0
0
0
c#,python,c++,cntk
48,258,953
1
false
0
0
Yes, this is completely possible. The CNTK framework itself is written in C++, and the C# as well as the Python interfaces are only wrappers around the C++ code. So, as long as you use versions which are compatible with each other, you can do that. For instance, if you use CNTK 2.3.1 with Python and also CNTK 2.3.1 with C#, there is of course nothing that should get in your way. If the versions are different, it depends on whether there have been breaking changes. Just for your information: there will be two formats in the near future, the CNTK V2 model format and the new ONNX format.
1
2
1
I want to use CNTK python to train a CNN model and then use the trained model when i am programming in c# or c++. Is it possible to use CNTK python trained model in C# or c++?
is it possible to use cnn models trained in cntk python in c#?
0.197375
0
0
149
48,173,347
2018-01-09T17:09:00.000
9
0
0
0
python,django,database
48,174,325
1
true
1
0
Managers use that parameter to define which database the underlying queryset the manager uses should operate on. It is simply there so you can optionally override the default in case you have e.g. multiple databases and you want your manager/queryset to operate on a specific one.
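A hedged sketch of the pattern (model and manager names are invented): self._db is only set when the manager has been bound to a specific database, e.g. via db_manager(); otherwise it is None and Django falls back to its normal database routing.

from django.db import models


class AccountManager(models.Manager):
    def create_account(self, email):
        account = self.model(email=email)
        # honours db_manager("alias")/using() if set, otherwise uses routing
        account.save(using=self._db)
        return account


class Account(models.Model):
    email = models.EmailField(unique=True)

    objects = AccountManager()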
1
10
0
In working with the Django user model I've noticed model managers include a using=self._db parameter when acting on a database. If I'm only using a single database, is this necessary? What does using=self._db do other than specify the database. Is this specified as a fail-safe in the event another database is added?
Do Django Model Managers require using=self._db
1.2
0
0
2,226
48,174,011
2018-01-09T17:50:00.000
0
1
0
0
php,python,python-3.x,raspberry-pi3
48,174,819
2
false
1
0
First make sure the web user has read/write/execute permissions. You can use sudo: sudo chmod 777 /path/to/your/directory/file.xyz for the PHP file and the file you want to run. $output = exec('sudo python3 piUno.py'); echo $output; Credits ---> Ralph Thomas Hopper
1
0
0
I'm running apache2 web server on raspberry pi3 model B. I'm setting up smart home running with Pi's and Uno's. I have a php scrypt that executes python program>index.php. It has rwxrwxrwx >I'll change that late becouse i don't fully need it. And i want to real-time display print from python script. exec('sudo python3 piUno.py') Let's say that output is "Hello w" How can i import/get printed data from .py?
get python to print/return value on your website with php
0
0
0
2,682
48,177,771
2018-01-09T22:47:00.000
0
0
0
0
python,neural-network,training-data
48,188,480
1
false
0
0
If you're not using a ready-made library, it 'saves' the training only if you write the code to save it. The simplest way is to write out a list of all the weights after training and load it back with a dedicated function.
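A minimal sketch of that save/load idea with NumPy; the file name and weight shapes are arbitrary placeholders.

import os
import numpy as np

WEIGHTS_FILE = "weights.npz"


def init_weights():
    # Random starting weights for a hypothetical 784-64-10 network.
    return {"w1": np.random.randn(784, 64), "w2": np.random.randn(64, 10)}


if os.path.exists(WEIGHTS_FILE):
    data = np.load(WEIGHTS_FILE)
    weights = {name: data[name] for name in data.files}
    print("Loaded previously trained weights; no retraining needed.")
else:
    weights = init_weights()
    # ... run the training loop here, updating `weights` ...
    np.savez(WEIGHTS_FILE, **weights)
    print("Trained from scratch and saved the weights.")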
1
1
1
I am trying machine learning for the first time, and am playing around with a handwriting recognition NN (in Python). I just wanted to know whether or not I need to train the model every time I run it, or if it 'saves' the training. Thanks in advance.
Do I need to train my neural network everytime I run it?
0
0
0
966
48,179,772
2018-01-10T03:15:00.000
0
0
0
0
javascript,python,html,csv,remote-access
48,180,149
2
false
1
0
I'm pushing against the boundaries set forth in your question, but if you simply want to do it in JavaScript outside the context of a browser, have you considered working in Node?
1
1
0
I've been trying to implement some HTML code that accesses weather data in historical CSV files online, and perform maths on the data once I selectively extract it. In the past, I've programmed in Python and had no problems doing this by using pycurl.Curl(). HTML is a complete nightmare in comparison: XMLHttpRequest() does technically work, but web browsers automatically block access to all foreign URLs (because of the Same-Origin Policy). Not good. Any ideas and alternative approaches would be very helpful!
HTML - Access data from remote online CSV files
0
0
1
238
48,180,177
2018-01-10T04:10:00.000
0
0
1
0
python,floating-point,precision,underflow
48,187,560
2
false
0
0
Division, like other IEEE-754-specified operations, is computed at infinite precision and then (with ordinary rounding rules) rounded to the closest representable float. The result of calculating x/y will almost certainly be a lot more accurate than the result of calculating np.exp(np.log(x) - np.log(y)) (and is guaranteed not to be less accurate).
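A tiny illustration with two very small (but still representable) inputs; the exact digits printed may vary, but the log/exp detour typically picks up a rounding error that plain division avoids.

import numpy as np

x = 3e-300
y = 1e-300

print(x / y)                          # plain division
print(np.exp(np.log(x) - np.log(y)))  # log/exp detour; may differ in the last bits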
1
2
1
Suppose both x and y are very small numbers, but I know that the true value of x / y is reasonable. What is the best way to compute x/y? In particular, I have been doing np.exp(np.log(x) - np.log(y) instead, but I'm not sure if that would make a difference at all?
Prevent underflow in floating point division in Python
0
0
0
6,827
48,188,940
2018-01-10T13:51:00.000
1
0
0
0
python,opencv,computer-vision,halcon
48,302,888
3
false
0
0
It depends on which Halcon functionalities you are using and why you want to do it. The question appears to be very general. I would recommend converting your Halcon program to C++ and writing a wrapper function to pass arguments to/from your OpenCV program. This would be the simplest option for providing interaction between your OpenCV and Halcon programs. Hope it helps.
2
1
1
I am developing a solution using a comercial computer vision software called Halcon. I am thinking on migrating or convert my solution to OpenCV in Python. I will like to start developing my other computer vision solution in Halcon because the IDE is incredible, and them generate a script to migrate them to OpenCV. Does anyone know any library for this task? I will like to start developing an open source SDK to convert Halcon to OpenCV. I and thinking to start developing all internal function from Halcon to Python. Any advice?
Migrate Halcon code to OpenCV
0.066568
0
0
4,998
48,188,940
2018-01-10T13:51:00.000
1
0
0
0
python,opencv,computer-vision,halcon
48,305,091
3
false
0
0
This is unfortunately not possible because Halcon itself is not an open-source library and every single function is locked. The reason behind this is runtime licensing.
2
1
1
I am developing a solution using a comercial computer vision software called Halcon. I am thinking on migrating or convert my solution to OpenCV in Python. I will like to start developing my other computer vision solution in Halcon because the IDE is incredible, and them generate a script to migrate them to OpenCV. Does anyone know any library for this task? I will like to start developing an open source SDK to convert Halcon to OpenCV. I and thinking to start developing all internal function from Halcon to Python. Any advice?
Migrate Halcon code to OpenCV
0.066568
0
0
4,998
48,189,504
2018-01-10T14:22:00.000
0
0
0
0
python,turtle-graphics
48,210,844
1
false
0
1
OK, I have solved the situation by using: speed(0); turtle.tracer(False); turtle.bye(). The graphics window is initialized but immediately closed.
1
0
0
everyone! I want to implement the Turtle into my application just for the purpose of coordinate generation. The problem is that I need to get the coordinates of turtle move (etc.) without any "pop-up window" containing the graphics or even worse animation. Is it possible somehow to disable initialization of turtle graphics? Thanks a lot!
How to disable initializing of graphics window of Python Turtle
0
0
0
60
48,191,238
2018-01-10T15:50:00.000
2
0
1
1
python,multiprocessing,directory
48,192,334
3
true
0
0
This has absolutely nothing to do with Python, as file operations in Python use OS level system calls (unless run as root, your Python program would not have permissions to do raw device writes anyway and doing them as root would be incredibly stupid). A little bit of file system theory if anyone cares to read: Yes, if you study file system architecture and how data is actually stored on drives, there are similarities between files and directories - but only on data storage level. The reason being there is no need to separate these two. For example ext4 file system has a method of storing information about a file (metadata), stored in small units called inodes, and the actual file itself. Inode contains a pointer to the actual disk space where file data can be found. File systems generally are rather agnostic to directories. A file system is basically just this: it contains information about free disk space, information about files with pointers to data, and the actual data. Part of metadata is the directory where the file resides. In modern file systems (ancient FAT is the exception that is still in use) data storage on disk is not related to directories. Directories are used to allow both humans and the computer implementing the file system locate files and folders quickly instead of walking through sequentially the list of inodes until the correct file is found. You may have read that directories are just files. Yes, they are "files" that contain either a list of files in it (or actually a tree but please do not confuse this with a directory tree - it is just a mechanism of storing information about large directories so that files in that directory do not need to be searched sequentially within the directory entry). The reason this is a file is that it is the mechanism how file systems store data. There is no need to have a specific data storage mechanism, as a directory only contains a list of files and pointers to their inodes. You could think of it as a database or even simpler, a text file. But in the end it is just a file that contains pointers, not something that is allocated on the disk surface to contain the actual files stored in the directory. That was the background. The file system implementation on your computer is just a piece of software that knows how to deal with all this. When you open a file in a certain directory for writing, something like this usually happens: A free inode is located and an entry created there Free clusters / blocks database is queried to find storage space for the file contents File data is stored and blocks/clusters are marked "in use" in that database Inode is updated to contain file metadata and a pointer to this disk space "File" containing the directory data of the target directory is located This file is modified so that one record is added. This record has a pointer to the inode just created, and the file name as well Inode of the file is updated to contain a link to the directory, too. It is the job of operating system and file system driver within it to ensure all this happens consistently. In practice it means the file system driver queues operations. Writing several files into the same directory simultaneously is a routine operation - for example web browser cache directories get updated this way when you browse the internet. Under the hood the file system driver queues these operations and completes steps 1-7 for each new file before it starts processing the following operation. 
To make it a bit more complex there is a journal acting as an intermediate buffer. Your transactions are written to the journal, and when the file system is idle, the file system driver commits the journal transactions to the actual storage space, but theory remains the same. This is a performance and reliability issue. You do not need to worry about this on application level, as it is the job of the operating system to do all that. In contrast, if you create a lot of randomly named files in the same directory, in theory there could be a conflict at some point if your random name generator produced two identical file names. There are ways to mitigate this, and this would be the part you need to worry about in your application. But anything deeper than that is the task of the operating system.
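To tie this back to the original question, here is a small demonstration that several processes writing their own files into the same directory need no extra coordination, as long as the file names do not collide; directory and file names are placeholders.

import os
from multiprocessing import Process

OUT_DIR = "tmp_output"


def write_temp_file(worker_id):
    # Each process writes a distinct file in the shared folder.
    path = os.path.join(OUT_DIR, "worker_{}.txt".format(worker_id))
    with open(path, "w") as fh:
        fh.write("data from worker {}\n".format(worker_id))


if __name__ == "__main__":
    os.makedirs(OUT_DIR, exist_ok=True)
    procs = [Process(target=write_temp_file, args=(i,)) for i in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(os.listdir(OUT_DIR)))   # eight distinct files, no conflicts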
1
1
0
I run several processes in Python (using multiprocessing.Process) on an Ubuntu machine. Each of the processes writes various temporary files. Each process writes different files, but all files are in the same folder. Is there any potential risk of error here? The reason I think there might be a problem is that, AFAIK, a folder in Unix is just a file. So it's jsut like several processes writing to the same file at the same time, which might cause a loss of information. Is this really a potential risk here? If so, how to solve it?
Can multiple processes write to the same folder?
1.2
0
0
1,864
48,193,281
2018-01-10T17:47:00.000
0
0
0
1
python-2.7,debian,barcode-scanner
48,219,220
1
true
0
0
I finally figured it out: I had the scanner configured as PS/2 instead of USB COM Port Emulation. Once I reconfigured the scanner, everything started to work fine.
1
0
0
I´m trying to reed some data from a barcode scanner in python, using the serial library. My inconvenient is that I´m connecting the barcode scanner to a Virtual Machine where I have a Debian running. I connect the scanner and ir read the data but I cannot identify what /dev/tty* is using, so I can pass as an argument to a server and parse the data it is pacing. In Debian theres is no /dev/ttyACM0 and don´t know why. Answer to comment: HostOS: Windows 10. GuestOS: Debian 9 and/or ubuntu 17.04. VMtool: workstation 14. All of them 64bit. Scanner CINO FUZZYSCAN Model: F680-BSUG. Library I was using pyserial, a couple of month a go I was able to use /dev/ttyACM0. Now when I run lsub it gave me this:Bus 001 Device 005: ID 1fbb:3681 When run dmesg: [ 1026.204937] usbcore: registered new interface driver usbkbd [ 1051.955948] usb 1-2: USB disconnect, device number 4 [ 1054.647592] usb 1-2: new full-speed USB device number 5 using ohci-pci [ 1055.137077] usb 1-2: New USB device found, idVendor=1fbb, idProduct=3681 [ 1055.137083] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 1055.137087] usb 1-2: Product: FUZZYSCAN [ 1055.137089] usb 1-2: Manufacturer: CINO [ 1055.150619] input: CINO FUZZYSCAN as /devices/pci0000:00/0000:00:06.0 /usb1/1-2/1-2:1.0/0003:1FBB:3681.0004/input/input10 [ 1055.208155] hid-generic 0003:1FBB:3681.0004: input,hidraw1: USB HID v1.10 Keyboard [CINO FUZZYSCAN] on usb-0000:00:06.0-2/input0 So is recognizing the device, but not mapping to /dev/ttyACM0.
Barcode scanner, reading data from python USB or serial, how to identify what /dev/tty* is using the scanner
1.2
0
0
1,056
48,193,410
2018-01-10T17:55:00.000
0
0
0
1
python,django,solr,celery
48,265,807
1
true
1
0
Strangely, after trying and trying, I decided to restart the whole system and it just started working, without changing anything.
1
0
0
Am using haystack with solr 6.6 for search indexing. Now i want to automatically update indexes when data changes in my models under indexing, so am using celery_haystack for that. Unfortunately, each time index should be updated i get could not load [app.model.pk]. Somehow it went missing error python 3.6.3 django 1.11.6 celery 4.1.0 django-haystack 2.7.dev0 celery-haystack 0.10 Thanks in advance.
Couldn't load "object". Somehow it went missing?
1.2
0
0
61
48,195,422
2018-01-10T20:19:00.000
2
0
1
0
python,windows,cmd,anaconda,conda
56,211,183
2
false
0
0
Just run Command Prompt with admin permissions; then it will install the desired package and work perfectly.
1
0
0
I'm getting started on Python 2.7, using the Anaconda package and its Spyder IDE, but when I find out that something I want to do requires that I execute a command that starts with the word "conda", I have terrible trouble. I first assumed that those were commands to type in the IPython console in Spyder, but instead of executing what I commanded, it told me NameError: name 'conda' is not defined. I also tried the Windows Command Prompt (cmd.exe), but it told me conda is not recognized as an internal or external command. Some results when I googled that claimed that I had to add one of the Anaconda-related folders to Windows' Path, so I tried that, but still no good. How can I carry out conda commands on Windows 10?
Errors with conda commands on Windows 10
0.197375
0
0
1,505
48,195,505
2018-01-10T20:25:00.000
1
0
0
0
python,tkinter,treeview,ttk
48,207,480
1
true
0
1
After some testing I discovered iids are not just three digit hexadecimals but can be up to five. I say up to five because in my testing I hit a memory error before I could exhaust the amount of unique iids. I was getting iids like "IEA600" before I hit memory issues. One memory error was "unable to realloc 3145736 bytes" when deleting just under a million children from the treeview.
1
1
0
I am using tkinter and specifically the ttk.treeview widget to display tuples. I do a lot of inserting and was wondering if the iid (item identifier) can overflow or how it is handled. I hypothesize that the maximum iid is 0xFFF which is equivalent to 4095 base 10 given that they are formatted as string like "I001." If they do overflow how can I reuse/delete an iid?
Can Tkinter ttk.treeview iid overflow?
1.2
0
0
159
48,196,521
2018-01-10T21:40:00.000
0
1
1
0
python,python-2.7,raspberry-pi3,pyvisa
48,204,460
2
false
0
0
"In python 2.7, the import system will always use files in the working directory over the one in site-packages and as your file is named pyvisa.py when importing visa.py it picks your own module instead of the 'real' pyvisa module."MatthieuDartiailh from github
1
0
0
py on a raspberry pi with python 2.7.9 and pip 1.5.6. I installed and uninstalled pyvisa and pyvisa-py several times, but the problems stay. I connected the KEITHLEY Multimeter 2000 per R232 to USB with the Raspberry. When I run the basic Code: import visa rm = visa.ResourceManager('@py') a=rm.list_resources() print(a) I receive: Traceback (most recent call last): File "pyvisa.py", line 1, in <module> import visa File "/usr/local/lib/python2.7/dist-packages/visa.py", line 16, in <module> from pyvisa import logger, __version__, log_to_screen, constants File "/home/pi/pyvisa.py", line 2, in <module> rm = visa.ResourceManager('@py') AttributeError: 'module' object has no attribute 'ResourceManager' as well when I try python -m visa info Traceback (most recent call last): File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/local/lib/python2.7/dist-packages/visa.py", line 16, in <module> from pyvisa import logger, __version__, log_to_screen, constants File "pyvisa.py", line 1, in <module> import visa File "/usr/local/lib/python2.7/dist-packages/visa.py", line 16, in <module> from pyvisa import logger, __version__, log_to_screen, constants ImportError: cannot import name logger On the other hand i can't upgrade, because the requirements are already up-to-date. pip install pyvisa-py --upgrade Requirement already up-to-date: pyvisa-py in /usr/local/lib/python2.7/dist-packages Requirement already up-to-date: pyvisa>=1.8 in /usr/local/lib/python2.7/dist-packages (from pyvisa-py) Requirement already up-to-date: enum34 in /usr/local/lib/python2.7/dist-packages (from pyvisa>=1.8->pyvisa-py) I would be very thankfull if somebody could help me with this issue.
pyvisa-py on raspberry pi AttributeError: 'module' object has no attribute 'ResourceManager'
0
0
0
1,834
48,197,227
2018-01-10T22:36:00.000
0
0
0
0
python,machine-learning,artificial-intelligence,jupyter-notebook,data-science
48,197,823
3
false
0
0
It depends on the model. For example, in linear regression, training will (generally) give you the coefficients of the slope and the intercept. These are the "model parameters". When deployed, traditionally, these coefficients get fed into a different algorithm (literally y = mx + b), and when it is queried "what should y be, given this x", it responds with the appropriate value. With k-means clustering, on the other hand, the "parameters" are cluster vectors, and the predict step calculates the distance from a given vector to each cluster and returns the closest one. Note that these clusters are often post-processed, so the predict step will say "shoes", not "[1,2,3,5]", which is again an example of how these things change in the wild. Deep learning returns a list of edge weights for a graph; various parametric methods (as in maximum likelihood estimation) return the coefficients describing a particular distribution: for example, a uniform distribution is described by the number of buckets, a Gaussian/normal distribution by its mean and variance, and more complicated ones have even more parameters, such as skew and conditional probabilities.
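As a concrete illustration of "the deliverable is just the fitted parameters", here is a short scikit-learn example; the data is invented and the point is simply that the trained model reduces to a slope vector and an intercept.

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])           # generated from y = 2x + 1

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)          # approximately [2.0] and 1.0

# "Deployment" can be as small as persisting those numbers and re-applying
# y = m*x + b at prediction time.
m, b = model.coef_[0], model.intercept_
print(m * 10.0 + b)                           # approximately 21.0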
3
0
1
Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model. I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain? I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get? While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy". Thanks
Data Science Model and Training - Understanding
0
0
0
225
48,197,227
2018-01-10T22:36:00.000
0
0
0
0
python,machine-learning,artificial-intelligence,jupyter-notebook,data-science
50,946,421
3
false
0
0
A trained model (pickled, or whatever format you want to use) contains at least the features on which it has been trained. Take, for example, a simple distance-based model: you design a model based on the fact that the features (x1, x2, x3, x4) are important, and when any point comes into contact with the model, it should give back the calculated distance, from which you draw insights or conclusions. Similarly for chatbots: you train based on NER-CRF or whatever features you want. As soon as a text comes into contact with the model, the features are extracted based on the model and insights/conclusions are drawn. Hope it was helpful! I tried explaining it the Feynman way.
3
0
1
Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model. I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain? I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get? While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy". Thanks
Data Science Model and Training - Understanding
0
0
0
225
48,197,227
2018-01-10T22:36:00.000
1
0
0
0
python,machine-learning,artificial-intelligence,jupyter-notebook,data-science
54,176,655
3
false
0
0
A trained model will contain the value of its parameters. If you tuned only a few parameters, then only they will contain the new adjusted value. Unchanged parameters will store the default value.
3
0
1
Coming from a programming background where you write code, test, deploy, run.. I'm trying to wrap my head around the concept of "training a model" or a "trained model" in data science, and deploying that trained model. I'm not really concerned about the deployment environment, automation, etc.. I'm trying to understand the deployment unit.. a trained model. What does a trained model look like on a file system, what does it contain? I understand the concept of training a model, and splitting a set of data into a training set and testing set, but lets say I have a notebook (python / jupyter) and I load in some data, split between training/testing data, and run an algorithm to "train" my model. What is my deliverable under the hood? While I'm training a model I'd think there'd be a certain amount of data being stored in memory.. so how does that become part of the trained model? It obviously can't contain all the data used for training; so for instance if I'm training a chatbot agent (retrieval-based), what is actually happening as part of that training after I'd add/input examples of user questions or "intents" and what is my deployable as far as a trained model? Does this trained model contain some sort of summation of data from training or array of terms, how large (deployable size) can it get? While the question may seem relatively simple "what is a trained model", how would I explain it to a devops tech in simple terms? This is an "IT guy interested in data science trying to understand the tangible unit of a trained model in a discussion with a data science guy". Thanks
Data Science Model and Training - Understanding
0.066568
0
0
225
48,199,519
2018-01-11T03:53:00.000
2
0
0
0
python,odoo-10,odoo
48,203,871
2
true
1
0
The best way to override any method is to use super(), so you execute the old functionality and avoid destroying the functionality of other modules' methods that also override that base method. If you do that, you're doing well. But sometimes you'll find situations in which you can't use super() and you need to copy the source code in order to modify something inside. In those cases, you must be careful and be aware of what you're overriding. Example A: You have the base method action_confirm(), which is created by the sale module. You install a module named sale_extension, which overrides action_confirm() using super(). Then you install your custom module, where you're also overriding action_confirm() with super(). When action_confirm() is called, it's going to execute the sale, sale_extension and your module's functionality. Example B: You have the base method action_confirm(), created by the sale module. You install a module named sale_extension, which overrides action_confirm() using super(). Then you install your custom module, where you're also overriding action_confirm() without super(), so you have copied the source code of action_confirm() and modified it. When action_confirm() is called, it's going to execute the sale and your module's functionality, but not the sale_extension one. So you should first have understood and added the code introduced by sale_extension to the source code you copied, and then modified it. The problem with this case is future modules: if you install a new module that also overrides action_confirm(), your code is going to ignore its changes. So the conclusion is that if you use super(), you're not bound to have problems in your method when installing new modules (although there are cases in which, even with super(), you'll have to modify your code to adapt it to the new installed module's functionality).
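A minimal sketch of the super()-based override for this specific case, assuming the Odoo 10 API; the message_post call is just a placeholder for whatever your custom module needs to do.

from odoo import api, models


class SaleOrder(models.Model):
    _inherit = 'sale.order'

    @api.multi
    def action_confirm(self):
        # Run the original confirmation chain first (sale plus any other
        # installed module that also extends action_confirm).
        res = super(SaleOrder, self).action_confirm()
        # Your custom behaviour goes here, after (or before) the call above.
        for order in self:
            order.message_post(body="Order confirmed by custom module.")
        return res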
1
1
0
I created a new method in my custom module and I want to use my method when user clicks button confirm in sale order. so I did research and found easy solution to complete this job. Override base method. In this case is method action_confirm() in sale.py by use super() method. so I am just wondering, is it the right way?Is there a better solution than this? please suggest me.
Is override base method in Odoo is good option?
1.2
0
0
882
48,201,478
2018-01-11T07:04:00.000
0
0
1
0
java,python,python-3.x,jvm,pycharm
48,224,581
1
true
1
0
Found a solution: I downgraded my PyCharm version to 2.4 and now it's working.
1
0
0
I'm running java commands through my python script but getting Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. . The script is running through cmd. I removed max and min heapsize in pycharm.exe.vmoptions. Also, degraded the java version but nothing worked.
Pycharm : Getting "Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit."
1.2
0
0
611
48,204,680
2018-01-11T10:20:00.000
0
1
0
0
python,smtplib
51,780,455
3
false
0
0
Thank you for all your answers. The problem was actually caused by a restriction on my work network. I have talked with the network team and the problem is solved.
1
0
0
Hi I am trying to connect to the outlook server to sending the email with smtplib in python. Trying this code smtplib.SMTP('smtp-mail.outlook.com') does not print out anything and also does not return an error. Can somebody tell me what might be the problem and how to fix it. Thank you a lots.
Python SMTPLIB does not connect
0
0
1
1,765
48,206,010
2018-01-11T11:29:00.000
1
0
0
1
python,logging,flask,uwsgi
48,211,175
1
false
1
0
There is no "stop" event in WSGI, so there is no way to detect when the application stops, only when the server / worker stops.
1
0
0
I have a Flask app that I run with uWSGI. I have configured logging to file in the Python/Flask application, so on service start it logs that the application has been started. I want to be able to do this when the service stops as well, but I don't know how to implement it. For example, if I run the uwsgi app in console, and then interrupt it with Ctrl-C, I get only uwsgi logs ("Goodbye to uwsgi" etc) in console, but no logs from the stopped python application. Not sure how to do this. I would be glad if someone advised on possible solutions. Edit: I've tried to use Python's atexit module, but the function that I registered to run on exit is executed not one time, but 4 times (which is the number of uWSGI workers).
Logging uWSGI application stop in Python
0.197375
0
0
183
48,207,331
2018-01-11T12:40:00.000
1
0
1
0
python,ros,bag
48,337,123
1
false
0
0
There is no need for the full ROS installation to access data from a topic, but you do need basic ROS, because ROS topics have a ROS message type (the msgs packages used by the topic must be installed) and a ROS master is required to communicate with the topic. You can install the base ROS rather than the full version, but then you might need to install the additional packages required for your application.
1
0
0
I want to extract topics of ros bag files directly with python without the need to install a full ros distribution on the machine. I'm currently using the "rosbag" package but afaik it requires a ROS installation and gets all topic/message definitions from that environment. Is there any possibility to achieve that?
Reading ROS bag file content directly with python
0.197375
0
0
2,043
48,209,706
2018-01-11T14:43:00.000
0
0
1
0
python,regex
48,210,007
3
false
0
0
Ok, thanks. I found another solution: lista = re.findall(r"PROGRAM S\d\d\S+", contents) to match any non-space characters repeated after the digits.
1
0
0
My code is as follow: list = re.findall(("PROGRAM S\d\d"), contents If I print the list I just print S51 but I want to take everything. I want to findall everything like that "PROGRAM S51_Mix_Station". I know how to put the digits to find them but I don´t know how to find everything until the next space because usually after the last character there is an space. Thanks in advance.
Regular expression help to find space after a long string
0
0
0
50
48,211,634
2018-01-11T16:21:00.000
0
0
1
1
python-3.x
51,157,311
1
false
0
0
Most of the external packages still do not support 3.6. Try cx_Freeze on 3.6; otherwise go with PyInstaller, but the Python version should be 3.5 (this works fine for me).
1
1
0
I need to convert my .py files into .exe files for Python 3.6.4. I have tried almost everything on Google and YouTube and none of it seems to work for me. It seems as though a lot of the explanations either gloss over the most technical aspects of installing any modules that convert .py files into .exe files or they are outdated. Can someone give me a step by step example of how to convert my .py files into .exe files for Python 3.6.4.? I was able to convert the .py files easily for Python 3.4 but not 3.6.4. My file path is: This Pc > C: > Users > XXXX > AppData > Local > Programs > Python > Python36-32
How to convert a .py file into a .exe file for Python 3.6.4
0
0
0
133
48,214,572
2018-01-11T19:33:00.000
0
1
0
0
python,git,teamcity
48,214,746
3
false
0
0
By design, TeamCity resists making usernames/passwords available to build steps. This is not to say that you couldn't make it work that way, but be aware of the reason it does it. If anyone should be able to configure build jobs but shouldn't be able to see the password in question, you'd have a security problem. The best solution in my opinion is to use SSH when possible. TeamCity will be happy to run an ssh-agent while your build steps execute, configured with the private key (hence, the identity) of your choosing. Just being able to configure a build does not give a user the ability to see the private key (or any other sensitive credential) in this scenario, even if they code cleverly and maliciously. Of course this isn't always an option. But for basic git access it usually is, and you may want to consider it.
1
1
0
I have a script that runs git remote commands and require user/password credential, because the script runs by teamcity I was wondering if there is a way for teamcity to pass these credential to my script?
run a script using teamcity that require credentials
0
0
0
4,184
48,216,556
2018-01-11T22:05:00.000
0
0
1
0
python,jupyter-notebook,jupyter
51,660,162
1
false
0
0
I had the same problem. I don't know why, but after I deleted the IPython directory I could use the toolbar again. You can try: rm -rf ~/.ipython
1
1
0
I've been trying to find an answer to this problem for a while now, but I can't seem to find the right answer. When I am in jupyter, I can open a notebook and I can code in it, however, the taskbar that has "File, Edit, View, Insert, Cell, etc." is no longer functional. When I click the buttons, nothing happens. I can't make any new files as well for the same reason.
Jupyter Notebook Taskbar/toolbar not working
0
0
0
248
48,219,121
2018-01-12T03:24:00.000
8
0
0
0
python,tensorflow,neural-network,conv-neural-network
48,223,162
1
true
0
0
tf.layers.conv1d is used when you slide your convolution kernels along 1 dimension (i.e. you reuse the same weights, sliding them along 1 dimension), whereas tf.layers.conv2d is used when you slide your convolution kernels along 2 dimensions (i.e. you reuse the same weights, sliding them along 2 dimensions). So the typical use case for tf.layers.conv2d is a 2D image, and possible use cases for tf.layers.conv1d are, for example, convolutions in time or convolutions on piano notes.
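A shape-oriented sketch, assuming the TensorFlow 1.x tf.layers API; the sizes are arbitrary and only illustrate which dimensions each variant slides over.

import tensorflow as tf

signal = tf.placeholder(tf.float32, [None, 100, 16])    # e.g. a time series
image = tf.placeholder(tf.float32, [None, 64, 64, 3])   # e.g. an RGB image

out1d = tf.layers.conv1d(signal, filters=32, kernel_size=5, padding='same')
out2d = tf.layers.conv2d(image, filters=32, kernel_size=(5, 5), padding='same')

print(out1d.shape)   # (?, 100, 32): the kernel moved along one axis
print(out2d.shape)   # (?, 64, 64, 32): the kernel moved along two axes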
1
2
1
What is the difference in the functionalities of tf.layers.conv1d and tf.layers.conv2d in tensorflow and how to decide which one to choose?
Difference between tf.layers.conv1d vs tf.layers.conv2d
1.2
0
0
4,923
48,219,296
2018-01-12T03:54:00.000
4
0
0
0
python,python-3.x,tensorflow,deep-learning,keras
50,122,396
1
false
0
0
To my understanding, as long as each operator that you use in your error function already has a predefined gradient, the underlying framework will manage to calculate the gradient of your loss function.
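For illustration, here is a custom loss built only from Keras backend operations; because every op used has a known gradient, the backend differentiates the whole expression automatically via the chain rule. The layer sizes are arbitrary placeholders.

from keras import backend as K
from keras.layers import Dense
from keras.models import Sequential


def my_mse(y_true, y_pred):
    # Subtract, square and mean all have known gradients, so no manual
    # derivative of the loss is ever required.
    return K.mean(K.square(y_true - y_pred), axis=-1)


model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer='sgd', loss=my_mse)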
1
8
1
To my understanding, in order to update model parameters through gradient descend, the algorithm needs to calculate at some point the derivative of the error function E with respect of the output y: dE/dy. Nevertheless, I've seen that if you want to use a custom loss function in Keras, you simply need to define E and you don't need to define its derivative. What am I missing? Each lost function will have a different derivative, for example: If loss function is the mean square error: dE/dy = 2(y_true - y) If loss function is cross entropy: dE/dy = y_true/y Again, how is it possible that the model does not ask me what the derivative is? How does the model calculate the gradient of the loss function with respect of parameters from just the value of E? Thanks
Why doesn't Keras need the gradient of a custom loss function?
0.664037
0
0
1,747
48,220,414
2018-01-12T06:01:00.000
0
0
0
0
python,word2vec
48,220,586
2
false
0
0
If I understand this option correctly, you are resetting all the weights of the shared words and then training them on the C2 data. This would mean that all the information on the shared words from C1 is lost, which seems like a big loss to me (I don't know the corpus sizes). Also, how different are the two corpora? How big is the intersection? Do the corpora cover similar topics/areas or not? This could also influence your decision on whether losing all the info from the C1 corpus is acceptable. This seems like the more logical flow to me, but again the difference in corpora/vocabulary is important here. If a lot of words from C2 are left out because of the intersection, you can think of ways to add the unknown words one way or another. But in order to assess which option is truly 'best' in your case, create a setup where you can measure how 'good' one approach is relative to the other. In most cases this involves some similarity measure, but maybe your case is different.
2
0
1
Scenerio: A word2vec model is trained on corpus C1 with vocabulary V1. If we want to re-train the same model with another corpus C2 having vocabulary V2 using train() API, what will happen out of these two: For model, weights for V1 intersection V2 will be reset and re-training for with corpus C2 will come up with all together new weights For model, re-training with corpus C2 will be continued with the existing weights for vocabulary V1 intersection V2. Which one is correct hypothesis out of the above two?
Word2vec Re-training with new corpus, how the weights will be updated for the existing vocabulary?
0
0
0
224
48,220,414
2018-01-12T06:01:00.000
0
0
0
0
python,word2vec
51,941,579
2
false
0
0
Why not initialize each of the word2vec parameters with randomly generated numbers for each run? I did this, and with careful selection of the random numbers for each parameter (numFeatures, contextWindow, seed) I was able to get the random similarity tuples I wanted for my use case, simulating an ensemble architecture. What do others think of it? Please reply.
2
0
1
Scenerio: A word2vec model is trained on corpus C1 with vocabulary V1. If we want to re-train the same model with another corpus C2 having vocabulary V2 using train() API, what will happen out of these two: For model, weights for V1 intersection V2 will be reset and re-training for with corpus C2 will come up with all together new weights For model, re-training with corpus C2 will be continued with the existing weights for vocabulary V1 intersection V2. Which one is correct hypothesis out of the above two?
Word2vec Re-training with new corpus, how the weights will be updated for the existing vocabulary?
0
0
0
224
48,222,898
2018-01-12T09:12:00.000
1
0
0
0
android,python,linux,kivy,buildozer
48,235,557
1
false
0
1
Something like ~/.buildozer/android/platform/android-sdk-21/build-tools/19.1.0/zipalign -v 4 /home/kivy/Desktop/provaAPP/bin/yourapkname.apk youroutputapkname.apk, I think.
1
1
0
My path is cd ~/.buildozer/android/platform/android-sdk-21/build-tools/19.1.0/ the name of my apk is MyApplication-0.1-release-unsigned.apk the path of apk is /home/kivy/Desktop/provaAPP/bin
How to zipalign a Apk with kivy buildozer?
0.197375
0
0
457
48,224,210
2018-01-12T10:24:00.000
3
0
1
0
python,api,azure,naming,azure-resource-group
48,224,666
1
true
0
0
You cannot (at this point in time) rename resource group in Azure.
1
1
0
How can I, using the Azure Python API, rename an existing resource group? I have thoroughly researched the example code and both official and unofficial documentation, but I can't find any mention of "rename". Is this operation even supported? Sample code highly sought after, but will take any hint!
Rename resource group programmatically in Azure Python API
1.2
0
0
1,043
48,226,958
2018-01-12T13:09:00.000
2
0
1
1
python,linux,python-2.7,ubuntu,anaconda
48,227,608
3
false
0
0
It depends on what you really want to use. Install Miniconda instead of Anaconda and then install the required packages one by one using conda install; this will definitely reduce the size. :)
2
9
0
I'm using Ubuntu 16.04 LTS with Anaconda 2, which takes over 5 gb disk space. Is it normal to take such large space, or I can make it smaller by removing some unnecessary folders? P.S. Some commands such as "conda clean" have been used, I just wonder if there are some repeated modules installed...
Is there any way to make anaconda smaller?
0.132549
0
0
7,715
48,226,958
2018-01-12T13:09:00.000
8
0
1
1
python,linux,python-2.7,ubuntu,anaconda
59,863,417
3
false
0
0
I have seen Anaconda accumulate lots of garbage package caches and tarballs. To delete unused caches, tarballs and lock files and reduce the space used a little, you can try: conda clean -a
2
9
0
I'm using Ubuntu 16.04 LTS with Anaconda 2, which takes over 5 gb disk space. Is it normal to take such large space, or I can make it smaller by removing some unnecessary folders? P.S. Some commands such as "conda clean" have been used, I just wonder if there are some repeated modules installed...
Is there any way to make anaconda smaller?
1
0
0
7,715
48,228,084
2018-01-12T14:17:00.000
1
0
1
0
python,python-2.7,pyautogui
51,386,256
1
false
0
1
It's not possible: pyautogui drives the real mouse and keyboard at the OS level, so the key press goes to whichever window currently has focus, and you can't be doing something else at the same time...
1
0
0
I use the command pyautogui.press('enter') in a for loop that runs for many times and for a long time. The problem is that if I run the code and I want to do something else, happens that enter is pressed in the window I'm working on and not only on the terminal as I would like. Is there a way to run pyautogui.press('enter') only on the terminal and, in the meantime, work on other windows?
pyautogui in the same window
0.197375
0
0
227
48,231,233
2018-01-12T17:26:00.000
1
0
0
0
python,machine-learning,deep-learning,keras,rnn
48,231,802
1
true
0
0
The input_dim is just the shape of the input you pass to this layer. So: input_dim = 7 There are other options, such as: input_shape=(7,) -- This argument uses tuples instead of integers, good when your input has more than one dimension batch_input_shape=(batch_size,7) -- This is not usually necessary, but you use it in cases you need a fixed batch size (there are a few layer configurations that demand that) Now, the size of the output in a Dense layer is the units argument. Which is 128 in your case and should be equal to num_neurons.
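A short sketch of the shapes discussed above: 7 input features feeding a 128-unit layer. A plain Dense layer is shown here; a recurrent layer such as SimpleRNN would instead expect a 3-D input of shape (batch, timesteps, 7).

from keras.layers import Dense
from keras.models import Sequential

model = Sequential()
model.add(Dense(128, input_dim=7, activation='relu'))  # weight matrix: (7, 128)
model.add(Dense(1))
model.summary()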
1
0
1
I'm just starting with deep learning, and I've been told that Keras would be the best library for beginners. Before that, for the sake of learning, I built a simple feed forward network using only numpy so I could get the feel of it. In this case, the shape of the weight matrix was (len(X[0]), num_neurons). The number of features and the number of neurons. And it worked. Now, I'm trying to build a simple RNN using Keras. My data has 7 features and the size of the layer would be 128. But if I do something like model.add(Dense(128, input_dim=(7, 128)))it says it's wrong. So I have no idea what this input_dim should be. My data has 5330 data points and 7 features (shape is (5330, 7)). Can someone tell me what the input_dim should be and why? Thank you.
What's the input_size for the RNN Model in Keras
1.2
0
0
328
48,233,118
2018-01-12T19:48:00.000
0
0
0
0
python,kernel,convolution,gaussian,gaussianblur
50,582,786
1
true
0
0
You have to create a Gaussian filter of the size you want, e.g. 3x3 or 11x11, and then do the convolution on each colour channel. If you want to do it via the Fourier domain, you have to apply psf2otf (a MATLAB function for which Python ports exist) and multiply both matrices pointwise (on each channel).
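A hedged sketch using SciPy's ndimage module, blurring each channel with the same Gaussian while leaving the channel axis untouched; the image below is a random placeholder with the (64, 64, 3) shape from the question.

import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(64, 64, 3)              # stand-in for the real image
sigma = 1.5                                    # controls the kernel width

# sigma=0 along the last axis means channels are blurred independently.
blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
print(blurred.shape)                           # (64, 64, 3)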
1
0
1
Hello I'm working with images with Python. I want to convolve an image with a gaussian filter. The image is an array that it have the shape (64,64,3) 64x64 pixels and 3 channels of colour. How will it be the gaussian filter? which dimension? Do you know a function to define it and make the convolution with the image?
Python - Gaussian Kernel for colour images
1.2
0
0
1,583
48,233,768
2018-01-12T20:44:00.000
1
0
1
0
python,python-3.x
48,233,905
1
true
0
0
Assuming Linux, the file size is stored in the file's metadata (precisely, in the inode of the relevant file, maintained by the filesystem). So you don't need to open the file to get its size, and os.stat and similar methods obviously do not open the file to get it. They simply perform the stat(2) (and similar) system calls underneath to read the inode data.
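For illustration, all of the calls below read only metadata and never open the file; when scanning very large trees, os.scandir is often the cheapest option because it works from the directory listing. The file path is a placeholder.

import os

path = "example.bin"                      # any existing file
print(os.path.getsize(path))              # thin wrapper around os.stat()
print(os.stat(path).st_size)              # same number, same system call

total = 0
for entry in os.scandir("."):             # walk a directory cheaply
    if entry.is_file():
        total += entry.stat().st_size     # metadata lookup only, no open()
print("bytes in current dir:", total)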
1
1
0
Do both os.stat and os.path.getsize open and close the file to get this information? Is there a faster way to obtain the file size for large amounts of data without opening each file when I'm scanning the contents of terabytes of files?
Python file size without opening file
1.2
0
0
272
48,236,383
2018-01-13T02:22:00.000
0
0
0
0
python,pandas
62,427,224
3
false
0
0
Try using this: df.describe(include=['object', 'bool']).T (the .T here just transposes the result).
1
2
1
I am trying to get the summary statistics of the columns of a data frame with data type: Boolean. When I run:df.describe() it only gives me the summary statistics for numerical (in this case float) data types. When I change it to df.describe(include=['O']), it gives me only the object data type. In either case, the summary statistics for Boolean data types are not provided. Any suggestion is highly appreciated. Thanks
How to perform .describe() method on variables that have boolean data type in pandas
0
0
0
5,123
48,236,584
2018-01-13T03:08:00.000
9
0
1
0
python,python-3.x,anaconda
48,238,444
2
true
0
0
I'm running Windows 10 Pro Version 1703 OS Build 15063.786. I'm not sure if a Windows update is what caused the issue. Ultimately, all the shortcuts were pointing to things that no longer existed. I performed the following task. Uninstall Anaconda using the standard Windows uninstall process. Reboot. Scour the ENTIRE hard drive looking for anything remotely related to the Anaconda installation and manually delete it. I don't remember all the exact folders but I do remember specifically in C:\Users\Bob, there was a .Anaconda and a .IPython and a .Jupyter file. All those had to go. C:\Users\Bob\Anaconda3 also had to be manually deleted along with some files that were just hanging out in C:\ProgramData\Anaconda3 that somehow refused to be deleted. Manually delete the shortcuts in C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Anaconda3 (64-bit). Reboot. Reinstall Anaconda. Reboot. Everything is back to normal. Special thanks to sytech. Checking the environment variables didn't provide a solution but it put me on the right investigative track.
1
7
0
My installation of Anaconda has gone sideways for some reason. I noticed it when I tried to open Jupyter. The start menu shortcut was broken. When I clicked on it, I got the infamous opening and closing of a command line window. I can start it by clicking on the executable but this winds up with the notebook opening in a weird location that I can't move out of. When I try to uninstall and reinstall, I get the same behavior as before. I did some googling and found some Stackoverflow questions like this (which I've since lost so can't post the link), but when I followed the solutions offered, I just get more of the same. I thought there might be something that needed to be blown away from appdata but that didn't seem to fix the issue either.
Python - How can I completely uninstall Anaconda on Windows 10?
1.2
0
0
21,715
48,236,905
2018-01-13T04:26:00.000
0
0
0
0
python,django,excel,django-models
48,237,156
1
false
1
0
You can use xlrd to read Excel files. On the client side you just submit a form with a file input; on the server the uploaded file is available in request.FILES. Read the file, pass it to xlrd, and then process the sheets and cells of each sheet.
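A rough sketch of that flow; the view, form field and response text are all invented for illustration. xlrd can read the upload directly from memory via its file_contents argument.

import xlrd
from django.http import HttpResponse


def upload_members(request):
    if request.method == 'POST' and 'members_file' in request.FILES:
        uploaded = request.FILES['members_file']
        book = xlrd.open_workbook(file_contents=uploaded.read())
        sheet = book.sheet_by_index(0)
        rows = []
        for i in range(1, sheet.nrows):          # skip the header row
            rows.append(sheet.row_values(i))     # map these to Model fields
        return HttpResponse("Imported {} rows".format(len(rows)))
    return HttpResponse("POST an Excel file as 'members_file'", status=400)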
1
0
0
Ok, I had a look at the UploadFile Class documentation of the Django framework. Didn't find exactly what I am looking for? I am creating a membership management system with Django. I need the staff to have the ability to upload excel files containing list of members (and their details) which I will then manipulate to map to the Model fields. It's easy to do this with pandas framework for example, but I want to do it with Django if I can. Any suggestions. Thanks in advance
How do I upload and manipulate excel file with Django?
0
1
0
195
48,237,302
2018-01-13T05:51:00.000
0
0
0
0
python,mongodb,pymongo,data-analysis
48,237,575
2
false
0
0
I solved the problem using the allowDiskUse option, so this is my answer: pipeline_2 = [...] db.command('aggregate', 'statCollection', pipeline=pipeline_2, allowDiskUse=True, cursor={})
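For reference, the same option is also accepted by the regular aggregate() helper on a collection, which returns a lazily iterated cursor; the connection, collection name and pipeline below are placeholders.

from pymongo import MongoClient

client = MongoClient()                      # assumes a local mongod
coll = client.mydb.statCollection

pipeline = [
    {"$group": {"_id": "$category", "count": {"$sum": 1}}},
]

# allowDiskUse lets large aggregation stages spill to temporary disk files.
for doc in coll.aggregate(pipeline, allowDiskUse=True):
    print(doc)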
1
2
0
I need to run an aggregation query for a large collection which has 200,000+ data records. And I want to run it with pymongo. I tried out the preferred method in the docs. pipeline = [...] db.command('aggregate', 'statCollection', pipeline=pipeline_aggregate) But this returned an error saying pymongo.errors.OperationFailure: The 'cursor' option is required, except for aggregate with the explain argument.
How to run pymongo aggregation query for large(200,000+ records) collection?
0
1
0
1,279