Dataset columns (name: dtype, observed value/length range):
Q_Id: int64 (337 to 49.3M)
CreationDate: stringlengths (23 to 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: stringlengths (6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: stringlengths (6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: stringlengths (15 to 29k)
Title: stringlengths (11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
41,668,158
2017-01-16T01:09:00.000
1
0
0
0
python,python-3.x,tensorflow,deep-learning,data-science
41,763,164
1
false
0
0
Generally, deep learning algorithms are run on GPUs, which have limited memory, so only a limited number of input samples (commonly called the batch size in the algorithm) can be loaded at a time. A larger batch size usually reduces overall computation time, since the internal matrix multiplications are done in parallel on the GPU, and time is saved on reading/writing gradients and possibly the outputs of some other operations. Another probable benefit of a large batch size: in multi-class classification problems with many classes, a larger batch size can help the algorithm generalize better across the different classes (technically, it helps avoid over-fitting); while doing this, a standard technique is to keep a roughly uniform distribution of classes within each batch. Other factors that come into play when deciding the batch size are the learning rate and the type of optimization method. I hope this answers your question to some extent! (A small batching sketch follows below.)
1
0
1
I am learning TensorFlow (as well as general deep learning). I am wondering when do we need to break the input training data into batches? And how do we determine the batch size? Is there a rule of thumb? Thanks!
TensorFlow: how to determine if we want to break the training dataset into batches
0.197375
0
0
211
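A minimal sketch of the batching idea described in the answer above, using plain NumPy; the array shapes and batch size are arbitrary assumptions for illustration.

```python
import numpy as np

def iterate_minibatches(features, labels, batch_size=128, shuffle=True):
    """Yield (features, labels) slices of at most batch_size rows."""
    indices = np.arange(len(features))
    if shuffle:
        np.random.shuffle(indices)  # helps keep the class mix roughly uniform per batch
    for start in range(0, len(features), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield features[batch_idx], labels[batch_idx]

# Example usage with dummy data
X = np.random.rand(1000, 32)
y = np.random.randint(0, 10, size=1000)
for xb, yb in iterate_minibatches(X, y, batch_size=256):
    pass  # feed xb, yb to the training step here
```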
41,668,312
2017-01-16T01:35:00.000
9
0
0
0
python,amazon-web-services,amazon-elastic-beanstalk
41,682,747
1
true
1
0
Sign into the AWS Console, click Services in the top left, then click Elastic Beanstalk (under Compute), then click on your application (not applicationName-env, just ApplicationName). Next, click Application Versions on the left-hand side and you'll see a list of application versions. Under the Source section, you can access the source code of your application.
1
3
0
I just launched an AWS beanstalk django application after going through the steps provided by AWS. I used the default settings to create the application (the default "Welcome" page for Django). Is there any way for me to download or view the source code and project directories from the AWS console, or any other location?
How to view source code and project directories of AWS Beanstalk application?
1.2
0
0
2,471
41,671,657
2017-01-16T07:48:00.000
0
1
1
0
python,cgi,virtualenv,anaconda
41,747,994
1
false
0
0
If you can control the environment in which your server runs, you can set PYTHONPATH to the path of some directory you have permission to write in, and then install your third-party modules in that directory. (A sys.path-based sketch follows below.)
1
0
0
I have a python cgi file on server and it imports some packages that installed only locally by anaconda(because I've no root privilege on the server). The problem is when I call the file from web, it can not be executed because of those "missing" packages. How can I get through this if I can't have root privilege?
run python cgi from web with local packages installed by anaconda
0
0
0
614
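A minimal sketch of the workaround when the server environment itself cannot be changed: extend the module search path at the top of the CGI script. The Anaconda path below is an assumption; substitute the actual site-packages directory of your local install.

```python
#!/usr/bin/env python
import os
import sys

# Point Python at the locally installed packages (path is a guess; adjust to your install)
LOCAL_SITE_PACKAGES = os.path.expanduser("~/anaconda3/lib/python3.9/site-packages")
sys.path.insert(0, LOCAL_SITE_PACKAGES)

import numpy  # noqa: E402  -- now resolvable even though it is not installed system-wide

print("Content-Type: text/plain")
print()
print("numpy version:", numpy.__version__)
```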
41,671,972
2017-01-16T08:15:00.000
0
0
0
0
python,hostname,nat,dhcp,sdn
41,672,160
1
false
0
0
Try socket.gethostbyaddr() from the socket module. (A short sketch follows below.)
1
1
0
I'm using python to develop SDN I also wrote a virtual network function just like DHCP,NAT,Firewall,QoS But I want to get computer's hostname from IP like 192.168.2.XXX I try to use arp but it only can find IP and MAC address in packets. So how should I get hostname from specific IP? Should I try this in DHCP or NAT? Thanks a lot !!
How to get hostname from IP?
0
0
1
1,035
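A minimal sketch of the reverse lookup suggested in the answer above; it relies on the target IP having a name the resolver can find (e.g. a DNS PTR record), and the address is just an example.

```python
import socket

ip = "192.168.2.100"  # example address
try:
    hostname, aliases, addresses = socket.gethostbyaddr(ip)
    print("hostname:", hostname)
except socket.herror:
    print("no reverse DNS entry for", ip)
```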
41,676,908
2017-01-16T12:54:00.000
0
0
1
0
python,python-3.x,scheduling
41,678,978
2
true
0
0
I think a scheduling system like cron on Linux (I don't know about Windows, but I would expect a similar situation there) is the way to go because of its numerous advantages: you can rely on it, since it is a robust and mature system; it starts a fresh process every time, thus saving resources and protecting the system from a possible memory or file-descriptor leak in a long-running program; it mails the output and reports crashes to the owner; and you don't have to put your process in the background. The disadvantage: if you need to remember state between runs, you have to save it to a file. (A sketch of both approaches follows below.)
2
0
0
I have a small script which I want to run after every 15 minutes. I can achieve it in two ways: By putting whole code in while loop while True: and at end time.sleep(900). By Scheduling a job to run the script after every 15 minutes once. Both will work fine, but I am not sure whether the script keeps the resource busy while sleeping. Please suggest.... Which one is better approach?
time.sleep() or a scheduled script: which is the better approach?
1.2
0
0
178
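A minimal sketch contrasting the two approaches discussed above; the script path and schedule are assumptions.

```python
# Option 1: keep the interpreter alive and sleep between runs.
# time.sleep() just blocks the process; it uses essentially no CPU while waiting.
import time

def job():
    print("doing the periodic work")

while True:
    job()
    time.sleep(900)  # 15 minutes

# Option 2: let the OS scheduler start a fresh process each time.
# A crontab entry (Linux) that runs the script every 15 minutes would look like:
#   */15 * * * * /usr/bin/python3 /path/to/script.py
```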
41,676,908
2017-01-16T12:54:00.000
0
0
1
0
python,python-3.x,scheduling
41,677,863
2
false
0
0
Scheduling is more stable, while a Python script is more flexible. If you are debugging, monitoring something like browsing web pages, or constantly modifying the script, the first way is good: you can kill it easily when necessary. Intuitively I like that one because 15 minutes is short, so I guess you won't run it long-term. If you are setting up something like a periodic backup, logging, or a recurring check, the second way is better because you don't need to manage it once it is scheduled.
2
0
0
I have a small script which I want to run after every 15 minutes. I can achieve it in two ways: By putting whole code in while loop while True: and at end time.sleep(900). By Scheduling a job to run the script after every 15 minutes once. Both will work fine, but I am not sure whether the script keeps the resource busy while sleeping. Please suggest.... Which one is better approach?
time.sleep() or a scheduled script: which is the better approach?
0
0
0
178
41,679,182
2017-01-16T14:57:00.000
0
0
0
0
python,optimization,machine-learning,tensorflow,deep-learning
50,435,429
2
false
0
0
Both Theano and TensorFlow have built-in differentiation, so you only need to form the loss. (A sketch of a combined loss with automatic gradients follows below.)
1
0
1
For a standard machine learning problem, e.g, image classification on MNIST, the loss function is fixed, therefor the optimization process can be accomplished simply by calling functions and feed the input into them. There is no need to derive gradients and code the descent procedure by hand. But now I'm confused when I met some complicated formulation. Say we are solving a semi-supervised problem, and the loss function has two parts:Ls + lambda * Lu. The first part is a normal classification formulation, e.g, cross entropy loss. And the second part varies. In my situation, Lu is a matrix factorization loss, which in specific is:Lu = MF(D, C * W). And the total loss function can be written as: L = \sum log p(yi|xi) + MF(D, C * W) = \sum log p(yi|Wi) + MF(D, C * W) = \sum log p(yi|T * Wi + b) + MF(D, C * W) Where parameters are W, C, T and b. The first part is a classification loss, and the input xi is a raw of W, i.e. Wi, a vector of size (d, 1). And the label yi can be a one-hot vector of size (c, 1), so parameters T and b map the input to the label size. And the second part is a matrix factorization loss. Now I'm confused when I'm going to optimize this function using sgd. It can be solved by write down the formulation derive gradients then accomplish a training procedure from scratch. But I'm wondering if there is a simpler way? Because it's easy to use a deep learning tool like Tensorflow or Keras to train a classification model, all u need to do is build a network and feed the data. So similarly, is there a tool that can automatically compute gradients after I defined the loss function? Because deriving gradients and achieve them from scratch is really annoying. Both the classification loss and matrix factorization loss is very common, so I think the combination can be achieved thoroughly.
Is there some way to accomplish stochastic gradient descent without writing it from scratch?
0
0
0
115
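A rough sketch of letting the framework differentiate a combined loss, written against the TensorFlow 2 GradientTape API (an assumption; the original exchange predates it). All shapes, variable names, and the simple squared-error form of the matrix-factorization term are illustrative only.

```python
import tensorflow as tf

m, d, n, c = 200, 50, 1000, 10                    # assumed sizes
W = tf.Variable(tf.random.normal([m, d]))         # rows W_i play the role of the inputs x_i
C = tf.Variable(tf.random.normal([n, m]))
T = tf.Variable(tf.random.normal([c, d]))
b = tf.Variable(tf.zeros([c]))
D = tf.random.normal([n, d])                      # stand-in for the observed matrix
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
lam = 0.1

def train_step(x_idx, y):
    with tf.GradientTape() as tape:
        logits = tf.matmul(tf.gather(W, x_idx), T, transpose_b=True) + b
        ls = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
        lu = tf.reduce_mean(tf.square(D - tf.matmul(C, W)))  # simple MF reconstruction error
        loss = ls + lam * lu
    grads = tape.gradient(loss, [W, C, T, b])     # no hand-derived gradients needed
    optimizer.apply_gradients(zip(grads, [W, C, T, b]))
    return loss

loss = train_step(x_idx=tf.constant([0, 1, 2]), y=tf.constant([3, 1, 7]))
print(float(loss))
```

Calling train_step repeatedly performs SGD on the combined objective, with the framework computing all gradients automatically.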
41,680,144
2017-01-16T15:50:00.000
1
1
0
0
python,linux,cmake,ftdi
41,735,460
1
false
0
1
It worked after I uninstalled all versions of Python 3 and the python3-dev package.
1
0
0
I need to use libFTDI (www.intra2net.com/en/developer/libftdi/download/libftdi1-1.2.tar.bz2) for a project that I'm working on. All my current modules have been written in python2 and so i want libFTDI to work with python2 too, but the installation process automatically selects python3.5. Cmake is used to build the project. I can't seem to get it to work and apparently no one else has faced this problem before. Any help would be appreciated!
Is there a way to force libFTDI to make the python packages according to python2.7 instead of python3?
0.197375
0
0
401
41,680,636
2017-01-16T16:17:00.000
1
0
1
0
python,python-newspaper
46,494,795
2
false
0
0
You can type at the terminal pip install newspaper3k
1
4
0
I am trying to build a python program that will display various headlines from certain news sites. I used pip to install the module newspaper, but when I run the program, I get the error: ImportError: No module named newspaper Any ideas on how to fix this?
ImportError: No module named newspaper
0.099668
0
0
8,074
41,680,964
2017-01-16T16:35:00.000
1
1
0
0
php,python,web,automation
41,681,979
1
true
1
0
The web site could be created in any number of languages, PHP being one good choice. The server could be local, or, if you want to be able to interface globally, on a hosted server. How your Arduino connects to the server is the most telling part. If you use a WiFi or Ethernet shield, you can have it poll the server to get information (i.e. turn something on/off) and to post info (i.e. temp/humidity). If you want the server to be the controlling factor, have it use curl to poll a web server on the Arduino; the Arduino would respond with data, look for control parameters, etc. I've written several projects that use Arduino and Witty ESP8266 micro-controllers and interface with a web server. It's not that hard if you know everything you need to know about creating a web site, writing Arduino code, and HTTP communication. If you don't, there's a steep learning curve.
1
0
0
I have a long term project, for learning purposes, which is creating a virtual assistent, such as Siri or Google Now, but to control my home automation. I'll have an arduino controling some hardware (such lamps, sensors, etc.), and the assistent I'll write in Python. Until this step, I have the knowledge to do this. But thinking forward, when this is functional, would be great if I could add the feature to control remotely by mobile app and/or webpage, and not just by my desktop. The problem is I don't know which knowledge I need have to do this. I want to have a web page, or a mobile app that show me this webpage, where I can program buttons to turn on/off stuff, check the sensors data, etc. I should like to use PHP, cause as I said, this is for learning purposes. I supose that I'll need set a server in my home, and then access him through this app/page. So, which programming skills I need to accomplish this (considering that arduino runs in C and the assistent will be scripted in Python)? Thanks.
Setting a remote control panel page for home automation
1.2
0
0
164
41,682,737
2017-01-16T18:22:00.000
0
0
0
0
python,r,social-networking
41,684,697
2
false
0
0
Decide how you want the graph to represent the data. From what you've described, one approach would be to have nodes in your graph represent people and edges represent grants. In that case, create a pairwise list of people who are on the same grant. Edges are bidirectional by default in igraph, so you just need each pair once. (A pandas-based sketch of building that edge list follows below.)
1
0
1
I have a two-mode (grant X person) network in csv format. I would like to create personXperson projection of this network and calculate some network measures (including centrality measures of closeness and betweenness, etc.). What would be my first step? I am guessing creating 2 separate files for Nodes and Edges and run the analysis in R using igraph package?! Here is a super simplified version of my data (my_data.csv). Grant, Person A , 1 A , 2 B , 2 B , 3
network analysis: how to create nodes and edges files from csv
0
0
0
1,405
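A small sketch (an illustration, not the answer's own code) of turning the two-mode grant-person CSV into a pairwise person-person edge list with pandas; the column names match the sample data shown in the question.

```python
import pandas as pd

df = pd.read_csv("my_data.csv", skipinitialspace=True)   # columns: Grant, Person

# Self-join on Grant to pair up people who share a grant, keeping each pair once
pairs = df.merge(df, on="Grant", suffixes=("_a", "_b"))
edges = pairs[pairs["Person_a"] < pairs["Person_b"]][["Person_a", "Person_b"]]
edges = edges.drop_duplicates()

edges.to_csv("edges.csv", index=False)   # feed this to igraph as the edge list
```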
41,683,334
2017-01-16T19:02:00.000
0
0
1
0
python,module,keyboard,hotkeys,emulation
55,424,469
2
false
0
1
Have you tried using CTRL + SHIFT + C and CTRL + SHIFT + V? If not, give those a try; they should work. When I was copying from the terminal, CTRL + SHIFT + C worked for me.
2
1
0
I have the latest version of Python 3 and the PyAutoGUI module on Windows 7 x64. Hotkeys like ALT + F4 and CTRL + SHIFT + ESC work pretty well, but from what I've noticed, CTRL + C and CTRL + V don't work at all!
CTRL+V doesn't work in PyAutoGUI, Python 3 on Windows
0
0
0
3,053
41,683,334
2017-01-16T19:02:00.000
1
0
1
0
python,module,keyboard,hotkeys,emulation
64,219,957
2
false
0
1
choose "EN" default keyboard layout in your OS
2
1
0
I have the latest version of Python 3 and the PyAutoGUI module on Windows 7 x64. Hotkeys like ALT + F4 and CTRL + SHIFT + ESC work pretty well, but from what I've noticed, CTRL + C and CTRL + V don't work at all!
CTRL+V doesn't work in PyAutoGUI, Python 3 on Windows
0.099668
0
0
3,053
41,689,297
2017-01-17T04:52:00.000
1
0
1
1
python,cluster-computing,pbs,qsub,supercomputers
41,753,582
2
false
0
0
If you are using PBS Professional, try exporting PYTHONPATH in your environment and then submitting the job with the "-V" option of qsub; this makes qsub take all of your environment variables and export them for the job. Otherwise, try the "-v" option (note the lowercase v) and pass your environment variable key/value pair with it, e.g. qsub -v HOME=/home/user job.sh
1
3
1
I have an account on a supercomputing cluster where I've installed some packages using e.g. "pip install --user keras". When using qsub to submit jobs to the queue, I try to make sure the system can see my local packages by setting "export PYTHONPATH=$PYTHONPATH:[$HOME]/.local/lib/python2.7/site-packages/keras" in the script. However, the resulting log file still complains that there is no package called keras. How can I make sure the system finds my packages?
When using qsub to submit jobs, how can I include my locally installed python packages?
0.099668
0
0
2,181
41,689,872
2017-01-17T05:52:00.000
1
1
0
1
python,allure
41,712,644
1
false
0
0
Since the Allure CLI script calls a Java application, this becomes a Python-to-Java problem. There are a few solutions, such as Py4J, that can help you with that. Keep in mind that most of these solutions rely on the Java application already running before it can be called from Python.
1
1
0
Is there a way to call allureCLI from Python? I would like to use python instead than shell scripting to run multiple reports. I could use Popen but I am having so many issues with it, that I would rather avoid it unless there is no other way around
allure command line from python
0.197375
0
0
824
41,692,655
2017-01-17T08:51:00.000
2
0
1
0
python,pydoc
56,145,781
4
false
1
0
When you get an import error from pydoc, there are a few steps for tracking down the bug. First, check that the Python version of your files/packages and the pydoc you are running match; some environments have Python 2 and Python 3 installed at the same time, and basically the alias pydoc is for Python 2 while pydoc3.X is for Python 3. Second, check that the imported file exists; sometimes you are simply importing a nonexistent file/module. Third, check that the file/module can be imported by python itself. If you are documenting a custom module in Python 2, also check that __init__.py exists in every directory of your main source code.
1
4
0
I have following files in same folder: config.py, init.py, asc.py and mainfile.py contents of init.py file: from mainfile import * config and asc files are imported in mainfile.py. I am creating html using pydoc as: python -m pydoc -w mainfile.py I am getting error as: <type 'exceptions.ImportError'>: No module named config If i will remove config, it works fine and html gets created but does not work with config file. What is the problem?
Import Error on using pydoc
0.099668
0
0
6,605
41,694,723
2017-01-17T10:30:00.000
0
0
0
0
python,api,graph,zabbix
41,695,124
1
false
0
0
Get the existing graph items with graph.get first, then update the graph and pass all the existing items (include gitemid for these items) with your new items added.
1
1
0
I have a graph created in zabbix. I want to update this graph to include items from other hosts. For that I am calling graph.update() zabbix API using a python script. The method is updating the graph item instead of adding/appending to the existing graph item list. Does any one has idea about this ? graph.update(graphid=graph_id,gitems=[{"itemid" :"10735", "color":"26265b"}]) where graph_id is and id of existing graph. Thanks in advance!!
add graph items to existing graph in zabbix using API's
0
0
1
601
41,696,268
2017-01-17T11:44:00.000
3
1
1
0
python,revit,revitpythonshell
41,714,110
2
true
0
0
I agree with your assumption that these two environments are probably completely separate and independent of each other. Since the Python code that you write within them is pure .NET, it is easy to reference other .NET modules, assemblies, libraries, whatever you want to call them. They would reside in .NET assembly DLLs. Using pip to add external Python libraries is a completely different matter, however, and rather removed from the end user environment offered by these two. To research that, you will have to dive into the IronPython documentation and see how that can be extended 'from within' so to speak.
2
1
0
Both RevitPythonShell scripts and Revit Python Macros are relying on Iron Python. In both cases, at least in Revit 15, neither require the installation of IronPython. I believe RevitPythonShell installs with the capacity to process the ironPython script (rpsruntime.dll). Revit must install with the same capacity. Therefore, I assume the two programming environments are being executed from separate sources. Is this accurate? How would I be able to install an external module as I would typically use pip for? Another words, if I wanted to use the docx, pypdf2, tika, or other library - is this possible?
Revit Python Macros and RevitPythonShell Modules, or loaded packages
1.2
0
0
369
41,696,268
2017-01-17T11:44:00.000
1
1
1
0
python,revit,revitpythonshell
41,719,461
2
false
0
0
For pure-Python modules, it can be as easy as having a CPython installation (e.g. Anaconda), pip-installing into that, and then adding that installation's site-packages to the search path in RPS. For modules that include native code (numpy and many others), this gets trickier and you should look up how to install those for IronPython. In the end, adding the installed module's location to your search path will give you access in RPS. (A short search-path sketch follows below.)
2
1
0
Both RevitPythonShell scripts and Revit Python Macros are relying on Iron Python. In both cases, at least in Revit 15, neither require the installation of IronPython. I believe RevitPythonShell installs with the capacity to process the ironPython script (rpsruntime.dll). Revit must install with the same capacity. Therefore, I assume the two programming environments are being executed from separate sources. Is this accurate? How would I be able to install an external module as I would typically use pip for? Another words, if I wanted to use the docx, pypdf2, tika, or other library - is this possible?
Revit Python Macros and RevitPythonShell Modules, or loaded packages
0.099668
0
0
369
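A tiny sketch of the search-path step mentioned above, run inside the RevitPythonShell (IronPython) console. The Anaconda path, the file name, and the use of the classic PyPDF2 API (a pure-Python package) are all assumptions for illustration.

```python
import sys

# site-packages of the CPython install where "pip install PyPDF2" was run (path is a guess)
sys.path.append(r"C:\Users\me\Anaconda3\Lib\site-packages")

import PyPDF2  # pure-Python package, so IronPython can import it

reader = PyPDF2.PdfFileReader(open(r"C:\Temp\drawing.pdf", "rb"))
print(reader.getNumPages())
```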
41,697,053
2017-01-17T12:23:00.000
1
0
1
0
python,isinstance
41,697,511
3
false
0
0
If MyClass isn't defined, then you have no way to reference its type, and therefore no way to verify that type(a) has the correct value. (A sketch of a name-based workaround follows below.)
1
0
0
Assume that class MyClass is sometimes, but not always, defined. I have a function foo(a=None) in which argument a can be None, a string, or an object of MyClass. My question is: If MyClass is not defined in my Python session, how can I check the type of argument a in a fashion similar to isinstance without getting a NameError? Note on duck-typing: I am deliberately limiting the function. I'm using Python 2.6.x and Updating is not an option. A forward-compatible solution (especially for 2.7.x) is highly appreciated.
Use isinstance with an undefined class
0.066568
0
0
1,031
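One possible workaround (an illustration, not the answer's own code): fall back to comparing the class name, or guard the isinstance check so it only runs when the class is actually defined. The sketch below sticks to syntax that also works on Python 2.6.

```python
def foo(a=None):
    if a is None:
        return "got nothing"
    if isinstance(a, str):
        return "got a string"
    # MyClass may not exist in this session, so compare by name instead of by type object
    if type(a).__name__ == "MyClass":
        return "got a MyClass instance"
    raise TypeError("unsupported argument type: %r" % type(a))

# Alternative: only call isinstance when the name is actually bound
def is_myclass(obj):
    cls = globals().get("MyClass")
    return cls is not None and isinstance(obj, cls)
```

Note that the name comparison will not match subclasses of MyClass; the globals()-based variant does.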
41,699,840
2017-01-17T14:35:00.000
0
0
0
0
python,scripting,abaqus,odb
41,927,049
1
false
0
0
Problem solved! I had to use 'ElementSet . ALL ELEMENTS' instead of 'ElementSet ALL ELEMENTS', in this line: histRegion = crackODB.steps[crackStep].historyRegions['ElementSet . ALL ELEMENTS']
1
0
0
I prepared a Python script to get initial data from a CAE file and modify that for another analysis. To do this, I created a historyOutput command to obtain stress intensity factor and I need to use these values at the post-processing inside of python code, but I have problem with historyRegion definition, could you please give me an advice of why this happen? Here are the corresponding lines of codes: crack tip set myAssembly.Set(nodes = crackTipNode, name = 'crackTip') Contour Integral definitions: a.engineeringFeatures.ContourIntegral(name='Crack-1', symmetric=OFF, crackFront=crackFront, crackTip=crackTip, extensionDirectionMethod=Q_VECTORS, qVectors=((v11[7], a.instances['crackedPart'].InterestingPoint(edge=e11[8], rule=MIDDLE)), ), midNodePosition=0.5, collapsedElementAtTip=NONE) Request history output for the crack myModel.HistoryOutputRequest(name = 'SIF', createStepName = crackStep, contourIntegral = 'Crack-1',numberOfContours = contours, contourType = K_FACTORS, kFactorDirection = KII0, rebar = EXCLUDE, sectionPoints = DEFAULT) Read from history output crackODB = session.openOdb(name = jobName, path = jobName + '.odb', readOnly = True) histRegion = crackODB.steps[crackStep].historyRegions['Crack-1'] I put the contourIntegral name for historyRegions, but I get "KeyError: Crack-1" error. I don't what else to do? Any advice would be really appreciated. Thanks,
How to define history region in history output definition for Abaqus Python scripting?
0
0
0
1,997
41,699,897
2017-01-17T14:38:00.000
0
0
1
0
python,list
41,700,079
2
false
0
0
You can use B = [[None]*m]*n, which creates a list of n rows of m columns of None. Be aware, though, that this repeats the same inner list n times, so mutating one row changes all of them; B = [[None]*m for _ in range(n)] gives independent rows. (A short demonstration follows below.)
1
0
1
Given a list A with n rows each having m columns each. Is there a one liner to create an empty list B with same structure (n rows each with m components)? Numpy lists can be created/reshaped. Does the python in-built list type support such an argument?
Initialize an empty list of the shape/structure of a given list without numpy
0
0
0
1,097
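A small demonstration of the aliasing caveat noted above; the sizes are arbitrary.

```python
n, m = 3, 4

B_shared = [[None] * m] * n            # n references to one and the same row
B_shared[0][0] = "x"
print(B_shared)                        # every row shows "x" in column 0

B = [[None] * m for _ in range(n)]     # n independent rows
B[0][0] = "x"
print(B)                               # only the first row changes
```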
41,701,274
2017-01-17T15:40:00.000
1
0
0
1
python,http,tornado
41,704,300
1
true
0
0
No, Tornado does not support HTTP/1.1 pipelining. It won't start serving the second request until the response to the first request has been written.
1
2
0
As you may know HTTP/1.1 can allow you leave the socket open between HTTP requests leveraging the famous Keep-Alive connection. But, what less people exploit is the feature of just launch a burst of multiple sequential HTTP/1.1 requests without wait for the response in the middle time, Then the responses should return to you the same order paying the latency time just one time. (This consumption pattern is encouraged in Redis clients for example). I know this pattern has been improved in HTTP/2 with the multiplexing feature but my concern right now is if I can use that pipelining pattern with the tornado library exploiting its async features, or may be other library capable?
Can I pipeline HTTP 1.1 requests with tornado?
1.2
0
0
301
41,701,694
2017-01-17T16:00:00.000
1
0
0
0
python,c++,qt,layout,pyqt
42,113,740
1
true
0
1
The way to make sure the old layout (or a widget from it) is really gone is sip.delete(somewidget). This deletes the C++ object itself (because sometimes it keeps existing on its own). (A short sketch follows below.)
1
0
0
Welcome! I am trying to create my own gui app using PyQT(5 i guess). Well, mainwindow consists of menubar, statusbar and central widget. Central widget is QTabWidget. In each tab there is it's own workplace widget. the program itself allows to create a pipline of OpenFOAM stages, set the options for each process and launch it. It's all is to be made in one widget that is the only widget in a tab. The problem i encountered is connected to QLayout. I am using QHBoxLayouts and QVBoxLayouts combination on each step of launching the task. When I make initial self.setLayout(somelayout1) it works fine. But as i make next steps in different methods of this widget's class self.setLayout(somelayout2), self.setLayout(somelayout3) and so on, the new layout becomes being drawn on the top of all the previous layouts. More than this: all the layers' parts of previous layouts that not covered by new one are remain active! haven't found any working method for disabling old layout of the widget, or better: removing it. Already tried even creating a layout container of 1 element and influencing it with self.layout().removeItem(0) and self.layout().setLayout(newlayout) (or .inserLayout(newlayout), but there is no difference. Is there any working method to change the layout of the widget without appearing the old one on a backside? thanks for any help. p.s.: self.setStyleSheet("QWidget { background-color: rgb(255, 255, 255) }") neither QObjectCleanupHandler().add(self.layout()) both make no effect.
re-setting layout for widget pyqt
1.2
0
0
651
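A rough sketch of swapping a widget's layout by deleting the old C++ layout object first. This is an illustration based on the answer above, not the asker's code; the class and method names are assumptions, and the sip import is handled for both older and newer PyQt5 builds.

```python
try:
    from PyQt5 import sip          # newer PyQt5 wheels ship sip here
except ImportError:
    import sip                     # classic top-level sip module

from PyQt5.QtWidgets import QHBoxLayout, QLabel, QVBoxLayout, QWidget


class Workspace(QWidget):
    def set_stage_layout(self, new_layout):
        old = self.layout()
        if old is not None:
            # Detach the children, then destroy the old layout's C++ object
            while old.count():
                item = old.takeAt(0)
                w = item.widget()
                if w is not None:
                    w.setParent(None)
            sip.delete(old)
        self.setLayout(new_layout)


# Usage sketch (inside a running QApplication):
# ws = Workspace()
# first = QVBoxLayout(); first.addWidget(QLabel("stage 1")); ws.set_stage_layout(first)
# second = QHBoxLayout(); second.addWidget(QLabel("stage 2")); ws.set_stage_layout(second)
```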
41,704,515
2017-01-17T18:34:00.000
0
0
1
0
java,python,eclipse,plugins,pydev
41,715,898
1
true
1
0
If you mean in an Eclipse plugin with a dependency on PyDev, then yes, it should be possible... Take a look at the test-cases for this: com.python.pydev.refactoring.refactorer.refactorings.renamelocal.RenameClassRefactoringTest
1
0
0
Is possible to call the pydev refactor (passing a new/old class name) from a Java code, such as clicking in a button in an Eclipse plugin?
Running PyDev refactor from java code?
1.2
0
0
78
41,708,061
2017-01-17T22:20:00.000
0
0
0
0
excel,python-2.7
41,767,777
1
false
0
0
Sorted it out myself and would like to share with you guys: xlwings actually does the job, while openpyxl/xlrd seem to have failed around this issue. (A short xlwings sketch follows below.)
1
0
0
I am using Python openpyxl package to read values from Excel cells. Cells with formulas always return formula strings instead of the calculated values. I'd rather avoid using 'data_only=True' when loading the workbook as it wipes out all the formulas and I do need to retain some of them. Seemingly a problem not so difficult but turns out to be quite challenging. Appreciate it very much if anyone can shed some lights on this. Thanks a lot!
Python to read Excel cell with the value calculated by the formulas but not the formula strings themselves
0
1
0
1,063
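A minimal xlwings sketch of reading a cell's calculated value. xlwings drives an actual Excel instance, so Excel must be installed; the file name and cell address are assumptions.

```python
import xlwings as xw

wb = xw.Book("report.xlsx")           # opens the workbook in Excel
sheet = wb.sheets[0]

calculated = sheet.range("B2").value  # the evaluated result of the formula in B2
formula = sheet.range("B2").formula   # the formula string itself, still available if needed

print(calculated, formula)
wb.close()
```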
41,710,540
2017-01-18T02:43:00.000
0
0
1
0
python,python-3.x,pip,pyautogui
41,710,713
1
true
0
1
If you have multiple versions of Python installed you need to find your versions and rename them and their Pips. In windows the path is, C:\\Users\USERNAME\AppData\Local\Programs\Python\Python3x-32. The x should be replaced with the Python version and USERNAME with your username. On Mac it's located in /usr/local/bin/python. On Linux it should be in /usr/bin/python. The location might vary depending on OS and Python version. Rename the files python.exe/python and pip.exe/pip so that each file is different. I named mine python35.exe, python2.exe and python.exe(for 3.5, 2.7 and 3.6). Now when you execute your pip command use, pip34 install pyautogui or whatever you named the file. Or if you really want to you can go the painful way of renaming all the path variables, but I won't explain that here.
1
1
0
I, as it will soon be obvious, am a total newb when it comes to Python. I am running python version 3.5 on Windows 10, 64 bit. I installed the PyAutoGui module for a small project I am working on. At first, everything worked perfectly. But now it appears that PyAutoGui is crashing when it clicks. I suspect that it's because PyAutoGui is only intended for use up to Python 3.4. In order to rectify this, I downloaded Python 3.4. Unfortunately, however, when I try to install PyAutoGui (using pip install pyautogui), it tells me that it's already been installed because it sees it in the Python 3.5 folder. My question is this: How do I install PyAutoGui in Python 3.4 with it already installed in Python 3.5? Assume that I know virtually nothing about how to install a module manually without using pip Thanks in advance!
Installing PyAutoGui on multiple versions of Python
1.2
0
0
1,047
41,715,835
2017-01-18T09:36:00.000
1
0
0
0
python,scikit-learn,svm
49,909,264
3
false
0
0
Hope I'm not too late. OCSVM, like SVM generally, is resource hungry, and the relationship between training-set size and time is quadratic (the numbers you show follow this). If you can, see if Isolation Forest or Local Outlier Factor work for you; if you are considering applying this to a much larger dataset, I would suggest creating a custom anomaly-detection model that closely resembles what these off-the-shelf solutions do. By doing this you should be able to work either in parallel or with threads. (An Isolation Forest sketch follows below.)
1
4
1
I am using the Python SciKit OneClass SVM classifier to detect outliers in lines of text. The text is converted to numerical features first using bag of words and TF-IDF. When I train (fit) the classifier running on my computer, the time seems to increase exponentially with the number of items in the training set: Number of items in training data and training time taken: 10K: 1 sec, 15K: 2 sec, 20K: 8 sec, 25k: 12 sec, 30K: 16 sec, 45K: 44 sec. Is there anything I can do to reduce the time taken for training, and avoid that this will become too long when training data size increases to a couple of hundred thousand items ?
SciKit One-class SVM classifier training time increases exponentially with size of training data
0.066568
0
0
2,038
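A small scikit-learn sketch of the Isolation Forest alternative suggested above, applied to a TF-IDF matrix; the corpus and parameters are placeholders.

```python
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

lines = ["normal log line", "another normal line", "something very strange indeed"]

X = TfidfVectorizer().fit_transform(lines)

clf = IsolationForest(n_estimators=100, random_state=0)
clf.fit(X.toarray())                 # scales far better with sample count than OneClassSVM

labels = clf.predict(X.toarray())    # 1 = inlier, -1 = outlier
print(labels)
```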
41,716,121
2017-01-18T09:51:00.000
0
0
1
0
python,pycharm,virtualenv,anaconda
41,716,472
2
false
0
0
This is the normal steps that i follow when i use virutalenv with PyCharm I normally work on ubuntu First, i always create a separate environment for every project using the command virtualenv "environment_name" from the command line. Activate the environment using the command - source environment_name/bin/activate in ubuntu. Suppose if i want to start a django project, i create the project using the command django-admin startproject project_name Open this project in pycharm. go to settings-> interpreter in pycharm. choose "add local" interpreter from the settings. It will open a pop-up. Go to the directory of the environment you just created and select the correct python interpreter you want to use. now if you want to install a new package, you can go to interpreter settings and add package from the pycharm or you can fire up the command line, activate the environment and run pip install package_name. Once the package is installed, it will also show in pycharm. if you are using Windows OS, use powershell to execute the above commands. The only difference will be in activating the environment. In windows, to activate an env use environment_name/Scripts/activate EDIT: Same goes anaconda environments also, the easy way is to manage the environment from the terminal and pycharm will show the packages changes in the interpreter settings.
1
0
0
I would like to use Anaconda and the newest Pycharm 2016.3 together. I would like to be able to manage packages in settings->interpreter. If this is not supported, I would like to know the workflow of using these two together. According to another SO question, Pycharm 5 used to have a 'Create conda env' in the interpreter settings, but this seems to be gone now. I have tried: 1) Manually creating a virtual environment with 'conda create --name project numpy' and I add the interpreter ('~/anaconda2/envs/bin/python', the location of python for my created virtual environment. However, pycharm doesn't allow me to add any packages through settings->interpreter. Running an 'import numpy' through the console shows errors that are pointing to /usr/bin/python, not my virtual env python, and an error 'ImportError: cannot import name multiarray'. I'm not sure what package to add using conda from the cli, and the pycharm frontend doesn't add packages 2) I've tried the same as 1) but with my global anaconda python as the interpeter ('~/anaconda2/bin/python') and it doesn't seem to be able to connect to the console. 3) Creating a virtual environment through pycharm directly. I would like to remove my default pythons (/usr/bin/python2.7/3.5 from the list of interpreters in pycharm) for debugging purposes but it won't let me and it seems to be showing packages that my anaconda virtual env doens't have installed. Is there a way to manage my VIRTUAL enviornment in Conda using pycharm? If not, what steps do I take to make these two play well together assuming I can't manage it through pycharm interepreters settings.
How to manage (ana)conda with pycharm 2016.3 in linux
0
0
0
541
41,718,767
2017-01-18T11:54:00.000
1
0
1
0
python,doc2vec
41,732,675
1
false
0
0
Yes, that is correct, provided words is a list of word strings, preprocessed/tokenized the same way the training data was fed to the model during training. (A short gensim sketch follows below.)
1
1
1
For a small project I need to extract the features obtained from Doc2Vec object in gensim. I have used vector = model.infer_vector(words) is it correct?
Extract the features from Doc2Vec in Python
0.197375
0
0
439
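A minimal gensim sketch of inferring a document vector as discussed above; the model path and example text are assumptions.

```python
from gensim.models import Doc2Vec

model = Doc2Vec.load("my_doc2vec.model")          # hypothetical path to a trained model

text = "extract features from this small document"
words = text.lower().split()                      # tokenize the same way as the training corpus

vector = model.infer_vector(words)                # the document's feature vector
print(vector.shape)
```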
41,724,719
2017-01-18T16:37:00.000
2
1
1
0
python,methods
41,724,860
1
true
0
0
Differently from other languages (Java, C++), there are no "private" methods in Python (i.e. methods that cannot be called outside the class that defines them), so any caller can call an object's internal methods. By convention, you should not call those underscore-named methods directly, to avoid unwanted consequences the class's programmer did not anticipate. (A short example follows below.)
1
1
0
When using the dir() method in python, why are some of the methods I am returned with surrounded with underscores? Am I supposed to use these methods? For Example, dir([1,2,3,4,5,6]) returns: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] The last nine of these methods are the ones which are conventionally used. When I check the documentation, I see very little in regards to what these methods are: If the object has a method named __dir__(), this method will be called and must return the list of attributes. This allows objects that implement a custom __getattr__() or __getattribute__() function to customize the way dir() reports their attributes. Thank you.
What is the difference between underscored methods and non-underscored methods, specifically those that are listed by the dir() method?
1.2
0
0
159
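A tiny illustration of the dunder ("double underscore") methods listed by dir(): the built-ins and operators call them for you, which is the conventional way to use them.

```python
items = [1, 2, 3, 4, 5, 6]

print(len(items))        # the built-in calls items.__len__() under the hood
print(items.__len__())   # legal, but the built-in spelling is preferred

print(items + [7])       # list concatenation is really items.__add__([7])
print(items.__add__([7]))

print(3 in items)        # membership testing uses items.__contains__(3)
```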
41,725,993
2017-01-18T17:40:00.000
0
0
0
0
python,machine-learning
41,740,148
1
false
0
0
Any aggregative operation on the word vectors can give you a sentence vector. You should consider what you want your representation to mean and choose the operation accordingly. Possible operations are summing the vectors, averaging them, concatenating them, etc. (A weighted-average sketch follows below.)
1
0
1
I am interested to find sentence vectors using word vectors.I read that by multiplying each word's tf-idf weights with their vectors and finding their average we can get whole sentence vector. Now I want to know that how these tf-idf weights helps us to get sentence vectors i.e how these tf-idf and sentence vector are related?
How tf-idf is relevant in calculating sentence vectors
0
0
0
878
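A small NumPy sketch of the tf-idf-weighted average mentioned in the question: each word vector is scaled by that word's tf-idf weight, and the scaled vectors are averaged, so rarer (higher-weight) words pull the sentence vector harder. The toy vectors and weights are made up for illustration.

```python
import numpy as np

# Assumed inputs: word -> embedding, and word -> tf-idf weight for this sentence
word_vecs = {"cats": np.array([0.2, 0.1]), "sleep": np.array([0.0, 0.5])}
tfidf = {"cats": 1.7, "sleep": 0.4}

def sentence_vector(tokens, word_vecs, tfidf):
    weighted = [tfidf[w] * word_vecs[w] for w in tokens if w in word_vecs and w in tfidf]
    if not weighted:
        dim = len(next(iter(word_vecs.values())))
        return np.zeros(dim)
    return np.mean(weighted, axis=0)

print(sentence_vector(["cats", "sleep"], word_vecs, tfidf))
```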
41,730,490
2017-01-18T22:20:00.000
0
0
0
0
python,facebook,facebook-graph-api,facebook-ads-api
41,785,371
1
true
1
0
Okay, so I just ended up filling the form and when asked for instructions of how to run the demo I explained that the app wasn't "demonstrable". And the app tier was upgraded automatically.
1
1
0
I am developing an app using the Facebook Ads api. Initially it is just a Python script which downloads ads performance information from my client and stores it in a database, where it is combined with information from other sources in order to provide better reporting for them. It is not a web app (yet). It is just a Python script, working for a single business user account (yet!). The developer token clearly isn't enough because it limits access to 5 Ads Accounts. I would like to upgrade to basic account however the approval form requires things such as providing a video demo, website of the app, privacy policy, etc. The app doesn't have those because it is not intended for general use (yet!!). It seems that only an app focused on other Facebook users can apply to a Basic access token. Is this so? How can I upgrade my access token if my app is just a Python script running on a server? Thank you! :-)
Upgrading Facebook Ads token to "Basic"
1.2
0
0
75
41,732,242
2017-01-19T01:24:00.000
2
0
0
1
apache-kafka,kafka-consumer-api,kafka-python
41,732,601
1
false
0
0
The new consumer is single-threaded (excluding the background heartbeat thread), so no equivalent config is offered. By the way, 'num.consumer.fetchers' does not specify the number of fetcher threads as the doc says. It actually controls the possible maximum number of fetcher threads that Kafka can create.
1
1
0
In old consumer configs of Kafka, there is a property num.consumer.fetchers in order to configure the number fetcher threads used to fetch data. In the new consumer configs of Kafka, is there any property with this same function? And if not, how is the new consumer working on that?
Equivalent property `num.consumer.fetchers` for the new kafka consumer
0.379949
0
0
822
41,732,396
2017-01-19T01:42:00.000
1
1
1
0
python-2.7,python-3.x,validation,syntax
41,732,436
1
false
0
0
Do you do this at build time? If so, you can try running 2to3 and parsing its output to determine whether the file is valid Python 2 code. (An alternative subprocess-based check follows below.)
1
0
0
I am in Python3. I have a bunch of Python2 .py files. For each of these files, I need to check whether the contents are valid Python2 code and return it as a boolean. The files are small scripts, sometimes they may import things or have a function or class, but mostly they are pretty simple. How can I do this?
How to validate syntax of Python2 code from Python3?
0.197375
0
0
279
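An alternative to parsing 2to3 output (a different technique from the answer's suggestion): ask a Python 2 interpreter to compile the file without executing it. It assumes a python2 binary is available on PATH, and the file name is hypothetical.

```python
import subprocess

def is_valid_python2(path):
    """Return True if the file parses as Python 2 source."""
    # compile() raises SyntaxError on invalid code, which makes the child process exit non-zero
    snippet = "import sys; compile(open(sys.argv[1]).read(), sys.argv[1], 'exec')"
    return subprocess.call(["python2", "-c", snippet, path]) == 0

print(is_valid_python2("legacy_script.py"))
```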
41,732,616
2017-01-19T02:12:00.000
0
0
0
0
python-2.7,python-3.x
45,281,197
1
false
1
0
First verify that the login is actually happening by checking the redirected link with print br.geturl(). If it is logging in and you get an HTTP error in your console, use exception handling for the HTTP error, which will get you to your intended page.
1
0
0
I am having trouble logging in my Microsoft account using python mechanize utility. user-name and password are working fine. Problem comes when submitting the form, I get an interim response page with title: "continue" . and URL: some interim_URL. Question is how do I move to my intended URL? br.open("intended_URL") doesn’t work at all.
python mechanize Http error 100
0
0
0
57
41,733,550
2017-01-19T04:05:00.000
0
0
0
0
jira,jira-rest-api,python-jira
42,102,051
2
false
0
0
Did you find a way to do this in the jira python api? I'm looking for a solution to a similar problem myself. The issues have a property of issuelinks which could be iterated. Or you could use the following JQL to return a list of linked issues issue in linkedIssues(ABC123)
1
0
0
I have seen multiple answers for how to return the key of the parent of a given issue, but is it possible to return the keys of all children of a parent issue? Ideally, I'd like to return the keys for all children but not all 'grandchildren,' etc. I have the key of the parent issue as input. edit: I can already return the keys for subtasks of a parent but I need to be able to return keys for issue children: for example, return all stories under an epic.
JIRA python for JIRA REST API: how to return keys of all child issues for a single parent issue given the parent's key?
0
0
0
1,897
41,735,413
2017-01-19T06:44:00.000
2
0
1
0
python,python-3.4,cx-freeze
45,154,967
2
false
0
0
OK, I think we are in the same boat. I got the idea from the previous post, but I'm not so familiar with the syntax, and the setup.py syntax there differs a bit from mine. I found another way to solve this: add import numpy.core._methods and import numpy.lib.format to your Python file.
1
16
0
exe build successfully by using cx-freeze. But it shows the following error when I execute the exe file: from . import _methods ImportError: cannot import name '_methods'
from . import _methods ImportError: cannot import name '_methods' in cx-freeze python
0.197375
0
0
13,185
41,741,318
2017-01-19T12:01:00.000
1
0
1
0
python
41,741,386
3
false
0
0
For your number num, use num % 10 to extract the last digit and num = num // 10 to remove that final digit (this exploits floored division). Since you want to print the leading digit first, wrap the above in a recursive function that calls itself before printing its digit. (A short sketch follows below.)
1
0
0
how would i allow a user to input a 3-digit number, and then output the individual digits in the number in python e.g. If the user enters 465, the output should be “The digits are 4 6 5” sorry if this sounds basic
to allow the user to input a 3-digit number, and then output the individual digits in the number
0.066568
0
0
1,162
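A small sketch of both the recursive approach described in the answer and a simple iterative variant.

```python
def print_digits(num):
    """Recursively print the digits of num, most significant first."""
    if num >= 10:
        print_digits(num // 10)   # handle the leading digits first
    print(num % 10, end=" ")

num = int(input("Enter a 3-digit number: "))
print("The digits are ", end="")
print_digits(num)
print()

# Iterative alternative: collect digits from the right, then reverse
digits = []
n = num
while n > 0:
    digits.append(n % 10)
    n //= 10
print("The digits are", " ".join(str(d) for d in reversed(digits)))
```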
41,742,720
2017-01-19T13:15:00.000
1
0
1
0
python,google-app-engine,int,type-conversion
41,743,702
1
true
0
0
It seems like an encoding issue (the interleaved null bytes suggest the file is UTF-16 being read as plain text), but a quick workaround would be to remove '\x00' from each string before converting it. So try int(splitted_line[j].replace('\x00','')). (A short demonstration follows below.)
1
0
0
I have some weird problem with python int function. I read some file with numeric values and convert these to integers. When I do this locally it's goes fine, but when I upload it to Google App Engine the conversion fails with error: invalid literal for int() with base 10: '' I tried to print the value it's trying to convert and it is 2210. Then I tried to output whole splitted line from file and got this: ['\x00B\x00a\x00u\x00w\x00e\x00n\x00s\x00', '\x002\x002\x001\x000\x00', '\x005\x004\x003\x001\x00', '\x005\x003\x007\x002\x00', '\x005\x002\x006\x005\x00', '\x005\x006\x001\x008\x00', '\x005\x003\x002\x008\x00\r\x00'] I use that code to convert: int(splitted_line[j]) And I am very new to python. Could someone say what I need to do?
How to solve the conversion problem where int() can't convert '\x002\x002\x001\x000\x00' into an integer?
1.2
0
0
325
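A small demonstration of the workaround, using the value from the question; the embedded null bytes are what interleaved UTF-16 bytes look like when read as ordinary text.

```python
raw = '\x002\x002\x001\x000\x00'

cleaned = raw.replace('\x00', '')   # -> '2210'
value = int(cleaned)
print(value)                        # 2210

# Alternatively, strip the nulls from every field of a split line at once
splitted_line = ['\x002\x002\x001\x000\x00', '\x005\x004\x003\x001\x00']
numbers = [int(field.replace('\x00', '')) for field in splitted_line]
print(numbers)                      # [2210, 5431]
```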
41,745,022
2017-01-19T15:02:00.000
1
0
0
0
python,machine-learning,tensorflow,neural-network,deep-learning
41,792,826
1
false
0
0
You have 3 main options: multiply your classes, multi-label learning, or training several models. The first option is the most straightforward: instead of having teachers who belong to John and teachers who belong to Jane, you have teachers whose class is Teachers_John and teachers whose class is Teachers_Jane, and you learn to classify those categories as you would any other set of categories (or use something like hierarchical softmax). The second option is to have a set of categories that includes Teachers as well as John and Jane; now your target is not to correctly predict the single most accurate class (Teachers) but several (Teachers and John). Your last option is to create a hierarchy of models, where the first learns to differentiate between John and Jane and the others classify the inner classes for each of them.
1
0
1
I am using the inception v3 model to retrain my own dataset. I have few folder which represent the classes which contain images for each class. What i would like to do is to 'attach' some text ids to these images so when they are retrained and used to run classification/similarity-detection those ids are retrieved too. (basically its image similarity detection) For instance, Image X is of class 'Teachers' and it belongs to John. When i retrain the model, and run a classification on the new model, i would like to get the Teachers class, but in addition to this i would like to know who is teacher (John). Any ideas how to go for it? Regards
Tensorflow Inception v3 retraining - attach text/labels to individual images
0.197375
0
0
596
41,747,991
2017-01-19T17:25:00.000
2
0
0
0
python,django,django-templates
41,748,047
2
true
1
0
Template tags need to be in an app, but once they are, they can be used by templates in any app; there is no need to do anything special to make them available globally. (A minimal sketch follows below.)
1
0
0
I tried to add the path under INSTALLED_APPS and to create a folder of template tags and reuse them in multiple apps. But it is not working. Is there a smart way to work it out? All I need is to place templatetags for whole project in single directory accessible globally.
Accessing templatetags globally for django project
1.2
0
0
579
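A minimal sketch of a project-wide template tag library, following the standard Django layout; the app name and tag names are assumptions.

```python
# common/templatetags/project_tags.py
# ("common" must be listed in INSTALLED_APPS, and templatetags/ needs an __init__.py)
from django import template

register = template.Library()

@register.simple_tag
def greet(name):
    return "Hello, %s!" % name

@register.filter
def shout(value):
    return str(value).upper()
```

Any template in any app can then do {% load project_tags %} and use {% greet user.username %} or {{ title|shout }}.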
41,748,112
2017-01-19T17:31:00.000
15
0
1
0
python,python-2.7
41,748,229
3
false
0
0
If you invoke your program with python -i <script>, the interpreter will remain active after the script ends. raise SystemExit would be the easiest way to force it to end at an arbitrary point.
1
10
0
I have a Python script, and I want to execute it up to a certain point, then stop, and keep the interpreter open, so I can see the variables it defines, etc. I know I could generate an exception, or I could invoke the debugger by running pdb.set_trace(), then stop the debugger, which is what I currently use. ...but is there a command that will just stop the script, as if it had simply reached its end? This would be equivalent to commenting the entire rest of the script (but I would not like doing that), or putting an early return statement in a function. It seems as if something like this has to exist but I have not found it so far. Edit: Some more details of my usecase I'm normally using the regular Python consoles in Spyder. IPython seems like a good thing but ( at least for the version I'm currently on, 2.2.5) some of the normal console's features don't work well in IPython (introspection, auto-completion). More often than not, my code generates matplotlib figures. In debug mode, those cannot be updated (to my knowledge), which is why I need to get completely out of the script, but not the interpreter). Another limit of the debugger is that I can't execute loops in it: you can copy/paste the code for a loop into the regular console and have it execute, but that won't work in the debugger (at least in my Spyder version).
How to stop a Python script but keep interpreter going
1
0
0
6,279
41,748,325
2017-01-19T17:43:00.000
0
1
0
0
django,postgresql,python-3.x,post,arduino
41,748,434
1
true
1
0
1) It depends: if your Arduino is on the same local network as your Django server, then you don't need a public IP; otherwise you would have to forward your Django server's IP and port so it is accessible from the internet. 2) Not really; you can do a traditional POST request to a normal Django view. (A minimal view sketch follows below.)
1
1
0
I know it is frowned upon to post questions without code, but I have been stuck for days thinking of how to handle this issue and cant think of a solution. My setup is this: Arduino Mega w/ 4G + GPS Shield from Cooking Hacks Django Server set up with Python Postgresql Database Because the 4G + GPS shield has the capability for http commands, I want to use http POST to send gps data to my Django Server and store that information in my Postgresql database. Another thing to keep in mind is I am running a Django test server on my Localhost, so I need to POST to that local host. Because I am not posting through a form and it is not synchronous I am really confused as to how the Django server is supposed to handle this asynchronous POST. It will look like this (I imagine): Arduino (POST) --> Django Server (Localhost) --> Postgresql Database So I have 2 questions: 1) In order to successfully send a POST to my local Django Server, should my host be my public router IP and the Port be the same as that which I am running my server on? Is there something else I am missing? 2) Do I need to use Django REST Framework to handle the POST request? if not, how would I implement this in my views.py? I am trying to get a reference point on the problem in order to visualize how to do it. I DONT need coded solutions. Any help on this would be greatly appreciated, and if you have any other questions I will be quick to answer.
HTTP POST Data from Arduino to Django Database
1.2
0
0
1,484
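A minimal sketch of the "normal view" approach, assuming the Arduino sends its readings as ordinary POST form fields; the field names, model, and URL are illustrative only.

```python
# views.py
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

@csrf_exempt          # the Arduino cannot supply a CSRF token
@require_POST
def gps_update(request):
    lat = request.POST.get("lat")
    lon = request.POST.get("lon")
    if lat is None or lon is None:
        return JsonResponse({"status": "missing fields"}, status=400)
    # e.g. GpsReading.objects.create(lat=lat, lon=lon)  # hypothetical model
    return JsonResponse({"status": "ok"})

# urls.py would map something like path("gps/", views.gps_update)
```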
41,748,751
2017-01-19T18:07:00.000
1
0
0
0
python,algorithm,binary-search
41,749,141
2
true
0
0
Yes, that one statement from your instructor is a flaw. For 0 < x < 1, the root lies between x and 1; this is true for any fractional power in the range (0, 1), i.e. any root of index greater than 1. You can reflect the statement to the negative side, since this is an odd root: the cube root of -1 <= x <= 0 lies in the range [-1, x], and for x < -1 your range is [x, -1], the mirror image of the positive cases. I'm not at all clear why the instructor made that asymmetric partitioning. (A bisection sketch follows below.)
2
0
1
I am doing MIT6.00.1x course on edX and the professor tells that "If x<1, search space is 0 to x but cube root is greater than x and less than 1". There are two cases : 1. The number x is between 0 and 1 2. The number x is less than 0 (negative) In both the cases, the cube root of x will lie between x and 1. I understood that. But what about the search space? Will the initial search space lie between 0 and x? I think it is not that. I think the bold text as cited from the lecture is a flaw! Please enlighten me on this.
Finding cube root of a number less than 1 using binary search
1.2
0
0
1,193
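A small bisection sketch that picks the bracket as discussed above (search [x, 1] when 0 < x < 1, [0, x] when x >= 1, mirrored for negatives); epsilon is an arbitrary tolerance.

```python
def cube_root(x, epsilon=1e-6):
    negative = x < 0
    a = abs(x)
    low, high = (a, 1.0) if a < 1 else (0.0, a)
    guess = (low + high) / 2.0
    while abs(guess ** 3 - a) >= epsilon:
        if guess ** 3 < a:
            low = guess
        else:
            high = guess
        guess = (low + high) / 2.0
    return -guess if negative else guess

print(cube_root(27))      # ~3.0
print(cube_root(0.001))   # ~0.1
print(cube_root(-8))      # ~-2.0
```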
41,748,751
2017-01-19T18:07:00.000
1
0
0
0
python,algorithm,binary-search
44,792,766
2
false
0
0
I think I know the problem you're talking about. The only reason she put that is that she deals with the absolute difference: while abs(guess**3 - cube) >= epsilon However, the code will need another line to deal with negative cubes all together which will be something along the lines of: if cube<0: guess = -guess I hope this helps.
2
0
1
I am doing MIT6.00.1x course on edX and the professor tells that "If x<1, search space is 0 to x but cube root is greater than x and less than 1". There are two cases : 1. The number x is between 0 and 1 2. The number x is less than 0 (negative) In both the cases, the cube root of x will lie between x and 1. I understood that. But what about the search space? Will the initial search space lie between 0 and x? I think it is not that. I think the bold text as cited from the lecture is a flaw! Please enlighten me on this.
Finding cube root of a number less than 1 using binary search
0.099668
0
0
1,193
41,749,448
2017-01-19T18:47:00.000
0
0
1
0
python
41,749,570
1
false
0
0
I assume you are using 32bit python. 32bit python limits your program ram memory to 2 gb (all 32bit programs have this as a hard limit), some of this is taken up by python overhead, more of this is taken up by your program. normal python objects do not need contiguous memory and will map disparate regions of memory numpy.arrays require contiguous memory allocation, this is much harder to allocate. aditionally np.array(a) + 1 creates a 2nd array and must allocate again a huge contiguous block (in fact most operations). some possible solutions that come to mind use 64 bit python ... this will give you orders of magnitude more ram to work with ... you will be unlikely to encounter a memory error with this unless you have a really really really big array (so much so that numpy is probably not the right solution) use multiprocessing to create a new process with a new 2gb limit that just does the numpy processing stuff use a different solution than numpy( ie a database)
1
0
1
I am using matrix = np.array(docTermMatrix) to make DTM. But sometimes it will run into memory error problems at this line. How can I prevent this from happening?
Memory error with np array when making document term matrix in python 2.7
0
0
0
84
41,749,825
2017-01-19T19:11:00.000
1
0
0
0
python,django,database
41,751,929
1
false
1
0
Deleted objects: if you ever decided to delete the original post, you'd need a separate query to handle whatever should happen to the cloned posts, instead of just using the on_delete kwarg of a FK. It's an extra query: as noted in the comments, foreign keys let you traverse the relationship directly through the ORM's relationship methods. Data-structure visualisation tools: these won't be able to traverse any further down from an integer field, since they will believe it is a leaf node. Throughout all of this, though, the elephant in the room is that a "clone" is still just duplicated data, so I wonder why you don't simply let a blog post be referenced more than once; then you don't need to worry about how you store clones. (A ForeignKey sketch follows below.)
1
0
0
I'm not sure if this is an appropriate question here. I know the answer but I don't really know why, and I need proof when I raise this to my team. We have a number of Blog Posts on a Django Site. It's possible to "clone" one of those blog posts to copy it to another site. The way the current developer did that was to take the pk of the original post and store it as an IntegerField on the cloned post as clone_source. Therefore to get a story's clones, we do: clones = BlogPost.all_sites.filter(clone_source=pk) It seems to me that this would be much better structured as a foreign key relationship. Am I right? Why or why not?
Benefits of foreign key over integerfield
0.197375
0
0
61
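A minimal sketch of the self-referencing ForeignKey alternative being argued for; the model and field names mirror the question, but the exact options are assumptions.

```python
# models.py
from django.db import models

class BlogPost(models.Model):
    title = models.CharField(max_length=200)
    clone_source = models.ForeignKey(
        "self",
        null=True,
        blank=True,
        related_name="clones",
        on_delete=models.SET_NULL,   # decide centrally what happens when the original goes away
    )

# A post's clones are then simply:  original.clones.all()
# instead of:            BlogPost.all_sites.filter(clone_source=pk)
```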
41,751,050
2017-01-19T20:25:00.000
0
1
0
0
python,git,jenkins
41,751,932
1
false
0
0
I've made use of the following plugins to achieve this: the Flexible Publish Plugin and the Run Condition Plugin.
1
0
0
I'm using Jenkins with python code as follows. After detecting a change to the GIT dev branch: Checkout GIT repository dev branch code Perform Unit tests / code coverage If build passes, check code into the production branch of the same repo What I want to add, is the ability to keep track of the previous code version (the python code package stores the version number in the setup.py file ) and if the version in the latest build job is incremented compared to the saved version, only then check the passed code into the production branch. Any thoughts on how best to achieve this? Thanks
Jenkins - Store code previous version number, and take actions if version number changes
0
0
0
140
41,751,647
2017-01-19T21:02:00.000
0
0
1
1
exe,python-3.6
42,111,487
1
false
0
0
I have successfully used cx_Freeze 5.0.1 with Python 3.6. Did you try an older version, or is there a specific setup that failed?
1
1
0
I want to learn if is there any available wheel for Python 3.6 to create executable files. I know pyinstall cx_freeze and py2exe options. However, they are available for Python3.4 or 3.5 for the most uptaded. Is there any way to create .exe from Python 3.6 script?
How to create exe files from Python 3.6 script?
0
0
0
2,057
41,752,291
2017-01-19T21:46:00.000
6
0
0
0
python,importerror,iterm2,neat,virtual-environment
41,767,302
1
true
0
0
I think you could simply copying the visualize.py into the same directory as the script you are running. If you wanted it in your lib/site-packages directory so you could import it with the neat module: copy visualize.py into lib/site-packages/neat/ and modify __init__.py to add the line import neat.visualize as visualize. Delete the __pycache__ directory. Make sure you have modules installed: Numpy, GraphViz, and Matplotlib. When you've done the above, you should be able to import neat and access neat.visualize. I don't recommend doing this though for several reasons: Say you wanted to update your neat module. Your visualize.py file is technically not part of the module. So it wouldn't be updated along with your neat module. the visualize.py file seems to be written in the context of the examples as opposed to being for general use with the module, so contextually, it doesn't belong there. At some point in the future, you might also forget that this wasn't a part of the module, but your code acts as if it was part of the API. So your code will break in some other neat installation.
1
5
1
So recently I found out about the NEAT algorithm and wanted to give it a try using NEAT-Python (not sure if this is even the correct source :| ). So I created my virtual environment, activated it, and installed neat-python using pip in the VE. When I then tried to run one of the examples from their GitHub page it threw an error like this: ImportError: No module named visualize So I checked my source files, and the neat-python package actually doesn't include the visualize.py script; however, it is in their GitHub repository. I then tried to add it myself by downloading just the visualize.py script, dragging it inside my VE and adding it to all the text files NEAT brought with it, like the installed-files.txt etc. However it still threw the same error. I'm still fairly new to VEs and GitHub so please don't be too hard on me :] thanks in advance. -Jorge
NEAT-Python not finding Visualize.py
1.2
0
0
8,034
41,754,825
2017-01-20T01:52:00.000
0
0
0
0
python,powershell,scripting,server,tableau-api
47,515,380
1
false
0
0
Getting data from excel to Tableau Server: Setup the UNC path so it is accessible from your server. If you do this, you can then set up an extract refresh to read in the UNC path at the frequency desired. Create an extract with the Tableau SDK. Use the Tableau SDK to read in the CSV file and generate a file. In our experience, #2 is not very fast. The Tableau SDK seems very slow when generating the extract, and then the extract has to be pushed to the server. I would recommend transferring the file to a location accessible to the server. Even a daily file copy to a shared drive on the server could be used if you're struggling with UNC paths. (Tableau does support UNC paths; you just have to be sure to use them rather than a mapped drive in your setup.) It can be transferred as a file and then pushed (which may be fastest) or it can be pushed remotely. As far as scheduling two steps (python and data extract refresh), I use a poor man's solution myself, where I update a csv file at one point (task scheduler or cron are some of the tools which could be used) and then setup the extract schedule at a slightly later point in time. While it does not have the linkage of running the python script and then causing the extract refresh (surely there is a tabcmd for this), it works just fine for my purposes to put 30 minutes in between as my processes are reliable and the app is not mission critical.
1
2
1
I used python scripting to do a series of complex queries from 3 different RDS's, and then exported the data into a CSV file. I am now trying to find a way to automate publishing a dashboard that uses this data into Tableau server on a weekly basis, such that when I run my python code, it will generate new data, and subsequently, the dashboard on Tableau server will be updated as well. I already tried several options, including using the full UNC path to the csv file as the live connection, but Tableau server had trouble reading this path. Now I'm thinking about just creating a powershell script that can be run weekly that calls the python script to create the dataset and then refreshes tableau desktop, then finally re-publishes/overwrites the dashboard to tableau server. Any ideas on how to proceed with this?
Tableau: How to automate publishing dashboard to Tableau server
0
0
0
1,415
41,755,950
2017-01-20T04:07:00.000
0
0
0
0
python,matplotlib,seaborn
41,767,039
2
false
0
0
Looks like the problem was with statsmodels (which seaborn uses to do KDE). I reinstalled statsmodels and that cleared up the problem.
1
0
1
After upgrading to matplotlib 2.0 I have a hard time getting seaborn to plot a pairplot. For example... sns.pairplot(df.dropna(), diag_kind='kde') returns the following error: TypeError: slice indices must be integers or None or have an __index__ method. My data doesn't have any NaNs in it. In fact, removing the kde option allows the function to run. Any idea what is happening?
Seaborn pairplot not showing KDE
0
0
0
1,051
41,756,756
2017-01-20T05:33:00.000
0
0
1
0
java,python,json
41,764,528
2
false
1
0
While passing JSON to the Java API from Python, replace null with None, true with True and false with False on the Python side. It will work.
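A small hedged example of that mapping (the key names are just illustrative):

import json

payload = {"a": None, "b": True, "c": False}   # Python-side literals
print(json.dumps(payload))                     # -> {"a": null, "b": true, "c": false}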
2
0
0
I am trying to make a REST call to a Java API using Python. The Java API needs JSON input with literals like {a:null,b:true,c:false}. While building the JSON from Python it does not allow this, because Python needs null, true and false to be inside double quotes like "null","true","false". What is the solution?
How to pass java literals from python dictionary
0
0
1
52
41,756,756
2017-01-20T05:33:00.000
0
0
1
0
java,python,json
41,756,812
2
false
1
0
The JSON syntax expects values to be quoted. It means that the problem comes from the java JSON api. What API do you use ?
2
0
0
I am trying to make a REST call to a Java API using Python. The Java API needs JSON input with literals like {a:null,b:true,c:false}. While building the JSON from Python it does not allow this, because Python needs null, true and false to be inside double quotes like "null","true","false". What is the solution?
How to pass java literals from python dictionary
0
0
1
52
41,756,800
2017-01-20T05:37:00.000
0
0
0
0
python,django,apache,wsgi
57,338,131
3
false
1
0
For those of you with cPanel, if you go under "Setup Python App" and click "Restart" it should update. Saved me about 5 times.
1
1
0
I have a Django website running and any updates I make to the source code won't update. (Reason I'm changing the file is because one line of code is generating an error. What's weird is I commented out this line of code that causes the error, but the code still runs and thus still causes the error. In the django.log it shows that line causing the error still, but it also shows it commented out now. So the error log shows my new source code, but the application itself isn't executing the new code) I am very new to Django, so I don't really know what's going on here (not my website, I got thrown on this project for work.) Researching around for this, I have already tried to restart apache: $ sudo apachectl restart $ sudo service apache2 restart and I've also tried to touch the wsgi.py file: $ touch wsgi.py and I have even deleted the .pyc file. Nothing has worked and the old line of code is still executing, even though the logs show it commented out. Not sure where else to check or what else I'm missing.
Django source code won't update on server
0
0
0
666
41,761,645
2017-01-20T10:40:00.000
19
0
1
0
python,naming-conventions,private-members
41,762,806
2
true
0
0
Short answer: use a single leading underscore unless you have a really compelling reason to do otherwise (and even then think twice). Long answer: one underscore means "this is an implementation detail" (attribute, method, function, whatever), and is the Python equivalent of "protected" in Java. This is what you should use for names that are not part of your class / module / package public API. It's a naming convention only (well, mostly - star imports will ignore them, but you're not doing star imports anywhere else than in your Python shell, are you?), so it won't prevent anyone from accessing the name, but then they're on their own if anything breaks (see this as a "warranty void if unsealed" kind of mention). Two underscores trigger a name-mangling mechanism. There are very few valid reasons to use this - actually there's only one I can think of (and which is documented): protecting a name from being accidentally overridden in the context of a complex framework's internals. As an example there might be about half a dozen or fewer instances of this naming scheme in the whole Django codebase (mostly in the django.utils.functional package). As far as I'm concerned I must have used this feature perhaps thrice in 15+ years, and even then I'm still not sure I really needed it.
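A tiny sketch of both conventions in action (the class and attribute names are made up):

class Base(object):
    def __init__(self):
        self._hint = "implementation detail"   # single underscore: convention only
        self.__token = "mangled"               # double underscore: stored as _Base__token

obj = Base()
print(obj._hint)           # still accessible, but you were warned
print(obj._Base__token)    # the mangled name is what actually exists
# obj.__token              # would raise AttributeError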
1
10
0
Ok I think I have understood the use of one and two heading underscores in Python. Correct me if I am wrong, In the case of one underscore, the underscore prevents the from X import * statement to import this kind of variables. In the case of two underscores, the variable's name is prepended with the name of the class it belongs to allow a higher level of "privateness". My question now is: why not use two underscores only? In which cases one underscore is preferred (or needed) over two underscores?
When to use one or two underscore in Python
1.2
0
0
6,394
41,763,154
2017-01-20T11:56:00.000
0
0
1
0
python,io,token
41,763,296
3
false
0
0
There isn't a simple analogue in Python. Your best bet is probably to read the characters into a list in your program, and then later consume them from that list.
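If you want something closer to cin.putback, a minimal hedged sketch of a wrapper (the class and method names are made up) could look like this:

import sys

class PushbackReader(object):
    def __init__(self, stream=sys.stdin):
        self._stream = stream
        self._buffer = []                # pushed-back characters, LIFO

    def read_char(self):
        if self._buffer:
            return self._buffer.pop()
        return self._stream.read(1)      # empty string at EOF

    def unread(self, ch):
        self._buffer.append(ch)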
1
0
0
I need to read a single char from stdin and then unread it so that the next time an input method is called, that char should be included in the result. In C++, cin.putback does this. What is the equivalent in Python? Please note that I don't need to intermingle different input methods/functions.
How to unread a single char in Python?
0
0
0
452
41,766,948
2017-01-20T15:19:00.000
0
1
1
0
python,command-line,python-import,importerror
41,767,558
1
true
0
0
A few things commonly result in this error: Module is not in PYTHONPATH. Since you have checked this with sys.path, I will assume that it is there already. However, for future reference, you can manually add it to your profile or bashrc file in the home directory. It may be that the module you are using doesn't have an __init__.py file, or the module path in PYTHONPATH doesn't point to the uppermost directory with __init__.py. You can fix this by adding a blank __init__.py file where necessary or editing the module path. Another possibility is that the python interpreter in which you checked sys.path is not the same python in which the module was installed. Commonly, this is due to two different versions of python being installed on the same machine. Make sure that your module is installed for the correct python interpreter, or switch into the correct (usually non-default) python using source activate. Hope this helps!
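A quick hedged check of which interpreter and search path are actually in play (the pip command at the end is an assumption about how the package was installed):

import pprint
import sys

print(sys.executable)      # the interpreter that is actually running
print(sys.version)
pprint.pprint(sys.path)    # compare against where the package really lives

# Then verify the package from that same interpreter, e.g.:
#   <that-interpreter> -m pip show twitter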
1
0
0
I'm working with Python in the command line and want to import the module 'twitter'. twitter is in this directory: C:\Users\U908153\AppData\Local\Enthought\Canopy32\User\Lib\site-packages sys.path tells me that the above directory is in sys.path. Still, when I write import twitter, I get ImportError: No module named twitter. What is going wrong? Thanks in advance!
Python ImportError when module is in sys.path
1.2
0
0
1,693
41,767,788
2017-01-20T16:04:00.000
1
0
1
0
python,pycharm,anaconda
47,740,904
2
false
0
0
Go to Setting then Project Interpreter Click on Setting icon which will lead to browse folder, Choose Add Local then on left side of page, click on system Interpreter then Press OK and Apply.
1
31
0
I have installed anaconda with python 3.5, and created a new environment with Python 2.7 (on windows 10). I can easily change the Anaconda environment with the command line tool. However in Pycharm, when I try to change the Python interpreter, I can only see the Anaconda Python 3.5 version. Is there a easy way to select the Anaconda environment from Pycharm?
How can I access different Anaconda environment from Pycharm (on Windows 10)
0.099668
0
0
24,248
41,768,400
2017-01-20T16:38:00.000
3
0
1
0
python,anaconda
66,104,759
2
false
0
0
You can revert fully to initial state with conda install --revision 0 If you want to do a partial rollback, you can try conda list --revisions and then conda install --revision xxx
1
5
0
I have installed too many packages in my root environment in Anaconda. How can I reset Anaconda to its initial state without manually removing all packages on an individual basis?
Reset Root Environment Anaconda
0.291313
0
0
7,375
41,769,372
2017-01-20T17:35:00.000
0
0
1
0
python,filter
41,769,987
4
false
0
0
Linear Replacement You will want something adaptable to innovative orthographers. For a start, pattern-match the alphabetic characters to your lexicon of banned words, using other characters as wild cards. For instance, your example would get translated to "h...o", which you would match to your proposed taboo word, "hello". Next, you would compare the non-alpha characters to a dictionary of substitutions, allowing common wild-card chars to stand for anything. For instance, asterisk, hyphen, and period could stand for anything; '4' and '@' could stand for 'A', and so on. However, you'll do this checking from the strength of the taboo word, not from generating all possibilities: the translation goes the other way. You will have a little ambiguity, as some characters stand for multiple letters. "@" can be used in place of 'O' if you're getting crafty. Also note that not all the letters will be in your usual set: you'll want to deal with monetary symbols (Euro, Yen, and Pound are all derived from letters), as well as foreign letters that happen to resemble Latin letters. Multi-character replacements That handles only the words that have the same length as the taboo word. Can you also handle abbreviations? There are a lot of combinations of the form "h-bomb", where the banned word appears as the first letter only: the effect is profane, but the match is more difficult, especially where the 'b's are replaced with a scharfes-S (German), the 'm' with a Hebrew or Cyrillic character, and the 'o' with anything round from the entire font. Context There is also the problem that some words are perfectly legitimate in one context, but profane in a slang context. Are you also planning to match phrases, perhaps parsing a sentence for trigger words? Training a solution If you need a comprehensive solution, consider training a neural network with phrases and words you label as "okay" and "taboo", and let it run for a day. This can take a lot of the adaptation work off your shoulders, and enhancing the model isn't a difficult problem: add your new differentiating text and continue the training from the point where you left off.
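One simple, hedged sketch of the substitution-dictionary part of this idea (the table below is only illustrative, not exhaustive):

SUBSTITUTIONS = {'3': 'e', '1': 'l', '!': 'i', '0': 'o', '4': 'a', '@': 'a', '$': 's'}

def normalize(word):
    # map common look-alike characters back to letters before matching
    return ''.join(SUBSTITUTIONS.get(ch, ch) for ch in word.lower())

def looks_banned(word, lexicon):
    return normalize(word) in lexicon

print(looks_banned('h311o', {'hello'}))   # True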
2
1
0
I am working on a python project in which I need to filter profane words, and I already have a filter in place. The only problem is that if a user switches a character with a visually similar character (e.g. hello and h311o), the filter does not pick it up. Is there some way that I could find detect these words without hard coding every combination in?
Identify Visually Similar Strings in Python
0
0
0
282
41,769,372
2017-01-20T17:35:00.000
0
0
1
0
python,filter
41,782,720
4
true
0
0
Thank you to all who posted an answer to this question. More answers are welcome, as they may help others. I ended up going off of David Zemens' comment on the question. "I'd use a dictionary or list of common variants ("sh1t", etc.) which you could persist as a plain text file or json etc., and read into memory. This would allow you to add new entries as needed, independently of the code itself. If you're only concerned about profanities, then the list should be reasonably small to maintain, and new variations unlikely. I've used a hard-coded dict to represent a statistical t-table (with 1500 key/value pairs) in the past; seems like your problem would not require nearly that many keys." While this still means that all these words will be hard-coded, it will allow me to update the list more easily.
2
1
0
I am working on a python project in which I need to filter profane words, and I already have a filter in place. The only problem is that if a user switches a character with a visually similar character (e.g. hello and h311o), the filter does not pick it up. Is there some way that I could find detect these words without hard coding every combination in?
Identify Visually Similar Strings in Python
1.2
0
0
282
41,771,459
2017-01-20T20:02:00.000
4
0
1
0
python,package,atom-editor,conda,virtual-environment
47,923,174
2
false
0
0
One way is to start Atom from the activated virtual environment. In this case, executing programs/scripts uses the configured python interpreter and imports the packages installed in the virtual environment. EDIT: It's been a while, but this might be useful for people redirected to this question: by installing atom-python-virtualenv you can create, change or deactivate virtual environments from the Atom editor.
1
12
0
Don't have much expertise in programming. Only picked up Python last summer. I have installed both Atom and Conda on my computer. Recently, I've used Atom to edit my scripts in Python, then run the scripts via Command Line. As per standard practice, I created Virtual Environments where I installed packages I needed to run different Python scripts. I now want to use Atom as an IDE, and so have installed the Script package on Atom so I can run my scripts in Atom itself. However, when I tried running a Python script that required the Python numpy package, I got this: ImportError: No module named 'numpy' This error is obviously going to appear for other packages that haven't already been installed in the root environment (I think?). So now, my question is how do I activate the needed Virtual Environment in Atom? In other applications like Jupyter and Spyder, I would activate the Virtual Environment I needed then open the Application via Command Line, but I can't do that with Atom. (If possible, is there a way to use Virtual Environments created by Conda) Thanks
Activating Python Virtual Environment in Atom
0.379949
0
0
10,521
41,771,659
2017-01-20T20:14:00.000
0
0
1
0
python,multithreading,for-loop
41,772,457
2
true
0
0
On SSDs and HDDs: As others have pointed out, your main constraint here is going to be your hard drive. If you're using an HDD and not an SSD, you're actually going to see a decrease in performance by attempting to have multiple threads read from the disk at the same time (assuming they're trying to read randomly distributed blocks of data from the disk and are reading sequentially). If you look at how a hard drive works, it has a head must seek (scan) to find the location of the data you're attempting to read. If you have multiple threads, they will still be limited by the fact that the hard drive can only read one block at a time. Hard drives perform well when reading/writing sequentially but do not perform well when reading/writing from random locations on the disk. On the other hand if you look at how a solid state drive works, it is the opposite. The solid state drive does better at reading from random places in storage. SSDs do not have seek latency which makes them great at reading from multiple places on disk. The optimal structure of your program will look different depending on whether or not you're using an HDD or an SSD. Optimal HDD Solution: Assuming you're using an HDD for storage, your optimal solution looks something like this: Read a large chunk of data into memory from the main thread. Be sure you read in increments of your block size, which will increase performance. If your HDD stores data in blocks of 4kB (or 4096 bytes), you should read in multiples of 4096. Most modern disk sectors (another term for blocks) are 4kB. Older legacy disks will have sectors of 512 bytes. You can find out how big your blocks/sectors are by using lsblk or fdisk on linux. You will need to play around with different multiples of your block size, steadily increasing the amount of data you're reading, to see what size gives the best performance. If you read too much data in at once your program will be inefficient (because of read speeds). If you don't read enough data in at once, your program will also be inefficient (because of too many reads). I'd start with 10 times your block size, then 20 times your block size, then 30 times your block size, until you find the optimal size of data to read in at once. Once your main thread has read from disk, you can spawn multiple threads to process the data. Since python has a GIL (global interpreter lock) for thread safety, you may want to use multiprocessing instead. The multiprocessing library is very similar to the threading library. While the child threads/processes are processing the data, have the main thread read in another chunk of data from the disk. Wait until the children have finished to spawn more for processing, and keep repeating this process.
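A rough, hedged sketch of the "main process reads, workers process" pattern described above (the file names, delimiter, chunk size and worker count are placeholders to tune):

from multiprocessing import Pool

CHUNK_BYTES = 4096 * 10              # a multiple of the disk block size; tune this

def process(lines):
    # stand-in for the real per-line split/strip work
    return [line.strip().split() for line in lines]

def read_in_chunks(path):
    with open(path) as fh:
        while True:
            lines = fh.readlines(CHUNK_BYTES)   # reads roughly CHUNK_BYTES worth of lines
            if not lines:
                break
            yield lines

if __name__ == '__main__':
    with Pool(processes=4) as pool, open('out.txt', 'w') as out:
        for chunk_result in pool.imap(process, read_in_chunks('big.txt')):
            for fields in chunk_result:
                out.write(' '.join(fields) + '\n')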
1
0
0
I would like to understand if there is any way to use multithreading in a for loop. I have a big txt file (35 GB); the script needs to split and strip each line and print the result to another txt file. The problem is that it's pretty slow and I would like to make it faster. I thought about using a lock, but I'm still not sure if it would work; anyone have any ideas? Thanks :D
Python, use multithreading in a for loop
1.2
0
0
547
41,772,263
2017-01-20T21:01:00.000
0
0
0
0
python,scala,apache-spark,fortify
49,675,309
3
false
0
0
Fortify supports scanning Python. Since Python is not compiled, you can feed the source code to it directly; it will detect the language, scan the code and give you the result.
1
2
1
Does Fortify Supports Python, Scala, and Apache Spark? If it supports how to scan these codes using Fortify. We need to have compiler to scan C++ code using Fortify. This can be done using Microsoft visual studio. Similarly should we need to have some plugin to scan Python, Scala, and Spark codes?
Does Fortify support Python, Scala, and Apache Spark?
0
0
0
7,798
41,773,602
2017-01-20T22:48:00.000
1
0
0
0
python-3.x,python-2to3
41,773,794
1
false
0
0
Turns out there's several options for this: Copy the file first to a new location, then run 2to3 -w -n which modifies the file in place (-w) without making a backup (-n) 2to3 -n -o desired/path/to/new/file specifies an output directory (-o) and disables backup (-n) 2to3 -n -W --add-suffix=3 will put the file in the same location, but put a suffix on it (-W --add-suffix=) without making a backup (n)
1
1
0
When I run 2to3.py -w my_script.py it converts my_script.py to Python3 and then puts the original version my_script.py.bak. I want the old file to remain as is, and the converted file to go into a new file, like my_script.converted.py. Is there a 2to3.py argument that allows this?
Redirect 2to3 output to new file
0.197375
0
0
1,449
41,774,136
2017-01-20T23:44:00.000
3
0
1
0
python,regex,python-3.x
41,774,229
3
false
0
0
The above can also be shortened to re.sub(r'[^/|]', '', s), i.e. without needing to escape the two chars.
1
1
0
I am writing a program in Python3, and I have a string in a variable. This string contains lots of | and / characters, as well as possible alphanumeric characters and whitespace. I need a regular expression to get rid of all characters except | and /, and replace them with nothing. So basically, /|||/ blah /|||||/ foo /|/ bar /|||/ would become /|||//|||||//|//|||/. Can anyone help me do this? Thanks in advance
Python3 Regex: Remove all characters except / and | from string
0.197375
0
0
2,319
41,776,801
2017-01-21T07:26:00.000
0
0
0
0
python,sorting,pandas,in-place
60,477,227
4
false
0
0
"inplace=True" is more like a physical sort while "inplace=False" is more like logic sort. The physical sort means that the data sets saved in the computer is sorted based on some keys; and the logic sort means the data sets saved in the computer is still saved in the original (when it was input/imported) way, and the sort is only working on the their index. A data sets have one or multiple logic index, but physical index is unique.
2
22
1
Maybe a very naive question, but I am stuck on this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled it for a while, but I am not very clear about it. It seems that this is assumed to be perfectly known to everybody but me. Could anyone give me an illustrative explanation of how these two options differ from each other, for dummies...? Thank you for any assistance.
In-place sort_values in pandas what does it exactly mean?
0
0
0
21,797
41,776,801
2017-01-21T07:26:00.000
0
0
0
0
python,sorting,pandas,in-place
71,012,398
4
false
0
0
inplace=True sorts the actual object itself, while inplace=False returns a new sorted copy without changing the original. By default, inplace is set to False if unspecified.
2
22
1
Maybe a very naive question, but I am stuck on this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled it for a while, but I am not very clear about it. It seems that this is assumed to be perfectly known to everybody but me. Could anyone give me an illustrative explanation of how these two options differ from each other, for dummies...? Thank you for any assistance.
In-place sort_values in pandas what does it exactly mean?
0
0
0
21,797
41,778,073
2017-01-21T10:10:00.000
1
0
1
0
python,code-coverage,pragma
41,782,050
3
false
0
0
A better solution is to not ignore the lines at all, and instead to measure the coverage on all the platforms, and then combine them together. You can run coverage in "parallel mode" so that each data file gets a distinct name, with parallel=true. Then copy all the data files to one place, run "coverage combine", and then "coverage report".
1
1
0
Some part of code works on Windows and some part works on other platforms. I want to increase the coverage of the code by placing #pragma: no cover appropriately. So when the program is running on Windows platform, the code related to other platforms should be ignored and vice versa. How can I achieve this?
Improving coverage for python code that is platform dependent
0.066568
0
0
213
41,778,173
2017-01-21T10:22:00.000
-3
0
1
0
python,python-3.x,jupyter-notebook,turtle-graphics
47,085,929
4
false
0
1
It seems you can get turtle module to work, if you run the Jupyter Notebook cell containing the code twice. Not sure why it works, but it does!
1
10
0
I have been using turtle package in python idle. Now I have switched to using Jupyter notebook. How can I make turtle inline instead of opening a separate graphic screen. I am totally clueless about. Any pointers and advice will be highly appreciated.
Make turtle graphics inline
-0.148885
0
0
10,154
41,779,485
2017-01-21T12:41:00.000
1
0
1
0
python,pygame
41,779,486
1
true
0
1
First, tlatorre does not seem to have an OSX version. So I did this: anaconda search -t conda pygame and you can see in the reply several places (including tlatorre) with pygame. You can see that tlatorre only has a linux-64 version. Also quasiben seems to have an OSX version, but I tried it and there was some incompatibility with python 3.5*, so I tried CogSci (which seems to have linux, windows and osx versions): conda install -c CogSci pygame=1.9.2 and it seems that it has been installed. (My apologies if it is not - I am going to check from now on.)
1
1
0
I was having problems trying to install pygame on a mac where python was installed with anaconda. I searched in SO and no solution worked. But it seems I have worked it out (haven't checked yet throughly ) so I write it here
Install pygame with anaconda on mac
1.2
0
0
1,489
41,779,922
2017-01-21T13:25:00.000
0
0
0
0
python-3.x,deep-learning,theano-cuda
56,411,030
2
false
0
0
For latest version of Theano (1.04) import theano generates an error without the nose package installed install via conda or pip pip install nose / conda install nose
2
2
1
I am trying to use theano with the GPU on my Ubuntu machine, but each time after it runs once successfully, it gives me an error like this the next time I try to run it. No idea why, could anyone help me? import theano Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/sirius/anaconda3/lib/python3.5/site-packages/theano/__init__.py", line 95, in <module> if hasattr(theano.tests, "TheanoNoseTester"): AttributeError: module 'theano' has no attribute 'tests'
AttributeError: module 'theano' has no attribute 'tests'
0
0
0
1,036
41,779,922
2017-01-21T13:25:00.000
0
0
0
0
python-3.x,deep-learning,theano-cuda
50,360,800
2
false
0
0
I met the same problem.I just fix it with conda install nose
2
2
1
I am trying to use theano with the GPU on my Ubuntu machine, but each time after it runs once successfully, it gives me an error like this the next time I try to run it. No idea why, could anyone help me? import theano Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/sirius/anaconda3/lib/python3.5/site-packages/theano/__init__.py", line 95, in <module> if hasattr(theano.tests, "TheanoNoseTester"): AttributeError: module 'theano' has no attribute 'tests'
AttributeError: module 'theano' has no attribute 'tests'
0
0
0
1,036
41,781,884
2017-01-21T16:44:00.000
0
0
1
0
python,anaconda
41,782,106
1
false
0
0
I managed to get through it by opening the Anaconda terminal and then typing 'conda'. That opened conda for me and I made my way from there.
1
3
0
I've recently re-installed anaconda(a python distribution) to update it. Both the conda terminal and anaconda immediately close just after opening. Before the re-installation it did not work either. (unrelated: I need conda terminal to work because I'll install nmap and also create a virtual environment and then install tensorflow there.)
anaconda and conda terminal closes immediately
0
0
0
1,948
41,783,003
2017-01-21T18:36:00.000
41
0
1
0
python,date,datetime,pandas
41,796,793
9
true
0
0
I got some help from a colleague. This appears to solve the problem posted above pd.to_datetime(df['mydates']).apply(lambda x: x.date())
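As a hedged side note, the vectorised .dt accessor should give the same result without the per-row apply (the column name is taken from the question):

import pandas as pd

df['mydates'] = pd.to_datetime(df['mydates']).dt.date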
1
33
1
I need to merge 2 pandas dataframes together on dates, but they currently have different date types. 1 is timestamp (imported from excel) and the other is datetime.date. Any advice? I've tried pd.to_datetime().date but this only works on a single item(e.g. df.ix[0,0]), it won't let me apply to the entire series (e.g. df['mydates']) or the dataframe.
How do I convert timestamp to datetime.date in pandas dataframe?
1.2
0
0
92,271
41,783,198
2017-01-21T18:54:00.000
0
0
0
0
python,sql,django
41,783,265
1
true
1
0
__unicode__ gets a field from these other models (self.foreign_key_field.field_to_print_out). This is where the other queries are made, not from the call to __unicode__ or from checking whether it's a B or a C. Had your __unicode__ method referenced only local fields this wouldn't have been an issue, but as you have noticed, fields that require a join are not fetched automatically, to save some performance on things that may not even be required. If you use django-debug-toolbar, you should notice that the 1000 queries are the retrieval of the related object, not the list of A objects. So yes, as you've pointed out, select_related would help here, since that is you telling Django that you do require these fields.
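A hedged illustration using the field names from the description above rather than the project's real code:

# Without select_related, rendering each related field fires one query per row.
# With it, the join happens once up front:
posts = A.objects.select_related('foreign_key_field').all()
labels = [unicode(post) for post in posts]   # no extra query per post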
1
3
0
I have two models, model B and model C, which both extend model A. In a form I have a model select field for model A - this obviously loads all instances of model B and C, which was my intention. So this dropdown is over 1000 'A objects'. I am using hasattr() to determine if they're B or C, which then uses the __unicode__ method from those classes to display the object in string form in the dropdown. This creates thousands of SQL queries, which takes around a minute to process. Right now my solution is to query the database 3 times (to get all A, B and C objects), and then loop over A and decide if each object is of type B or C and push the correct unicode string into a list which is used in the dropdown. I then clean the data and select the right object when saving the form. This is hacky (to me). I was wondering if anyone knows of an efficient way of populating a dropdown with thousands of model object choices when that model is a base model for other models. Cheers, Dean
Django select field with thousands of choices creating thousands of database queries
1.2
0
0
530
41,785,893
2017-01-21T23:59:00.000
1
0
0
0
python,websocket
56,409,252
1
false
0
0
Most likely, the remote host closed the connection. You cannot stop it. You can handle it by re-connecting. People running web servers will implement automatic cleanup to get rid of potentially stale connections. Closing a connection that's been open for 24 hours sounds like a sensible approach. And there's no harm done, because if the client is still interested, it can re-establish the connection. That's also useful to re-authenticate the client, if authentication is required. On second thought, it might be a network disconnect as well. Some DSL providers used to disconnect and re-assign a new IP every 24 hours, to prevent users from running permanent services on temporarily assigned IP addresses. Don't know if they still do that.
1
13
0
I've been trying to use the python websocket-client module to receive and store continuous updates from an exchange. Generally, the script will run smoothly for a day or so before raising the following error: websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed. I've looked at the websocket-client source code and apparently the error is being raised in line 92 by the code if not bytes_:. Furthermore, the WebSocketConnectionClosedException is supposed to be raised "If remote host closed the connection or some network error happened". Can anybody tell me why this is happening, and what I could do to stop or handle it.
"Connection is already closed." error with python WebSocket client
0.197375
0
1
7,326
41,788,747
2017-01-22T08:15:00.000
1
0
1
0
python,ide,styles,eric-ide
71,388,545
2
false
0
0
You can change the complete look of the IDE, but it's split between four different settings menus: In Settings > Preferences > Editor > Highlighters > Styles click "Import styles" near the bottom, and choose one of the predefined themes. Now you can't see your text cursor, because it's still black. Go to Settings > Preferences > Editor > Styles, scroll down to "Caret", and change the "Foreground" color to white. In the same menu, you can also change the color of the margins. The surrounding panels are still white. Go to Settings > Preferences > Interface > Interface and in "Style Sheet" choose one of the predefined styles. Now you can't see the icons in the top bar, because they are black on a dark background. Go to Settings > Icons and change "Default icons" to "Breeze (dark)" Works for eric7.
1
4
0
I want to change the background color of the Eric IDE, but when I do this in Preferences > Editor > Style nothing changes in the background color, just the font styles. Is there any solution for this? A white background is very hard on the eyes when working for many hours.
theme for python eric ide
0.099668
0
0
2,595
41,788,924
2017-01-22T08:39:00.000
1
0
0
0
python,c++,tensorflow
47,182,528
1
false
0
0
According to Graves' paper [1], the loss for a batch is defined as sum(log(p(z|x))) over all samples (x,z) in this batch. If you use a batch size of 1, you get log(p(z|x)), that is the log-probability of seeing the labelling z given the input x. This can be achieved with the ctc_loss function from TensorFlow. You can also implement the relevant parts of the Forward-Backward Algorithm described in Section 4.1 of the paper [1] yourself. For small input sequences it is feasible to use a naive implementation by constructing the paths shown in Figure 3 and then summing over all those paths in the RNN output. I did this for a sequence of length 16 and for a sequence of length 100. For the former the naive approach was sufficient, while for the latter the presented dynamic programming approach was needed. [1] Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
1
1
1
I'm doing my first tensorflow project. I need to get ctc probability (not ctc loss) for given input and my expected sequences. Is there any api or ways to do it in python or c++? I prefer python side, but c++ side is also okay.
How to calculate ctc probability for given input and expected output?
0.197375
0
0
495
41,789,176
2017-01-22T09:10:00.000
-1
0
0
0
python,amazon-web-services,amazon-s3,boto
41,790,354
5
false
0
0
As of now, you cannot get such information without downloading the zip file. You can store the required information as the metadata for a zip file when uploading to s3. As you have mentioned in your question, using the python functions we are able to get the file list without extracting. You can use the same approach to get the file counts and add as metadata to a particular file and then upload it to S3. Hope this helps, Thanks
1
7
0
Case: There is a large zip file in an S3 bucket which contains a large number of images. Is there a way without downloading the whole file to read the metadata or something to know how many files are inside the zip file? When the file is local, in python i can just open it as a zipfile() and then I call the namelist() method which returns a list of all the files inside, and I can count that. However not sure how to do this when the file resides in S3 without having to download it. Also if this is possible with Lambda would be best.
How to count files inside zip in AWS S3 without downloading it?
-0.039979
0
1
3,417
41,789,489
2017-01-22T09:51:00.000
0
0
0
0
php,python,mysql
41,790,010
1
false
0
0
I installed Python 3.4 and uninstalled the 3.4.6 version, and it worked. I don't know why, but it worked.
1
0
0
I am trying to install MySQL Server on Windows 8 and, as I go through the process, the installer requires Python 3.4, so I installed it manually with the link given by the installer. I installed Python 3.4.6 as it is the latest version, but it is still not recognized and the installer returns an error message saying "The requirement is still failing". Should I install Python 3.4 instead of 3.4.6?
MySql Server installation dont recognize python 3.4.6
0
1
0
131
41,790,392
2017-01-22T11:37:00.000
1
0
0
0
python,pandas
41,855,351
1
true
0
0
As far as I understand, replace is used when working on missing values, transform is used while doing group-by operations, and map is used to change the values of a Series or an index.
1
5
1
I'm trying to clearly understand for which type of data transformation the following functions in pandas should be used: replace map transform Can anybody provide some clear examples so I can better understand them? Many thanks :)
Python (Pandas) : When to use replace vs. map vs. transform?
1.2
0
0
2,423
41,793,059
2017-01-22T15:59:00.000
1
0
1
1
python,c++,visual-studio,visual-c++
41,793,169
1
true
0
0
cl.exe and similar visual studio commands are not in PATH. This means that you cannot execute them in the familiar manner (except if you add them to PATH) using CMD. You'll have to open the Visual Studio 2015 Command Prompt to be able to access cl.exe and similar commands. Then, inside the VS 2015 command prompt, you can execute the get-deps.cmd script.
1
0
0
I am trying to compile a project from command prompt to open from visual studio. The project needed CMake, Python and Visual Studio 2015 to run, i have downloaded and installed all of those. I am trying to run a .cmd file "get-deps.cmd" file but it is unable to locate the valid MSVC version. Can someone help. Below is the screen sample. D:\pT1\polarisdeps>get-deps.cmd c:\opt\polarisdeps_vs2015 BASEDIR=c:\opt\polarisdeps_vs2015 1 file(s) copied. Could not locate a valid MSVC version.
Could not locate a valid MSVC version
1.2
0
0
521
41,793,953
2017-01-22T17:26:00.000
2
0
0
0
python,tkinter
41,794,262
1
true
0
1
You should assign an IntVar (or possibly StringVar) to the checkbutton when you create it, via its variable= configuration option. You call .get() on this var to check the button's state, and .set() to change its state.
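A minimal hedged sketch for a Checkbutton entry inside a Menu (the widget and variable names are placeholders):

import tkinter as tk

root = tk.Tk()
state = tk.IntVar(value=0)

menubar = tk.Menu(root)
options = tk.Menu(menubar, tearoff=0)
options.add_checkbutton(label="Enabled", variable=state)
menubar.add_cascade(label="Options", menu=options)
root.config(menu=menubar)

state.set(1)          # programmatically select the menu checkbutton
print(state.get())    # read its current state (1 = selected, 0 = not)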
1
0
0
I have a checkbutton inside of a menu widget in python with tkinter. (Using python 3.5.2). I know that with normal checkbuttons you can select or deselect the checkbuttons using checkbutton.select() and checkbutton.deselect(). I need to know how to do this with the checkbuttons that I have in the menu object. I have tried the menu.entrybutton.configure(id, coption) method but there is no coption for selecting and deselecting checkbuttons within the menu. Any help would be appreciated.
Selecting and deselecting tkinter Menu Checkbutton widgets
1.2
0
0
2,380
41,794,956
2017-01-22T19:00:00.000
9
0
0
0
python,netcdf,python-xarray
41,795,121
2
true
0
0
In Xarray, directly indexing a Dataset like hndl_nc['variable_name'] pulls out a DataArray object. To get or set attributes, index .attrs like hndl_nc.attrs['global_attribute'] or hndl_nc.attrs['global_attribute'] = 25. You can access both variables and attributes using Python's attribute syntax like hndl_nc.variable_or_attribute_name, but this is a convenience feature that only works when the variable or attribute name does not conflict with a preexisting method or property, and cannot be used for setting.
2
5
1
Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable.
Adding global attribute using xarray
1.2
0
0
6,426
41,794,956
2017-01-22T19:00:00.000
6
0
0
0
python,netcdf,python-xarray
46,549,251
2
false
0
0
I would add here that both Datasets and DataArrays can have attributes, both accessed with .attrs, e.g. ds.attrs['global attr'] = 25 and ds.variable_2.attrs['variable attr'] = 10
2
5
1
Is there some way to add a global attribute to a netCDF file using xarray? When I do something like hndl_nc['global_attribute'] = 25, it just adds a new variable.
Adding global attribute using xarray
1
0
0
6,426
41,795,111
2017-01-22T19:16:00.000
0
0
0
0
python,macos,python-3.x,tkinter
41,905,609
2
false
0
1
If you run it in IDLE then don't worry about the Python Shell - IDLE is used only to develop the code. Later you don't need IDLE to run it, and you will not see the Python Shell.
2
2
0
Hello to the Stack Overflow Community! I am an amateur coder & student and am developing a UI for my superiors at my 'school.' I have been bothered by the Python Shell window opening as well and was wondering if there was a way to remove that window without having my Tkinter program shut down. Thanks!
How can I remove the Python Shell window while using Tkinter?
0
0
0
1,351
41,795,111
2017-01-22T19:16:00.000
1
0
0
0
python,macos,python-3.x,tkinter
41,795,153
2
false
0
1
Rename your main script to have the extension .pyw. This file type, when executed, is by default run by pythonw.exe instead of python.exe, and it doesn't show the console. You will need some means to report debug errors, though. Just a piece of advice.
2
2
0
Hello to the Stack Overflow Community! I am an amateur coder & student and am developing a UI for my superiors at my 'school.' I have been bothered by the Python Shell window opening as well and was wondering if there was a way to remove that window without having my Tkinter program shut down. Thanks!
How can I remove the Python Shell window while using Tkinter?
0.099668
0
0
1,351
41,797,071
2017-01-22T22:54:00.000
1
0
0
0
python,pandas
50,193,390
3
false
0
0
Just in case anyone else ends up here, let me provide a more generic answer. Suppose your DataFrame column, Series, vector, whatever, X has n values. At an arbitrary position i you'd like to get (X[i])*(X[i+1])*...*(X[n]), which is equivalent to (X[1])*(X[2])*...*(X[n]) / (X[1])*(X[2])*...*(X[i-1]). Therefore, you may just do inverse_cumprod = (np.prod(X) / np.cumprod(X)) * X
1
4
1
I have a data frame which contains dates as index and a value column storing growth percentage between consecutive dates (i.e. dates in the index). Suppose I want to compute 'real' values by setting a 100 basis at the first date of the index and then iteratively applying the % of growth. It is easy with the cumprod method. Now, I want to set as 100 basis the laste date in the index. I thus need to compute for each date in the index the 'inverse' growth. Is there an easy way (and pythonic) to do this with pandas? Regards, Allia
'Inverse' cumprod in pandas
0.066568
0
0
2,303
41,806,128
2017-01-23T12:14:00.000
2
0
0
0
python,machine-learning,tensorflow,computer-vision,deep-learning
42,111,038
2
false
0
0
I have been wondering the same thing and have been disappointed with my during-training-time image processing performance. It has taken me a while to appreciate quite how big an overhead the image manipulation can be. I am going to make myself a nice fat juicy preprocessed/augmented data file. Run it overnight and then come in the next day and be twice as productive! I am using a single GPU machine and it seems obvious to me that piece-by-piece model building is the way to go. However, the workflow-maths may look different if you have different hardware. For example, on my Macbook-Pro tensorflow was slow (on CPU) and image processing was blinding fast because it was automatically done on the laptop's GPU. Now I have moved to a proper GPU machine, tensorflow is running 20x faster and the image processing is the bottleneck. Just work out how long your augmentation/preprocessing is going to take, work out how often you are going to reuse it and then do the maths.
1
4
1
In Tensorflow, it seems that preprocessing could be done on either during training time, when the batch is created from raw images (or data), or when the images are already static. Given that theoretically, the preprocessing should take roughly equal time (if they are done using the same hardware), is there any practical disadvantage in doing data preprocessing (or even data augmentation) before training than during training in real-time? As a side question, could data augmentation even be done in Tensorflow if was not done during training?
Tensorflow: Is preprocessing on TFRecord files faster than real-time data preprocessing?
0.197375
0
0
2,142
41,811,185
2017-01-23T16:36:00.000
0
0
0
0
python,django,git,migrate,makemigrations
41,811,454
2
false
1
0
Running makemigrations will automatically create python files in the "migrations" folder of the app where you modified the model. These files must be versioned in git because they cannot be dissociated from your modifications of the model. Then, when you merge your branch, both the modification to the model and the corresponding migration will be in the git tree. So the next call to migrate will synchronize the DB with the current state described by your models.
1
1
0
I am working on a project that has been written in Python/ Django, and have recently made some changes to one of the models. I want to test the changes that I have made now, before I go any further into the development of this new feature, but I am aware that I will need to run python manage.py makemigrations & python manage.py migrate before the changes that I have made to the models take effect. I am doing the development on a separate git branch to master, but am a bit unsure what the best practice is here in terms of running migrations on different branches (I am relatively new to both Python/ Django & Git). Would it be sensible to run makemigrations on my development branch, and testing it there, the same way I have been testing the bug fixes that I have worked on so far, or will I need to merge my development branch with master before running makemigrations? I know that if I do run the migrations on my development branch, I will need to run them again on master once I have merged my changes, but I was just wondering if there are any dangers to this approach, or things I should look out for?
Django/ Python- should I run makemigrations on a local branch, or only on master?
0
0
0
655
41,813,799
2017-01-23T19:07:00.000
4
0
0
0
numpy,cython,boost-python
41,815,502
2
false
0
0
For small one-shot problems I tend to prefer Cython; for larger integration with C++ code bases, prefer Boost.Python. In part, it depends on the audience for your code. If you're working with a team with significant experience in Python, but little experience of using C++, Cython makes sense. If you have a fixed code base with complex types to interoperate with, Boost.Python can end up being a little cheaper to get running. Cython encourages you to write incrementally, gradually adding types as required to get extra performance, and solves many of the hard packaging problems. Boost.Python requires a substantial effort to get a build set up, and it can be hard to produce packages that make sense on PyPI. Cython has good built-in error messages/diagnostics, but from what I've seen, the errors that come out of Boost can be very hard to interpret - be kind to yourself and use a new-ish C++ compiler, preferably one known for producing readable error messages. Don't discount alternative tools like numba (similar performance to Cython with code that is Python, not just something that looks similar) and pybind11 (Boost.Python without Boost and with better error messages).
1
7
1
I need to speed up some algorithms working on NumPy arrays. They will use std::vector and some of the more advanced STL data structures. I've narrowed my choices down to Cython (which now wraps most STL containers) and Boost.Python (which now has built-in support for NumPy). I know from my experience as a programmer that sometimes it takes months of working with a framework to uncover its hidden issues (because they are rarely used as talking points by its disciples), so your help could potentially save me a lot of time. What are the relative advantages and disadvantages of extending NumPy in Cython vs Boost.Python?
What are the relative advantages of extending NumPy in Cython vs Boost.Python?
0.379949
0
0
2,636
41,814,376
2017-01-23T19:46:00.000
0
0
0
0
python,tkinter
41,822,268
2
false
0
1
Actually, I realised that I could solve my own problem in a much simpler way, by literally making a list of lists, with each sub-list containing all of the widgets for a single row, and therefore I can refer to each item through it's row and column.
2
1
0
In tkinter, is there a way for me to reference a widget within a grid by its row and column, in the same way that you would be able to reference an item within a list (or list of lists) by knowing its position in the list?
Find an item by its position in a grid tkinter
0
0
0
1,890
41,814,376
2017-01-23T19:46:00.000
5
0
0
0
python,tkinter
41,814,789
2
false
0
1
You can call the .grid_slaves(row, column) method on the parent widget; this will return a list (possibly empty) of the widgets in that cell. You could also iterate over all of the child widgets (.grid_slaves() with no parameters, or .winfo_children()) and call .grid_info() on each one. This returns a dictionary with 'row' and 'column' keys, along with various other grid parameters.
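A small hedged sketch of both approaches (the parent/container variable is a placeholder):

# widgets gridded into one specific cell of 'parent' (the list may be empty)
cell_widgets = parent.grid_slaves(row=2, column=3)

# or walk every child and read back its grid position
for child in parent.winfo_children():
    info = child.grid_info()
    if info:                        # empty dict if the child isn't gridded
        print(info['row'], info['column'], child)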
2
1
0
In tkinter, is there a way for me to reference a widget within a grid by its row and column, in the same way that you would be able to reference an item within a list (or list of lists) by knowing its position in the list?
Find an item by its position in a grid tkinter
0.462117
0
0
1,890
41,814,520
2017-01-23T19:56:00.000
0
1
1
0
python-3.x,raspberry-pi2,mayavi
41,827,272
1
false
0
0
as your mayavi2 executable has been removed (this is typically the case with the message bash: /usr/bin/mayavi2: No such file or directory), it likely means that apt-get removed it when updating python-envisage, python-EnvisageCore and python-EnvisagePlugins. First steps: apt-get update, apt-get install mayavi2 (both as root or using sudo) and check if the errors are the same. The first error you had was about a missing _py2to3 module that normally comes with the packages python-traits and `python-traitsui. Are they installed?
1
0
0
I install mayavi2 using sudo apt-get install mayavi2 as shown below: pi@raspberrypi:~ $ sudo apt-get install mayavi2 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: python-enthoughtbase Use 'apt-get autoremove' to remove it. The following extra packages will be installed: python-envisage Suggested packages: ipython python-chaco The following packages will be REMOVED: python-envisagecore python-envisageplugins The following NEW packages will be installed: mayavi2 python-envisage 0 upgraded, 2 newly installed, 2 to remove and 3 not upgraded. Need to get 0 B/18.5 MB of archives. After this operation, 34.9 MB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 160193 files and directories currently installed.) Removing python-envisagecore (3.2.0-2) ... Removing python-envisageplugins (3.2.0-2) ... Selecting previously unselected package python-envisage. (Reading database ... 158978 files and directories currently installed.) Preparing to unpack .../python-envisage_4.4.0-1_all.deb ... Unpacking python-envisage (4.4.0-1) ... Selecting previously unselected package mayavi2. Preparing to unpack .../mayavi2_4.3.1-3.1_armhf.deb ... Unpacking mayavi2 (4.3.1-3.1) ... Processing triggers for man-db (2.7.0.2-5) ... Setting up python-envisage (4.4.0-1) ... Setting up mayavi2 (4.3.1-3.1) ... Now I try to run mayavi2 but there is error as shown below. pi@raspberrypi:~ $ mayavi2 Traceback (most recent call last): File "/usr/bin/mayavi2", line 493, in raise ImportError(msg) ImportError: No module named _py2to3 Could not load envisage. You might have a missing dependency. Do you have the EnvisageCore and EnvisagePlugins installed? If you installed Mayavi with easy_install, try 'easy_install '. 'easy_install Mayavi[app]' will also work. If you performed a source checkout and installed via 'python setup.py develop', be sure to run the same command in the EnvisageCore and EnvisagePlugins folders. If these packages appear to be installed, check that your numpy and configobj are installed and working. If you need numpy, 'easy_install numpy' will install numpy. Similarly, 'easy_install configobj' will install configobj. I install envisage, EnvisageCore, and EnvisagePulgins using sudo apt-get install python-envisage sudo apt-get install python-EnvisageCore sudo apt-get install python-EnvisagePlugins pi@raspberrypi:~ $ mayavi2 bash: /usr/bin/mayavi2: No such file or directory Hi is there a way to get past this error? Thanks
mayavi2 does not work
0
0
0
388
41,816,254
2017-01-23T21:46:00.000
15
0
1
0
python,pip,conda
41,914,218
2
true
0
0
You can set the environmental variable PIP_CONFIG_FILE and point to the pip.conf you like. Since you want to recreate the environment from a script you could set PIP_CONFIG_FILE in this script.
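For example (the paths and repository URL below are placeholders, not your real setup):

export PIP_CONFIG_FILE=/path/to/project/pip.conf

# pip.conf contents:
[global]
extra-index-url = https://nexus.example.com/repository/pypi-internal/simple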
1
14
0
I'm working on a Python library and using the anaconda distribution. I install packages into a conda environment with both conda and pip. I'd like to install Python packages from both PyPi and an in-house repository server (Sonatype Nexus). To do this I need to set the --extra-index-url flag. I'd like to make this reproducible to enable anyone to recreate the environment from a script so setting --extra-index-url from a command line invocation of pip isn't an option. I could set this globally in $HOME/.pip/pip.conf, which works, but this isn't transferrable to other users, at least not in an automated way. Is there a way to set a conda environment specific pip.conf file? Where would it be placed? This would enable anyone to check out the library code and recreate the environment with all dependencies intact and pulling code from an internal repository?
Environment specific pip.conf under anaconda
1.2
0
0
16,894
41,818,382
2017-01-24T01:05:00.000
0
0
0
1
python-2.7,opencv,numpy,aws-device-farm,python-appium
42,273,559
2
true
0
0
(numpy-1.12.0-cp27-cp27m-manylinux1_x86_64.whl) is the numpy wheel for Linux, but AWS Device Farm still throws an error while configuring tests with this wheel. Basically, Device Farm validates that the .whl file name ends with -none-any.whl. Just renaming the file to numpy-1.12.0-cp27-none-any.whl makes it work in Device Farm. Note: this renamed file is a non-universal Python wheel. There might be a few things which are not implemented in a non-universal Python wheel, which may cause some things to break. So, test to ensure all your dependencies are working fine before using this.
1
0
1
I am facing the following error on configuring Appium python test in AWS device farm: There was a problem processing your file. We found at least one wheel file wheelhouse/numpy-1.12.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl specified a platform that we do not support. Please unzip your test package and then open the wheelhouse directory, verify that names of wheel files end with -any.whl or -linux_x86_64.whl, and try again I require numpy and opencv-python packages to run my tests. How to get this issue fixed?
Amazon device farm - wheel file from macosx platform not supported
1.2
0
0
290
41,818,773
2017-01-24T01:56:00.000
2
0
1
1
python,scheduler
41,819,729
2
true
0
0
You can use cron on Linux. I also use cron to run my python script on my shared hosting server. And if you need to install python modules on your server, you may also need to create a virtual environment using virtualenv. From my experience, if your script exits cleanly then it will be killed/terminated properly, so you don't have to worry about the python script not being killed and consuming your server resources :D
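For instance, a hedged crontab entry for a daily 1:00 PM run (all paths are placeholders):

# edit with: crontab -e
0 13 * * * /home/user/venv/bin/python /home/user/scripts/crawler.py >> /home/user/logs/crawler.log 2>&1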
1
0
0
This is quite a general question. What I want to know is: when scheduling a python script (e.g. every day at 1:00 PM), do I have to leave the script (or an editor such as Spyder) always 'open'? In other words, do I have to keep Python running all the time? I have avoided using a scheduler library because people say that the python script is then not killed, but stays pending and waiting for the next task. What I have been doing so far was just using Windows Scheduler to run my scripts (crawlers) automatically every day (people say this is called 'batch processing'). But now I have to do these jobs on the server side, not on my local machine any more. Therefore, how can I run my python scripts in the same way as with the Windows Scheduler, instead of using the python scheduler library?
How do I run python script everyday at the same time with scheduler?
1.2
0
0
1,339