Column                              Type           Min        Max
Q_Id                                int64          337        49.3M
CreationDate                        stringlengths  23         23
Users Score                         int64          -42        1.15k
Other                               int64          0          1
Python Basics and Environment       int64          0          1
System Administration and DevOps    int64          0          1
Tags                                stringlengths  6          105
A_Id                                int64          518        72.5M
AnswerCount                         int64          1          64
is_accepted                         bool           2 classes
Web Development                     int64          0          1
GUI and Desktop Applications        int64          0          1
Answer                              stringlengths  6          11.6k
Available Count                     int64          1          31
Q_Score                             int64          0          6.79k
Data Science and Machine Learning   int64          0          1
Question                            stringlengths  15         29k
Title                               stringlengths  11         150
Score                               float64        -1         1.2
Database and SQL                    int64          0          1
Networking and APIs                 int64          0          1
ViewCount                           int64          8          6.81M
38,047,915
2016-06-27T07:14:00.000
1
1
0
0
python,eclipse,utf-8
38,047,976
1
true
1
0
Edit -> Set encoding -> UTF-16 screwed up my text again. Another Ctrl-Z and Edit -> Set encoding -> ASCII fixed it.
1
1
0
I attempted to change the character encoding to UTF-16 and it changed all of my text in Eclipse's text editor to Chinese. A Ctrl-Z saved my work, but now the console is stuck in Chinese. When running an arbitrary Python script, the script terminates immediately and gives the following message: "†䙩汥•䌺屄敶屗..." (The string goes on for much longer, but Stack Overflow detects it as spam.) What does this mean? I've tried resetting things to default but to no avail.
Unexpected Chinese output from Eclipse console
1.2
0
0
39
38,048,988
2016-06-27T08:19:00.000
1
0
1
0
python,jython-2.7,pydicom
38,078,205
1
true
0
0
There is a bug in Jython concerning bytecode file size: Jython can't compile a module whose bytecode is too large, and unfortunately pydicom has 2 such files. The workaround is to split the files into chunks and try installing. This is a temporary workaround, and the issue has been resolved in the Jython 2.7.1 version. For now, try the following: split the pydicom-0.9.8\dicom\_dicom_dict.py file into four files with 700 entries per list; split the pydicom-0.9.8\dicom\_private_dict.py file into multiple files with 700 entries in each; search for and change the usage of the _dicom_dict.py contents in the pydicom package (for example, go to datadict.py and edit the imports as shown in the sketch below); likewise search for and change the usage of the _private_dict.py contents in the pydicom package; then install the package using setup.py.
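The datadict.py edit described above, collected as a block; the dicom._dicom_dict_1 .. _4 module names come from the answer and refer to the hypothetical files produced by the split.

```python
# datadict.py sketch: merge the four split dictionaries back into one.
# DicomDictionary is assumed to already exist in datadict.py; it is
# initialized here only to keep the sketch self-contained.
from dicom._dicom_dict_1 import DicomDictionaryOne
from dicom._dicom_dict_2 import DicomDictionaryTwo
from dicom._dicom_dict_3 import DicomDictionaryThree
from dicom._dicom_dict_4 import DicomDictionaryFour

DicomDictionary = {}
DicomDictionary.update(DicomDictionaryOne)
DicomDictionary.update(DicomDictionaryTwo)
DicomDictionary.update(DicomDictionaryThree)
DicomDictionary.update(DicomDictionaryFour)
```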
1
1
0
When I tried to import dicom from the pydicom package I got an error. I performed the following steps: downloaded the pydicom-0.9.9.tar file, extracted it, and ran 'jython setup.py install' in cmd, but it's not working. Is this due to compatibility between Jython and Python? How do I make pydicom work in Jython?
How to use pydicom in jython
1.2
0
0
138
38,050,053
2016-06-27T09:14:00.000
0
0
1
1
python,linux,python-3.x,rhel,make-install
38,384,650
3
false
0
0
Because Python is integral to the RHEL OS, please explain what you mean by: "and made it default over existing Python 2.6". Otherwise, attempts to "uninstall" your working Python install might leave you with a broken RHEL install.
2
0
0
I installed Python 3.5 on a Linux machine using configure, make, make install, and made it the default over the existing Python 2.6. Now I want to uninstall Python 3.5, as it does not support zlib. How do I uninstall the default Python 3.5? The Linux distribution is RHEL 6.7.
Uninstalling Python3.5 from Linux RHEL
0
0
0
3,916
38,050,053
2016-06-27T09:14:00.000
0
0
1
1
python,linux,python-3.x,rhel,make-install
38,050,633
3
false
0
0
You can probably remove the directory that contains the new installation, but the main thing is to remove it from $PATH.
2
0
0
I installed Python 3.5 on a Linux machine using configure, make, make install, and made it the default over the existing Python 2.6. Now I want to uninstall Python 3.5, as it does not support zlib. How do I uninstall the default Python 3.5? The Linux distribution is RHEL 6.7.
Uninstalling Python3.5 from Linux RHEL
0
0
0
3,916
38,055,072
2016-06-27T13:19:00.000
1
0
1
0
python,security,deployment
38,055,288
2
false
0
0
You can distribute the script, you can distribute packages (which can be installed with tools like pip install), or you can distribute executable files that the user can simply launch. If your end user is not a tech-savvy person (so no console commands and no source-code shenanigans), you can distribute executables and hope that the user's machine is not tampered with. Sure, you can make sure that the executable is the same one you are distributing, but that serves little purpose if the user's machine is compromised.
2
0
0
Python is an interpreted language. So, when we ship the code to end users, will they get the source code or an executable? If a user gets the source code of the application, it may be tampered with. So how is safety ensured in Python applications?
Is python code safe after deployment?
0.099668
0
0
706
38,055,072
2016-06-27T13:19:00.000
2
0
1
0
python,security,deployment
38,055,129
2
true
0
0
The end user would get the source code unless you compile your Python into bytecode and send that to the user, for example: python -O -m py_compile file1.py file2.py file3.py. However, as with any bytecode, it can be decompiled to a form similar to the source. A minimal sketch follows.
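A minimal sketch of the same idea driven from Python via the standard-library py_compile module; the file names are placeholders.

```python
# Compile sources to bytecode files so the plain-text source need
# not be shipped. Note: bytecode can still be decompiled.
import py_compile

for src in ("file1.py", "file2.py", "file3.py"):  # placeholder names
    py_compile.compile(src)  # writes the compiled file next to the source
```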
2
0
0
Python is an interpreted language. So, when we ship the code to end users, will they get the source code or an executable? If a user gets the source code of the application, it may be tampered with. So how is safety ensured in Python applications?
Is python code safe after deployment?
1.2
0
0
706
38,056,711
2016-06-27T14:34:00.000
1
0
1
0
python,binary,hex,byte,bytearray
38,056,839
1
false
0
0
A bytearray is essentially a sequence of integers; how they are displayed is only their representation. The same applies to the way you entered them: Python understands the 0x?? (hexadecimal) and 0?? (octal) notations for integers, but it will display the decimal notation. To convert an integer to a string in the 0x?? format, use hex(value).
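A short demonstration of the point above, using the values from the question.

```python
# The sum of the bytes is just an int; hex() only changes how it is shown.
key = bytearray([0x12, 0x10, 0x32])
total = sum(key)     # 84 in decimal, which is 0x54
print(hex(total))    # '0x54' -- a string representation of the same int
key.append(total)    # the int itself can go straight back into the bytearray
```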
1
1
0
Hi, I've been trying to iterate through a bytearray, add up all the bytes, and then append the result back into the same bytearray. The bytearray looks like this: key = bytearray([0x12, 0x10, 0x32]). However, when I call sum(key) I get the decimal representation, 84. Any idea how I can change the decimal representation back into hexadecimal format while keeping it of type int? Thank you.
Adding bytes in python 2.7
0.197375
0
0
1,227
38,059,310
2016-06-27T16:53:00.000
0
0
0
1
python,google-app-engine
38,060,684
1
false
1
0
The parameter to the appcfg update command is the yaml file or the directory containing the yaml file.
1
0
0
I wrote a simple pymongo script to insert a few values into a MongoDB instance on GAE, and my app got deployed properly from PyCharm, but I get an error while running the following command in my cloud shell: appcfg.py -A login-services-1354 -V v1 update . The error I get is: Usage: appcfg.py [options] update | [file, ...] appcfg.py: error: Directory '/home/seshanthnadal' does not contain configuration file app.yaml. Any help would be appreciated!
PyCharm is not able to find app.yaml when pushing to GAE
0
0
0
98
38,059,456
2016-06-27T17:02:00.000
0
0
1
0
python,ipython,ipython-notebook,jupyter,jupyter-notebook
38,059,614
1
false
0
0
You can choose either by using the Kernel menu --> Change kernel and selecting the Python version, or, when you open a new notebook under the File tab, you can choose the Python version there.
1
0
0
I'm designing some software that is specifically going to be used by people running Python 2.7 instead of 3. Unfortunately, I'm using a computer that has 3, and apparently there are some dependency issues when some of my colleagues are using Python 2.7 to run code. I'm hoping to run with 2.7 in the ipython notebook to fix my problem. Do I need to install a new version, and if so which one?
Run iPython notebook with a specific version (2.7 instead of 3)
0
0
0
386
38,060,783
2016-06-27T18:27:00.000
0
0
0
0
python,algorithm,pattern-matching,sequence,apriori
38,061,355
2
false
0
0
There are a couple of business decisions you have to make before you will have a workable algorithm. The first and most important decision is what size of set you want. Clearly, if {a, b, ..., x} is the most frequent set, then every subset (like {a, x} or {c, d}) will occur with at least the same frequency. You need to know which one you need (maybe all, maybe any). Also, what would you do in the case of these frequencies: {a, b} with frequency 100, and {a, c, d, e, f, g} with frequency 20? Clearly the first one is more frequent, but the second is also pretty frequent and really long. One way to approach this is to iterate over all 1-element subsequences and find their frequencies, then all 2-element ones, and so on. Then create some weighted score, which can be the frequency multiplied by some function of the length of the sequence, and select the highest score. A rough sketch of the counting step follows.
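A rough sketch of the brute-force counting step described above, using the sequence from the question; the window size and the frequency-times-length scoring are assumptions.

```python
# Count unordered adjacent pairs, then score candidates by
# frequency * length (one possible weighting).
from collections import Counter

s = [1, 2, 3, 4, 1, 2, 6, 7, 8, 2, 1, 10, 11]

window = 2  # pattern length; repeat with 3, 4, ... for longer patterns
counts = Counter(
    frozenset(s[i:i + window]) for i in range(len(s) - window + 1)
)
scored = sorted(counts.items(), key=lambda kv: kv[1] * len(kv[0]), reverse=True)
print(scored[0])  # (frozenset({1, 2}), 3) -- the most frequent pair
```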
1
0
1
I am trying to find frequent (ordered or unordered) patterns in a column. The column contains numeric IDs, e.g.: s = [1 2 3 4 1 2 6 7 8 2 1 10 11]. Here 1 2 (with 2 1 treated as the same case) is the most frequent set. Please help me solve this problem. I could think of Apriori or FP-growth algorithms, but I don't have any transactions, just a sequence.
Trying to find frequent patterns in a sequence python
0
0
0
735
38,062,579
2016-06-27T20:23:00.000
1
0
1
0
python,pyinstaller
38,467,283
1
false
0
0
Solved by changing the modules' import order. The old order had the Tkinter imports (from Tkinter import * and import tkFileDialog) first, followed by the matplotlib, numpy, PIL.Image, pylab and matplotlib.colors imports. The new order puts the Tkinter imports last, as shown in the block below.
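The answer's working import order, collected as a block:

```python
# New (working) order: Tkinter imports moved to the end.
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pylab
from matplotlib.colors import LightSource, Normalize, LinearSegmentedColormap
from Tkinter import *
import tkFileDialog
```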
1
0
0
PyInstaller previously worked very well. However, after installing Jupyter, the new exe files generated by PyInstaller did not work, for instance warning that there is no module named Tkinter. If I uninstall Jupyter and generate the exe file with PyInstaller again, the new exe file works well. Is there any conflict between the two? How can I solve this problem? I want to keep Jupyter installed.
The exe file generated by PyInstaller does not work
0.197375
0
0
351
38,064,885
2016-06-27T23:42:00.000
0
0
1
0
python,arrays,list,multidimensional-array
38,065,115
1
true
0
0
Python's fairly friendly about this sort of thing and will let you have lists as elements of lists. Here's an example of one way to do it: TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']]. If you entered this straight into the Python interpreter, you'd define TableA as a list with two elements, both of which are also lists. If you then entered TableA[0] you'd get ['01/01/2000', '$10']. Furthermore, by entering TableA[0][0] you'd get '01/01/2000', as that's the first element of the first list in TableA. Extending this further, you can have lists of lists of lists (and so on). First, let's define TableA and TableB: TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']] and TableB = [['03/01/2000', '$13'], ['04/01/2000', '$14']]. Now we can simply define BigTable as having TableA and TableB as its elements: BigTable = [TableA, TableB]. Now BigTable[0] is just TableA, so BigTable[0][0][0] will be the same as TableA[0][0]. If at some point down the line you realise that you want BigTable to have more lists in it, say a TableC or TableD, just use the append method: BigTable.append(TableC). By the way, you'll probably want to have prices and dates expressed as numbers rather than strings, but it's easier to follow the example this way. The whole example is collected in a runnable block below.
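The answer's example, collected in one runnable block:

```python
# Lists nested inside lists, exactly as described in the answer.
TableA = [['01/01/2000', '$10'], ['02/01/2000', '$11']]
TableB = [['03/01/2000', '$13'], ['04/01/2000', '$14']]

BigTable = [TableA, TableB]

print(BigTable[0][0][0])  # '01/01/2000' -- same as TableA[0][0]
print(BigTable[1][0][0])  # '03/01/2000' -- same as TableB[0][0]

TableC = [['05/01/2000', '$15']]  # hypothetical extra table
BigTable.append(TableC)           # grow BigTable later as needed
```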
1
0
1
I'm looking to create a master array/list that takes several two-dimensional lists and integrates them into the larger list. For example, I have a TableA[] which has dates as one array/list and prices as another array/list. I have another, TableB[], which has the same. TableA[0][0] has the first date; TableA[0][1] has the first price; TableA[1][0] has the second date, and so on. I would like to create BigTable[] such that BigTable[0][0][0] = TableA[0][0] and BigTable[1][0][0] = TableB[0][0]. Any guidance would be much appreciated. Thank you!
Python: multi-dimensional lists - appending one 2D list into another list
1.2
0
0
1,397
38,065,448
2016-06-28T00:57:00.000
0
0
0
0
python-3.x,neural-network,tensorflow
38,067,029
1
false
0
0
If you could include your code / more detail here, that would be beneficial. However, you can keep the session you're using to train N1 and reuse it when you want to update N2. A sketch of a common in-memory approach follows.
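The answer above is brief; one common in-memory approach for this kind of periodic weight copying (not spelled out in the answer, and written against the later TF 1.x-style API) is to build assign ops once and run them every few iterations. The N1/N2 variable scopes are assumptions.

```python
# Hedged sketch: copy N1's weights into N2 with in-graph assign ops,
# avoiding the save/restore round trip through the disk.
import tensorflow as tf

n1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="N1")
n2_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="N2")
copy_ops = [v2.assign(v1) for v1, v2 in zip(n1_vars, n2_vars)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        # ... run one training step on N1 here ...
        if step % 4 == 0:
            sess.run(copy_ops)  # N2 <- N1, entirely in memory
```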
1
1
1
I am using TensorFlow 0.8 to train deep neural networks. Currently, I have an issue where I want to define two identical neural networks, N1 and N2, train N1, and, during the training loop, copy the updated weights from N1 to N2 every 4 iterations. In fact, I know there is a way using tf.train.Saver.save() to save all N1 weights into a .ckpt file on disk and tf.train.Saver.restore() to load those weights from the .ckpt file, which is equivalent to the copy functionality. However, this save/reload impacts the training speed, and I wonder if there are other, more efficient ways to do the copy (for example, an in-memory copy). Thanks!
Tensorflow Copy Weights Issue
0
0
0
539
38,066,526
2016-06-28T03:21:00.000
0
0
0
0
python,mongodb,windows-7,barcode-scanner,hid
38,120,429
1
false
0
0
In this scenario I would suggest using scanners/readers that can emulate a serial (COM) port. Since HID devices write to the same bus, there is a high probability that output from two or more devices could get mixed up. Moreover, I would add a device-id string as a prefix, like dev01. Binding to a COM port can be done with the pySerial module. Any comments welcome!
1
0
0
I have found a number of answers in pulling information from HIDs in Linux, but not many in Windows. I have created a system where a person can scan an ID badge when entering a briefing that logs their attendance into a database. It utilizes a Python 3.4 front end which queries and then updates a MongoDB database. Currently, I have a USB Barcode Scanner which, when scanning, acts as a keyboard and "types" what the barcode says, followed by a CR. I also have a window which takes the text input and then closes the window and executes a database query and update when the CR is received. The current issue is speed. I have been asked to expand the system so that one computer with a USB hub can take 4-8 of these Barcode Scanners at the same time, attempting to increase scanning rate to 1000 people every 5 minutes. What I am afraid will happen is that if two scans happen at almost the same time, then their inputs will overlap, generating an invalid query and resulting in both individuals not being logged. As far as I can understand, I need to place each Scanner in its own thread to prevent overlapping data, and I do not want to "lock" input from the other scanners when the system detects a scan beginning as this system is all about speed. However, I am unsure of how to differentiate the devices and how to implement the system. Any solutions would be appreciated! Please take note that I am unfamiliar with HID use in this sense, and only have a basic background in multi-threading.
Sorting Input from Multiple HIDs in Windows
0
0
0
180
38,067,324
2016-06-28T04:55:00.000
2
0
1
0
python,database,cursor
38,067,434
2
false
0
0
Probably it is most like a file handle. That does not mean that it is a file handle, and a cursor is actually an object - an instance of a Cursor class (depending on the actual db driver in use). The reason that it's similar to a file handle is that you can consume data from it, but (in general) you can't go back to previously consumed data. Consumption of data is therefore unidirectional. Reading from a file handle returns characters/bytes, reading from a cursor returns rows.
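A small sqlite3 session demonstrating the forward-only, file-handle-like consumption described above:

```python
# A cursor is an object, but it consumes rows one-directionally,
# much like reading lines from a file handle.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
cur.execute("SELECT x FROM t")

print(cur.fetchone())  # (1,)          -- consume one row
print(cur.fetchall())  # [(2,), (3,)]  -- the rest; no going back to (1,)
```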
1
1
0
In Python, what is a database "cursor" most like?
- A method within a class
- A Python dictionary
- A function
- A file handle
I have searched on the internet but I cannot find a proper justification for this question.
Database Cursor
0.197375
1
0
2,948
38,071,436
2016-06-28T08:52:00.000
0
0
0
0
python,pandas,matplotlib,boxplot
38,072,512
1
false
0
0
Use return_type='axes' to get data1.boxplot to return a matplotlib Axes object, then pass that Axes to the second call to boxplot using ax=ax. This will cause both boxplots to be drawn on the same axes. Alternatively, if you just want them plotted side by side, use matplotlib subplots, as sketched below.
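A sketch of the side-by-side layout; the dataframes are rebuilt here from the question's example so the snippet is self-contained.

```python
# Two boxplots in one figure, one per dataframe, via subplots.
import matplotlib.pyplot as plt
import pandas as pd

data1 = pd.DataFrame({"type": ["A", "B", "C"],
                      "activity": ["ACTIVE", "INACTIVE", "ACTIVE"],
                      "feature1": [12, 10, 9]})
data2 = pd.DataFrame({"type": ["A", "B", "C"],
                      "activity": ["ACTIVE", "INACTIVE", "ACTIVE"],
                      "feature1": [13, 14, 15]})

fig, axes = plt.subplots(1, 2)
data1.boxplot(column="feature1", by="type", ax=axes[0])
data2.boxplot(column="feature1", by="activity", ax=axes[1])
plt.show()
```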
1
0
1
I want to plot boxplots for each of the dataframes side by side. Below is an example dataset.

data1:
id | type | activity | feature1
1  | A    | ACTIVE   | 12
2  | B    | INACTIVE | 10
3  | C    | ACTIVE   | 9

data2:
id | type | activity | feature1
1  | A    | ACTIVE   | 13
2  | B    | INACTIVE | 14
3  | C    | ACTIVE   | 15

The first boxplot should plot feature1 grouped by type, and the second should plot feature1 grouped by activity. Both plots should be placed in the same figure. Note: I do not want combined grouping.
Plot 2 boxplots, each from a different pandas dataframe, in one figure?
0
0
0
865
38,072,334
2016-06-28T09:30:00.000
1
0
1
0
python-2.7,machine-learning
38,074,981
3
false
0
0
Having a solid knowledge of the statistical background of machine learning is, I think, more essential. NumPy, pandas, matplotlib and scikit-learn are some useful Python tools for machine learning.
1
0
0
I know the concepts of Python programming, and I have heard that Python is well suited to machine learning, so I want to start machine learning using Python. I am a novice in machine learning (I just want to start from scratch). How should I start?
How to start Machine Learning with python programming?
0.066568
0
0
3,175
38,072,956
2016-06-28T09:56:00.000
1
0
0
0
python,nginx,flask,uwsgi
38,080,033
1
false
1
0
As an option you can do the following: separate the heavy logic from the function that is called via @route and move it into a separate place (a file, another function, etc.); introduce Celery to run those pieces of heavy logic (they will be processed separately from the @route-decorated functions), a quick way of doing this being Redis as a message broker; and schedule the time-consuming functions from your @route-decorated functions in Celery (it is possible to pass parameters as well). This way the HTTP requests won't be blocked for the complete function execution time. A minimal sketch follows.
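A minimal sketch of the Celery setup described above; the module name, task name and Redis URL are assumptions.

```python
# tasks.py -- heavy logic moved out of the Flask request cycle.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def heavy_update(record_id):
    # ... the slow, data-changing work goes here ...
    pass

# In the Flask view, schedule the task instead of running it inline:
#     heavy_update.delay(record_id)
#     return "Accepted", 202
```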
1
1
0
I am running an app with Flask, uWSGI and Nginx. My uWSGI is set to spawn 4 parallel processes to handle multiple requests at the same time. Now I have one request that takes a lot of time and that changes important data concerning the application. So, when one uWSGI process is processing that request and, say, all the others are also busy, a fifth request would have to wait. The problem here is that I cannot change this request to run in an offline mode, as it changes important data and the user cannot simply remain unaware of it. What is the best way to handle this situation?
Handling time consuming requests in Flask-UWSGI app
0.197375
0
0
750
38,074,069
2016-06-28T10:45:00.000
0
0
0
0
python,google-sheets,google-api,google-sheets-api,google-api-python-client
61,849,385
8
false
0
0
For those solving this renaming with NodeJS: just use the batchUpdate API. In sheetId, indicate the ID of the sheet you're editing, set the new title in the title field, and then indicate "title" in fields.
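The question is tagged Python, so here is the same batchUpdate request sketched with the google-api-python-client; the authorized service object, the spreadsheet ID and the sheet ID are assumptions.

```python
# Rename a worksheet via the Sheets v4 batchUpdate request.
# `service` is assumed to be an authorized Sheets API client.
body = {
    "requests": [{
        "updateSheetProperties": {
            "properties": {"sheetId": 0, "title": "New sheet name"},
            "fields": "title",
        }
    }]
}
service.spreadsheets().batchUpdate(
    spreadsheetId="YOUR_SPREADSHEET_ID", body=body).execute()
```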
1
30
0
I have been trying/looking to solve this problem for a long while. I have read the documentation for gspread and I cannot find that there is a way to rename a worksheet. Any of you know how to? I would massively appreciate it! There is indeed worksheet.title which gives the name of the worksheet, but I cannot find a way to rename the actual sheet. Thank you in advance!
How do I rename a (work)sheet in a Google Sheets spreadsheet using the API in Python?
0
0
0
13,927
38,075,251
2016-06-28T11:41:00.000
0
0
0
0
python-2.7,scapy
38,086,938
1
false
0
0
In scapy, HTML text is carried in the packet's Raw layer. To save an HTML file, simply write pkt[Raw].load out to a file, as sketched below.
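A rough sketch of the idea, assuming the HTTP response is captured off the wire; the capture filter, packet count and filename are assumptions.

```python
# Collect HTTP packets and dump their Raw payloads to a file.
from scapy.all import sniff, Raw

pkts = sniff(filter="tcp port 80", count=20)
with open("page.html", "wb") as f:
    for pkt in pkts:
        if pkt.haslayer(Raw):
            f.write(pkt[Raw].load)  # the payload bytes, HTML included
```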
1
0
0
How can I use TCP to interact with a server and download its HTML file to my computer? I know that first you need to perform the 3-way handshake and then send a GET request, but what then? Thank you.
HTML download using tcp
0
0
1
30
38,077,570
2016-06-28T13:26:00.000
0
1
1
0
python,file,integer
38,079,966
1
false
0
0
I don't think you can do better than opening the file, reading each record into a buffer, and decoding it with struct.unpack. Note that for little-endian 2-byte integers the format character is '<h' (so file.read(2) gets you one integer, not two), and a whole record of n integers can be unpacked in a single call, as sketched below.
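A minimal sketch under the question's stated layout (little-endian, 2-byte integers, signed assumed); the filename and offsets are placeholders.

```python
# Read one record of little-endian 2-byte integers in a single unpack.
import struct

def read_record(f, start, size):
    f.seek(start)
    buff = f.read(size)
    count = size // 2                          # 2 bytes per integer
    return list(struct.unpack("<%dh" % count, buff))

with open("data.bin", "rb") as f:              # placeholder filename
    values = read_record(f, start=0, size=8)   # first 4 integers
```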
1
0
0
I have a "binary" file with variable-size records. Each record is composed of a number of little-endian 2-byte integers. I know the start position of each record and its size. What's the fastest way to read this into a Python array of integers?
Reading integers from file using Python
0
0
0
82
38,078,299
2016-06-28T13:59:00.000
0
0
0
0
python,matrix,ode
38,833,074
1
false
0
0
Since the mass matrix is singular, this is a "differential-algebraic equation". You can find off-the-shelf solvers for DAEs, such as the IDA solver from the SUNDIALS library. SUNDIALS has python bindings in the scikit.odes package.
1
0
1
I have a problem M*y' = f(y) that is to be solved in Python, where M is the mass matrix, y' the derivative, and y a vector, such that y1, y2, etc. refer to different points in r. Has anyone used a mass matrix on a similar problem in Python? The problem is a 2D problem in the r- and z-directions. The r-direction is discretized to reduce the problem to a 1D problem. The mass matrix is a diagonal matrix with ones and zeros on the diagonal.
Implicit DAE Mass Matrix Python
0
0
0
680
38,079,853
2016-06-28T15:05:00.000
0
0
0
0
python,machine-learning,xgboost
54,546,005
10
false
0
0
Regarding paulperry's code: if you change one line from train_split = round(len(train_idx) / 2) to train_split = len(train_idx) - 50, the model1+update2 result changes from 14.2816257268 to 45.60806270012028, and a lot of leaf=0 entries appear in the dump file. The updated model is not good when the update sample set is relatively small. For binary:logistic, the updated model is unusable when the update sample set has only one class.
1
51
1
The problem is that my training data cannot fit into RAM due to its size. So I need a method that first builds one tree on the whole training data set, calculates residuals, builds another tree, and so on (as gradient boosted trees do). Obviously, if I call model = xgb.train(param, batch_dtrain, 2) in some loop, it will not help, because in that case it just rebuilds the whole model for each batch.
How can I implement incremental training for xgboost?
0
0
0
48,270
38,081,695
2016-06-28T16:34:00.000
0
1
0
0
python,arduino,raspberry-pi,spi,raspberry-pi3
38,116,678
1
true
0
0
No, you can't. The MCP3008 is an analog-to-digital converter; it is an input device.
1
0
0
I would like to use the MCP3008 to drive motors or switch on LED arrays, for example. Until now I have only found how to read analog sensors using the Raspberry Pi GPIO. Thanks in advance.
Could I use an MCP3008 for output?
1.2
0
0
94
38,083,176
2016-06-28T17:58:00.000
0
0
1
0
python,numpy,pandas,anaconda
38,084,884
2
false
0
0
I was able to resolve this issue by using conda to remove and reinstall the packages that were failing to import. I will leave the question marked unanswered to see if anyone else has a better solution, or guidance on how to prevent this in the future.
1
2
1
I'm running Python 3.5.1 on a Windows 7 machine. I've been using Anaconda without issue for several months now. This morning, I updated my packages (conda update --all) and now I can't import numpy (version 1.11.0) or pandas(version 0.18.1). The error I get from Python is: Syntax Error: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape. This error occurs when the import statement is executed. I'm able to import other packages, some from anaconda's bundle and some from other sources without issue. Any thoughts on how to resolve this?
Python 3.5.1 Unable to import numpy after update
0
0
0
604
38,085,315
2016-06-28T20:03:00.000
1
0
1
0
python-2.7,volttron
38,085,531
1
false
0
0
VOLTTRON uses a virtual environment to isolate itself from the system Python. Once activated, VOLTTRON keeps its created packages in $VOLTTRON_HOME/packaged. If you are asking about regular Python packages (i.e., things that are installed from PyPI), you can install them using pip install, and those live in the env/lib/python2.7 folder under the volttron repository.
1
1
0
How can I install Python packages into the VOLTTRON Python interpreter? Which folders does the VOLTTRON Python interpreter check for Python packages?
Install Python packages in VOLTTRON
0.197375
0
0
99
38,089,277
2016-06-29T02:19:00.000
2
0
1
0
python-2.7,pyqt4,pyinstaller
38,112,580
2
false
0
0
Solved. This is a bug in PyInstaller 3.2; the new version in git has fixed it. Download the newest source from GitHub and everything works fine.
2
1
0
As the title says: the build is successful, but the exe can't run because it cannot find msvcr100.dll. If I put msvcr100.dll in the same directory as the exe, the exe can run, but I want only one exe file. Does anyone know how to do this?
pyinstaller 3.2 build pyqt4/python2.7 to onefile exe, can not run missing msvcr100.dll?
0.197375
0
0
1,132
38,089,277
2016-06-29T02:19:00.000
2
0
1
0
python-2.7,pyqt4,pyinstaller
40,601,355
2
false
0
0
"Solved. This is a bug in PyInstaller 3.2; the new version in git has fixed it. Download the newest source from GitHub and everything works fine." This is correct, and I can't tell you how much that answer helped me out. I had been trying to build a single-exe exploit to execute on Windows XP without crashing, for my OSCP labs/exam. I followed so many tutorials and nothing seemed to work: I was able to build the exe but could not get it to run as a single exe. If anyone who reads this is getting "This program cannot be run in DOS mode", try running it from another machine with the same build (Windows XP). There is not much info out there on how to solve that for a reverse shell on an end-of-life operating system using an exe exploit built with PyInstaller (lots of trial and error and determination). The Microsoft Visual C++ 2008 Redistributable Package (or some other version, depending on the Python version) is needed in any case, since python27.dll requires it. I was also receiving an error about msvcr100.dll when the exe was run from the GUI on my build machine (WinXP SP2); this is corrected in the 3.3 dev version on GitHub. I installed the C++ 2008 package, but that didn't solve my problem when I rebuilt the exe; the 3.3 dev PyInstaller was the solution. What I did was: grab the dev version of PyInstaller 3.3 from GitHub (the newest as of 11/14/16, as far as I could tell); make sure you have Python 2.7.x (I used 2.7.11) and a pywin32 that matches your Python 2.7.x version (and it does matter whether it's 64-bit or 32-bit); then use setup.py to install PyInstaller, making sure you do not have a previous version already installed (if so, remove it with pip or the like). I had installed it with pip first, and that was my whole issue. I was able to get all of my 32-bit single-exe exploits to run on 64-bit and 32-bit Windows machines up to Windows 10. Once that is completed, make sure PyInstaller is on your $PATH and follow the standard tutorials on creating a --onefile exe. Copy it to your Windows target machine and it should work without error. I did not need to pull any dependencies over, but you may have to include some with the --hidden-import option; how to include hidden .dlls is detailed in the PyInstaller documentation. If this still doesn't work for you, try py2exe; it's a little more complicated, but if you're determined you will figure it out. If you have code written in Python 2.x.x and 3.x.x, you can have multiple Python environments with PyInstaller installed in each; this is in the documentation as well. Thank you jim ying, your 2-sentence answer was exactly what I needed.
2
1
0
As the title says: the build is successful, but the exe can't run because it cannot find msvcr100.dll. If I put msvcr100.dll in the same directory as the exe, the exe can run, but I want only one exe file. Does anyone know how to do this?
pyinstaller 3.2 build pyqt4/python2.7 to onefile exe, can not run missing msvcr100.dll?
0.197375
0
0
1,132
38,090,643
2016-06-29T04:50:00.000
1
1
0
0
python,python-2.7,smtp
38,090,686
2
false
0
0
It is absolutely possible for a bot to be made that creates Gmail accounts; in fact, many already exist. The main problem is how to solve the captcha that is required for each new account; however, there are services already built to handle this. The only problem then is being willing to violate Google's terms of service, as I'm sure this does in one way or another.
1
0
0
I need to send an email with the same content to 1 million users. Is there any way to do so by writing a script or something? The email IDs are stored in Excel format.
Automatic email sending from a gmail account using script
0.099668
0
1
113
38,093,351
2016-06-29T07:45:00.000
2
0
1
0
python,python-2.7,concurrency,lmdb
39,633,808
1
false
0
0
According to the LMDB documentation, LMDB handles concurrent writes on its own. When multiple read-write transactions are opened at once, LMDB makes all write transactions other than the currently active one wait until the active write transaction commits. Thus it handles concurrent writes.
1
2
0
I know LMDB does not support concurrent writes. I have an application where concurrent write attempts are very rare, but they may occasionally happen. How should this be managed in a Python application? Specifically: does a concurrent write attempt raise an exception in the Python LMDB binding, so that it would be possible to schedule a retry in the exception handler? is trying a concurrent write even safe? or, is there any other or better way to handle concurrent write attempts?
How to manage concurrent LMDB writes in Python?
0.379949
0
0
1,467
38,094,042
2016-06-29T08:19:00.000
1
0
1
0
python,database,openerp
38,094,868
1
false
0
0
I suggest two different approaches. First: create different classes to represent your structure: a Driver class, a Map class (to represent the set of towns) and a Town class. The map can be represented by a graph (with one root per driver) weighted by distance in time (for example, weight 1 (day) between A and B if there is a driver that can make the trip (Monday-Tuesday)). You start at a possible node (A-Town for driver X, for example) and search for a path from your start point to your target. I'll let you work out how to represent the other information and how to build the graph; a rough class sketch follows. Second (and better, I think, if I understood your problem correctly): formulate it as a linear program. With CPLEX you will get the optimal solution for the transport. In fact, this approach isn't incompatible with the first: you could represent the situation by a graph, create constraints from it, and call CPLEX (even from Python, via the API or a system call).
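A rough sketch of the class structure suggested in the first approach; all names and fields are illustrative, not Odoo models.

```python
# Minimal classes for towns, drivers, and (weekday, town, driver)
# assignments; overlapping drivers per town/day are allowed.
class Town(object):
    def __init__(self, name):
        self.name = name

class Driver(object):
    def __init__(self, name):
        self.name = name

class RouteStop(object):
    """One delivery assignment; a town may appear on several days."""
    def __init__(self, weekday, town, driver):
        self.weekday, self.town, self.driver = weekday, town, driver

schedule = [
    RouteStop("Monday", Town("A-Town"), Driver("X")),
    RouteStop("Tuesday", Town("A-Town"), Driver("Y")),  # shared town
]
# Next possible delivery days for an address in A-Town:
days = [s.weekday for s in schedule if s.town.name == "A-Town"]
```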
1
0
0
Say I have a delivery route (e.g. Monday, Tuesday, etc.). Each route has a list of cities/suburbs that are visited on that route (A-Town, B-Town, C-Town), and a list of drivers that do each city (e.g. Driver X takes A-Town and B-Town, Driver Y takes C-Town). What is the most adaptable way to represent this in terms of designing the classes, such that it would be possible to have overlap between drivers? So drivers may share parts of a route, but on different days. Each town may have multiple delivery days, with different drivers on each of those days; e.g. A-Town may have Driver X on Monday but Driver Y on Tuesday. Given a delivery address, I'd search for the next possible weekday on which I could deliver. Right now I have a text file where each driver has a list of cities done per weekday, like so: [MONDAY] DRIVER X: A-Town, B-Town, G-Town DRIVER Y: A-Town, C-Town, Q-Town. I feel like there is a smarter way of achieving this kind of structure. I'm using Python 2.7 (OpenERP/Odoo). Any suggestions would be greatly appreciated. Note that the distance between the towns/mapping/most efficient route is irrelevant, as we promise certain cities are done on certain days rather than optimizing a delivery schedule.
Best way to represent delivery route, city, and driver
0.197375
0
0
334
38,094,764
2016-06-29T08:56:00.000
0
1
0
1
python,c++,caching,memory-management,gem5
43,447,954
1
false
0
0
IIUC, you are trying to track misses to a phyAddr across cache levels. I think you can do that by modifying appropriate Request/Response in /src/mem/protocol/*-msg.sm
1
1
0
I'm trying to modify the ~/src/mem/cache/ scripts and code to make a region-based cache system for the ARM architecture. So far I have managed to change the SConscript so that a copy of cache.cc, cache.hh and Cache.py is built by scons, but I don't know where I should redirect the memory accesses to the region caches. In other words: I want to be able to direct some memory references, based on their memory address, to access D-cache A, and the rest to D-cache B, while caches A and B are otherwise identical.
How can I create a region cache in gem5
0
0
0
228
38,095,689
2016-06-29T09:33:00.000
6
0
0
0
python,django,visual-studio-2015,django-rest-framework,django-cors-headers
38,126,584
3
false
1
0
After a lot of struggle I found one solution and want to share it with you; I hope you will like it. Open <your python location>\Lib\site-packages\django\core\management\commands\runserver.py and find the code that deals with self.addr: if not self.addr: self.addr = '::1' if self.use_ipv6 else '127.0.0.1'. It sets the default address to 127.0.0.1; change it to '0.0.0.0', as in the snippet below. Now, if you run your server with just the command ./manage.py runserver, it will run on 0.0.0.0, even from Visual Studio. Good luck.
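The edited lines of runserver.py, shown for clarity (only the IPv4 default changes):

```python
# django/core/management/commands/runserver.py -- default address logic
if not self.addr:
    self.addr = '::1' if self.use_ipv6 else '0.0.0.0'  # was '127.0.0.1'
```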
1
2
0
I am developing a Django REST framework application using Visual Studio 2015, Python 2.7 and Django 1.9. I have enabled CORS, and I can access the app from other origins when I run it from the command prompt as python manage.py runserver 0.0.0.0:8086. But in Visual Studio's auto debug it runs on 127.0.0.1. I want to configure Visual Studio to run the server on a specified IP (i.e. 0.0.0.0) so that debugging will be easy. I have tried setting the default port and address in site-packages\django\core\management\commands\runserver.py, and I am able to set the default port in the Visual Studio project properties, but I am unable to set the default IP. Can anyone help me configure the IP 0.0.0.0 as the default, instead of the current default (127.0.0.1), in Visual Studio? Thanks in advance.
how to set default ip as 0.0.0.0 for a django project to debug in visual studio 2015?
1
0
0
5,943
38,099,994
2016-06-29T12:46:00.000
0
0
1
0
python,multithreading,python-2.7,python-3.x,multiprocessing
38,122,716
1
false
0
0
Usually multithreading is faster, but it can cause problems when you have shared objects/variables. Multiprocessing avoids this case, given appropriate synchronization (with semaphores, for example).
1
1
0
I want to run concurrent processes for this flow: table A has rows, and each row has source and destination connections, with a start time and a time interval for querying the source to insert into the destination. Now, the time interval can differ between rows, for example 1 day or 1 month. So each process fetches these details, queries the source, and inserts into the destination; the start time then becomes start time + interval. I want to process every row of A concurrently. What would be the best way to go about it: threading, multiprocessing, rq, gevent, or any other implementation (for example, multithreading with queues)? Thanks.
What to use: threading or multiprocessing or rq or gevents for a database related use case
0
0
0
363
38,100,133
2016-06-29T12:51:00.000
3
0
1
0
python,installation,pip,spacy
46,600,755
5
false
0
0
I also met some problems while installing spaCy using pip. I have two tips for you: try pip uninstall spacy and then reinstall it; or use conda install spacy instead of pip, which worked fine for me.
1
2
0
In need of a Python module that features a good POS tagger for the German language, spaCy was recommended to me. On my Win10 64-bit machine with Python 3.4 I tried installing spaCy as stated on its homepage, first preparing the virtualenv (it installed partially; Windows failed at source .env/bin/activate), then using pip install. As this was not quite working, I cloned spaCy from GitHub via the git cmd and then continued in the Windows console using virtualenv .env && source .env/bin/activate, which again failed at "source". After that, pip install -r requirements.txt fails as well. I then tried pip install -U spacy, which seemed to work fine (no error messages), but further commands for using spaCy (installing a model, for example, using python -m spacy.en.download --force all) fail with Error while finding spec for 'spacy.en.download' (<class 'ImportError'>: No module named 'spacy.attrs'). What did I do wrong? How can I ensure a smooth install of spaCy? It is at least in the list when I try pip list. Thank you all in advance!
SpaCy Install (extended) fails with pip install
0.119427
0
0
3,695
38,100,722
2016-06-29T13:15:00.000
0
0
0
0
mysql,python-2.7,mysql-workbench,mariadb
38,116,416
1
false
0
0
MySQL Workbench only works with MySQL servers, with the exception of migration sources (which can be Postgres, Sybase and others). What you can do, however, is first migrate to a MySQL server, then dump the imported data and import that into MariaDB. That might require a few adjustments.
1
0
0
I am trying to migrate a database from SQL Server at 172.16.12.116 to MariaDB (Windows) at 172.16.12.107 through MySQL Workbench 6.1.4. Source selection succeeded, but when I try to connect to the target I get this error: Error during Check target DBMS connection: MySQLError("Host '172.16.12.116' is not allowed to connect to this MariaDB server (code 1130)"): error calling Python module function DbMySQLFE.connect. What could the problem be?
Error during Check target DBMS connection
0
1
0
806
38,103,354
2016-06-29T15:01:00.000
0
0
0
0
python,listview,gtk,pygtk,gtk2hs
40,487,030
1
true
0
1
It finally seems that IconView has no such feature right now; Thunar uses its own control from libexo, while Caja/Nautilus use their own controls from other libraries.
1
1
0
I'm now practicing with Gtk by developing a file manager application similar to Thunar, and I simply can't figure out how to make the IconView items flow vertically instead of horizontally, like in Thunar or Nautilus' Compact View mode, as well as in Windows Explorer's List View Mode. Should I use TreeView istead? I'm practicing in Haskell bindings, the Gtk2Hs, but I'm also familiar with native C library and Python bindings (PyGtk), so explanations using these languages are also acceptable.
How to make GtkListView items flow from top to bottom, like in Thunar or Nautilus Compact View Mode?
1.2
0
0
113
38,103,569
2016-06-29T15:10:00.000
1
0
1
0
python
38,103,739
1
false
0
0
There is no real way to do this other than reading all of the possible code paths that can be taken in that function and looking to see what exceptions can be raised there. I suppose some sort of automated tool could be written to do this, but even that is pretty tricky, because due to Python's dynamic nature just about any exception could be raised from anywhere (if I really wanted to, I could always patch a dependency function with a different function that raises something else entirely). Monkey patching aside, to actually get it right you'd need a really good type inferencer (maybe astroid could help?) to infer the various TypeErrors or AttributeErrors that could be raised from accessing non-existent members or calling functions with the wrong arguments, etc. ValueError is particularly tricky because it can still get raised when you pass something of the correct type. "In other cases, we tried to just catch Exception, because that part of the code is so critical that it should never break on an Exception, but should rather try again. The problem here is that it even caught the KeyboardInterrupt." This feels like a bad idea to me. For one, retrying code should only be done for exceptions that might give you a different result if you retry (weird connectivity issues, etc.). For your ValueError case, you'll just raise the ValueError again. The best-case scenario here is that the ValueError is allowed to propagate out of the exception handler on the second call; the worst case is that you end up in an infinite loop (or RecursionError) that doesn't give you much information to help debug. Catching Exception should be a last resort (and it shouldn't catch KeyboardInterrupt or SystemExit, since those don't inherit from Exception) and should probably only format some sort of error message that somebody can use to track down the issue and fix it.
1
4
0
We are working on a medium-sized commercial Python project and have one reccuring problem when using functions from the standard library. The documentation of the standard library often does not list all (or even any) exceptions that a function can throw, so we try all the error cases that we can come up with, have a look through the source of the library and then catch whatever is plausible. But quite often we miss that one random error that can still happen, but that we didn't come up with. For example, we missed, that json.loads() can raise a ValueError, if any of the built-in constants are spelled the wrong way (e.g. True instead of true). In other cases, we tried to just catch Exception, because that part of the code is so critical, that it should never break on an Exception, but should rather try again. The problem here is, that it even caught the KeyboardInterrupt. So, is there any way to find all exceptions that a function can raise, even if the documentation does not say anything about that? Are there any tools that can determine what exceptions can be raised?
Finding all exceptions that a function can raise
0.197375
0
0
176
38,105,605
2016-06-29T16:45:00.000
0
0
0
0
python,matlab,python-2.7,cplex
38,106,001
1
true
0
0
So it turns out that on occasion the code will run even if some variables aren't specified as double, while in other cases integer division or the like produces false results. I have no idea how this correlates with the input, as it really shouldn't, but I just went and specified all variables in the relevant section of code as doubles, and that fixed it. So, tl;dr: even if it runs, just enforce that all variables are doubles and the problem is solved. Really something MathWorks should fix in their Python API.
1
0
1
So I have a rather complicated MATLAB function (it calls a simulation that in turn calls an external optimisation suite, CPLEX or Gurobi). For certain settings and inputs the MATLAB function and the function called from Python give the same result, but for others they differ (the correct answer is ~4500; Python sometimes returns 0.015... or 162381), with widely varying results I can't spot a pattern or correlation for. My guess would be either something with int/float/double variable conversions, or some form of memory problem. The result comes straight from CPLEX, so I'm a little confused as to why it changes. On a side note, if I return a structure that contains a structure of arrays, that kills the Python kernel, which makes debugging from Python a little harder (I have pymatbridge and metakernel installed). Has anyone had similar issues with unreliable MATLAB functions in Python? Any solution ideas other than executing MATLAB from the console and reading in a results file?
Odd results from MATLAB function called in python
1.2
0
0
62
38,106,808
2016-06-29T17:51:00.000
2
0
0
0
python,sqlite,unicode
38,146,103
3
false
0
0
SQLite allows you to read/write Unicode text directly. u'O\u2083' is two characters u'O' and u'\u2083' (your question has a typo: 'u\2083' != '\u2083'). I understand that u\2083 is not being stored in sqlite database as unicode character but as 6 unicode characters (which would be u,\,2,0,8,3) Don't confuse u'u\2083' and u'\u2083': the latter is a single character while the former is 4-character sequence: u'u', u'\x10' ('\20' is interpreted as octal in Python), u'8', u'3'. If you save a single Unicode character u'\u2083' into a SQLite database; it is stored as a single Unicode character (the internal representation of Unicode inside the database is irrelevant as long as the abstraction holds). On Python 2, if there is no from __future__ import unicode_literals at the top of the module then 'abc' string literal creates a bytestring instead of a Unicode string -- in that case both 'u\2083' and '\u2083' are sequences of bytes, not text characters (\uxxxx is not recognized as a unicode escape sequence inside bytestrings).
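A short Python 2 session making the answer's distinction concrete; the unicode_escape repair in the last line is one possible fix, not something the answer prescribes.

```python
# -*- coding: utf-8 -*-
# u'\u2083' is one character; 'O\\u2083' is seven separate bytes.
s1 = u'\u2083'       # SUBSCRIPT THREE, a single character
s2 = u'O\u2083'      # two characters: 'O' + SUBSCRIPT THREE
s3 = 'O\\u2083'      # what the OP reads back: O \ u 2 0 8 3

print len(s1), len(s2), len(s3)           # 1 2 7
print s3.decode('unicode_escape') == s2   # True -- one way to repair it
```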
1
0
0
I have a list of variables with Unicode characters, some of them for chemicals, like ozone gas: 'O\u2083'. All of them are stored in a SQLite database, which is read in Python code to produce O3. However, when I read it I get 'O\\u2083'. The SQLite database is created from a csv file that contains the string 'O\u2083', among others. I understand that \u2083 is not being stored in the SQLite database as a Unicode character but as 6 separate characters (which would be \, u, 2, 0, 8, 3). Is there any way to recognize Unicode characters in this context? My first option to solve it would be to create a function that recognizes the character sequence and replaces it with the Unicode character. Is there anything like this already implemented?
Reading unicode characters from file/sqlite database and using it in Python
0.132549
1
0
1,263
38,109,860
2016-06-29T20:55:00.000
0
0
1
0
python,python-2.7,opencv
38,110,151
1
true
0
0
Try to reinstall it with sudo apt-get install python-opencv, but first check something you might be skipping: make sure the script you are running in the terminal uses the same Python version/location as IDLE. Maybe your IDLE is running on a different interpreter (a different location). Open IDLE and check the path of the cv2 module via cv2.__file__, or check the interpreter paths via sys.path. Then check the executable Python path when running the script from the terminal; it must be the same, or else you need to explicitly set PYTHONPATH to the executable path shown in IDLE. Edit: according to the comments, the problem you are facing is with the execution path; add the IDLE execution path to the PATH environment variable on Windows. You can do it on the fly with SET PATH=%PATH%;c:\python27 in cmd (change the path to match your IDLE's location).
1
0
1
Even though I believe I have installed OpenCV correctly, I cannot overcome the following problem. When I start a new Python project from IDLE (2.7), the cv2 module is imported successfully. If I close IDLE and try to run the .py file, an error message is displayed that says "ImportError: No module named cv2". Then if I create a clean project through IDLE, it works until I close it. What could the problem be? P.S. I am using Python 2.7 and OpenCV 3.1, but I also tried with 2.4.13, on Windows 10.
OpenCV Python - cv2 Module not found
1.2
0
0
2,805
38,111,340
2016-06-29T22:54:00.000
1
0
1
1
python,python-3.x,python-3.5,pyperclip
65,168,753
3
false
0
0
For those working in a venv, make sure that you have pyperclip installed in the directory that your venv is running in. Eg. C:\MY_PROJECT\venv\Lib\site-packages should include the pyperclip module. If you don't find it here, have a look at where you installed Python and you'll find it there. Eg. C:\Users\Username\AppData\Local\Programs\Python\Python39\Lib\site-packages
2
3
0
I am having trouble importing Pyperclip in IDLE. I am running windows 7 (64-bit). I have Python 3.5.2 Installed on: C:\Python\Python35. I opened command prompt and initiated the install by typing pip install pyperclip after changing directory to C:\Python\Python35\Scripts. It successfully installed Pyperclip-1.5.27. I then went to IDLE and typed in import pyperclip but the following error is showing up: Traceback (most recent call last): File "", line 1, in import pyperclip ImportError: No module named 'pyperclip' I tried to fix this by adding "C:\Python\Python35" to the end of the "Path" variable, in the systems environmental variables.
Can't import Pyperclip
0.066568
0
0
18,433
38,111,340
2016-06-29T22:54:00.000
6
0
1
1
python,python-3.x,python-3.5,pyperclip
38,127,468
3
true
0
0
It unpacked pyperclip in the wrong directory. I copied the entire pyperclip folder and put it in C:/python/python35, now it works as it should. Seems like a noob mistake on my part, but it took me a long time to figure this out. I hope this helps someone in the future.
2
3
0
I am having trouble importing Pyperclip in IDLE. I am running windows 7 (64-bit). I have Python 3.5.2 Installed on: C:\Python\Python35. I opened command prompt and initiated the install by typing pip install pyperclip after changing directory to C:\Python\Python35\Scripts. It successfully installed Pyperclip-1.5.27. I then went to IDLE and typed in import pyperclip but the following error is showing up: Traceback (most recent call last): File "", line 1, in import pyperclip ImportError: No module named 'pyperclip' I tried to fix this by adding "C:\Python\Python35" to the end of the "Path" variable, in the systems environmental variables.
Can't import Pyperclip
1.2
0
0
18,433
38,113,656
2016-06-30T03:47:00.000
0
0
0
1
python-2.7,sockets,arduino-uno,crossbar,wamp-protocol
38,119,608
1
true
0
0
Unfortunately you cannot do so directly at the moment. For the time being, you need to connect the Uno to some component which accepts messages from the Uno and can talk WAMP as well. We are working on a C library for lower-end devices, but as far as I can tell (I'm not directly involved) something with the specs of the Uno will remain out of the scope of WAMP even then since the initial plan is that the library itself will consume about 8k of RAM.
1
0
0
I'm new to the WAMP protocol and the Crossbar.io servers that are based on it. The problem is: I have an Arduino Uno + Ethernet Shield and I want to send information to the Crossbar server. The Arduino Uno has no support for Autobahn, WAMP or Crossbar; I can only send normal packets via UDP and WebSocket with an Uno + Ethernet. Is there some way I can read such a UDP packet from the Arduino in the Crossbar server?
Receiving an UDP Packet from Arduino in a CrossbarServer in Python
1.2
0
1
130
38,115,108
2016-06-30T06:05:00.000
1
0
0
0
python,opencv,apache-spark,pyspark,databricks
63,753,350
2
false
0
0
Try installing numpy first, followed by opencv-python; it will work. Steps: navigate to Install Library --> select PyPI --> in Package, enter numpy (after the installation completes, proceed to step 2); then navigate to Install Library --> select PyPI --> in Package, enter opencv-python.
1
0
1
I want to install Python's cv2 library on a Spark cluster using Databricks Community Edition, and I'm going to Workspace -> Create -> Library, as per the normal procedure, then selecting Python in the Language combobox. But in the "PyPI Package" textbox I tried "cv2" and "opencv" and had no luck. Has anybody tried this? Do you know if cv2 can be installed on the cluster through this method, and if so, which name should be used in the textbox?
Install python CV2 on spark cluster(data bricks)
0.099668
0
0
1,543
38,116,078
2016-06-30T07:04:00.000
2
0
0
0
python,scikit-learn
38,124,167
1
false
0
0
These two are different things, and you normally need them both in order to make a good SVC model. 1) The first one means that in order to scale (normalize) the X data matrix you divide each column by its L2 norm, which is just sqrt(sum(abs(X[:, j])**2)), where j is each column in your data matrix X. This ensures that none of the values of each column become too big, which would make it tough for some algorithms to converge. 2) Irrespective of how scaled (and small in values) your data is, there may still be outliers, or some features (j) may be way too dominant, and your algorithm (LinearSVC()) may over-trust them when it shouldn't. This is where L2 regularization comes into play: apart from the function the algorithm minimizes, a cost is applied to the coefficients so that they don't become too big. In other words, the coefficients of the model become an additional cost in the SVC cost function. How much cost? That is decided by the C (L2) value, as C*(beta[j])^2. To sum up: the first tells you which value to divide each column of the X matrix by; the second tells you how much weight a coefficient may burden the cost function with. A short sketch contrasting the two follows.
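A short sketch contrasting the two uses of 'l2'; the toy data is an assumption.

```python
# normalize(..., norm='l2') scales the data; LinearSVC(penalty='l2')
# regularizes the model's coefficients. They are independent steps.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

X = np.array([[1.0, 200.0], [2.0, 100.0], [3.0, 300.0], [4.0, 150.0]])
y = np.array([0, 0, 1, 1])

X_scaled = normalize(X, norm='l2', axis=0)  # each column gets unit L2 norm
clf = LinearSVC(penalty='l2', C=1.0)        # C tunes the coefficient cost
clf.fit(X_scaled, y)
print(clf.coef_)
```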
1
1
1
Here are two methods involving 'l2': 1) this one is used in data pre-processing: sklearn.preprocessing.normalize(X, norm='l2'); 2) the other is used in the classification method: sklearn.svm.LinearSVC(penalty='l2'). I want to know what the difference between them is. Must both steps be used in a complete model, or is using just one of them enough?
python sklearn: what is the different between "sklearn.preprocessing.normalize(X, norm='l2')" and "sklearn.svm.LinearSVC(penalty='l2')"
0.379949
0
0
378
38,118,942
2016-06-30T09:24:00.000
0
0
0
0
python,pandas,memory,pydev
38,119,245
2
false
0
0
If all you need is a virtualization of the disk as a large RAM memory you might set up a swap file on the system. The kernel will then automatically swap pages in and out as needed, using heuristics to figure out what pages should be swapped and which should stay on disk.
1
5
1
I'd like to know if there is a method or Python package that lets me use a large dataset without writing it all to RAM. I'm also using pandas for statistical functions. I need access to the entire dataset because many statistical functions need the entire dataset to return credible results. I'm using PyDev (with the Python 3.4 interpreter) on LiClipse with Windows 10.
Use hard drive instead of RAM in Python
0
0
0
5,299
38,120,223
2016-06-30T10:17:00.000
0
0
0
0
linux,macos,python-3.x,webkit,gtk
48,846,011
2
false
0
1
For GTK3 brew install pygobject3 Otherwise brew install pygobject
1
3
0
I want to make a browser with Python, GTK and WebKit for educational purposes. I have GTK and it works, but I can't find how to get WebKit for Mac OS X. I tried brew, pip3 and easy_install. And I'm not sure if the PyQt WebKit port is the same as WebKit.
How to get Webkit for Mac OS X
0
0
0
2,977
38,125,246
2016-06-30T13:54:00.000
1
0
1
1
python,pip
38,127,300
1
true
0
0
You need to add C:\Python27\Scripts to your path as well. That is where the pip executable lives by default. Also, remember to close and reopen your command shell after you change your path variable to make sure that the update is loaded.
1
0
0
I am trying to install matplotlib into python27 but am running into issues with pip. As stated when I try any 'pip' command I get 'pip' is not recognized as an internal or external command. I checked my path variables and they point to the location of my OSGeo4W Python27 install. get-pip.py works and it even says I am up to date on my pip install. I am wondering if the issue is that I have more than one Python installed on my PC. Arc Desktop decided I needed C:\Python27 and C:\Python34 on top of the OSGeo4W install in the C:\OSGeo4W64\apps\Python27 Anyone know what the issue might be? Can I consolidate my python installs without breaking anything?
cannot get pip to be recognized as a command
1.2
0
0
729
38,128,164
2016-06-30T15:59:00.000
0
0
1
0
python,matlab,debugging,numpy,pycharm
57,539,584
2
false
0
0
You need to ensure that after you "view as array" you then enter the correct slice. I.e. if you view a color image which has shape (500, 1000, 3) as an array, the default slicing option will be image[0]. This is the first row of pixels and will appear as a (1000, 3) array. In order to see one of the three color channels you must change the slicing option to image[:, :, color], then you will see one of the three color channels slices appear as a (500, 1000) array.
2
6
1
First of all, sorry if it's not the place to post this question, I know it is more related to the software I'm using to program than programming itself, but I figured someone here would probably know the answer. I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently translating Matlab code to Python code and I often need to compare outputs of functions. In PyCharm's debugger, I can right click on a variable in the variable space and then press « View as array ». This gives me a nice grid view of my array (Excel kind of grid) and I can easily compare with my array in Matlab, which can also be displayed in a grid. However, sometimes, this option won't work in PyCharm and I don't know why! For example, I have a variable of type numpy.ndarray containing 137 by 60 floats and when I press « view as array », it displays the window, but instead of showing the grid, it shows « Nothing to show ». Curiously, I tried to copy the first 30 lines in another variable and this time PyCharm was able to show me the grid associated with this new variable. Usually, the number doesn't seem to be a problem. I tried to display a 500 by 500 array containing floats and it did just fine. If someone could tell me why this happens and how I can overcome this problem, I'd be very glad. Also, if anyone has another way to display a matrix in Python in an elegant way, I'd take it too since it could also help me in my task! Thanks!
Pycharm debugger, view as array option
0
0
0
2,669
38,128,164
2016-06-30T15:59:00.000
6
0
1
0
python,matlab,debugging,numpy,pycharm
41,962,870
2
false
0
0
I encountered the same problem when I tried to view a complex array with the 'Color' check box checked. Unchecking the check box showed the array. Perhaps some inf or nan value present in your array prevents the colored array from being shown.
2
6
1
First of all, sorry if it's not the place to post this question, I know it is more related to the software I'm using to program than programming itself, but I figured someone here would probably know the answer. I often use PyCharm (currently on version 2016.1.2) and its useful debugger to code in Python. I'm currently translating Matlab code to Python code and I often need to compare outputs of functions. In PyCharm's debugger, I can right click on a variable in the variable space and then press « View as array ». This gives me a nice grid view of my array (Excel kind of grid) and I can easily compare with my array in Matlab, which can also be displayed in a grid. However, sometimes, this option won't work in PyCharm and I don't know why! For example, I have a variable of type numpy.ndarray containing 137 by 60 floats and when I press « view as array », it displays the window, but instead of showing the grid, it shows « Nothing to show ». Curiously, I tried to copy the first 30 lines in another variable and this time PyCharm was able to show me the grid associated with this new variable. Usually, the number doesn't seem to be a problem. I tried to display a 500 by 500 array containing floats and it did just fine. If someone could tell me why this happens and how I can overcome this problem, I'd be very glad. Also, if anyone has another way to display a matrix in Python in an elegant way, I'd take it too since it could also help me in my task! Thanks!
Pycharm debugger, view as array option
1
0
0
2,669
38,129,077
2016-06-30T16:47:00.000
1
1
1
1
python,anaconda
38,129,078
1
false
0
0
Well, there is a -s flag on the python executable to disable searching the user site directory (~/.local/lib/python2.7/site-packages etc.). That solves the problem above!
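A quick way to verify the flag is doing what you expect; run this once normally and once as python -s script.py:

import site
import sys

print(site.ENABLE_USER_SITE)                   # False when run with -s
print([p for p in sys.path if '.local' in p])  # expect [] under -s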
1
0
0
I have some packages installed under my ~/.local/lib/python2.7/site-packages/ subdir, which was for use with system python (/usr/bin/python). Now I have just installed Anaconda python (which is also python 2.7, but minor version 11). The whole idea of Anaconda distro is to have a self-containing python environment, such that EVERY module resides within anaconda install tree. But what annoys me is that for some reason I cannot disable inclusion of ~/.local/lib/python2.7/site-packages/ from sys.path although I did not have PYTHONPATH environment variable. Is it possible to run python executable (in this case, Anaconda's python executable) without having to implicitly add ~/.local/lib/python2.7/site-packages/ and the eggs underneath it in the python search path? Why this problem? Unfortunately the ~/.local/lib/python2.7/site-packages/easy-install.pth also contains a reference to /usr/lib/python2.7/dist-packages, which causes this system-wide dist-packages to still be searched for.
How to run python without including ~/.local/lib/pythonX.Y/site-packages in its module search path
0.197375
0
0
645
38,130,008
2016-06-30T17:44:00.000
3
0
0
0
python,matlab,numpy,scipy
38,130,043
3
false
0
0
Python's standard library has a function for this: itertools.permutations. You can call it on any iterable in python and it returns all full-length permutations.
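A sketch of a perms() equivalent: itertools.permutations yields results in lexicographic order when the input is sorted, so reversing the list gives MATLAB's reverse-lexicographic order (numpy is only used here to stack the tuples into a matrix):

from itertools import permutations

import numpy as np

def perms(seq):
    return np.array(list(permutations(sorted(seq))))[::-1]

print(perms([1, 2, 3]))   # 6 x 3 matrix, rows in reverse-lexicographic order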
1
0
1
Is there an equivalent method in numpy or scipy for matlab's perms function? In matlab, perms returns a matrix of all possible permutations of the input in reverse lexicographical order.
Python equivalent for matlab's perms
0.197375
0
0
865
38,130,483
2016-06-30T18:12:00.000
0
0
1
1
python,visual-studio,visual-c++,pip,setup.py
38,182,957
1
false
0
0
Instead of setting VS100COMNTOOLS=%VS110COMNTOOLS% in cmd, i did SET VS100COMNTOOLS=C:\Program Files\Microsoft Visual Studio 11.0\Common7\Tools\ and it was picking correctly but again thrown another pile of errors as VS11 compiler is different and cannot compile Python 3.4 code properly. I uninstalled VS11, Installed VS10 and it worked.
1
0
0
I was trying to install Airflow in windows through command prompt using pip. The python is 3.4.2, pip included. I am getting the below error. distutils.errors.DistutilsError: Setup script exited with error: Microsoft Visual C++ 10.0 is required (Unable to find vcvarsall.bat). I have installed Visual studio 2012 but Python 3.4 looks for VS10 by default. I tried to trick Python to use the newer visual studio by Executing the command set VS100COMNTOOLS=%VS110COMNTOOLS%. Adding new system variable VS100COMNTOOLS as variable name and gave the value as VS110COMNTOOLS. Both tricks did not work. I am still getting the same old error. The file vcvarsall.bat is present in C:\Program Files\Microsoft Visual Studio 11.0\VC what is missing here? how can I get rid of this error?
Error while installing Airflow using pip in windows- Unable to find vcvarsall.bat
0
0
0
689
38,130,962
2016-06-30T18:42:00.000
3
1
0
0
python
38,131,236
2
false
0
0
StringIO is for text. You use it when you have text in memory that you want to treat as coming from or going to a file. BytesIO is for bytes. It's used in similar contexts as StringIO, except with bytes instead of text.
2
5
0
What is the difference between StringIO and BytesIO? And what sorts of use cases would you use each one for?
What is the difference between StringIO and BytesIO?
0.291313
0
0
2,153
38,130,962
2016-06-30T18:42:00.000
5
1
0
0
python
38,131,261
2
true
0
0
As the name says, StringIO works with str data, while BytesIO works with bytes data. bytes are raw data, e.g. 65, while str interprets this data, e.g. using the ASCII encoding 65 is the letter 'A'. bytes data is preferable when you want to work with data agnostically - i.e. you don't care what is contained in it. For example, sockets only transmit raw bytes data. str is used when you want to present data to users, or interpret at a higher level. For example, if you know that a file contains text, you can directly interpret the raw bytes as text.
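A quick illustration of the two classes side by side:

from io import BytesIO, StringIO

text = StringIO()
text.write('A')           # str in, str out
print(text.getvalue())    # 'A'

raw = BytesIO()
raw.write(b'\x41')        # bytes in, bytes out
print(raw.getvalue())     # b'A'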
2
5
0
What is the difference between StringIO and BytesIO? And what sorts of use cases would you use each one for?
What is the difference between StringIO and BytesIO?
1.2
0
0
2,153
38,134,361
2016-06-30T22:38:00.000
5
0
0
0
python,macos,pycharm,libraries
38,134,383
2
true
0
0
You need to setup your project in PyCharm to use the Python interpreter that has your libraries: Go to: file->settings->project->project interpreter And select the appropriate interpreter from the dropdown. After selecting an interpreter, the window displays a list of libraries installed on that interpreter; this should further help you make the right selection.
2
2
0
If I do something like "import selenium" (or any other kind of third party library) in a .py file and then run it from the terminal, it works just fine. But if I make a new file in PyCharm CE and do the same thing, it can't find the library / module. How can I fix this or get it to point in the right location? I use a Macbook Pro.
Why won't PyCharm see my libraries?
1.2
0
1
2,916
38,134,361
2016-06-30T22:38:00.000
1
0
0
0
python,macos,pycharm,libraries
64,860,059
2
false
0
0
I've faced a similar issue on Pop!_OS after installing PyCharm via Flatpak. I think the installation is somehow incomplete, as I've had these issues (among others):
1. The installer could not create the menu shortcut due to the lack of credentials. Unlike during a typical installation, it wouldn't ask for the password, and instead I had to uncheck that option altogether.
2. The built-in terminal defaulted to sh. Even after changing to bash, it would not read my .bashrc and many commands were missing.
3. After changing the interpreter to a local virtualenv, it would just default to Python 3.7 (even though the version was actually 3.8) and it didn't see any of my installed libraries.
4. When I tried to use a Docker Compose environment, the IDE failed to detect the Docker Compose installation.
I eventually uninstalled PyCharm and downloaded it directly from the JetBrains website to make it work correctly.
2
2
0
If I do something like "import selenium" (or any other kind of third party library) in a .py file and then run it from the terminal, it works just fine. But if I make a new file in PyCharm CE and do the same thing, it can't find the library / module. How can I fix this or get it to point in the right location? I use a Macbook Pro.
Why won't PyCharm see my libraries?
0.099668
0
1
2,916
38,137,882
2016-07-01T05:45:00.000
0
0
1
0
python
38,138,014
4
false
0
0
I can tell you a way to go about it. Strip the quotes and the outer brackets, then split the string on spaces. Iterate over the resulting list and check for opening brackets: keep a count of the opening brackets and join the items back into one string (with spaces between them) until you encounter an equal number of closing brackets. The remaining items stay as they are. You could try implementing it; a sketch is below. If you face any issues, I'll help you with the code. @chong's answer is a neater way to go about it.
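A sketch of that scan, assuming keys always start with '-' and only values contain bracketed spaces (parse_flags is a made-up helper name):

def parse_flags(a):
    tokens = a[1:-1].split()       # strip the single outer brackets
    result, key, buf, depth = {}, None, [], 0
    for tok in tokens:
        if tok.startswith('-') and depth == 0:
            if key is not None:    # commit the previous key/value pair
                result[key] = ' '.join(buf)
            key, buf = tok, []
        else:
            depth += tok.count('[') - tok.count(']')
            buf.append(tok)
    if key is not None:
        result[key] = ' '.join(buf)
    return result

a = '[-sfdfj aidjf -dugs jfdsif -usda [[s dfdsf sdf]]]'
print(parse_flags(a))
# {'-sfdfj': 'aidjf', '-dugs': 'jfdsif', '-usda': '[[s dfdsf sdf]]'}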
1
0
0
I have a string value like: a='[-sfdfj aidjf -dugs jfdsif -usda [[s dfdsf sdf]]]'. I want to transform "a" into a dictionary: the strings with a preceding "-" character should be keys, and what goes after the space should be the value of the preceding key. If we are working with "a", the resulting dictionary I want is: dict_a={'-sfdfj': 'aidjf', '-dugs': 'jfdsif', '-usda': '[[s dfdsf sdf]]'}. This would be simple if not for the last value ('[[s dfdsf sdf]]'), which contains spaces; otherwise I would just strip the external brackets and split "a", then convert the resulting list into dict_a, but alas reality is not on my side. Even getting a list like list_a=['-sfdfj', 'aidjf', '-dugs', 'jfdsif', '-usda', '[[s dfdsf sdf]'] would be enough. Any help will be appreciated.
not standard splitting
0
0
0
55
38,143,219
2016-07-01T10:35:00.000
0
0
0
0
python,python-3.x,scapy,diameter-protocol,dpkt
40,066,101
1
false
0
0
I would suggest using tshark. With tshark you can convert the pcap files to text files containing the AVPs that you are interested in. Once you have the text file, I believe it would be easy to extract the information using python.
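One way to drive tshark from Python; the field names here are assumptions, so check tshark -G fields for the AVPs you actually need:

import subprocess

out = subprocess.check_output([
    'tshark', '-r', 'capture.pcap', '-Y', 'diameter',
    '-T', 'fields', '-e', 'diameter.cmd.code', '-e', 'diameter.avp.code',
])
for line in out.decode().splitlines():
    print(line.split('\t'))   # one row of extracted fields per packet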
1
2
0
I have a diameter packet capture pcap file (using tcpdump) containing some AVPs. I'd like to parse the pcap file and access/retrieve the AVPs. I'm using python3.5.1. The dpkt library apparently supports diameter well but it's not yet available for python3. I tried converting it via 2to3-3.5 script but the conversion isn't full-proof and I'm hitting unicode errors while parsing the pcap. I am trying to use scapy now. I need some help/examples in how to use scapy to: parse a pcap file retrieve/parse AVPs from the pcap. Any help would be appreciated. Regards Sharad
How to parse and retrieve diameter AVPs in python?
0
0
0
1,867
38,143,991
2016-07-01T11:13:00.000
1
0
0
0
python,numpy,import,module,scikit-learn
38,144,150
2
false
0
0
Numpy conveniently imports its submodules in its __init__.py file and adds them to __all__. There's not much you can do about it when using a library - it either does it or not. sklearn apparently doesn't.
1
2
1
My question is specific to scikit-learn python module, but I had similar issues with matplotlib as well. When I want to use sklearn, if I just do 'import sklearn' and then call whatever submodule I need, like ' sklearn.preprocessing.scale()', I get an error "AttributeError: 'module' object has no attribute 'preprocessing'" On the other hand, when I do 'from sklearn import preprocessing' and then use 'preprocessing.scale()' it works normally. When I use other modules like Numpy, it is sufficient to just 'import numpy' and it works well. Therefore, I would like to ask if anyone can tell me why is this happening and if I am doing something wrong? Thanks.
importing whole python module doesn't allow using submodules
0.099668
0
0
1,297
38,144,825
2016-07-01T11:56:00.000
7
0
1
0
python,pandas
38,145,044
2
true
0
0
No, Python is just a language and doesn't really do anything on its own. A particular Python library might implement caching, but the standard functions you use to open and read files don't do so. The higher-level file-loading functions in Pandas and the CSV module don't do any caching either. The operating system might do some caching of its own, but you can't control that from within Python.
1
5
1
I was wondering if Python is smart enough enough to cache repeatedly accessed files, e.g. when reading the same CSV with pandas or unpickling the same file multiple times. Is this even Python's responsibility, or should the operating system take care of it?
Does Python cache repeatedly accessed files?
1.2
0
0
1,776
38,145,048
2016-07-01T12:08:00.000
0
0
0
0
python,sql-server,database,oracle,python-3.x
38,146,099
1
false
0
0
I may be missing something here. Why don't you connect to your Oracle database as a SQL Server linked server (or the other way around) ?
1
0
0
i have been trying to connect to SQL Server (I have SQL Server 2014 installed on my machine and SQL Native Client 11.0 32bit as driver) using Python and specifically pyodbc but i did not manage to establish any connection. This is the connection string i am using: conn = pyodbc.connect('''DRIVER={SQL Server Native Client 11.0}; SERVER=//123.45.678.910; DATABASE=name_database;UID=blabla;PWD=password''') The error message i am getting is this: Error: ('08001', '[08001] [Microsoft][SQL Server Native Client 11.0]Named Pipes Provider: Could not open a connection to SQL Server [161]. (161) (SQLDriverConnect)') Now, is this caused by the fact that both Python (i have version 3.5.1) and pyodbc are 64bit while the SQL Driver is 32bit? If yes, how do i go about solving this problem? How do i adapt pyodbc to query a 32bit database? I am experiencing the same problem with Oracle database OraCLient11g32_home1 For your information, my machine runs Anaconda 2.5.0 (64-bit). Any help would be greatly appreciated.Thank you very much in advance.
Database Connection SQL Server / Oracle
0
1
0
228
38,146,607
2016-07-01T13:23:00.000
0
0
0
1
python,caching,tornado,requesthandler
38,439,059
2
true
0
0
Depends on how and where you want to be able to access this cache in the future, and how you want to handle invalidation. If the CSV files don't change then this could be as simple as @functools.lru_cache or a global dict. If you need one cache shared across multiple processes then you could use something like memcached or redis, but then you'll still have some parsing overhead depending on what format you use. In any case, there's not really anything Tornado-specific about this.
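A minimal sketch of the lru_cache option, assuming the CSV files never change while the server is running (nothing here ever invalidates the cache):

import functools

import pandas as pd

@functools.lru_cache(maxsize=32)
def load_csv(path):
    # parsed once per distinct path, then served from the cache
    return pd.read_csv(path)

# inside a RequestHandler: df = load_csv('data.csv')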
1
0
1
I want to cache a pandas dataframe in a Tornado RequestHandler, so I don't have to repeat the pd.read_csv() for every hit to that particular url.
Where can i cache pandas dataframe in tornado requesthandler
1.2
0
0
906
38,146,821
2016-07-01T13:33:00.000
0
0
0
0
python,opencv,matrix,computer-vision
57,898,590
2
false
0
0
Yes, computing the fundamental matrix gives a different matrix every time, as it is defined only up to a scale factor. It is a rank-2 matrix with 7 DOF (3 rotation, 3 translation, 1 scaling). The fundamental matrix is a 3x3 matrix; F33 (3rd row, 3rd column) is the scale factor. You may ask why we fix F33 to a constant: because (x_left)' F (x_right) = 0 is a homogeneous equation with infinitely many solutions, and we add a constraint by making F33 constant.
2
2
1
I am estimating the fundamental matrix and the essential matrix using the built-in functions in OpenCV. I provide input points to the function by using ORB and a brute-force matcher. These are the problems I am facing: 1. The essential matrix that I compute from the built-in function does not match the one I find from mathematical computation using the fundamental matrix as E = K.t() F K. 2. As I vary the number of points used to compute F and E, the values of F and E keep changing. The function uses the RANSAC method; how do I know which value is the correct one? 3. I am also using a built-in function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the direction vector T changes without a pattern: say it was in the X direction for one value of E; if I change the value of E, it changes to Y or Z. Why is this happening? Has anyone else had the same problem? How do I resolve it? My project involves taking measurements of objects from images. Any suggestions or help would be welcome!
Estimation of fundamental matrix or essential matrix from feature matching
0
0
0
1,100
38,146,821
2016-07-01T13:33:00.000
1
0
0
0
python,opencv,matrix,computer-vision
38,615,979
2
false
0
0
Both F and E are defined up to a scale factor. It may help to normalize the matrices, e. g. by dividing by the last element. RANSAC is a randomized algorithm, so you will get a different result every time. You can test how much it varies by triangulating the points, or by computing the reprojection errors. If the results vary too much, you may want to increase the number of RANSAC trials or decrease the distance threshold, to make sure that RANSAC converges to the correct solution.
2
2
1
I am estimating the fundamental matrix and the essential matrix using the built-in functions in OpenCV. I provide input points to the function by using ORB and a brute-force matcher. These are the problems I am facing: 1. The essential matrix that I compute from the built-in function does not match the one I find from mathematical computation using the fundamental matrix as E = K.t() F K. 2. As I vary the number of points used to compute F and E, the values of F and E keep changing. The function uses the RANSAC method; how do I know which value is the correct one? 3. I am also using a built-in function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the direction vector T changes without a pattern: say it was in the X direction for one value of E; if I change the value of E, it changes to Y or Z. Why is this happening? Has anyone else had the same problem? How do I resolve it? My project involves taking measurements of objects from images. Any suggestions or help would be welcome!
Estimation of fundamental matrix or essential matrix from feature matching
0.099668
0
0
1,100
38,147,124
2016-07-01T13:47:00.000
2
0
0
1
python,windows,curl
38,149,152
1
false
0
0
Take a step back and do some sanity checks. Here are some steps to try:
1. Copy the path in your script, paste it into the file explorer (remove the escape characters) and verify that the path is indeed correct.
2. Verify that you have proper permissions to the path.
3. Copy the executable (curl.exe) to the same location as your python script; this eliminates the need to specify a path (sanity check):

import subprocess
path = 'curl.exe'
subprocess.call([path])

If this works, you can then move it to your expected path and verify.
1
0
0
I am trying to access the curl executable on my computer using a subprocess call, but when I do so, I get the following error: WindowsError: [Error 2] The system cannot find the file specified. My code looks as follows: path = 'C:\\Users\\Username\\AppData\\Local\\Continuum\\Anaconda2\\Library\\bin\\curl.exe'; subprocess.call([path]). I know the path is correct; is there a reason that my script is balking at this? As you can see, I am running the Anaconda2 Python interpreter, not the standard one from Python.org.
How to access the cURL executable with Python subprocess module
0.379949
0
0
196
38,149,341
2016-07-01T15:44:00.000
1
0
1
1
python,midi,python-3.5
38,149,378
1
false
0
0
Installing a module for one python version does not install it for the others. If you're using pip, run something like pip3 install mido. pip defaults to installing for Python 2, so you'll need to explicitly call pip3.
1
4
0
I'm trying to use mido with Python 3.5.1. I have successfully installed it (it says "successfully installed mido-1.1.14" in command prompt) but when I tried to import it in python, it gives me the message ImportError: No module named 'mido' I understand that mido targets Python 2.7 and 3.2, but does it really not work with 3.5 at all? Or am I missing something here? I'm using Windows 8.1, 64-bit; but my Python 3.5 is 32-bit version. Any help will be appreciated. Thanks!
mido on Python 3.5.1: No module named 'mido'
0.197375
0
0
2,563
38,151,292
2016-07-01T17:49:00.000
1
0
1
0
ipython,pycharm
70,563,736
2
false
0
0
I've had the same concern, and just now I remembered that you can simply write #%% (for a code cell) or #%% md (for a markdown cell) anywhere you want, and it will create a new cell.
1
2
0
I just started using IPython in PyCharm. What's the shortcut for inserting a cell for IPython in PyCharm? (1) To insert a cell between the 2nd and 3rd cells. (2) To insert a cell at the end of the code. According to the PyCharm documentation, the way to add a cell is as follows, but it doesn't work for me; has anyone seen the same issue? "Since the new cell is added below the current one, click the cell with the import statement - its frame becomes green. Then on the toolbar click add (or press Alt+Insert)."
Shortcut for insert a cell below for Ipython in Pycharm?
0.099668
1
0
2,299
38,155,039
2016-07-01T23:34:00.000
39
0
1
0
python,numpy
38,156,630
3
false
0
0
There are several major differences. The first is that python integers are flexible-sized (at least in python 3.x). This means they can grow to accommodate any number of any size (within memory constraints, of course). The numpy integers, on the other hand, are fixed-sized. This means there is a maximum value they can hold. This is defined by the number of bytes in the integer (int32 vs. int64), with more bytes holding larger numbers, as well as whether the number is signed or unsigned (int32 vs. uint32), with unsigned being able to hold larger numbers but not able to hold negative numbers. So, you might ask, why use the fixed-sized integers? The reason is that modern processors have built-in tools for doing math on fixed-size integers, so calculations on those are much, much, much faster. In fact, python uses fixed-sized integers behind the scenes when the number is small enough, only switching to the slower, flexible-sized integers when the number gets too large. Another advantage of fixed-sized values is that they can be placed into consistently-sized adjacent memory blocks of the same type. This is the format that numpy arrays use to store data. The libraries that numpy relies on are able to do extremely fast computations on data in this format; in fact, modern CPUs have built-in features for accelerating this sort of computation. With the variable-sized python integers, this sort of computation is impossible because there is no way to say how big the blocks should be and no consistency in the data format. That being said, numpy is actually able to make arrays of python integers. But rather than arrays containing the values, instead they are arrays containing references to other pieces of memory holding the actual python integers. This cannot be accelerated in the same way, so even if all the python integers fit within the fixed integer size, it still won't be accelerated. None of this is the case with Python 2. In Python 2, Python integers are fixed integers and thus can be directly translated into numpy integers. For variable-length integers, Python 2 had the long type. But this was confusing and it was decided this confusion wasn't worth the performance gains, especially when people who need performance would be using numpy or something like it anyway.
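A small demonstration of the fixed-size vs. flexible-size behaviour described above:

import numpy as np

print(2 ** 64 + 1)                 # Python int: arbitrary precision, no overflow

x = np.int64(2 ** 62)
print(x * 4)                       # wraps around: a signed 64-bit int overflows

print(np.array([1, 2, 3]).dtype)   # elements stored as fixed-size ints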
1
54
1
Can you please help understand what are the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types?
What is the difference between native int type and the numpy.int types?
1
0
0
41,529
38,156,726
2016-07-02T05:14:00.000
0
0
1
0
python,python-3.x,passwords,password-protection
38,157,983
5
false
0
0
Best way to do that would be as @cdarke offered, but a faster way would be to store the .py file in a hidden, password-protected folder.
3
4
0
I wrote an automation program which logs in to a website to perform a certain task. The .py file will be given to a number of users but I don't want them to be able to read the code and see the password used for logging in. How do I make sure that they can only execute the file but not read it?
How do I lock a python (.py) file for editing?
0
0
0
8,420
38,156,726
2016-07-02T05:14:00.000
2
0
1
0
python,python-3.x,passwords,password-protection
38,156,746
5
false
0
0
You can't do it. If you give a password to your users, no matter how much you try to hide it, it's always possible to find it out. You can make it slightly more difficult to find out with encryption and obfuscation, but that only stops non-tech-savvy users. And those users probably wouldn't think to read through a bunch of code looking for a plaintext password anyways. The correct way is to make it so that it's OK if users know their own passwords. Make the server-side bit block people from doing things they're not supposed to do (if you don't have one, you need to make one). Use separate accounts for each user so you can separately deactivate them if needed.
3
4
0
I wrote an automation program which logs in to a website to perform a certain task. The .py file will be given to a number of users but I don't want them to be able to read the code and see the password used for logging in. How do I make sure that they can only execute the file but not read it?
How do I lock a python (.py) file for editing?
0.07983
0
0
8,420
38,156,726
2016-07-02T05:14:00.000
0
0
1
0
python,python-3.x,passwords,password-protection
38,156,927
5
false
0
0
One possibility is to have a daemon (service) running which holds the password. That would be running under a restricted user to which normal security has been applied. The users should not be able to access anything under the daemon's user. Users have a python program which communicates a login request to the daemon via an IPC mechanism, you could use a socket, named-pipe, etc. The daemon performs the task on behalf of the user and communicates the results back. How practical that is depends on the amount of communication between the user and the server. There would be performance issues if this was an interactive task. The daemon would probably have to be multi-threaded, depending on volumes, so this could be a lot of work. A similar possibility is that the daemon could be a web server (using, say, Apache), and then the users access using a browser. That could be easier to program and maintain, but it depends on your environment if that is feasible.
3
4
0
I wrote an automation program which logs in to a website to perform a certain task. The .py file will be given to a number of users but I don't want them to be able to read the code and see the password used for logging in. How do I make sure that they can only execute the file but not read it?
How do I lock a python (.py) file for editing?
0
0
0
8,420
38,156,827
2016-07-02T05:30:00.000
0
0
0
0
python,python-2.7,opencv
38,158,929
4
false
0
0
Your question is way too general. Feature matching is a very vast field, and the type of algorithm to be used totally depends on the object you want to detect, its environment, etc. So if your object won't change its size or angle in the image, use Template Matching. If the object will change its size and orientation, you can use SIFT or SURF. If your object has unique color features that differ from its background, you can use the HSV method. If you have to classify a group of images as your object, for example all cricket bats should be detected, then you can train on a number of positive images to tell the computer how the object looks and negative images to tell it how it doesn't; this can be done using Haar training.
2
0
1
I'm a beginner in opencv using python. I have many 16 bit gray scale images and need to detect the same object every time in the different images. Tried template matching in opencv python but needed to take different templates for different images which could be not desirable. Can any one suggest me any algorithm in python to do it efficiently.
opencv-python object detection
0
0
0
905
38,156,827
2016-07-02T05:30:00.000
0
0
0
0
python,python-2.7,opencv
38,674,476
4
false
0
0
You can try the sliding-window method, if your object is the same in all samples.
2
0
1
I'm a beginner in opencv using python. I have many 16 bit gray scale images and need to detect the same object every time in the different images. Tried template matching in opencv python but needed to take different templates for different images which could be not desirable. Can any one suggest me any algorithm in python to do it efficiently.
opencv-python object detection
0
0
0
905
38,157,567
2016-07-02T07:15:00.000
0
1
0
0
python
38,157,658
4
false
0
0
Probably a silly, yet valid way of doing this: save the URL in a string and scan it from back to front. As soon as you come across a full stop, strip everything from three characters ahead of it. I believe URLs do not have full stops after the domain name. Please correct me if I am wrong.
1
1
0
I've been trying to extract the domain names from a list of urls, so that http://supremecosts.com/contact-us/ would become http://supremecosts.com. I'm trying to find a clean way of doing it that will be adaptable to various gtlds and cctlds.
Extract domain name only from url, getting rid of the path (Python)
0
0
1
282
38,160,577
2016-07-02T13:15:00.000
1
0
0
0
python,django
38,171,865
1
true
1
0
Try using the atexit module to catch the termination; a sketch is below. It should work for everything which acts like SIGINT or SIGTERM. SIGKILL cannot be intercepted (but it should not be sent by any auto-restart script without sending SIGTERM first).
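A minimal sketch of that approach (cleanup is a made-up handler name); routing SIGTERM through sys.exit() makes sure the atexit handler also runs when the process is politely terminated:

import atexit
import signal
import sys

def cleanup():
    print('running cleanup...')

atexit.register(cleanup)

# atexit does not fire on an unhandled SIGTERM, so turn it into a normal exit
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))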
1
0
0
I am using a custom Django runserver command that is supposed to run a bunch of cleanup functions upon termination. This works fine as long as I don't use the autoreloader: my server catches the KeyboardInterrupt exception properly and exits gracefully. However, if I use Django's autoreloader, the reloader seems to simply kill the server thread without properly terminating it (as far as I can tell, it doesn't have any means to do this). This seems inherently unsafe, so I can't really believe that there's not a better way of handling this. Can I somehow use the autoreloader functionality without having my server thread be killed uncleanly?
Graceful exit server when using Django's autoreloader
1.2
0
0
150
38,160,597
2016-07-02T13:17:00.000
1
0
0
0
python-2.7,file-io,knime
38,161,395
2
false
1
0
There are multiple options to make this work:
1. Convert the files in-memory to Binary Object cells using Python; later you can use those in KNIME. (This one, I am not sure is supported, but as I remember it was demoed in one of the last KNIME gatherings.)
2. Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node using a flow variable connection to a file reader node in KNIME (which should work in a loop: List Files; check the Iterate List of Files metanode).
3. Maybe there is already S3 Remote File Handling support in KNIME, so you can do the downloading and unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you probably option 1 is the best. (In case option 3 is supported, that is the best in my opinion.)
1
1
0
I'm using Knime 3.1.2 on OSX and Linux for OPENMS analysis (Mass Spectrometry). Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pressed in at a time ('Input FileS' module not 'Input File' module) using a ZipLoopStart. I want these files to be downloaded dynamically and then pressed into the workflow...but I'm not sure the best way to do that. Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??). It can also download them to a directory...which maybe can them be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the python script is run. I also could have the python script run as a separate entity (outside of knime) and then, once the directory is populated, call knime...HOWEVER there will always be a different number of files (maybe 1, maybe three)...and I don't know how to make the 'Input Files' knime node to handle an unknown number of input files. I hope this makes sense. Thanks!
Python in Knime: Downloading files and dynamically pressing them into workflow
0.099668
0
1
1,032
38,164,635
2016-07-02T21:29:00.000
0
0
0
0
python,selenium,firefox,selenium-webdriver,selenium-chromedriver
53,867,150
4
false
1
0
I have experienced a similar issue, and destroying the driver myself (i.e. setting driver to None) prevented those memory leaks for me.
2
11
0
So I've been working on a scraper that goes over 10k+ pages and scrapes data from them. The issue is that over time, memory consumption rises drastically. So to overcome this, instead of closing the driver instance only at the end of the scrape, the scraper was updated so that it closes the instance after every page is loaded and the data extracted. But RAM still gets filled up for some reason. I tried using PhantomJS but it doesn't load data properly for some reason. I also tried with the initial version of the scraper to limit the cache in Firefox to 100mb, but that also did not work. Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc. instead of selenium. Any help is appreciated since I've been trying to figure this out for a week now. Thanks.
Selenium not freeing up memory even after calling close/quit
0
0
1
10,312
38,164,635
2016-07-02T21:29:00.000
2
0
0
0
python,selenium,firefox,selenium-webdriver,selenium-chromedriver
38,164,741
4
false
1
0
Are you trying to say that your drivers are what's filling up your memory? How are you closing them? If you're extracting your data, do you still have references to some collection that's storing them in memory? You mentioned that you were already running out of memory when you closed the driver instance at the end of scraping, which makes it seem like you're keeping extra references.
2
11
0
So I've been working on a scraper that goes over 10k+ pages and scrapes data from them. The issue is that over time, memory consumption rises drastically. So to overcome this, instead of closing the driver instance only at the end of the scrape, the scraper was updated so that it closes the instance after every page is loaded and the data extracted. But RAM still gets filled up for some reason. I tried using PhantomJS but it doesn't load data properly for some reason. I also tried with the initial version of the scraper to limit the cache in Firefox to 100mb, but that also did not work. Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc. instead of selenium. Any help is appreciated since I've been trying to figure this out for a week now. Thanks.
Selenium not freeing up memory even after calling close/quit
0.099668
0
1
10,312
38,164,976
2016-07-02T22:29:00.000
1
0
0
0
python,metrics,influxdb,grafana
38,212,500
1
false
0
0
One work-around would be to store the starting time point as the timestamp and the duration of the interval as a value. If your intervals are all evenly spaced and continuous, there's probably no need to store the duration at all.
1
0
0
I am collecting metric samples from 'clients', aggregated by time interval, in a format like e.g.: { 'interval': 19:50-19:55, 'hits': 55, 'missed': 45} { 'interval': 19:55-20:00, 'hits': 23, 'missed': 15} How can I store and use this in influxdb? I looked at examples of influxdb usage and noticed that they always use the specific time of samples, e.g. 19:55:01, not an interval.
How can I store already aggregated by time samples in influxdb?
0.197375
0
0
70
38,166,331
2016-07-03T02:55:00.000
1
1
0
0
python,server,xmpp,host
38,188,784
2
true
0
0
PythonAnywhere dev here: I wouldn't recommend our consoles as a place to run an XMPP server -- they're meant more for exploratory programming. AWS (like Adam Barnes suggests) or a VPS somewhere like Digital Ocean would probably be a better option.
1
0
0
I wrote a python app that manages GCM messaging for an Android chat app. Where could I host this app so it can run 24/7? It's not a web app. Is it safe and reliable to use PythonAnywhere consoles?
Where to host a python chat server?
1.2
0
1
239
38,167,293
2016-07-03T06:21:00.000
4
0
1
0
python,python-2.7
38,167,330
2
true
0
0
Iterators/generators don't have any way to get the current value. You should either keep a reference to it or create some wrapper that holds onto it for you; a sketch of such a wrapper is below.
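A sketch of such a wrapper (CurrentCycle is a made-up name); it behaves like cycle() but remembers the element it produced last:

from itertools import cycle

class CurrentCycle(object):
    def __init__(self, iterable):
        self._it = cycle(iterable)
        self.current = None

    def next(self):
        self.current = next(self._it)
        return self.current

c = CurrentCycle(['a', 'b', 'c'])
c.next()
c.next()
print(c.current)   # 'b', without advancing the iterator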
1
11
0
I know you can use c = cycle(['a', 'b', 'c']) to cycle between the elements using c.next(), but is there a way to get the iterator's current element? For example, if c.next() returned 'c', it means the iterator was at 'b' before. Is there a way I can get 'b' without using next()?
Getting python's itertools cycle current element
1.2
0
0
3,064
38,167,303
2016-07-03T06:23:00.000
0
0
1
0
python,error-handling
38,167,318
1
false
0
0
Definitely; unit testing is there to test your own program's logic. If you have a program with functions doing different tasks, agnostic to whether the data comes from a user or from another function, it should be tested. The two points you bring up on whether it should be tested or not don't affect unit testability.
1
0
0
When designing an application that is static, where no input is coming from outside the program, is it worth while to have error handling even when using a language like python that doesn't need to be compiled? Is it just a best practice? I use python as an example because of its duck-typing nature.
Should you use error handling even if your program is static?
0
0
0
37
38,174,216
2016-07-03T20:49:00.000
0
0
0
0
python,django
53,273,173
6
false
1
0
I think you should try these steps:
1. Create a new admin user: python manage.py createsuperuser
2. Use the new account to log in to the admin site.
3. Reset the password for your original account and remember it.
4. Log in to the admin site with your original account.
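The same reset can also be done from the Django shell (python manage.py shell); 'admin' here stands in for whatever username you created:

from django.contrib.auth import get_user_model

User = get_user_model()
u = User.objects.get(username='admin')   # your original account
u.set_password('new-password')
u.is_active = u.is_staff = u.is_superuser = True
u.save()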
4
10
0
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said "Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive." So I Googled the issue and tried everything I could. Here are all the problems I investigated: Database not synced: I synced it and nothing changed. No django_session table: I checked; it's there. Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS. User not configured correctly: is_staff, is_superuser, and is_active are all True. Old sessions: I checked the django_session table and it's empty. Oversized message cookie: I checked and I don't even have one. Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work. Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py. Entering wrong username: The username I created was "admin" and that's the same one I'm typing in. Wrong server command: I'm using python manage.py runserver. Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed. Is there anything I haven't tried yet, or am I missing something?
Can't log in to admin site in Django
0
0
0
8,739
38,174,216
2016-07-03T20:49:00.000
0
0
0
0
python,django
68,123,164
6
false
1
0
My problem was with using the correct settings module, because I use different databases for local/local_proxy and production. DJANGO_SETTINGS_MODULE=serverless_django.settings.local_proxy python manage.py createsuperuser worked for me.
4
10
0
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said "Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive." So I Googled the issue and tried everything I could. Here are all the problems I investigated: Database not synced: I synced it and nothing changed. No django_session table: I checked; it's there. Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS. User not configured correctly: is_staff, is_superuser, and is_active are all True. Old sessions: I checked the django_session table and it's empty. Oversized message cookie: I checked and I don't even have one. Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work. Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py. Entering wrong username: The username I created was "admin" and that's the same one I'm typing in. Wrong server command: I'm using python manage.py runserver. Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed. Is there anything I haven't tried yet, or am I missing something?
Can't log in to admin site in Django
0
0
0
8,739
38,174,216
2016-07-03T20:49:00.000
1
0
0
0
python,django
68,131,885
6
false
1
0
Check your is_active model field. You may have set its default value to False, hence the reason it might not let you log in. If it's like this -- is_active = models.BooleanField(default=False) -- change it to True, or inspect the database and change the value in is_active for the created superuser to 1.
4
10
0
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said "Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive." So I Googled the issue and tried everything I could. Here are all the problems I investigated: Database not synced: I synced it and nothing changed. No django_session table: I checked; it's there. Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS. User not configured correctly: is_staff, is_superuser, and is_active are all True. Old sessions: I checked the django_session table and it's empty. Oversized message cookie: I checked and I don't even have one. Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work. Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py. Entering wrong username: The username I created was "admin" and that's the same one I'm typing in. Wrong server command: I'm using python manage.py runserver. Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed. Is there anything I haven't tried yet, or am I missing something?
Can't log in to admin site in Django
0.033321
0
0
8,739
38,174,216
2016-07-03T20:49:00.000
1
0
0
0
python,django
68,274,827
6
false
1
0
This could also happen if the database being used is the default sqlite3 database and the settings.py has the DATABASES property referring to a db.sqlite3 file that is NOT in the same directory as manage.py is.
4
10
0
I was working through the polls tutorial and everything was fine until I tried to log in to the admin site; it just said "Please enter the correct username and password for a staff account. Note that both fields may be case-sensitive." So I Googled the issue and tried everything I could. Here are all the problems I investigated: Database not synced: I synced it and nothing changed. No django_session table: I checked; it's there. Problematic settings: The only change I made to the settings was the addition of'polls.apps.PollsConfig', to INSTALLED_APPS. User not configured correctly: is_staff, is_superuser, and is_active are all True. Old sessions: I checked the django_session table and it's empty. Oversized message cookie: I checked and I don't even have one. Created superuser while running web server: I did this, but after stopping the web server, creating a new user, and restarting the web server, and trying to log in with the new user, it still didn't work. Missing or wrong URL pattern: Currently I have url(r"^admin/", admin.site.urls) in mysite/urls.py. Entering wrong username: The username I created was "admin" and that's the same one I'm typing in. Wrong server command: I'm using python manage.py runserver. Something wrong with database: I tried deleting the database and then reapplying the migrations, but nothing changed. Is there anything I haven't tried yet, or am I missing something?
Can't log in to admin site in Django
0.033321
0
0
8,739
38,176,645
2016-07-04T03:42:00.000
0
0
0
0
python,pandas
38,176,821
1
false
0
0
The way to check is len(df); this gives you the number of rows in the DataFrame. Then you need to check the number of lines in the csv file. On Linux, use wc -l; otherwise, you can use something else to count the lines, e.g. Notepad++, Nano, or Sublime Text.
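A sketch of that comparison ('file.csv' taken from the question; the -1 accounts for the header row):

import pandas as pd

df = pd.read_csv('file.csv')
print(len(df))                     # rows loaded into the DataFrame

with open('file.csv') as f:
    print(sum(1 for _ in f) - 1)   # data lines in the file itself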
1
0
1
I want to know when I use pandas.read_csv('file.csv') function to read csv file, did it load all data of file.csv into DataFrame?
Does pandas.read_csv loads all data at once?
0
0
0
106
38,178,028
2016-07-04T06:22:00.000
0
0
0
0
python,sas,regression
38,181,950
1
false
0
0
In SAS, apart from correlation (the Pearson index), you can use a ranking index like the Spearman coefficient (proc corr). In addition, supposing you have the right modules (STAT/MINER) licensed, you can use: (1) a linear (logistic) regression on standardized regressors, comparing the betas; or (2) a tree, comparing the variables on one of the available metrics (Gini, Chi2).
1
0
1
Is there any way in SAS or Python to find the most influential variables in rank order, apart from correlation? I might be missing something; any suggestion on how to interpret it would be appreciated.
Regression: Variable influence
0
0
0
64
38,180,458
2016-07-04T08:52:00.000
1
0
0
1
python,django,outlook,msg
38,212,655
2
false
1
0
Why not create an EML file? It is MIME, so there are hundreds of libraries out there. Outlook will be able to open an EML file just fine. In your particular case, create a MIME file with the vTodo MIME part as the primary MIME part.
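A minimal sketch using the standard library's email package; the VTODO content here is a bare placeholder, not a complete iCalendar object:

from email.message import EmailMessage

msg = EmailMessage()
msg['Subject'] = 'Imported task'
msg.set_content(
    'BEGIN:VCALENDAR\nBEGIN:VTODO\nSUMMARY:Imported task\n'
    'END:VTODO\nEND:VCALENDAR',
    subtype='calendar',   # produces a text/calendar MIME part
)
with open('task.eml', 'wb') as f:
    f.write(bytes(msg))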
1
1
0
Is there any chance to create an Outlook .msg file without having Outlook installed? We use a Django backend and need to create a msg file containing a task for importing into Outlook. Since we use unix-based servers we don't have any chance to get Outlook installed (except wine etc.). Is there a component to generate such .msg files in any programming language without having Outlook installed?
Create .msg file with task without having outlook installed
0.099668
0
0
2,600
38,180,999
2016-07-04T09:17:00.000
-1
0
1
0
python
38,181,265
3
false
0
0
You must parse these numbers out of the text (how depends on the encoding), using for example a parser combinator or similar. Then you can calculate the decimals.
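One concrete way to do that parsing step: unicodedata knows the numeric value of vulgar-fraction characters such as '¼', so no lookup table is needed (to_decimal is a made-up helper):

import unicodedata

def to_decimal(s):
    whole, frac = 0, 0.0
    for ch in s:
        if ch.isdigit():
            whole = whole * 10 + int(ch)
        else:
            frac += unicodedata.numeric(ch)   # e.g. '¼' -> 0.25
    return whole + frac

print(to_decimal(u'1¼'))   # 1.25
print(to_decimal(u'4½'))   # 4.5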
1
2
0
I am pulling text from a site and would like to convert the fractions into decimals. The fractions are in subscript, though, and look like the following: 1¼, 4½. Is it possible to get around this?
Convert fractions into decimal (fractions are in superscript)
-0.066568
0
0
198
38,182,846
2016-07-04T10:48:00.000
1
0
1
0
python,git,jupyter,jupyter-notebook
46,800,750
2
false
0
0
You can also grab the notebook with a wget using the raw github link (note: if the link contains a token, you may have to delete the portion after the .ipynb extension before opening the file with jupyter notebook). For example: click on the notebook's download button in github, then open the terminal and type something like wget https://raw.githubusercontent.com/the_notebook_i_want.ipynb
2
0
0
I am trying to download a Jupyter Notebook from git. I downloaded the notebook by right-clicking on the file and saving it. It is an ipynb file, but the file size seems a bit big to me for the content that it should contain (114 kb). When I click on the Notebook in Jupyter I get the following error: Unreadable Notebook: C:\filename.ipynb NotJSONError("Notebook does not appear to be JSON: '\n\n How can this error be solved so that I can open the Notebook?
Python git downloading Jupyter Notebook
0.099668
0
0
1,833
38,182,846
2016-07-04T10:48:00.000
3
0
1
0
python,git,jupyter,jupyter-notebook
38,183,170
2
true
0
0
You probably downloaded the html github uses to display a notebook. To download the notebook itself, you can use the "raw" file link in github.
2
0
0
I am trying to download a Jupyter Notebook from git. I downloaded the notebook by right-clicking on the file and saving it. It is an ipynb file, but the file size seems a bit big to me for the content that it should contain (114 kb). When I click on the Notebook in Jupyter I get the following error: Unreadable Notebook: C:\filename.ipynb NotJSONError("Notebook does not appear to be JSON: '\n\n How can this error be solved so that I can open the Notebook?
Python git downloading Jupyter Notebook
1.2
0
0
1,833
38,187,544
2016-07-04T14:54:00.000
0
0
0
0
python,pyspark
38,209,473
1
false
0
0
Be patient; Spark takes a long time to compile and build from source manually.
1
0
0
I have just installed pyspark on my PC. I used the command 'sbt assembly' to build it, but it has been showing progress in the terminal for more than an hour without completing the process.
Command 'sbt assembly' taking long to build Spark
0
0
0
63
38,191,403
2016-07-04T19:49:00.000
0
1
0
1
php,python-2.7,exec,wapiti
38,191,469
1
false
0
0
Look at the php.ini to see if there's anything set for disable_functions. If you're using PHP CLI for the script, you can execute the following command in the shell: php -i | grep disable_functions Also make sure wap.py has execute permissions.
1
0
0
I have a python script (wap.py) from which I am calling wapiti asynchronously using Popen. Command for it in wap.py: p = Popen("python wapiti domainName", shell = True) When I am running wap.py, it is executing completely fine. But when I am running it using php exec, it doesn't work. Command from php file : exec("python wap.py")
php exec not working along with wapiti
0
0
0
72
38,192,172
2016-07-04T21:00:00.000
0
0
1
0
python,windows-7,easy-install,imdbpy
38,193,742
1
false
0
0
I was able to bypass using easy_install by changing the directory to each package's installation folder and running "python setup.py install" in command prompt.
1
0
0
I'm trying to install the IMDbPY module to Python using easy_install. However, I've never used Python before and kept getting stuck on using easy_install or pip install. Since I'm using Windows 7, I tried running the following code in command prompt: easy_install IMDbPY A flashing cursor then appears on the next line, but nothing happens after a long wait. I tried installing other packages such as SQLObject using easy install and pip install as well, but the same result occurs. It seems that whenever I try to use easy_install, cmd just freezes and never actually finishes the installation. Am I using easy_install incorrectly? If so, what should I do?
Cannot easy_install IMDbPY
0
0
0
191
38,197,879
2016-07-05T07:43:00.000
25
0
0
1
python,openerp,odoo-8
38,198,005
2
true
1
0
You need to install pywin32. Either use pip install pywin32 or download from GitHub https://github.com/mhammond/pywin32/releases
1
5
0
I am using odoo8 with python 2.7.9 (64 bit) on the Eclipse IDE. The Python installation got corrupted, so I had to reinstall it. Now I am facing this new problem: ImportError: No module named win32service
ImportError: No module named win32service
1.2
0
0
32,335
38,203,983
2016-07-05T12:52:00.000
1
0
0
0
python-2.7,scikit-learn,tf-idf
38,220,858
2
false
0
0
There is no reason why idf would give more information for a classification task. It performs well for search and ranking, but classification needs to capture similarity, not singularities. IDF is meant to spot the singularity of one sample vs. the rest of the corpus, while what you are looking for is the singularity of one sample vs. the other clusters. IDF smooths out the intra-cluster TF similarity.
2
0
1
I am working on a multilabel text classification problem with 10 labels. The dataset is small, +- 7000 items and +-7500 labels in total. I am using python sci-kit learn and something strange came up in the results. As a baseline I started out with using the countvectorizer and was actually planning on using the tfidf vectorizer which I thought would work better. But it doesn't.. with the countvectorizer I get a performance of a 0,1 higher f1score. (0,76 vs 0,65) I cannot wrap my head around why this could be the case? There are 10 categories and one is called miscellaneous. Especially this one gets a much lower performance with tfidf. Does anyone know when tfidf could perform worse than count?
TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn
0.099668
0
0
1,238
38,203,983
2016-07-05T12:52:00.000
1
0
0
0
python-2.7,scikit-learn,tf-idf
38,204,179
2
false
0
0
The question is, why not? They are different solutions. What is your dataset, how many words, how are they labelled, how do you extract your features? countvectorizer simply counts the words; if it does a good job, so be it.
2
0
1
I am working on a multilabel text classification problem with 10 labels. The dataset is small, +- 7000 items and +-7500 labels in total. I am using python sci-kit learn and something strange came up in the results. As a baseline I started out with using the countvectorizer and was actually planning on using the tfidf vectorizer which I thought would work better. But it doesn't.. with the countvectorizer I get a performance of a 0,1 higher f1score. (0,76 vs 0,65) I cannot wrap my head around why this could be the case? There are 10 categories and one is called miscellaneous. Especially this one gets a much lower performance with tfidf. Does anyone know when tfidf could perform worse than count?
TF-IDF vectorizer doesn't work better than countvectorizer (sci-kit learn
0.099668
0
0
1,238